Science.gov

Sample records for offices simulations measurements

  1. Office ergonomics. Measurements for success.

    PubMed

    Martin, C; Andrew-Tuthill, D M

    1999-10-01

    The successful implementation of an ergonomics program requires collecting data on the worksite history of repetitive motion injuries and assessing corporate ergonomic needs. It is important to solicit management, department, and employee support. Program success depends on creating a process, a training program, skilled assessors, and an accountability method. A thorough understanding and application of neutral posture and the three seated chair positions is essential, as is evaluation of the entire individual workspace. Precise measurements of the individual, the office chair, and the work surfaces should be completed, and heights adjusted to meet individual needs. Functional, fully adjustable office chairs are a necessity. Vendor selection criteria and vendor contracts for chairs and office equipment ensure consistent specifications and cost control.

  2. Simulated Office Education Guidelines for Washington.

    ERIC Educational Resources Information Center

    Nelson, Frank; And Others

    The principles for use of the insurance, mercantile, and Lester-Hill simulation techniques in a program of office education are provided in this document. Guidelines for the development and use of such techniques in the classroom are suggested. The chapters outline the basic philosophy of student and business needs, curricular implications,…

  3. Survey of Characteristics of Measurement Services Offices.

    ERIC Educational Resources Information Center

    Erwin, T. Dary; And Others

    This paper briefly summarizes the findings from a national survey for the Measurement Services Association (MSA) of testing and measurement offices regarding their organizational characteristics and their activities and services. Questionnaires were mailed to all testing offices (n=146) at colleges and universities on the MSA Newsletter mailing…

  4. FLAIR: A Simulation Based on a Real Buying Office

    ERIC Educational Resources Information Center

    Mosich, Doris

    1974-01-01

    A simulation of a buying service for a number of retail stores, termed FLAIR, is described as a flexible instructional vehicle meeting the needs of business procedures trainees attending the Southern California Regional Occupational Center. Training objectives, office organization, and the importance of real-world simulation in developing…

  5. Automated office blood pressure measurement in primary care

    PubMed Central

    Myers, Martin G.; Kaczorowski, Janusz; Dawes, Martin; Godwin, Marshall

    2014-01-01

    Abstract Objective To provide FPs with detailed knowledge of automated office blood pressure (AOBP) measurement, its potential role in primary care, and its proper use in the diagnosis and management of hypertension. Sources of information Comprehensive monitoring and collection of scientific articles on AOBP by the authors since its introduction. Main message Automated office blood pressure measurement maintains a role for blood pressure (BP) readings taken in the office setting. Clinical research studies have reported a substantially stronger relationship between awake ambulatory BP measurement and AOBP measurement compared with manual BP recorded during routine visits to the patient’s physician. Automated office blood pressure measurement produces mean BP values comparable to awake ambulatory BP and home BP values. Compared with routine manual office BP measurement, AOBP correlates more strongly with awake ambulatory BP measurement, shows less digit preference, is more consistent from visit to visit, is similar both within and outside of the physician’s office, virtually eliminates office-induced hypertension, and is associated with less masked hypertension. It is estimated that more than 25% of Canadian primary care physicians are now using AOBP measurement in their office practices. The use of AOBP to diagnose hypertension has been recommended by the Canadian Hypertension Education Program since 2010. Conclusion There is now sufficient evidence to incorporate AOBP measurement into primary care as an alternative to manual BP measurement. PMID:24522674

  6. Practicum for Simulated Methods in Office Occupation Education. Final Report.

    ERIC Educational Resources Information Center

    Hanson, Garth A.

    Thirty-six participants and four observers representing 34 states attended the practicum at the Utah State University campus in Logan, July 8-19, 1968. The purpose was to provide high school business teachers with practical knowledge, experience, and materials for designing and operating simulated business offices in their classrooms. The…

  7. Simulated Office Education: Course of Study: Teacher's Manual and Student's Manual.

    ERIC Educational Resources Information Center

    Utah State Board for Vocational Education, Salt Lake City.

    Two separate manuals give detailed instructions for setting up and carrying out simulated office practice. The simulation design covers all office skills and all kinds of office situations, from management decisions to ground rules for coffee breaks, including the handling of rush jobs. Procedures and roles for seven office positions, from vice…

  8. Adaptive thinking & leadership simulation game training for special forces officers.

    SciTech Connect

    Raybourn, Elaine Marie; Mendini, Kip; Heneghan, Jerry; Deagle, Edwin

    2005-07-01

    A multiplayer simulation game is successfully used in the Special Forces Officer training program.

  9. The Patriot Company: A Simulated Office. Parkview High School, Little Rock, Arkansas.

    ERIC Educational Resources Information Center

    Smith, Phyllis W.

    The document is a student manual and teacher's manual for a simulated office practice class designed to give students training in a business office on school premises. In the simulation, students perform as office personnel and as customers and creditors. The first part of the guide, directed to students, contains: general information on the…

  10. Measuring Noncommissioned Officer Knowledge and Experience to Enable Tailored Training

    DTIC Science & Technology

    2011-11-01

    U.S. Army Research Institute for the Behavioral and Social Sciences, Research Report 1952: Measuring Noncommissioned Officer Knowledge and Experience to Enable Tailored Training. Peter S. Schaefer (U.S. Army Research Institute); Paul N. Blankenbeckler (Northrop Grumman Corporation); Christopher J. Brogdon (Mercer University, Consortium Research Fellows Program). November 2011. Approved for…

  11. Photocopy of measured drawing (from First Coast Guard District Office, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of measured drawing (from First Coast Guard District Office, John F. Kennedy Federal Building, Government Center, Boston, Massachusetts) designed by Edward P. Adams and Royal Luther, 1890 "PLAN FOR FRAMED DOUBLE DWELLING AT PORTLAND HEAD, ME., LIGHT STATION" - Portland Head Light, Portland Head, approximately 1/2 mile East of Shore Road, Cape Elizabeth, Cumberland County, ME

  12. Why use automated office blood pressure measurements in clinical practice?

    PubMed

    Andreadis, Emmanuel A; Angelopoulos, Epameinondas T; Agaliotis, Gerasimos D; Tsakanikas, Athanasios P; Mousoulis, George P

    2011-09-01

    Automated office blood pressure (AOBP) measurement with the patient resting alone in a quiet examining room can eliminate the white-coat effect associated with conventional readings taken by manual sphygmomanometer. The key to reducing the white-coat response appears to be multiple blood pressure (BP) readings taken in an unobserved office setting, thus eliminating any interaction that could provoke an office-induced increase in BP. Furthermore, AOBP readings have shown a higher correlation with mean awake ambulatory BP than BP readings recorded in routine clinical practice. Although there is a paucity of studies connecting AOBP with organ damage, AOBP values were recently found to be as strongly associated with left ventricular mass index as ambulatory BP values. It follows that, in contrast to routine manual office BP, AOBP readings compare favourably with 24-hour ambulatory BP measurements in the appraisal of cardiac remodelling and, as such, could be complementary to ambulatory readings in a way similar to home BP measurements.

  13. Novel Uses of Office-Based Measures of Arterial Compliance

    PubMed Central

    Townsend, Raymond R.

    2015-01-01

    Office-based blood pressure monitoring has been the primary way of managing the cardiovascular risk associated with a diagnosis of hypertension. As research clarifies how the pulse waveform is generated, additional insights beyond standard measures of systolic and diastolic blood pressure have emerged to help reclassify the cardiovascular risk of patients or point out patterns that have, in longitudinal cohort studies, shown promise as predictors of outcomes such as heart failure. In this review, we focus on the pressure profile in the proximal aorta that can be obtained easily and noninvasively from the radial or brachial artery during a clinical office encounter and the potential value of these measures in outcomes such as left ventricular hypertrophy and heart failure. PMID:27057290

  14. Simulation Based Acquisition for NASA's Office of Exploration Systems

    NASA Technical Reports Server (NTRS)

    Hale, Joe

    2004-01-01

    In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities. SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OEx

  15. Simulation of realistic retinoscopic measurement

    NASA Astrophysics Data System (ADS)

    Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.

    2007-03-01

    Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data of ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results show the clinical observations, including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to a broader range of abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.

  16. Measurements of chlorinated volatile organic compounds emitted from office printers and photocopiers.

    PubMed

    Kowalska, Joanna; Szewczyńska, Małgorzata; Pośniak, Małgorzata

    2015-04-01

    Office devices can release volatile organic compounds (VOCs) partly generated by toners and inks, as well as particles of paper. The aim of the presented study is to identify indoor emissions of volatile halogenated organic compounds into the office workspace environment. Mixtures of organic pollutants emitted by seven office devices, i.e. printers and copiers, were analyzed by taking samples in laboratory conditions during the operation of these appliances. Tests of volatile organic compound emissions from selected office devices were conducted in a simulated environment (test chamber). Samples of VOCs were collected using three-layered thermal desorption tubes. Separation and identification of organic pollutant emissions were made using thermal desorption combined with gas chromatography coupled to mass spectrometry. Test chamber studies indicated that operation of the office printer and copier would contribute to a significant concentration of VOCs in typical office indoor air. Among the determined volatile halogenated compounds, only chlorinated organic compounds were identified, inter alia trichloroethylene (carcinogenic) and tetrachloroethylene (possibly carcinogenic to humans). The results show that daily exposure of an office worker to chemical factors released by the tested printing and copying units can be variable in terms of VOC concentrations. The highest emissions in the test chamber during printing were measured for ethylbenzene (up to 41.3 μg m⁻³) and xylenes (up to 40.5 μg m⁻³); among the halogenated compounds, the highest concentration was 6.48 μg m⁻³ for chlorobenzene. The study included a comparison of chamber concentrations and unit-specific emission rates of selected VOCs and the identified halogenated compounds. The highest amount of total VOCs was emitted while copying with device D, averaging above 1235 μg m⁻³ and 8400 μg unit⁻¹ h⁻¹.
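
    The abstract reports both chamber concentrations (μg m⁻³) and unit-specific emission rates (μg unit⁻¹ h⁻¹). The sketch below shows the standard steady-state mass-balance conversion between the two for a well-mixed chamber; the chamber volume, air change rate, and background level in the example are illustrative assumptions, not values from the study.

    ```python
    def emission_rate_ug_per_unit_h(concentration_ug_m3: float,
                                    chamber_volume_m3: float,
                                    air_changes_per_h: float,
                                    background_ug_m3: float = 0.0) -> float:
        """Steady-state, well-mixed chamber mass balance:
        ER = (C - C_background) * V * ACH
        Units: (ug/m3) * m3 * (1/h) = ug/h per unit under test.
        Chamber volume and air change rate below are illustrative."""
        return (concentration_ug_m3 - background_ug_m3) * chamber_volume_m3 * air_changes_per_h

    # Illustrative numbers only (not from the paper): a 1 m3 chamber at
    # 1 air change per hour and 6.48 ug/m3 of chlorobenzene above background.
    print(emission_rate_ug_per_unit_h(6.48, 1.0, 1.0))  # -> 6.48 ug/h
    ```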

  17. Expanding the Lester Hill Experience: A Report on Two 'Branch Office' Simulations

    ERIC Educational Resources Information Center

    Melvin, Opal B.

    1976-01-01

    Describes the use of the Lester Hill Office Simulation, a program taught at the Tishomingo County Area Vocational-Technical Center in Mississippi, in which a fictitious company provides students with the opportunity to gain realistic office experience in a classroom setting. Suggested ideas and optional activities can be used by teachers as a starting…

  18. Measurement of particle concentrations in a dental office.

    PubMed

    Sotiriou, Maria; Ferguson, Stephen F; Davey, Mark; Wolfson, Jack M; Demokritou, Philip; Lawrence, Joy; Sax, Sonja N; Koutrakis, Petros

    2008-02-01

    Particles in a dental office can be generated by a number of instruments, such as air-turbine handpieces, low-speed handpieces, ultrasonic scalers, bicarbonate polishers, polishing cups, as well as drilling and air sprays inside the oral cavity. This study examined the generation of particles during dental drilling and measured particle size, mass, and trace elements. The air sampling techniques included both continuous and integrated methods. The following continuous particle measurements were taken every minute: (1) size-selective particle number concentration (Climet); (2) total particle number concentration (PTRAK); and (3) particle mass concentration (DustTrak). Integrated particle samples were collected for about 5 h on each of five sampling days, using a PM2.5 sampler (ChemComb) for elemental/organic carbon analysis, and a PM10 sampler (Harvard Impactor) for mass and elemental analyses. There was strong evidence that these procedures result in particle concentrations above background. The dental procedures produced number concentrations of relatively small particles (<0.5 μm) that were much higher than concentrations produced for the relatively larger particles (>0.5 μm). Also, these dental procedures caused significant elevation above background of certain trace elements (measured by X-ray fluorescence) but did not cause any elevation of elemental carbon (measured by thermal optical reflectance). Dental drilling procedures aerosolize saliva and products of drilling, producing particles small enough to penetrate deep into the lungs. The potential health impacts of the exposure of dental personnel to such particles need to be evaluated. Increased ventilation and personal breathing protection could be used to minimize harmful effects.

  19. Measurement outcomes from hip simulators.

    PubMed

    de Villiers, Danielle; Shelton, Julia C

    2016-05-01

    Simulation of wear in total hip replacements has been recognised as an important factor in determining the likelihood of clinical success. However, accurate measurement of wear can be problematic with factors such as number and morphology of wear particles produced as well as ion release proving more important in the biological response to hip replacements than wear volume or wear rate alone. In this study, hard-on-hard (CoCr alloy, AgCrN coating) and hard-on-soft (CoCr alloy and CrN coating on vitamin E blended highly cross-linked polyethylene) bearing combinations were tested in an orbital hip simulator under standard and some adverse conditions. Gravimetric wear rates were determined for all bearings, with cobalt and where applicable, silver release determined throughout testing. Isolation of wear particles from the lubricating fluid was used to determine the influence of different bearing combinations and wear conditions on particle morphology. It was found that cobalt and silver could be measured in the lubricating fluid even when volumetric wear was not detectable. In hard-on-hard bearings, Pearson's correlation of 0.98 was established between metal release into the lubricating fluid and wear volume. In hard-on-soft bearings, coating the head did not influence the polyethylene wear rates measured under standard conditions but did influence the cobalt release; the diameter influenced both polyethylene wear and cobalt release, and the introduction of adverse testing generated smaller polyethylene particles. While hip simulators can be useful to assess the wear performance of a new material or design, measurement of other outcomes may yield greater insight into the clinical behaviour of the bearings in vivo.

  20. Simmons Insurance Agency. A Clerk-Typist Position Simulation. Student Packet III. Office Occupations.

    ERIC Educational Resources Information Center

    Georgia State Univ., Atlanta.

    This is the third of five student packets forming part of a position simulation developed for use in an office applications laboratory at the postsecondary level. The purpose of the simulation is to give the student an opportunity to become familiar with the tasks and duties performed by a clerk-typist working for an independent insurance agency.…

  1. Simmons Insurance Agency. A Clerk-Typist Position Simulation. Student Packet II. Office Occupations.

    ERIC Educational Resources Information Center

    Georgia State Univ., Atlanta.

    This is the second of five student packets forming part of a position simulation developed for use in an office applications laboratory at the postsecondary level. The purpose of the simulation is to give the student an opportunity to become familiar with the tasks and duties performed by a clerk-typist working for an independent insurance agency.…

  2. Simmons Insurance Agency. A Clerk-Typist Position Simulation. Student Packet IV. Office Occupations.

    ERIC Educational Resources Information Center

    Georgia State Univ., Atlanta.

    This is the fourth of five student packets forming part of a position simulation developed for use in an office applications laboratory at the postsecondary level. The purpose of the simulation is to give the student an opportunity to become familiar with the tasks and duties performed by a clerk-typist working for an independent insurance agency.…

  3. Development of a Web-Based Periscope Simulator for Submarine Officer Training

    DTIC Science & Technology

    2014-09-01

    …release; distribution is unlimited. Master's Thesis by Ricardo S. Bastos, September 2014: Development of a Web-Based Periscope Simulator for Submarine Officer Training. …This work suggests and explores the use of web-based simulation as a tool to diminish this gap, applying the concept of part-task training…

  4. Numerical Simulations of Instabilities in Single-Hole Orifice Elements

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Hitt, Matthew A.; Lineberry, David M.

    2013-01-01

    An orifice element is commonly used in liquid rocket engine test facilities either as a flow metering device, a damper for acoustic resonance or to provide a large reduction in pressure over a very small distance in the piping system. While the orifice as a device is largely effective in stepping down pressure, it is also susceptible to a wake-vortex type instability that generates pressure fluctuations that propagate downstream and interact with other elements of the test facility resulting in structural vibrations. Furthermore in piping systems an unstable feedback loop can exist between the vortex shedding and acoustic perturbations from upstream components resulting in an amplification of the modes convecting downstream. Such was the case in several tests conducted at NASA as well as in the Ariane 5 strap-on P230 engine in a static firing test where pressure oscillations of 0.5% resulted in 5% thrust oscillations. Exacerbating the situation in cryogenic test facilities is the possibility of the formation of vapor clouds when the pressure in the wake falls below the vapor pressure leading to a cavitation instability that has a lower frequency than the primary wake-vortex instability. The cavitation instability has the potential for high amplitude fluctuations that can cause catastrophic damage in the facility. In this paper high-fidelity multi-phase numerical simulations of an orifice element are used to characterize the different instabilities, understand the dominant instability mechanisms and identify the tonal content of the instabilities.

  5. Principles of Blood Pressure Measurement - Current Techniques, Office vs Ambulatory Blood Pressure Measurement.

    PubMed

    Vischer, Annina S; Burkard, Thilo

    2016-07-15

    Blood pressure measurement has a long history and a crucial role in clinical medicine. Manual measurement using a mercury sphygmomanometer and a stethoscope remains the gold standard. However, this technique is technically demanding and commonly leads to faulty values. Automatic devices have helped to improve and simplify the technical aspects, but a standardised procedure for obtaining comparable measurements remains problematic and may therefore limit their validity in clinical practice. This underlines the importance of less error-prone measurement methods such as ambulatory or home blood pressure measurements and automated office blood pressure measurements. These techniques may help to uncover patients with otherwise unrecognised or overestimated arterial hypertension. Additionally, these techniques may yield a better prognostic value.

  6. Measuring Officer Knowledge and Experience to Enable Tailored Training

    DTIC Science & Technology

    2011-11-01

    …experience scales. Schmidt, Hunter, and Outerbridge (1986) found that a simple index of experience (e.g., total months/years) within a domain…two years as compared to the eight days that the SGIs had to observe their officers in this research. Fourth, the work samples from Schmidt et al.…studies which simply asked individuals how long (e.g., years and months) they had been involved in a given job domain. In this research, we asked…

  7. Energy Savings Modeling of Standard Commercial Building Re-tuning Measures: Large Office Buildings

    SciTech Connect

    Fernandez, Nicholas; Katipamula, Srinivas; Wang, Weimin; Huang, Yunzhi; Liu, Guopeng

    2012-06-01

    Today, many large commercial buildings use sophisticated building automation systems (BASs) to manage a wide range of building equipment. While the capabilities of BASs have increased over time, many buildings still do not fully use the BAS's capabilities and are not properly commissioned, operated or maintained, which leads to inefficient operation, increased energy use, and reduced lifetimes of the equipment. This report investigates the energy savings potential of several common HVAC system retuning measures on a typical large office building prototype model, using the Department of Energy's building energy modeling software, EnergyPlus. The baseline prototype model uses roughly as much energy as an average large office building in existing building stock, but does not utilize any re-tuning measures. Individual re-tuning measures simulated against this baseline include automatic schedule adjustments, damper minimum flow adjustments, thermostat adjustments, as well as dynamic resets (set points that change continuously with building and/or outdoor conditions) to static pressure, supply air temperature, condenser water temperature, chilled and hot water temperature, and chilled and hot water differential pressure set points. Six combinations of these individual measures have been formulated - each designed to conform to limitations to implementation of certain individual measures that might exist in typical buildings. All of these measures and combinations were simulated in 16 cities representative of specific U.S. climate zones. The modeling results suggest that the most effective energy savings measures are those that affect the demand-side of the building (air-systems and schedules). Many of the demand-side individual measures were capable of reducing annual HVAC system energy consumption by over 20% in most cities that were modeled. Supply side measures affecting HVAC plant conditions were only modestly successful (less than 5% annual HVAC energy savings for
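
    Several of the re-tuning measures listed above are dynamic resets, where a set point varies continuously with outdoor conditions. As a minimal sketch of one such measure, the function below implements a linear supply-air-temperature reset against outdoor air temperature; the break points and temperature limits are illustrative assumptions rather than values from the report.

    ```python
    def supply_air_temp_reset(outdoor_temp_c: float,
                              oat_low: float = 10.0, oat_high: float = 21.0,
                              sat_at_low: float = 16.0, sat_at_high: float = 13.0) -> float:
        """Linear supply-air-temperature (SAT) reset: warmer SAT when it is
        cold outside (less reheat energy), cooler SAT when it is hot (more
        cooling capacity). Break points are illustrative, not from the report."""
        if outdoor_temp_c <= oat_low:
            return sat_at_low
        if outdoor_temp_c >= oat_high:
            return sat_at_high
        frac = (outdoor_temp_c - oat_low) / (oat_high - oat_low)
        return sat_at_low + frac * (sat_at_high - sat_at_low)

    print(supply_air_temp_reset(15.5))  # midpoint of the reset range -> 14.5 degC
    ```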

  8. Energy savings modelling of re-tuning energy conservation measures in large office buildings

    SciTech Connect

    Fernandez, Nick; Katipamula, Srinivas; Wang, Weimin; Huang, Yunzhi; Liu, Guopeng

    2014-10-20

    Today, many large commercial buildings use sophisticated building automation systems (BASs) to manage a wide range of building equipment. While the capabilities of BASs have increased over time, many buildings still do not fully use the BAS’s capabilities and are not properly commissioned, operated or maintained, which leads to inefficient operation, increased energy use, and reduced lifetimes of the equipment. This paper investigates the energy savings potential of several common HVAC system re-tuning measures on a typical large office building, using the Department of Energy’s building energy modeling software, EnergyPlus. The baseline prototype model uses roughly as much energy as an average large office building in existing building stock, but does not utilize any re-tuning measures. Individual re-tuning measures simulated against this baseline include automatic schedule adjustments, damper minimum flow adjustments, thermostat adjustments, as well as dynamic resets (set points that change continuously with building and/or outdoor conditions) to static pressure, supply-air temperature, condenser water temperature, chilled and hot water temperature, and chilled and hot water differential pressure set points. Six combinations of these individual measures have been formulated – each designed to conform to limitations to implementation of certain individual measures that might exist in typical buildings. All the individual measures and combinations were simulated in 16 climate locations representative of specific U.S. climate zones. The modeling results suggest that the most effective energy savings measures are those that affect the demand-side of the building (air-systems and schedules). Many of the demand-side individual measures were capable of reducing annual total HVAC system energy consumption by over 20% in most cities that were modeled. Supply side measures affecting HVAC plant conditions were only modestly successful (less than 5% annual HVAC energy

  9. Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms

    NASA Astrophysics Data System (ADS)

    Gover, Bradford N.; Bradley, John S.

    2004-12-01

    Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a better indicator of security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. Listening tests had 19 subjects rate the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions, and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise, and simple A-weighted level differences are both shown to be well correlated with these latter two thresholds (cadence and audibility), but not well correlated with intelligibility.
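
    The new measure is described as a weighted sum of clipped one-third-octave-band signal-to-noise ratios. The sketch below shows that general form of computation, assuming equal band weights and a 0 dB clipping floor purely for illustration; the paper's specific weightings and clipping levels are not reproduced here.

    ```python
    def clipped_weighted_snr(speech_levels_db, noise_levels_db,
                             weights=None, clip_db=0.0):
        """Weighted sum of clipped one-third-octave-band signal-to-noise ratios.
        Each band SNR (speech level minus noise level, in dB) is clipped from
        below at `clip_db` before weighting. Equal weights and the clip level
        are illustrative assumptions, not the paper's fitted values."""
        if weights is None:
            weights = [1.0 / len(speech_levels_db)] * len(speech_levels_db)
        total = 0.0
        for s, n, w in zip(speech_levels_db, noise_levels_db, weights):
            band_snr = max(s - n, clip_db)   # clip strongly negative band SNRs
            total += w * band_snr
        return total

    # Illustrative band levels (dB) for three one-third-octave bands:
    print(clipped_weighted_snr([50, 48, 45], [52, 40, 44]))  # -> 3.0
    ```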

  10. Measuring Satisfaction in the Program Manager, Procuring Contracting Officer Relationship

    DTIC Science & Technology

    1997-12-01

    Table of contents excerpts: Attaining Customer Satisfaction; Inhibitors of Customer Satisfaction; Measuring Customer Satisfaction; Preconditions…customer satisfaction issues, there has been little research to determine the optimal method for measuring customer satisfaction in the PM-PCO…business. As a result, for Executive departments of the Federal Government, measuring customer satisfaction is no longer the exception but the rule…

  11. A study of ventilation measurement in an office building

    SciTech Connect

    Dols, W.S.; Persily, A.K.

    1995-09-01

    The National Institute of Standards and Technology has conducted a study of ventilation and ventilation measurement techniques in the Bonneville Power Administration (BPA) Building in Portland, Oregon. The project involved the use of the following outdoor air ventilation measurement techniques: tracer gas decay measurements of whole-building air change rates, the determination of air change rates based on peak carbon dioxide (CO₂) concentrations, the determination of percent outdoor air intake using tracer gas (sulfur hexafluoride and occupant-generated CO₂), and direct airflow rate measurements within the air handling system. In addition, air change rate measurements made approximately three years apart with an automated tracer gas decay system were compared. Airflow rates were measured in the air handling system ductwork using pitot tube, hot-wire anemometer, and vane anemometer traverses, and good agreement was obtained between the different techniques. While accurate determinations of percent outdoor air intake were achieved using tracer gas techniques, the use of CO₂ detector tubes yielded unreliable results. Reliable determinations of ventilation rates per person were made based on SF₆ decay and direct airflow rate measurements, but the use of peak CO₂ concentrations led to overestimations of building air change rates. The measured values of the whole-building air change rates, and their dependence on outdoor air temperature, did not change significantly over a three-year period. The whole-building air change rate under minimum outdoor air intake conditions was determined to be twice the outdoor air intake rate provided by the minimum outdoor air intake fans due to leakage through the main outdoor air intake dampers.
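
    For reference, a tracer gas decay measurement in a well-mixed zone yields the air change rate as the negative slope of ln(concentration) versus time. The sketch below implements that textbook calculation with a simple least-squares fit; it is a generic illustration, not the NIST analysis procedure.

    ```python
    import math

    def air_change_rate_per_h(times_h, concentrations_ppb):
        """Tracer gas decay in a well-mixed zone: C(t) = C0 * exp(-N * t),
        so the air change rate N (1/h) is minus the slope of ln(C) versus t.
        Assumes concentrations are well above background."""
        n = len(times_h)
        ln_c = [math.log(c) for c in concentrations_ppb]
        t_mean = sum(times_h) / n
        y_mean = sum(ln_c) / n
        num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, ln_c))
        den = sum((t - t_mean) ** 2 for t in times_h)
        return -num / den

    # Synthetic decay at N = 0.5 air changes per hour:
    times = [0.0, 0.5, 1.0, 1.5, 2.0]
    conc = [100.0 * math.exp(-0.5 * t) for t in times]
    print(round(air_change_rate_per_h(times, conc), 3))  # -> 0.5
    ```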

  12. Impact of human presence on secondary organic aerosols derived from ozone-initiated chemistry in a simulated office environment.

    PubMed

    Fadeyi, Moshood O; Weschler, Charles J; Tham, Kwok W; Wu, Wei Y; Sultan, Zuraimi M

    2013-04-16

    Several studies have documented reductions in indoor ozone levels that occur as a consequence of its reactions with the exposed skin, hair and clothing of human occupants. One would anticipate that consumption of ozone via such reactions would impact co-occurring products derived from ozone's reactions with various indoor pollutants. The present study examines this possibility for secondary organic aerosols (SOA) derived from ozone-initiated chemistry with limonene, a commonly occurring indoor terpene. The experiments were conducted at realistic ozone and limonene concentrations in a 240 m³ chamber configured to simulate a typical open office environment. During an experiment the chamber was either unoccupied or occupied with 18-20 workers. Ozone and particle levels were continuously monitored using a UV photometric ozone analyzer and a fast mobility particle sizer (FMPS), respectively. Under otherwise identical conditions, when workers were present in the simulated office the ozone concentrations were approximately two-thirds and the SOA mass concentrations were approximately one-half of those measured when the office was unoccupied. This was observed whether new or used filters were present in the air handling system. These results illustrate the importance of accounting for occupancy when estimating human exposure to pollutants in various indoor settings.

  13. Short-Duration Simulations from Measurements.

    SciTech Connect

    Mitchell, Dean J.; Enghauser, Michael

    2014-08-01

    A method is presented that ascribes proper statistical variability to simulations that are derived from longer-duration measurements. This method is applicable to simulations of either real-value or integer-value data. An example is presented that demonstrates the applicability of this technique to the synthesis of gamma-ray spectra.
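
    The abstract does not state the algorithm, but for integer-value count data such as gamma-ray spectra a common way to ascribe proper counting statistics to a shorter synthetic measurement is to scale the long-duration counts by the ratio of acquisition times and draw Poisson variates channel by channel. The sketch below shows that generic approach and should not be read as the authors' specific method.

    ```python
    import numpy as np

    def synthesize_short_spectrum(long_counts, long_time_s, short_time_s, rng=None):
        """Scale a long-duration gamma-ray spectrum (counts per channel) to a
        shorter acquisition time and add Poisson counting statistics.
        Generic approach, not necessarily the method of the cited report."""
        rng = np.random.default_rng() if rng is None else rng
        expected = np.asarray(long_counts, dtype=float) * (short_time_s / long_time_s)
        return rng.poisson(expected)

    # Example: a 1-hour spectrum resampled as a 10-second measurement.
    long_spectrum = np.full(1024, 3600.0)          # ~1 count/s/channel (synthetic)
    short = synthesize_short_spectrum(long_spectrum, 3600.0, 10.0)
    print(short[:5], short.mean())                  # mean near 10 counts/channel
    ```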

  14. Power levels in office equipment: Measurements of new monitors and personal computers

    SciTech Connect

    Roberson, Judy A.; Brown, Richard E.; Nordman, Bruce; Webber, Carrie A.; Homan, Gregory H.; Mahajan, Akshay; McWhinney, Marla; Koomey, Jonathan G.

    2002-05-14

    Electronic office equipment has proliferated rapidly over the last twenty years and is projected to continue growing in the future. Efforts to reduce the growth in office equipment energy use have focused on power management to reduce power consumption of electronic devices when not being used for their primary purpose. The EPA ENERGY STAR® program has been instrumental in gaining widespread support for power management in office equipment, and accurate information about the energy used by office equipment in all power levels is important to improving program design and evaluation. This paper presents the results of a field study conducted during 2001 to measure the power levels of new monitors and personal computers. We measured off, on, and low-power levels in about 60 units manufactured since July 2000. The paper summarizes power data collected, explores differences within the sample (e.g., between CRT and LCD monitors), and discusses some issues that arise in metering office equipment. We also present conclusions to help improve the success of future power management programs. Our findings include a trend among monitor manufacturers to provide a single, very low low-power level, and the need to standardize methods for measuring monitor on power in order to more accurately estimate the annual energy consumption of office equipment, as well as actual and potential energy savings from power management.
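
    Annual unit energy consumption is typically estimated by combining the measured power in each mode (on, low power, off) with assumed hours per year spent in each mode. A minimal sketch follows; the power levels and duty cycles in the example are illustrative assumptions, not measurements from the study.

    ```python
    def annual_energy_kwh(power_w_by_mode, hours_by_mode):
        """Annual unit energy = sum over modes of (power in W) * (hours/yr) / 1000.
        Mode power levels and hours below are illustrative assumptions."""
        return sum(power_w_by_mode[m] * hours_by_mode[m] for m in power_w_by_mode) / 1000.0

    monitor_power_w = {"on": 35.0, "low": 2.0, "off": 1.0}        # illustrative
    monitor_hours = {"on": 2000.0, "low": 2000.0, "off": 4760.0}  # 8760 h total
    print(round(annual_energy_kwh(monitor_power_w, monitor_hours), 1))  # ~78.8 kWh
    ```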

  15. A Web-Based Lean Simulation Game for Office Operations: Training the Other Side of a Lean Enterprise

    ERIC Educational Resources Information Center

    Kuriger, Glenn W.; Wan, Huang-da; Mirehei, S. Moussa; Tamma, Saumya; Chen, F. Frank

    2010-01-01

    This research proposes a Web-based version of a lean office simulation game (WeBLOG). The game is designed to be used to train lean concepts to office and administrative personnel. This group belongs to the frequently forgotten side of a lean enterprise. Over four phases, the game presents the following seven lean tools: one-piece flow,…

  16. Office, ambulatory and home blood pressure measurement in children and adolescents.

    PubMed

    Karpettas, Nikos; Kollias, Anastasios; Vazeou, Andriani; Stergiou, George S

    2010-11-01

    There is an increasing interest in pediatric hypertension, the prevalence of which is rising in parallel with the obesity epidemic. Traditionally the assessment of hypertension in children has relied on office blood pressure (BP) measurements by the physician. However, as in adults, office BP might be misleading in children mainly due to the white coat and masked hypertension phenomena. Thus, out-of-office BP assessment, using ambulatory or home monitoring, has gained ground for the accurate diagnosis of hypertension and decision-making. Ambulatory monitoring is regarded as indispensable for the evaluation of pediatric hypertension. Preliminary data support the usefulness of home monitoring, yet more evidence is needed. Office, ambulatory and home BP normalcy tables providing thresholds for diagnosis have been published and should be used for the assessment of elevated BP in children.

  17. Generation of Requirements for Simulant Measurements

    NASA Technical Reports Server (NTRS)

    Rickman, D. L.; Schrader, C. M.; Edmunson, J. E.

    2010-01-01

    This TM presents a formal, logical explanation of the parameters selected for the figure of merit (FoM) algorithm. The FoM algorithm is used to evaluate lunar regolith simulant. The objectives, requirements, assumptions, and analysis behind the parameters are provided. A requirement is derived to verify and validate simulant performance versus lunar regolith from NASA's objectives for lunar simulants. This requirement leads to a specification that comparative measurements be taken the same way on the regolith and the simulant. In turn, this leads to a set of nine criteria with which to evaluate comparative measurements. Many of the potential measurements of interest are not defensible under these criteria. For example, many geotechnical properties of interest were not explicitly measured during Apollo and they can only be measured in situ on the Moon. A 2005 workshop identified 32 properties of major interest to users. Virtually all of the properties are tightly constrained, though not predictable, if just four parameters are controlled. Three parameters (composition, size, and shape) are recognized as being definable at the particle level. The fourth parameter (density) is a bulk property. In recent work, a fifth parameter (spectroscopy) has been identified, which will need to be added to future releases of the FoM.

  18. Statistical Analysis and Modeling of Occupancy Patterns in Open-Plan Offices using Measured Lighting-Switch Data

    SciTech Connect

    Chang, Wen-Kuei; Hong, Tianzhen

    2013-01-01

    Occupancy profile is one of the driving factors behind discrepancies between the measured and simulated energy consumption of buildings. The frequencies of occupants leaving their offices and the corresponding durations of absences have significant impact on energy use and the operational controls of buildings. This study used statistical methods to analyze the occupancy status, based on measured lighting-switch data in five-minute intervals, for a total of 200 open-plan (cubicle) offices. Five typical occupancy patterns were identified based on the average daily 24-hour profiles of the presence of occupants in their cubicles. These statistical patterns were represented by a one-square curve, a one-valley curve, a two-valley curve, a variable curve, and a flat curve. The key parameters that define the occupancy model are the average occupancy profile together with probability distributions of absence duration, and the number of times an occupant is absent from the cubicle. The statistical results also reveal that the number of absence occurrences decreases as total daily presence hours decrease, and the duration of absence from the cubicle decreases as the frequency of absence increases. The developed occupancy model captures the stochastic nature of occupants moving in and out of cubicles, and can be used to generate a more realistic occupancy schedule. This is crucial for improving the evaluation of the energy saving potential of occupancy based technologies and controls using building simulations. Finally, to demonstrate the use of the occupancy model, weekday occupant schedules were generated and discussed.
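
    The occupancy model described combines an average presence profile with distributions of absence frequency and duration. The toy sketch below generates a stochastic daily schedule at five-minute resolution from an average presence-probability profile only; the profile used is an illustrative placeholder, and the conditioning on absence duration and frequency described in the paper is omitted.

    ```python
    import random

    def generate_daily_schedule(presence_prob_by_interval, seed=None):
        """Draw a 0/1 presence state for each 5-minute interval of a day from an
        average presence-probability profile (288 values). This captures only the
        marginal probabilities; the paper's model also conditions on absence
        duration and frequency, which this toy sketch omits."""
        rng = random.Random(seed)
        return [1 if rng.random() < p else 0 for p in presence_prob_by_interval]

    # Illustrative "one-square" profile: mostly present 08:00-17:00, absent otherwise.
    profile = [0.9 if 96 <= i < 204 else 0.05 for i in range(288)]  # 288 x 5 min
    schedule = generate_daily_schedule(profile, seed=1)
    print(sum(schedule) / 12.0, "hours present (approx.)")
    ```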

  19. Relationships Among Stress Measures, Risk Factors, and Inflammatory Biomarkers in Law Enforcement Officers

    PubMed Central

    Ramey, Sandra L.; Downing, Nancy R.; Franke, Warren D.; Perkhounkova, Yelena; Alasagheirin, Mohammad H.

    2011-01-01

    Law enforcement officers suffer higher morbidity and mortality rates from all causes than the general population. Cardiovascular disease (CVD) accounts for a significant portion of the excess illness, with a reported prevalence as high as 1.7 times that of the general population. To determine which occupational hazards cause this increased risk and morbidity, it is imperative to study law enforcement officers before they retire. The long-range goal of our research is to reduce the incidence of CVD-related illness and death among aging law enforcement officers. The purpose of the present study was to measure pro- and anti-atherogenic inflammatory markers in blood samples from law enforcement officers (n = 71) and determine what types of occupation-related stress correlate with differences in these markers. For each outcome variable of interest, we developed separate regression models. Two groups of potential predictors were examined for inclusion in the models. Selected measures of stress were examined for inclusion in the models, in addition to general covariates, such as gender, ethnicity, years in law enforcement, and body mass index. Our results revealed statistically significant relationships between several physiologic variables and measures of stress. PMID:21362637

  20. A Simulation Tool for the Duties of Computer Specialist Non-Commissioned Officers on a Turkish Air Force Base

    DTIC Science & Technology

    2009-09-01

    Conducted at the MOVES Institute; report date September 2009. A Simulation Tool for the Duties of Computer Specialist Non-Commissioned Officers on a Turkish Air Force Base. …simulation tool by using a prototypical model of the computer system specialist non-commissioned officers' jobs on a Turkish Air Force Base, and to…

  1. TE/TM Simulations of Interferometric Measurements

    NASA Technical Reports Server (NTRS)

    Houshmand, Bijan

    2000-01-01

    Interferometric synthetic aperture radar (IFSAR) measurements at X-, C-, L-, and P-band are used to derive ground topography at meter level resolution. Interpretation of the derived topography requires attention due to the complex interaction of the radar signal with ground cover. The presence of penetrable surfaces such as vegetation and tree canopies poses a challenge since the depth of penetration depends on a number of parameters such as the operating radar frequency, polarization, incident angle, as well as terrain structure. The dependence of the reconstructed topography on polarization may lead to the characterization of the ground cover. Simulation of interferometric measurements is useful for interpretation of the derived topography (B. Houshmand, Proceedings of URSI, 314, 1997). In this talk, time domain simulations for interferometric measurement for TE- and TM-polarization are presented. Time domain simulation includes the effects of the surface material property as well as geometry comparable to the radar signal wavelength (B. Houshmand, Proceedings of the URSI, 25, 1998). The IFSAR simulation is carried out in two steps. First, the forward scattering data is generated based on full wave analysis. Next, the electromagnetic information is inverted to generate surface topography. This inversion is based on the well known IFSAR processing technique which is composed of signal compression and formation of an interferogram. The full wave forward scattering data is generated by the scattered-field formulation of the FDTD algorithm. The simulation is carried out by exciting the computational domain by a radar signal. The scattered field is then computed and translated to the receiving interferometric antennas using the time-domain Huygen's principle. The inversion process starts by compressing the time-domain data. The range compressed data from both receivers are then coregistered to form an interferogram. The resulting interferogram is then related to the

  2. Simulations for the Development of Thermoelectric Measurements

    NASA Astrophysics Data System (ADS)

    Zabrocki, Knud; Ziolkowski, Pawel; Dasgupta, Titas; de Boor, Johannes; Müller, Eckhard

    2013-07-01

    In thermoelectricity, continuum theoretical equations are usually used for the calculation of the characteristics and performance of thermoelectric elements, modules or devices as a function of external parameters (material, geometry, temperatures, current, flow, load, etc.). An increasing number of commercial software packages aimed at applications, such as COMSOL and ANSYS, contain kernels using direct thermoelectric coupling. Application of these numerical tools also allows analysis of physical measurement conditions and can lead to specifically adapted methods for developing special test equipment required for the determination of TE material and module properties. System-theoretical and simulation-based considerations of favorable geometries are taken into account to create draft sketches in the development of such measurement systems. Particular consideration is given to the development of transient measurement methods, which have great advantages compared with the conventional static methods in terms of the measurement duration required. In this paper the benefits of using numerical tools in designing measurement facilities are shown using two examples. The first is the determination of geometric correction factors in four-point probe measurement of electrical conductivity, whereas the second example is focused on the so-called combined thermoelectric measurement (CTEM) system, where all thermoelectric material properties (Seebeck coefficient, electrical and thermal conductivity, and Harman measurement of zT) are measured in a combined way. Here, we want to highlight especially the measurement of thermal conductivity in a transient mode. Factors influencing the measurement results such as coupling to the environment due to radiation, heat losses via the mounting of the probe head, as well as contact resistance between the sample and sample holder are illustrated, analyzed, and discussed. By employing the results of the simulations, we have developed an

  3. Automated measurement of office, home and ambulatory blood pressure in atrial fibrillation.

    PubMed

    Kollias, Anastasios; Stergiou, George S

    2014-01-01

    1. Hypertension and atrial fibrillation (AF) often coexist and are strong risk factors for stroke. Current guidelines for blood pressure (BP) measurement in AF recommend repeated measurements using the auscultatory method, whereas the accuracy of the automated devices is regarded as questionable. This review presents the current evidence on the feasibility and accuracy of automated BP measurement in the presence of AF and the potential for automated detection of undiagnosed AF during such measurements. 2. Studies evaluating the use of automated BP monitors in AF are limited and have significant heterogeneity in methodology and protocols. Overall, the oscillometric method is feasible for static (office or home) and ambulatory use and appears to be more accurate for systolic than diastolic BP measurement. 3. Given that systolic hypertension is particularly common and important in the elderly, the automated BP measurement method may be acceptable for self-home and ambulatory monitoring, but not for professional office or clinic measurement. 4. An embedded algorithm for the detection of asymptomatic AF during routine automated BP measurement with high diagnostic accuracy has been developed and appears to be a useful screening tool for elderly hypertensives.
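
    The embedded AF-detection algorithms referred to typically flag atrial fibrillation from the irregularity of the pulse intervals recorded during cuff deflation. The sketch below illustrates that general idea with a coefficient-of-variation test and an arbitrary threshold; it is not the validated proprietary algorithm evaluated in the literature.

    ```python
    import statistics

    def suspect_af(pulse_intervals_ms, cv_threshold=0.08):
        """Flag a measurement as possibly atrial fibrillation when the pulse
        intervals are highly irregular (coefficient of variation above an
        illustrative threshold). Not the validated clinical algorithm."""
        mean = statistics.mean(pulse_intervals_ms)
        cv = statistics.stdev(pulse_intervals_ms) / mean
        return cv > cv_threshold

    regular = [800, 810, 790, 805, 795, 800]           # ms, sinus-like rhythm
    irregular = [620, 910, 540, 1020, 700, 860]        # ms, AF-like rhythm
    print(suspect_af(regular), suspect_af(irregular))  # False True
    ```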

  4. Prototypes of Cognitive Measures for Air Force Officers: Test Development and Item Banking

    DTIC Science & Technology

    1990-05-01

    AFHRL-TP-89-737. Prototypes of Cognitive Measures for Air Force Officers: Test Development and Item Banking. Frances…, Jacobina Skinner. Manpower and Personnel Division, Brooks Air Force Base, Texas 78235-5601. May 1990. Final Technical Paper for Period September 1987…November 1989. Approved for public release; distribution is unlimited. …Laboratory, Air Force Systems Command, Brooks Air Force Base, Texas.

  5. Crystalline lens MTF measurement during simulated accommodation

    NASA Astrophysics Data System (ADS)

    Borja, David; Takeuchi, Gaku; Ziebarth, Noel; Acosta, Ana C.; Manns, Fabrice; Parel, Jean-Marie

    2005-04-01

    Purpose: To design and test an optical system to measure the optical quality of post mortem lenses during simulated accommodation. Methods: An optical bench-top system was designed to measure the point spread function and calculate the modulation transfer function (MTF) of monkey and human ex-vivo crystalline lenses. The system consists of a superluminescent diode emitting at 850 nm, collimated into a 3 mm beam which is focused by the ex-vivo lens under test. The intensity distribution at the focus (point spread function) is re-imaged and magnified onto a beam profiler CCD camera. The optical quality in terms of spatial frequency response (modulation transfer function) is calculated by Fourier transform of the point spread function. The system was used on ex-vivo lenses with attached zonules, ciliary body and sclera. The sclera was glued to 8 separate PMMA segments and stretched radially by 5 mm on an accommodation-simulating lens-stretching device. The point spread function was measured for each lens in the relaxed and stretched state for 5 human (ages 38-86 years) and 5 cynomolgus monkey (ages 53-67 months) fresh post mortem crystalline lenses. Results: Stretching induced measurable changes in the MTF. The cutoff frequency increased from 54.4±13.6 lp/mm unstretched to 59.5±21.4 lp/mm stretched in the post-presbyopic human lenses and from 51.9±24.7 lp/mm unstretched to 57.7±18.5 lp/mm stretched in the cynomolgus monkey lenses. Conclusion: The results demonstrate the feasibility of measuring the optical quality of ex-vivo human and cynomolgus monkey lenses during simulated accommodation. Additional experiments are underway to quantify changes in optical quality induced by stretching.
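
    The optical-quality metric here is the modulation transfer function obtained by Fourier transforming the measured point spread function. A one-dimensional numpy sketch of that step is given below; real processing operates on the 2-D beam-profiler image and calibrates spatial frequency in lp/mm, which this illustration only approximates through an assumed pixel pitch.

    ```python
    import numpy as np

    def mtf_from_psf(psf_1d, pixel_pitch_mm):
        """MTF is the magnitude of the Fourier transform of the PSF,
        normalised to 1 at zero spatial frequency. 1-D illustration only."""
        psf = np.array(psf_1d, dtype=float)
        psf /= psf.sum()                                     # normalise energy
        mtf = np.abs(np.fft.rfft(psf))
        freqs = np.fft.rfftfreq(psf.size, d=pixel_pitch_mm)  # cycles (lp) per mm
        return freqs, mtf / mtf[0]

    # Gaussian PSF sampled at an assumed 5 um pixel pitch (illustrative numbers).
    x = np.arange(-128, 128) * 0.005                         # position in mm
    psf = np.exp(-(x / 0.02) ** 2)
    freqs, mtf = mtf_from_psf(psf, 0.005)
    print(freqs[mtf > 0.1].max(), "lp/mm (highest frequency with MTF > 0.1)")
    ```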

  6. Strategies for classifying patients based on office, home, and ambulatory blood pressure measurement.

    PubMed

    Zhang, Lu; Li, Yan; Wei, Fang-Fei; Thijs, Lutgarde; Kang, Yuan-Yuan; Wang, Shuai; Xu, Ting-Yan; Wang, Ji-Guang; Staessen, Jan A

    2015-06-01

    Hypertension guidelines propose home or ambulatory blood pressure monitoring as indispensable after office measurement. However, whether preference should be given to home or ambulatory monitoring remains undetermined. In 831 untreated outpatients (mean age, 50.6 years; 49.8% women), we measured office (3 visits), home (7 days), and 24-h ambulatory blood pressures. We applied hypertension guidelines for cross-classification of patients into normotension or white-coat, masked, or sustained hypertension. Based on office and home blood pressures, the prevalence of white-coat, masked, and sustained hypertension was 61 (10.3%), 166 (20.0%), and 162 (19.5%), respectively. Using daytime (from 8 am to 6 pm) instead of home blood pressure confirmed the cross-classification in 575 patients (69.2%), downgraded risk from masked hypertension to normotension (n=24) or from sustained to white-coat hypertension (n=9) in 33 (4.0%), but upgraded risk from normotension to masked hypertension (n=179) or from white-coat to sustained hypertension (n=44) in 223 (26.8%). Analyses based on 24-h ambulatory blood pressure were confirmatory. In adjusted analyses, both the urinary albumin-to-creatinine ratio (+20.6%; confidence interval, 4.4-39.3) and aortic pulse wave velocity (+0.30 m/s; confidence interval, 0.09-0.51) were higher in patients who moved up to a higher risk category. Both indexes of target organ damage and central augmentation index were positively associated (P≤0.048) with the odds of being reclassified. In conclusion, for reliably diagnosing hypertension and starting treatment, office measurement should be followed by ambulatory blood pressure monitoring. Using home instead of ambulatory monitoring misses the high-risk diagnoses of masked or sustained hypertension in over 25% of patients.
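
    The cross-classification used in this study follows the usual guideline logic: office BP above or below its threshold combined with out-of-office (home or daytime ambulatory) BP above or below its threshold. The sketch below encodes that logic with the commonly cited cut-offs of 140/90 mm Hg (office) and 135/85 mm Hg (home or daytime ambulatory); treat the exact thresholds as assumptions, since the abstract does not restate them.

    ```python
    def classify_bp(office_sys, office_dia, out_sys, out_dia,
                    office_thr=(140, 90), out_thr=(135, 85)):
        """Cross-classify a patient from office and out-of-office (home or
        daytime ambulatory) blood pressure. Thresholds are the commonly used
        guideline values and are assumptions here, not quoted from the paper."""
        office_high = office_sys >= office_thr[0] or office_dia >= office_thr[1]
        out_high = out_sys >= out_thr[0] or out_dia >= out_thr[1]
        if office_high and out_high:
            return "sustained hypertension"
        if office_high:
            return "white-coat hypertension"
        if out_high:
            return "masked hypertension"
        return "normotension"

    print(classify_bp(148, 92, 128, 78))  # -> white-coat hypertension
    print(classify_bp(132, 84, 138, 86))  # -> masked hypertension
    ```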

  7. Control over the Scheduling of Simulated Office Work Reduces the Impact of Workload on Mental Fatigue and Task Performance

    ERIC Educational Resources Information Center

    Hockey, G. Robert J.; Earle, Fiona

    2006-01-01

    Two experiments tested the hypothesis that task-induced mental fatigue is moderated by control over work scheduling. Participants worked for 2 hr on simulated office work, with control manipulated by a yoking procedure. Matched participants were assigned to conditions of either high control (HC) or low control (LC). HC participants decided their…

  8. An Exploratory Energy Analysis of Electrochromic Windows in Small and Medium Office Buildings - Simulated Results Using EnergyPlus

    SciTech Connect

    Belzer, David B.

    2010-08-01

    The Department of Energy's (DOE) Building Technologies Program (BTP) has had an active research program in supporting the development of electrochromic (EC) windows. Electrochromic glazings used in these windows have the capability of varying the transmittance of light and heat in response to an applied voltage. This dynamic property allows these windows to reduce lighting, cooling, and heating energy in buildings where they are employed. The exploratory analysis described in this report examined three different variants of EC glazings, characterized by the amount of visible light and solar heat gain they admit (as measured by the solar heat gain coefficients [SHGC] in their “clear” or transparent states). For these EC glazings, the dynamic ranges of the SHGCs between their “dark” (or tinted) state and the clear state were: 0.22 - 0.70 (termed “high” SHGC); 0.16 - 0.39 (termed “low” SHGC); and 0.13 - 0.19 (termed “very low” SHGC). These glazings are compared to conventional (static) glazing that meets the ASHRAE Standard 90.1-2004 energy standard for five different locations in the U.S. All analyses used the EnergyPlus building energy simulation program for modeling EC windows and alternative control strategies. The simulations were conducted for a small and a medium office building, where engineering specifications were taken from the set of Commercial Building Benchmark building models developed by BTP. On the basis of these simulations, total source-level savings in these buildings were estimated to range from 2% to 7%, depending on the amount of window area and building location.

  9. Are pressure measurements effective in the assessment of office chair comfort/discomfort? A review.

    PubMed

    Zemp, Roland; Taylor, William R; Lorenzetti, Silvio

    2015-05-01

    Nowadays, the majority of jobs in the western world involve sitting in an office chair. As a result, a comfortable and supported sitting position is essential for employees. In the literature, various objective methods (e.g. pressure measurements, measurements of posture, EMG etc.) have been used to assess sitting comfort/discomfort, but their validity remains unknown. This review therefore examines the relationship between subjective comfort/discomfort and pressure measurements while sitting in office chairs. The literature search resulted in eight papers that met all our requirements. Four studies identified a relationship between subjective comfort/discomfort and pressure distribution parameters (including correlations of up to r = 0.7 ± 0.13). However, the technique used to evaluate subjective comfort/discomfort appears to play an important role in the results achieved, which places their validity in question. The peak pressure on the seat pan, the pressure distribution on the backrest and the pressure pattern changes (seat pan and backrest) all appear to be reliable measures for quantifying comfort or discomfort.
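
    As a minimal sketch of the kind of analysis this review surveys, the snippet below computes a Pearson correlation between subjective discomfort ratings and peak seat-pan pressure; the data are synthetic placeholders, not values from any of the eight papers.

      # Minimal sketch of the type of analysis surveyed in this review: a Pearson
      # correlation between subjective discomfort ratings and peak seat-pan
      # pressure. The data are synthetic placeholders, not study data.
      import numpy as np
      from scipy.stats import pearsonr

      discomfort = np.array([2, 3, 5, 4, 6, 7, 5, 8])                         # e.g. 10-point ratings
      peak_pressure_kpa = np.array([4.1, 4.5, 5.9, 5.2, 6.3, 7.0, 5.8, 7.4])  # peak seat-pan pressure

      r, p_value = pearsonr(discomfort, peak_pressure_kpa)
      print(f"r = {r:.2f}, p = {p_value:.3f}")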

  10. In Situ Measurement Activities at the Nasa Orbital Debris Program Office

    NASA Technical Reports Server (NTRS)

    Liou, J.-C.; Burchell, M.; Corsaro, R.; Drolshagen, G.; Giovane, F.; Pisacane, V.; Stansbery, E.

    2009-01-01

    The NASA Orbital Debris Program Office has been involved in the development of several particle impact instruments since 2003. The main objective of this development is to eventually conduct in situ measurements to better characterize the small (millimeter or smaller) orbital debris and micrometeoroid populations in the near-Earth environment. In addition, the Office also supports similar instrument development to define the micrometeoroid and lunar secondary ejecta environment for future lunar exploration activities. The instruments include impact acoustic sensors, resistive grid sensors, fiber optic displacement sensors, and impact ionization sensors. They rely on different mechanisms and detection principles to identify particle impacts. A system consisting of these different sensors will provide data that are complementary to each other, and will provide a better description of the physical and dynamical properties (e.g., size, mass, and impact speed) of the particles in the environment. Details of several systems being considered by the Office and their intended mission objectives are summarized in this paper.

  11. Evaluating the Met Office Unified Model simulated land surface temperature (LST) using a multi-platform approach

    NASA Astrophysics Data System (ADS)

    Brooke, Jennifer; Harlow, Chawn; Best, Martin; Newman, Stuart; Scott, Russell; Edwards, John; Thelen, Jean-Claude; Pavelin, Ed; Weeks, Mark

    2015-04-01

    The Met Office Unified Model (UM) has a significant cold bias in land surface temperature (LST) in semi-arid regions at global resolution, and limited area 4.4 km and 2.2 km configurations. The daytime LST cold bias simulated by the JULES land surface scheme within the UM is present throughout the annual cycle in semi-arid regions of the globe in comparison to IASI retrievals. These errors are largest in late spring and early summer and have magnitudes of 5 to 15 K, dependent on model resolution. This work will show verification of model biases through ground-based, in-situ airborne and satellite observations during the Semi-Arid Land Surface Temperature and IASI Calibration Experiment (SALSTICE) in semi-arid south-eastern Arizona in May 2013. Airborne observations of LST from the FAAM research aircraft using the Airborne Research Interferometer Evaluation System (ARIES) were used to investigate the spatial distribution of the model errors and evaluate IASI retrievals. Airborne retrievals of surface temperature were found to broadly agree with IASI retrievals; uncertainties are attributed to the spatial variability in the ARIES measurements compared with the IASI footprints and due to differences within the retrieval, such as assumed emissivity. The UM errors in LST were found to vary with model resolution as well as topographic complexity, with the coarse resolution global model having larger errors than the limited area models. Regions with complex terrain had the highest LST errors while the errors over the less complex basins were lower, in the range of 4-5 K. Evaluation of the JULES land surface scheme has been performed for flux tower sites in the Walnut Gulch Experimental Watershed in south-eastern Arizona. An annual dataset of flux tower measurements confirms the LST biases seen with aircraft and satellite observations and indicates that night-time LST biases are of the order of those observed during the day. Comparisons of different model resolutions show
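
    A minimal sketch of the bias statistic implied by this evaluation (simulated minus observed LST) is given below; the arrays are synthetic stand-ins for UM output and for ARIES, IASI or flux-tower retrievals.

      # Sketch of the bias statistic implied by the evaluation above: mean and RMS
      # difference between simulated and observed land surface temperature.
      # The arrays are synthetic stand-ins for UM output and LST retrievals.
      import numpy as np

      lst_model = np.array([305.2, 308.1, 310.4, 307.8, 309.0])   # K, simulated
      lst_obs   = np.array([312.0, 315.5, 318.9, 314.2, 316.7])   # K, observed

      diff = lst_model - lst_obs
      print(f"mean bias: {diff.mean():+.1f} K (negative = model cold bias)")
      print(f"RMSE:      {np.sqrt((diff ** 2).mean()):.1f} K")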

  12. Home and Office Blood Pressure Control among Treated Hypertensive Patients in Japan: Findings from the Japan Home versus Office Blood Pressure Measurement Evaluation (J-HOME) Study

    PubMed Central

    Obara, Taku; Ohkubo, Takayoshi; Satoh, Michihiro; Mano, Nariyasu; Imai, Yutaka

    2010-01-01

    Appropriate control of blood pressure (BP) is essential for prevention of future cardiovascular events. However, BP control among treated hypertensive patients has been insufficient. Recently, the usefulness of self-measured BP at home (home BP measurement) for the management of hypertension has been reported in many studies. We evaluated BP control both at home and in the office among treated hypertensive patients in primary care settings in Japan (the J-HOME study). We found poor control of home and office BPs and clarified some factors affecting control. We also examined factors associated with the magnitude of the white-coat effect, the morning–evening BP difference, and home heart rate in this J-HOME study. PMID:27713260

  13. Predictors of Speed Using Off-Ice Measures of College Hockey Players.

    PubMed

    Runner, Aaron R; Lehnhard, Robert A; Butterfield, Stephen A; Tu, Shihfen; O'Neill, Terrence

    2016-06-01

    The purpose of this study was to examine the relationship between commonly employed dry-land performance tests and skating speed in male collegiate ice hockey players. Forty male National Collegiate Athletic Association Division I hockey players were tested on the following performance variables: vertical jump (VJ), standing broad jump, 40-yard dash, and maximal back squat (SQT). The subjects also performed 3 skating tests: the 90-ft forward acceleration test, the 90-ft backward acceleration test, and the 50-ft flying top speed test (F50). Pearson correlation coefficients were applied to compare the strength of association between each selected off-ice measure and each on-ice measure. Three multiple regression equations were then used to compare the weighted strengths of association between predictor and criterion variables. Only VJ showed significance in relation to skating speed (p = 0.011). These results suggest that meaningful performance testing in ice hockey players should occur mainly on the ice.

  14. Characterization of emissions from a desktop 3D printer and indoor air measurements in office settings.

    PubMed

    Steinle, Patrick

    2016-01-01

    Emissions from a desktop 3D printer based on fused deposition modeling (FDM) technology were measured in a test chamber and indoor air was monitored in office settings. Ultrafine aerosol (UFA) emissions were higher while printing a standard object with polylactic acid (PLA) than with acrylonitrile butadiene styrene (ABS) polymer (2.1 × 10⁹ vs. 2.4 × 10⁸ particles/min). Prolonged use of the printer led to higher emission rates (factor 2 with PLA and 4 with ABS, measured after seven months of occasional use). UFA consisted mainly of volatile droplets, and some small (100-300 nm diameter) iron-containing and soot-like particles were found. Emissions of inhalable and respirable dust were below the limit of detection (LOD) when measured gravimetrically, and only slightly higher than background when measured with an aerosol spectrometer. Emissions of volatile organic compounds (VOC) were in the range of 10 µg/min. Styrene accounted for more than 50% of total VOC emitted when printing with ABS; for PLA, methyl methacrylate (MMA, 37% of TVOC) was detected as the predominant compound. Two polycyclic aromatic hydrocarbons (PAH), fluoranthene and pyrene, were observed in very low amounts. All other analyzed PAH, as well as inorganic gases and metal emissions except iron (Fe) and zinc (Zn), were below the LOD or did not differ from background without printing. A single 3D print (165 min) in a large, well-ventilated office did not significantly increase the UFA and VOC concentrations, whereas these were readily detectable in a small, unventilated room, with UFA concentrations increasing by 2,000 particles/cm³ and MMA reaching a peak of 21 µg/m³ and still being detectable in the room even 20 hr after printing.

  15. Large-Scale Hybrid Dynamic Simulation Employing Field Measurements

    SciTech Connect

    Huang, Zhenyu; Guttromson, Ross T.; Hauer, John F.

    2004-06-30

    Simulation and measurements are two primary ways for power engineers to gain understanding of system behaviors and thus accomplish tasks in system planning and operation. Many well-developed simulation tools are available in today's market. On the other hand, large amounts of measured data can be obtained from traditional SCADA systems and from the fast-growing phasor networks. However, simulation and measurement are still two separate worlds, and there is a need to combine their advantages. In view of this, this paper proposes the concept of hybrid dynamic simulation, which opens up traditional simulation by providing entry points for measurements. A method is presented to implement hybrid simulation with PSLF/PSDS. Test studies show the validity of the proposed hybrid simulation method. Applications of such hybrid simulation include system event playback, model validation, and software validation.

  16. The Foreign Language Anxiety in a Medical Office Scale: developing and validating a measurement tool for Spanish-speaking individuals.

    PubMed

    Guntzviller, Lisa M; Jensen, Jakob D; King, Andy J; Davis, LaShara A

    2011-09-01

    Communication research has been hindered by a lack of validated measures for Latino populations. To develop and validate a foreign language anxiety in a medical office scale (the Foreign Language Anxiety in a Medical Office Scale [FLAMOS]), the authors conducted a survey of low income, primarily Spanish-speaking Latinos (N=100). The scale factored into a unidimensional construct and showed high reliability (α=.92). The Foreign Language Anxiety in a Medical Office Scale also demonstrated convergent and divergent validity compared with other communication anxiety scales (Personal Report of Communication Apprehension-24, Communication Anxiety Inventory, and Recipient Apprehension Test), and predictive validity for acculturation measures (the Short Acculturation Scale for Hispanics). The Foreign Language Anxiety in a Medical Office Scale provides a validated measure for researchers and may help to explain Latino health care communication barriers.
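
    The reported reliability (α = .92) is a Cronbach's alpha; the sketch below shows how such a coefficient is computed from item-level responses. The response matrix and the five-item length are synthetic assumptions, not the actual FLAMOS items.

      # Minimal sketch of how a Cronbach's alpha such as the reported .92 is
      # computed from item-level responses. The response matrix is synthetic and
      # the number of items is arbitrary, not the actual FLAMOS item count.
      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: respondents x items matrix of scale scores."""
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1)
          total_variance = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(100, 1))                      # shared anxiety factor
      responses = latent + 0.5 * rng.normal(size=(100, 5))    # 5 correlated items
      print(f"alpha = {cronbach_alpha(responses):.2f}")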

  17. Simulating UT measurements from bolthole cracks

    NASA Astrophysics Data System (ADS)

    Grandin, Robert; Gray, Tim; Roberts, Ron

    2016-02-01

    Analytical computer models of UT measurements are becoming more prominent in evaluating NDE methods - a process known as Model Assisted Probability of Detection, or MAPOD. As inspection requirements become more stringent, the respective models become more complex. An important application for aerospace structures involves inspection for cracks near boltholes in plate and layered structures. This paper describes a project to develop and validate analytical models for bolthole crack inspection, as well as to implement and demonstrate those models within an integrated graphical interface which can be used to simulate these inspections. The work involves a combination of approximate, paraxial, bulk-wave models as well as more rigorous, analytical models that include both bulk and surface/plate modes. The simpler models have greater flexibility and efficiency for handling complex geometry, while the more exact models are useful for benchmarking and assessing the accuracy of the paraxial versions. Model results will be presented for bolthole cracks in single layered components. Extensions of the models to multiple layers and to more complex geometries and materials will also be discussed.

  18. Risk Associated with Pulse Pressure on Out-of-Office Blood Pressure Measurement

    PubMed Central

    Gu, Yu-Mei; Aparicio, Lucas S.; Liu, Yan-Ping; Asayama, Kei; Hansen, Tine W.; Niiranen, Teemu J.; Boggia, José; Thijs, Lutgarde; Staessen, Jan A.

    2014-01-01

    Background: Longitudinal studies have demonstrated that the risk of cardiovascular disease increases with pulse pressure (PP). However, PP remains an elusive cardiovascular risk factor with findings being inconsistent between studies. The 2013 ESH/ESC guideline proposed that PP is useful in stratification and suggested a threshold of 60 mm Hg, which is 10 mm Hg higher compared to that in the 2007 guideline; however, no justification for this increase was provided. Methodology: Published thresholds of PP are based on office blood pressure measurement and often on arbitrary categorical analyses. In the International Database on Ambulatory blood pressure in relation to Cardiovascular Outcomes (IDACO) and the International Database on HOme blood pressure in relation to Cardiovascular Outcome (IDHOCO), we determined outcome-driven thresholds for PP based on ambulatory or home blood pressure measurement, respectively. Results: The main findings were that for people aged <60 years, PP did not refine risk stratification, whereas in older people the thresholds were 64 and 76 mm Hg for the ambulatory and home PP, respectively. However, PP provided little added predictive value over and beyond classical risk factors. PMID:26587443

  19. Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review.

    PubMed

    Alberdi, Ane; Aztiria, Asier; Basarab, Adrian

    2016-02-01

    Stress is a major problem of our society, as it is the cause of many health problems and huge economic losses in companies. Continuous high mental workloads and non-stop technological development, which lead to constant change and the need for adaptation, make the problem increasingly serious for office workers. To prevent stress from becoming chronic and provoking irreversible damage, it is necessary to detect it in its early stages. Unfortunately, an automatic, continuous and unobtrusive early stress detection method does not exist yet. The multimodal nature of stress and the research conducted in this area suggest that the developed method will depend on several modalities. Thus, this work reviews and brings together recent work on automatic stress detection, surveying the measurements used across the three main modalities (psychological, physiological and behavioural), along with contextual measurements, in order to indicate the most appropriate techniques to be used and thereby facilitate the development of such a holistic system.

  20. Associations of Objectively Measured and Self-Reported Sleep Duration With Carotid Artery Intima Media Thickness Among Police Officers

    PubMed Central

    Ma, Claudia C.; Burchfiel, Cecil M.; Charles, Luenda E.; Dorn, Joan M.; Andrew, Michael E.; Gu, Ja Kook; Joseph, Parveen Nedra; Fekedulegn, Desta; Slaven, James E.; Hartley, Tara A.; Mnatsakanova, Anna; Violanti, John M.

    2015-01-01

    Background: We aimed to examine the association of objectively measured and self-reported sleep duration with carotid artery intima media thickness (IMT) among 257 police officers, a group at high risk for cardiovascular disease (CVD). Methods: Sleep duration was estimated using actigraphic data and through self-reports. The mean maximum IMT was the average of the largest 12 values scanned bilaterally from three angles of the near and far wall of the common carotid, bulb, and internal carotid artery. Linear and quadratic regression models were used to assess the association of sleep duration with IMT. Results: Officers who had fewer than 5 hr, or 8 hr or more, of objectively measured sleep had significantly higher maximum IMT values, independent of age. Self-reported sleep duration was not associated with either IMT measure. Conclusion: Attainment of sufficient sleep duration may be considered as a possible strategy for atherosclerosis prevention among police officers. PMID:24038303
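
    A minimal sketch of the linear and quadratic fits described above is given below, relating IMT to sleep duration; the data are synthetic and chosen only to produce a U-shaped association, not study data.

      # Sketch of the linear and quadratic fits described above, relating carotid
      # IMT to sleep duration. The data are synthetic and chosen only to show a
      # U-shaped (quadratic) association; they are not study data.
      import numpy as np

      sleep_hr = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0])
      imt_mm   = np.array([0.98, 0.95, 0.90, 0.87, 0.85, 0.84, 0.85, 0.87, 0.90, 0.94, 0.99])

      linear = np.polyfit(sleep_hr, imt_mm, deg=1)
      quadratic = np.polyfit(sleep_hr, imt_mm, deg=2)
      print("linear coefficients   :", np.round(linear, 4))
      print("quadratic coefficients:", np.round(quadratic, 4))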

  1. Simulating a Senate Office: The Impact on Student Knowledge and Attitudes

    ERIC Educational Resources Information Center

    Lay, J. Celeste; Smarick, Kathleen J.

    2006-01-01

    Although many instructors are now using simulations and other experiential pedagogies in their classrooms, the effectiveness of such tools has generally not been examined in a systematic way. In this paper, we assess the effectiveness of a simulation of the legislative process in the U.S. Senate as a tool for teaching college students about the…

  2. Variations in the Lester Hill Office Simulation at the Darmstadt Career Center.

    ERIC Educational Resources Information Center

    Douglas, Willard B.; Carter, Thomas C., III

    1979-01-01

    Describes Lester Hill, a simulated corporation, that students utilize in education for clerical, secretarial, or distributive services. Following job interviews, students, assigned to appropriate jobs, process computer cards (simulated merchandise) through sales, warehouse, traffic, and accounting departments. Reviews roles of management, outside…

  3. A Simulation Method Measuring Psychomotor Nursing Skills.

    ERIC Educational Resources Information Center

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  4. Simulation of error in optical radar range measurements.

    PubMed

    Der, S; Redman, B; Chellappa, R

    1997-09-20

    We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.

  5. Requirements and Techniques for Developing and Measuring Simulant Materials

    NASA Technical Reports Server (NTRS)

    Rickman, Doug; Owens, Charles; Howard, Rick

    2006-01-01

    The 1989 workshop report Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage identified and reinforced a need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and providing a clear comparison of that characteristic between the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
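
    The FoM algorithms themselves are defined in the requirements document; the sketch below only illustrates the general idea of scoring one simulant characteristic against a reference material and comparing the score with a minimum acceptable value. The scoring rule and the 0.8 threshold are hypothetical placeholders, not NASA's definitions.

      # Purely illustrative sketch of the Figure-of-Merit idea described above:
      # score one characteristic of a simulant against the same characteristic of
      # a reference material and compare the score with a minimum acceptable value.
      # The scoring rule below is hypothetical, not the one defined by NASA.

      def figure_of_merit(simulant_value: float, reference_value: float) -> float:
          """Return 1.0 for a perfect match, decreasing toward 0 as the simulant
          characteristic departs from the reference characteristic."""
          return max(0.0, 1.0 - abs(simulant_value - reference_value) / reference_value)

      def acceptable(fom: float, minimum: float = 0.8) -> bool:
          # The minimum acceptable performance is part of the real FoM definition;
          # 0.8 here is an arbitrary placeholder.
          return fom >= minimum

      # Example: mean grain size (microns) of a simulant vs. a reference regolith.
      fom = figure_of_merit(simulant_value=72.0, reference_value=60.0)
      print(f"FoM = {fom:.2f}, acceptable: {acceptable(fom)}")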

  6. Chamber LIDAR measurements of aerosolized biological simulants

    NASA Astrophysics Data System (ADS)

    Brown, David M.; Thrush, Evan P.; Thomas, Michael E.; Siegrist, Karen M.; Baldwin, Kevin; Quizon, Jason; Carter, Christopher C.

    2009-05-01

    A chamber aerosol LIDAR is being developed to perform well-controlled tests of optical scattering characteristics of biological aerosols, including Bacillus atrophaeus (BG) and Bacillus thuringiensis (BT), for validation of optical scattering models. The 1.064 μm, sub-nanosecond pulse LIDAR allows sub-meter measurement resolution of particle depolarization ratio or backscattering cross-section at a 1 kHz repetition rate. Automated data acquisition provides the capability for real-time analysis or recording. Tests administered within the refereed 1 cubic meter chamber can provide high quality near-field backscatter measurements devoid of interference from entrance and exit window reflections. Initial chamber measurements of BG depolarization ratio are presented.

  7. Electron-cloud measurements and simulations for the APS

    SciTech Connect

    Furman, M.A.; Pivi, M.; Harkay, K.C.; Rosenberg, R.A.

    2001-06-26

    We compare experimental results with simulations of the electron cloud effect induced by a positron beam at the APS synchrotron light source at ANL, where the electron cloud effect has been observed and measured with dedicated probes. We find good agreement between simulations and measurements for reasonable values of certain secondary electron yield (SEY) parameters, most of which were extracted from recent bench measurements at SLAC.

  8. Simulators for Mariner Training and Licensing: Guidelines for Deck Officer Training Systems.

    DTIC Science & Technology

    1982-12-01

    Rules-of-the-Road and Port Approach Planning skills were trained; this finding underscores the importance of the non-simulator elements of the training. Training system effectiveness can be assessed on the basis of (1) design criteria and the execution of planned ship maneuvers (NTSB Annual Report). Some results may reflect a bridge team organizational problem, which should be emphasized during training.

  9. Invasively Measured Aortic Systolic Blood Pressure and Office Systolic Blood Pressure in Cardiovascular Risk Assessment: A Prospective Cohort Study.

    PubMed

    Laugesen, Esben; Knudsen, Søren T; Hansen, Klavs W; Rossen, Niklas B; Jensen, Lisette Okkels; Hansen, Michael G; Munkholm, Henrik; Thomsen, Kristian K; Søndergaard, Hanne; Bøttcher, Morten; Raungaard, Bent; Madsen, Morten; Hulman, Adam; Witte, Daniel; Bøtker, Hans Erik; Poulsen, Per L

    2016-09-01

    Aortic systolic blood pressure (BP) represents the hemodynamic cardiac and cerebral burden more directly than office systolic BP. Whether invasively measured aortic systolic BP confers additional prognostic value beyond office BP remains debated. In this study, office systolic BP and invasively measured aortic systolic BP were recorded in 21 908 patients (mean age: 63 years; 58% men; 14% with diabetes mellitus) with stable angina pectoris undergoing elective coronary angiography during January 2001 to December 2012. Multivariate Cox models were used to assess the association with incident myocardial infarction, stroke, and death. Discrimination and reclassification were assessed using Harrell's C and the Continuous Net Reclassification Index. Data were analyzed with and without stratification by diabetes mellitus status. During a median follow-up period of 3.7 years (range: 0.1-10.8 years), 422 strokes, 511 myocardial infarctions, and 1530 deaths occurred. Both office and aortic systolic BP were associated with stroke in patients with diabetes mellitus (hazard ratio per 10 mm Hg, 1.18 [95% confidence interval, 1.07-1.30] and 1.14 [95% confidence interval, 1.05-1.24], respectively) and with myocardial infarction in patients without diabetes mellitus (hazard ratio, 1.07 [95% confidence interval, 1.02-1.12] and 1.05 [95% confidence interval, 1.01-1.10], respectively). In models including both BP measurements, aortic BP lost statistical significance and aortic BP did not confer improvement in either C-statistics or net reclassification analysis. In conclusion, invasively measured aortic systolic BP does not add prognostic information about cardiovascular outcomes and all-cause mortality compared with office BP in patients with stable angina pectoris, either with or without diabetes mellitus.
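
    A sketch of the kind of Cox proportional hazards comparison described above is given below, using the lifelines package on synthetic data; the column names, effect sizes and follow-up window are placeholders, not values from the study.

      # Sketch of the kind of Cox proportional-hazards comparison described above,
      # using the lifelines package on synthetic data. Column names, effect sizes
      # and the follow-up window are placeholders; the synthetic event rate is
      # driven by office SBP only, purely for illustration.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(1)
      n = 2000
      office_sbp = rng.normal(140, 20, n)
      aortic_sbp = office_sbp - rng.normal(8, 6, n)            # correlated with office SBP
      event_rate = 0.02 * np.exp(0.01 * (office_sbp - 140))    # per-year hazard (synthetic)
      time_to_event = rng.exponential(1.0 / event_rate)
      df = pd.DataFrame({
          "duration": np.minimum(time_to_event, 10.0),         # censor at 10 years
          "event": (time_to_event < 10.0).astype(int),
          "office_sbp_per10": office_sbp / 10.0,               # so the HR is per 10 mm Hg
          "aortic_sbp_per10": aortic_sbp / 10.0,
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="duration", event_col="event")
      print(cph.hazard_ratios_)                                # HR per 10 mm Hg per predictor
      print(f"Harrell's C: {cph.concordance_index_:.3f}")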

  10. Radio Plasma Imager Simulations and Measurements

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Benson, R. F.; Fung, S. F.; Taylor, W. W. L.; Boardsen, S. A.; Reinisch, B. W.; Haines, D. M.; Bibl, K.; Cheney, G.; Galkin, I. A.

    1999-01-01

    The Radio Plasma Imager (RPI) will be the first-of-its-kind instrument designed to use radio wave sounding techniques to perform repetitive remote sensing measurements of electron number density (N_e) structures and the dynamics of the magnetosphere and plasmasphere. RPI will fly on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission to be launched early in the year 2000. The design of the RPI is based on recent advances in radio transmitter and receiver design and modern digital processing techniques perfected for ground-based ionospheric sounding over the last two decades. Free-space electromagnetic waves transmitted by the RPI, located in the low-density magnetospheric cavity, will be reflected at distant plasma cutoffs. The location and characteristics of the plasma at those remote reflection points can then be derived from measurements of the echo amplitude, phase, delay time, frequency, polarization, Doppler shift, and echo direction. The 500 m tip-to-tip X and Y (spin plane) antennas and the 20 m boom Z axis antenna on RPI will be used to measure echoes coming from distances of several Earth radii (R_E).

  11. A Measure of Psychological Realism on a Visual Simulator

    NASA Technical Reports Server (NTRS)

    Palmer, Everett; Petitt, John

    1977-01-01

    A fundamental question of simulation technology is how to determine if an aircraft simulation is creating the proper psychological space necessary to assess manned-system performance. The standard approach to this problem for visual simulators is to measure how well pilots can make approaches and landings on the simulator. Experiments of this type generally show that simulator performance is worse than actual landing performance and that an excessive amount of training is required to reach acceptable performance. Unfortunately, in these experiments it is difficult to sort out the inadequacies of the visual subsystem from possible inadequacies in other simulator subsystems, such as the motion subsystem. This synoptic presents the results from one of a series of five experiments which attempted to provide direct measures of the psychological realism of a computer graphics night visual flight attachment. These experiments used experimental procedures and methodologies that psychologists have developed in their attempts to determine how people perceive visual space in the real world.

  12. Whole body measurement systems. [for weightlessness simulation

    NASA Technical Reports Server (NTRS)

    Ogle, J. S. (Inventor)

    1973-01-01

    A system for measuring the volume and volume variations of a human body under zero gravity conditions is disclosed. An enclosed chamber having a defined volume and arranged for receiving a human body is provided with means for infrasonically varying the volume of the chamber. The changes in volume produce resultant changes in pressure, and under substantially isentropic conditions, an isentropic relationship permits a determination of gas volume which, in turn, when related to total chamber volume permits a determination of the body volume. By comparison techniques, volume changes of a human independent of gravity conditions can be determined.
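
    A worked sketch of the isentropic relationship the system relies on is given below: for an adiabatic perturbation of an ideal gas, dP/P = −γ dV/V, so the gas volume follows from a known imposed volume change and the measured pressure response, and the body volume is the chamber volume minus the gas volume. All numbers are hypothetical.

      # Worked sketch of the isentropic relationship the system relies on: for an
      # adiabatic perturbation of an ideal gas, dP/P = -gamma * dV/V, so the gas
      # volume in the chamber can be inferred from a known imposed volume change
      # and the measured pressure response. All numbers below are hypothetical.

      GAMMA = 1.4                 # ratio of specific heats for air
      P0 = 101_325.0              # ambient pressure, Pa

      def gas_volume(delta_v_m3: float, delta_p_pa: float) -> float:
          """Gas volume inferred from an imposed volume perturbation delta_v and
          the resulting pressure swing delta_p (magnitudes)."""
          return GAMMA * P0 * delta_v_m3 / delta_p_pa

      chamber_volume = 1.50                       # m^3, total chamber volume (assumed)
      v_gas = gas_volume(delta_v_m3=1.0e-4, delta_p_pa=10.0)
      body_volume = chamber_volume - v_gas
      print(f"gas volume  : {v_gas:.3f} m^3")     # ~1.42 m^3
      print(f"body volume : {body_volume:.3f} m^3")  # ~0.08 m^3, i.e. ~80 litres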

  13. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
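
    A compact sketch of the three sampling methods compared in this simulation is given below, applied to a randomly generated behavior stream; the observation parameters and event generator are arbitrary simplifications, not those of the published program.

      # Compact sketch of the three interval sampling methods compared in the
      # study above, applied to a randomly generated behavior stream. Observation
      # parameters are arbitrary and the event generator is simplistic.
      import numpy as np

      rng = np.random.default_rng(42)
      OBS_SECONDS = 600            # 10-minute observation period
      INTERVAL = 10                # 10-second intervals
      behavior = rng.random(OBS_SECONDS) < 0.15    # True = behavior occurring that second

      true_fraction = behavior.mean()
      intervals = behavior.reshape(-1, INTERVAL)   # one row per interval

      momentary = intervals[:, -1].mean()          # behavior at the last moment of each interval
      partial = intervals.any(axis=1).mean()       # behavior at any point in the interval
      whole = intervals.all(axis=1).mean()         # behavior throughout the interval

      print(f"true fraction of time : {true_fraction:.2f}")
      print(f"momentary time sampling: {momentary:.2f}")
      print(f"partial-interval       : {partial:.2f}  (overestimates duration)")
      print(f"whole-interval         : {whole:.2f}  (underestimates duration)")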

  14. Control over the scheduling of simulated office work reduces the impact of workload on mental fatigue and task performance.

    PubMed

    Hockey, G Robert J; Earle, Fiona

    2006-03-01

    Two experiments tested the hypothesis that task-induced mental fatigue is moderated by control over work scheduling. Participants worked for 2 hr on simulated office work, with control manipulated by a yoking procedure. Matched participants were assigned to conditions of either high control (HC) or low control (LC). HC participants decided their own task scheduling, whereas LC participants had to follow these fixed schedules. For Experiment 1, fatigue was higher in LC participants who worked harder, so Experiment 2 compared control effects in high- and low-workload groups. As predicted, the impact of workload was reduced under HC conditions, for subjective fatigue, and most secondary tasks and aftereffects. The findings are interpreted within the framework of compensatory control theory.

  15. Measured energy performance of a US-China demonstrationenergy-efficient office building

    SciTech Connect

    Xu, Peng; Huang, Joe; Jin, Ruidong; Yang, Guoxiong

    2006-08-28

    In July 1998, the U.S. Department of Energy (USDOE) and China's Ministry of Science and Technology (MOST) signed a Statement of Work (SOW) to collaborate on the design and construction of an energy-efficient demonstration office building and design center to be located in Beijing. The proposed 13,000 m² (140,000 ft²) nine-story office building would use U.S. energy-efficient materials, space-conditioning systems, controls, and design principles that were judged to be widely replicable throughout China. The SOW stated that China would contribute the land and provide for the costs of the base building, while the U.S. would be responsible for the additional (or marginal) costs associated with the package of energy efficiency and renewable energy improvements to the building. The project was finished and the building occupied in 2004. Based on a DOE-2 analysis of the energy performance of the as-built building, the building obtained 44 out of 69 possible points under the Leadership in Energy and Environmental Design (LEED) rating, including the full maximum of 10 points in the energy performance section. The building achieved a LEED Gold rating, the first such LEED-rated office building in China, and is 60% more efficient than ASHRAE 90.1-1999. The utility data from the first year's operation match the analysis results well, provided that adjustments are made for unexpected changes in occupancy and operations. Compared with similarly equipped office buildings in Beijing, this demonstration building uses 60% less energy per floor area. However, compared to conventional office buildings with less equipment and window air-conditioners, the building uses slightly more energy per floor area.

  16. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

    The calibration of dynamic full-field measurement systems against reference materials is discussed, together with their use to validate numerical simulations in structural mechanics. The discussion addresses three challenges faced in these processes: how to calibrate a measuring instrument that (i) provides full-field data and (ii) is dynamic, and (iii) how to compare data from simulation and experiment.

  17. Measurement and Simulation Results of Ti Coated Microwave Absorber

    SciTech Connect

    Sun, Ding; McGinnis, Dave; /Fermilab

    1998-11-01

    When microwave absorbers are put in a waveguide, a layer of resistive coating can change the distribution of the E-M fields and affect the attenuation of the signal within the microwave absorbers. In order to study this effect, microwave absorbers (TT2-111) were coated with a titanium thin film. This report documents the coating process and the measurement results. The measurement results have been used to check the simulation results from the commercial software HFSS (High Frequency Structure Simulator).

  18. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  19. Comsol Simulations as a Tool in Validating a Measurement Chamber

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Sairanen, Hannu; Heinonen, Martti; Högström, Richard

    2015-12-01

    The Centre for Metrology and Accreditation (MIKES) is developing a temperature-humidity calibration system for radiosondes. The target minimum air temperature and dew-point temperature are -80° C and -90° C, respectively. When operating in this range, a major limiting factor is the time of stabilization which is mainly affected by the design of the measurement chamber. To find an optimal geometry for the chamber, we developed a numerical simulation method taking into account heat and mass transfer in the chamber. This paper describes the method and its experimental validation using two stainless steel chambers with different geometries. The numerical simulation was carried out using Comsol Multiphysics simulation software. Equilibrium states of dry air flow at -70° C with different inlet air flow rates were used to determine the geometry of the chamber. It was revealed that the flow is very unstable despite having relatively small Reynolds number values. Humidity saturation abilities of the new chamber were studied by simulating water vapor diffusion in the chamber in time-dependent mode. The differences in time of humidity stabilization after a step change were determined for both the new chamber model and the MIKES Relative Humidity Generator III (MRHG) model. These simulations were used as a validation of the simulation method along with experimental measurements using a spectroscopic hygrometer. Humidity saturation stabilization simulations proved the new chamber to be the faster of the two, which was confirmed by experimental measurements.

  20. Prototype simulates remote sensing spectral measurements on fruits and vegetables

    NASA Astrophysics Data System (ADS)

    Hahn, Federico

    1998-09-01

    A prototype was designed to simulate spectral packinghouse measurements in order to simplify fruit and vegetable damage assessment. A computerized spectrometer is used together with lenses and an externally controlled illumination in order to have a remote sensing simulator. A laser is introduced between the spectrometer and the lenses in order to mark the zone where the measurement is being taken. This facilitates further correlation work and can assure that the physical and remote sensing measurements are taken in the same place. Tomato ripening and mango anthracnose spectral signatures are shown.

  1. Thresholds for Diagnosing Hypertension Based on Automated Office Blood Pressure Measurements and Cardiovascular Risk.

    PubMed

    Myers, Martin G; Kaczorowski, Janusz; Paterson, J Michael; Dolovich, Lisa; Tu, Karen

    2015-09-01

    The risk of cardiovascular events in relation to blood pressure is largely based on readings taken with a mercury sphygmomanometer in populations which differ from those of today in terms of hypertension severity and drug therapy. Given the replacement of the mercury sphygmomanometer with electronic devices, we sought to determine the blood pressure threshold for a significant increase in cardiovascular risk using a fully automated device, which takes multiple readings with the subject resting quietly alone. Participants were 3627 community-dwelling residents aged >65 years untreated for hypertension. Automated office blood pressure readings were obtained in a community pharmacy with subjects seated and undisturbed. This method for recording blood pressure produces similar readings in different settings, including a pharmacy and a family doctor's office, provided the above procedures are followed. Subjects were followed for a mean (SD) of 4.9 (1.0) years for fatal and nonfatal cardiovascular events. Adjusted hazard ratios (95% confidence intervals) were computed for 10 mm Hg increments in blood pressure (mm Hg) using Cox proportional hazards regression and the blood pressure category with the lowest event rate as the reference category. A total of 271 subjects experienced a cardiovascular event. There was a significant (P=0.02) increase in the hazard ratio to 1.66 (1.09, 2.54) at a systolic blood pressure of 135 to 144 and to 1.72 (1.21, 2.45; P=0.003) at a diastolic blood pressure of 80 to 89. A significant (P=0.03) increase in hazard ratio to 1.73 (1.04, 2.86) occurred with a pulse pressure of 80 to 89. These findings are consistent with a threshold of 135/85 mm Hg for diagnosing hypertension in older subjects using automated office blood pressure.

  2. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  3. Simulations and Measurements of Stopbands in the Fermilab Recycler

    SciTech Connect

    Ainsworth, Robert; Adamson, Philip; Hazelwood, Kyle; Kourbanis, Ioanis; Stern, Eric

    2016-06-01

    Fermilab has recently completed an upgrade to the complex with the goal of delivering 700 kW of beam power as 120 GeV protons to the NuMI target. A major part of boosting beam power is to use the Fermilab Recycler to stack protons. Simulations focusing on the betatron resonance stopbands are presented taking into account different effects such as intensity and chromaticity. Simulations are compared with measurements.

  4. Water balance measurements and simulations of maize plants on lysimeters

    NASA Astrophysics Data System (ADS)

    Heinlein, Florian; Biernath, Christian; Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2016-04-01

    In Central Europe, expected major aspects of climate change are a shift of precipitation events and amounts towards the winter months and a general increase of extreme weather events like heat waves or summer droughts. This will lead to strongly changing regional water availability and will have an impact on future crop growth, water use efficiency and yields. Therefore, to estimate future crop yields with growth models, accurate descriptions of transpiration as part of the water balance are important. In this study, maize was grown on weighing lysimeters (sowing date: 24 April 2013). Transpiration was determined by sap flow measurement devices (ICT International Pty Ltd, Australia) using the Heat-Ratio-Method: two temperature probes, 0.5 cm above and below a heater, detect a heat pulse and its speed, which allows the calculation of sap flow. Water balance simulations were executed with different applications of the model framework Expert-N. The same pedotransfer and hydraulic functions and the same modules to simulate soil water flow, soil heat and nitrogen transport, nitrification, denitrification and mineralization were used. Differences occur in the chosen potential evapotranspiration ETpot (Penman-Monteith ASCE, Penman-Monteith FAO, Haude) and plant modules (SPASS, CERES). In all simulations ETpot is separated into a soil and a plant part using the leaf area index (LAI). In a next step, these parts are reduced by soil water availability. The sum of these parts is the actual evapotranspiration ETact, which is compared to the lysimeter measurements. The results were analyzed from mid-August to mid-September 2013. The measured sap flow rates show clear diurnal cycles except on rainy days. The SPASS model is able to simulate these diurnal cycles, overestimates the measurements on rainy days and at the beginning of the analyzed period, and underestimates transpiration on the other days. The main reason is an overestimation of potential transpiration Tpot due to too high
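
    A sketch of the heat-ratio calculation underlying these sap flow measurements is given below: the heat pulse velocity is derived from the ratio of the temperature rises recorded by the probes above and below the heater. The thermal diffusivity is a typical literature value, not a parameter reported in this study.

      # Sketch of the heat-ratio calculation underlying the sap flow measurements
      # described above: heat pulse velocity from the ratio of temperature rises
      # recorded by probes above and below the heater. The thermal diffusivity is
      # a typical literature value, not a parameter reported in this study.
      import math

      def heat_pulse_velocity(dT_down: float, dT_up: float,
                              k_cm2_s: float = 2.5e-3, x_cm: float = 0.5) -> float:
          """Heat pulse velocity in cm/h (Heat-Ratio-Method).

          dT_down, dT_up : temperature rise at the downstream / upstream probe (K)
          k_cm2_s        : thermal diffusivity of sapwood (cm^2/s), assumed
          x_cm           : probe distance from the heater (cm), 0.5 cm as above
          """
          return (k_cm2_s / x_cm) * math.log(dT_down / dT_up) * 3600.0

      # Example: the downstream probe warms 1.8x more than the upstream probe.
      print(f"heat pulse velocity: {heat_pulse_velocity(0.36, 0.20):.1f} cm/h")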

  5. Simulation and Measurement of Stray Light in the CLASP

    NASA Technical Reports Server (NTRS)

    Narukage, Noriyuki; Kano, Ryohei; Bando, Takamasa; Ishikawa, Ryoko; Kubo, Masahito; Tsuzuki, Toshihiro; Katsukawa, Yukio; Ishikawa, Shin-nosuke; Giono, Gabriel; Suematsu, Yoshinori; Winebarger, Amy; Kobayashi, Ken

    2015-01-01

    We are planning an international rocket experiment, the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), scheduled for 2015, which will perform spectro-polarimetric observations of the solar Lyman-α (Lyα) line. The purpose of the experiment is to measure the magnetic field of the chromosphere and transition region directly, by detecting the linear polarization of the Lyα line with an accuracy of 0.1% and exploiting the Hanle effect. Because the total visible-light flux of the Sun is overwhelmingly larger, about 200,000 times that in the Lyα wavelength region, even a slight amount of visible stray light can prevent the 0.1% polarimetric accuracy from being achieved. We therefore first carried out a stray-light simulation of CLASP using the illumination design analysis software LightTools. A feature of this simulation is that it uses the optical design file (ZEMAX format) and the structural design file (STEP format) to reproduce CLASP as realistically as possible in the stray-light calculation. Then, with a provisional assembly of the actual CLASP hardware, we fed sunlight into CLASP using the coelostat of the National Astronomical Observatory of Japan and measured the stray light (sun test). A pattern that had not appeared in the simulation was observed in the stray-light measurements, requiring countermeasures. However, thanks to additional stray-light measurements and simulations, we found that this pattern is caused by diffracted light at the slit. We are currently implementing countermeasures based on the simulation results. In this presentation, we report the stray-light simulations and stray-light measurements that we have carried out.

  6. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  7. Measurement and simulation of the segmented Germanium-Detector's Efficiency

    NASA Astrophysics Data System (ADS)

    Salem, Shadi

    This paper presents methods to determine the detection efficiency of a segmented germanium detector. Two methods are given for investigating the detection efficiency of the semiconductor segmented-germanium detector. Experimental measurements using radioactive sources are reported. The radioactive sources involved cover photon energies ranging up to hundreds of keV. A useful compilation is included of the latest values of the emission rates per decay for the following radioactive sources: 241Am and 133Ba. In the second method, the efficiency is simulated for comparison purposes. Good agreement between the measurements and the simulation is obtained.
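
    A sketch of the full-energy-peak efficiency calculation implied by such source measurements is given below: the efficiency at a given gamma line is the net peak count divided by the product of source activity, emission probability and live time. The activity, counts and emission probability shown are illustrative placeholders.

      # Sketch of the full-energy-peak efficiency calculation implied by the
      # source measurements above: efficiency = net peak counts / (activity *
      # emission probability * live time). Activities, counts and emission
      # probabilities below are illustrative placeholders only.

      def peak_efficiency(net_counts: float, activity_bq: float,
                          emission_prob: float, live_time_s: float) -> float:
          """Absolute full-energy-peak efficiency at one gamma line."""
          return net_counts / (activity_bq * emission_prob * live_time_s)

      # e.g. the 59.5 keV line of 241Am (emission probability ~0.36)
      eff = peak_efficiency(net_counts=1.2e5, activity_bq=3.0e4,
                            emission_prob=0.36, live_time_s=600.0)
      print(f"efficiency at 59.5 keV: {eff:.3%}")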

  8. Computer simulation of fibrillation threshold measurements and electrophysiologic testing procedures

    NASA Technical Reports Server (NTRS)

    Grumbach, M. P.; Saxberg, B. E.; Cohen, R. J.

    1987-01-01

    A finite element model of cardiac conduction was used to simulate two experimental protocols: 1) fibrillation threshold measurements and 2) clinical electrophysiologic (EP) testing procedures. The model consisted of a cylindrical lattice whose properties were determined by four parameters: element length, conduction velocity, mean refractory period, and standard deviation of refractory periods. Different stimulation patterns were applied to the lattice under a given set of lattice parameter values and the response of the model was observed through a simulated electrocardiogram. The studies confirm that the model can account for observations made in experimental fibrillation threshold measurements and in clinical EP testing protocols.

  9. Temperature measurement error simulation of the pure rotational Raman lidar

    NASA Astrophysics Data System (ADS)

    Jia, Jingyu; Huang, Yong; Wang, Zhirui; Yi, Fan; Shen, Jianglin; Jia, Xiaoxing; Chen, Huabin; Yang, Chuan; Zhang, Mingyang

    2015-11-01

    Temperature represents the atmospheric thermodynamic state. Measuring the atmospheric temperature accurately and precisely is very important for understanding the physics of atmospheric processes. Lidar has some advantages for atmospheric temperature measurement. Based on the lidar equation and the theory of pure rotational Raman (PRR) scattering, we have simulated the temperature measurement errors of a double-grating-polychromator (DGP) based PRR lidar. First, without considering the attenuation terms of the atmospheric transmittance and the range in the lidar equation, we simulated the temperature measurement errors that are influenced by the beam-splitting system parameters, such as the center wavelength, the receiving bandwidth and the atmospheric temperature. We analyzed three types of temperature measurement errors in theory and have proposed several design methods for the beam-splitting system to reduce the temperature measurement errors. Second, we simulated the temperature measurement error profiles using the lidar equation. As the lidar power-aperture product is fixed, the main goal of our lidar system is to reduce the statistical and leakage errors.
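
    Pure rotational Raman lidars commonly retrieve temperature from the ratio of the two PRR channel signals through a calibration function; a widely used two-parameter form is R(T) = exp(a + b/T), so T = b / (ln R − a). The sketch below inverts this relation with placeholder calibration constants, not values for the lidar described above.

      # Sketch of a common pure-rotational-Raman temperature retrieval: the ratio
      # R of the two PRR channel signals is related to temperature through a
      # calibration function, here the widely used two-parameter form
      # R(T) = exp(a + b / T), so T = b / (ln R - a). The calibration constants
      # are placeholders, not values for the lidar described above.
      import math

      A_CAL = -2.0      # dimensionless calibration constant (assumed)
      B_CAL = 460.0     # K, calibration constant (assumed)

      def temperature_from_ratio(ratio: float) -> float:
          """Invert R(T) = exp(a + b/T) for temperature in kelvin."""
          return B_CAL / (math.log(ratio) - A_CAL)

      signal_high_j = 1.19e4    # photon counts in one PRR channel (synthetic)
      signal_low_j = 1.60e4     # photon counts in the other PRR channel (synthetic)
      ratio = signal_high_j / signal_low_j
      print(f"ratio = {ratio:.3f}, retrieved T = {temperature_from_ratio(ratio):.1f} K")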

  10. Associations between overweight, obesity, health measures and need for recovery in office employees: a cross-sectional analysis

    PubMed Central

    2013-01-01

    Background: With both a high need for recovery (NFR) and overweight and obesity being a potential burden for organizations (e.g. productivity loss and sickness absence), the aim of this paper was to examine the associations of overweight and obesity and several other health measures with NFR in office workers. Methods: Baseline data of 412 office employees participating in a randomised controlled trial aimed at improving NFR in office workers were used. Associations of self-reported BMI categories (normal body weight, overweight, obesity) and several other health measures (general health, mental health, sleep quality, stress and vitality) with NFR were examined. Unadjusted and adjusted linear regression analyses were performed, adjusting for age, education and job demands. In addition, we adjusted for general health in the association between overweight and obesity and NFR. Results: A significant positive association was observed between stress and NFR (B = 18.04, 95%CI:14.53-21.56). General health, mental health, sleep quality and vitality were negatively associated with NFR (p < 0.001). Analyses also showed a significant positive association between obesity and NFR (B = 8.77, 95%CI:0.01-17.56), but not between overweight and NFR. Conclusions: The findings suggest that self-reported stress is, and obesity may be, associated with a higher NFR. Additionally, the results imply that health measures indicating better health are associated with a lower NFR. Trial registration: The trial is registered at the Dutch Trial Register (NTR) under trial registration number NTR2553. PMID:24359267

  11. A COMPARISON OF GADRAS SIMULATED AND MEASURED GAMMA RAY SPECTRA

    SciTech Connect

    Jeffcoat, R.; Salaymeh, S.

    2010-06-28

    Gamma-ray radiation detection systems are continuously being developed and improved for detecting the presence of radioactive material and for identifying isotopes present. Gamma-ray spectra, from many different isotopes and in different types and thicknesses of attenuation material and matrixes, are needed to evaluate the performance of these devices. Recently, a test and evaluation exercise was performed by the Savannah River National Laboratory that required a large number of gamma-ray spectra. Simulated spectra were used for a major portion of the testing in order to provide a pool of data large enough for the results to be statistically significant. The test data set was comprised of two types of data, measured and simulated. The measured data were acquired with a hand-held Radioisotope Identification Device (RIID) and simulated spectra were created using Gamma Detector Response and Analysis Software (GADRAS, Mitchell and Mattingly, Sandia National Laboratory). GADRAS uses a one-dimensional discrete ordinate calculation to simulate gamma-ray spectra. The measured and simulated spectra have been analyzed and compared. This paper will discuss the results of the comparison and offer explanations for spectral differences.

  12. NIF-0096141-OA Prop Simulations of NEL PBRS Measurements

    SciTech Connect

    Widmayer, C; Manes, K

    2003-02-21

    Portable Back Reflection Sensor, PBRS, (NEL only) and Quad Back Reflection Sensor, QBRS, time delay reflectometer traces are among the most useful diagnostics of NIF laser status available. NEL PBRS measurements show several signals reaching the detector for each shot. The time delay between signals suggests that the largest of these is due to energy at the spatial filter pinhole planes leaking into adjacent pinholes and traveling back upstream to the PBRS. Prop simulations agree with current PBRS measurements to within 50%. This suggests that pinhole leakage is the dominant source of energy at the PBRS. However, the simulations predict that the energy leakage is proportional to beam output energy, while the PBRS measurements increase more slowly (''saturate''). Further refinement of the model or the measurement may be necessary to resolve this discrepancy.

  13. Calibration of three rainfall simulators with automatic measurement methods

    NASA Astrophysics Data System (ADS)

    Roldan, Margarita

    2010-05-01

    The rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the necessity of studying the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema G.F. and Riezebos H.Th, 1983). Rainfall simulators are one of the most used tools for erosion studies and are used to determine fall velocity and drop size. Rainfall simulators allow repeated and multiple measurements. The main reason for the use of rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. But on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, and this can introduce some doubt about the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley D., 2008). Rainfall simulations often have high rain rates that do not resemble natural rain events, and these measurements are not comparable. Moreover, the intensity is related to the kinetic energy which

  14. Measuring Financial Literacy: Developing and Testing a Measurement Instrument with a Selected Group of South African Military Officers

    ERIC Educational Resources Information Center

    Schwella, E.; van Nieuwenhuyzen, Bernard J.

    2014-01-01

    Are South Africans financially literate, and how can this be measured? Until 2009 there was no South African financial literacy measure and, therefore, the aim was to develop a South African measurement instrument that is scientific, socially acceptable, valid and reliable. To achieve this aim a contextual and conceptual analysis of financial…

  15. Monte Carlo simulation of portal detectors of a steel factory. Comparison of measured and simulated response

    NASA Astrophysics Data System (ADS)

    Takoudis, G.; Xanthos, S.; Clouvas, A.; Antonopoulos-Domis, M.; Potiriadis, C.

    2007-09-01

    Metal scrap is widely used in steel production. Millions of tons of scrap metal are traded each year worldwide; hence, both national and international authorities have shown an increasing interest in the probing and detection of radioactive contamination in scrap metal. In order to minimize and/or avoid economic losses and material contamination, portal monitors have been installed at the entrance of many steel industry installations. Portal monitors typically consist of large organic scintillation detectors. The purpose of this study is to simulate such detectors and compare simulation results with experimental measurements in order to understand, calibrate and effectively use the detectors' response. Monte Carlo simulations of these systems demonstrate the assumptions that have to be made for optimal matching of measured and simulated results. As reported in previous studies, we observed a difference between measured and simulated values next to the light guide. In this work, we propose a transition area near the boundary surface of the scintillator and the light guide; this results in good qualitative and quantitative agreement between measured and simulated results. This study also defines a guideline for later portal monitor simulations and a reliable estimation of the portals' efficiency.

  16. Radiological Disaster Simulators for Field and Aerial Measurements

    SciTech Connect

    H. W. Clark, Jr

    2002-11-01

    Simulators have been developed to dramatically improve the fidelity of play for field monitors and aircraft participating in radiological disaster drills and exercises. Simulated radiological measurements for the current Global Positioning System (GPS) location are derived from realistic models of radiological consequences for accidents and malicious acts. The aerial version outputs analog pulses corresponding to the signal that would be produced by various NaI (Tl) detectors at that location. The field monitor version reports the reading for any make/model of survey instrument selected. Position simulation modes are included in the aerial and field versions. The aerial version can generate a flight path based on input parameters or import an externally generated sequence of latitude and longitude coordinates. The field version utilizes a map-based point and click/drag interface to generate individual or a sequence of evenly spaced instrument measurements.

  17. Measurement of performance of solar-heated office buildings. Final report, June 1, 1982-October 31, 1983

    SciTech Connect

    Norford, L.N.; Rabl, A.; Socolow, R.H.

    1984-01-01

    Prudential Insurance Company is building two new office buildings that are a showcase of innovative energy efficient design and solar energy utilization. In order for this effort to be fully successful, the actual performance of these buildings needs to be monitored. This report summarizes the progress made during the first year. A thorough theoretical analysis has been carried out, using the DOE2.1 computer simulation code. This analysis has been supplemented by shorthand calculations and by special models to provide an independent check of the coding and to evaluate certain features, e.g. the double wall, that cannot be modeled by DOE2.1. A steady state shorthand method has been developed to calculate annual energy use; it is a modification of the ASHRAE bin method and agrees with the computer simulation within about 15% for cooling and 2% for heating. Energy savings due to daylighting have been evaluated using both shorthand methods and the computer code DOE2.1b. The calculations of annual energy use that were performed at the design stage have been reproduced, and changes during later design phases, e.g. the outdoor air flow rate, have been identified. Even without a variety of further energy savings that appear feasible, these buildings promise to be among the most efficient in the current stock of office buildings. A 100-channel instrumentation and data acquisition system has been designed, and installation should be complete by February 1984. Extensive software has been prepared to confront the model predictions with field data.

  18. Comparison of Experimentally Measured Rayleigh-Taylor Growth to Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Knauer, J. P.; Verdon, C. P.; Betti, R.; Meyerhofer, D. D.; Boehly, T. R.; Bradley, D. K.; Smalyuk, V. A.

    1997-11-01

    Experimental measurements of perturbation growth due to the Rayleigh-Taylor (RT) instability at the ablation interface have been used to try to understand the physical processes involved in ablative stabilization. The growth rate calculated from a dispersion relation, with values for the acceleration and ablation velocity determined by a numerical simulation, is compared to the growth rate from an experiment, where the numerical simulation includes the correct ablation interface physics. Planar targets with initial perturbations of 20-, 31-, and 60-μm wavelengths and initial amplitudes of 0.5 μm have been accelerated. The analysis shows that the growth rate determined from an x-ray radiograph of the planar foil should not be compared with the results from a dispersion formula that calculates the spatial development of the perturbation. The ORCHID simulation indicates a significant modification to the density distribution, so that the measurement of ρΔx does not reflect the evolution of Δx. The amplitude of a perturbation measured as ρΔx can be characterized as a = a0 e^(γt) + c, where a0 is the initial amplitude, γ is the growth rate, and c is a slowly varying function of time. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460.
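
    As a minimal sketch of the amplitude form a = a0 e^(γt) + c quoted above (not the authors' analysis code), the snippet below fits that form to a synthetic ρΔx trace; the time base, noise level and initial guesses are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def amplitude_model(t, a0, gamma, c):
          # a(t) = a0 * exp(gamma * t) + c, the functional form quoted in the abstract
          return a0 * np.exp(gamma * t) + c

      # Synthetic rho*delta-x trace (time in ns, arbitrary amplitude units), not experimental data
      t = np.linspace(0.0, 2.0, 20)
      a_measured = 0.5 * np.exp(1.8 * t) + 0.3 + 0.05 * np.random.default_rng(0).normal(size=t.size)

      (a0_fit, gamma_fit, c_fit), _ = curve_fit(amplitude_model, t, a_measured, p0=(0.5, 1.0, 0.0))
      print(f"fitted growth rate gamma = {gamma_fit:.2f} per ns")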

  19. Performance Measures for Evaluating Public Participation Activities in the Office of Environmental Management (DOE)

    SciTech Connect

    Carnes, S.A.

    2001-02-15

    Public participation in Office of Environmental Management (EM) activities throughout the DOE complex is a critical component of the overall success of remediation and waste management efforts. The challenges facing EM and its stakeholders over the next decade or more are daunting (Nuclear Waste News 1996). Achieving a mission composed of such challenges will require innovation, dedication, and a significant degree of good will among all stakeholders. EM's efforts to date, including obtaining and using inputs offered by EM stakeholders, have been notable. Public participation specialists have accepted and met challenges and have consistently tried to improve their performance. They have reported their experiences both formally and informally (e.g., at professional conferences and EM Public Participation Network Workshops, other internal meetings of DOE and contractor public participation specialists, and one-on-one consultations) in order to advance the state of their practice. Our research, and our field research in particular (including our interactions with many representatives of numerous stakeholder groups at nine DOE sites with diverse EM problems), has shown that it is possible to develop coherent results even in a problem domain as complex as that of EM. We conclude that performance-based evaluations of public participation appear possible, and we have recommended an approach, based on combined and integrated multi-stakeholder views on the attributes of successful public participation and associated performance indicators, that seems workable and should be acceptable to diverse stakeholders. Of course, as an untested recommendation, our approach needs the validation that can only be achieved by application (perhaps at a few DOE sites with ongoing EM activities). Such an application would serve to refine the proposed approach in terms of its clarity, its workability, and its potential for full-scale use by EM and, potentially, other government agencies and

  20. A1cNow® InView™: A New Simple Method for Office-Based Glycohemoglobin Measurement

    PubMed Central

    Mattewal, Amarbir; Aldasouqi, Saleh; Solomon, David; Gossain, Ved; Koller, Anthony

    2007-01-01

    Background Glycohemoglobin A1c (HbA1c) is a universally accepted tool for glycemic control. Portable HbA1c devices for use in physicians' offices are desirable because they provide immediate results that physicians can share with their patients. This has been shown to enhance self-management in patients with diabetes. We undertook this study to evaluate the accuracy and precision of a recently introduced device, the A1cNow® InView™ capillary monitor. Methods Previously tested EDTA-preserved whole blood samples from our laboratory pool were preselected based on the results of HbA1c to cover a range from 4 to 13%. HbA1c was then measured using an A1cNow InView capillary monitor. Blinded aliquots of these samples were then sent to a National Glycohemoglobin Standardization Program (NGSP)-certified reference laboratory for comparison. One sample with a laboratory HbA1c result of 9.2% was measured with the InView device nine successive times to assess the device precision. The consistency between the measurement of HbA1c measured by the reference laboratory and the A1cNow InView device was analyzed via linear regression. Results Thirty-five samples were tested. The correlation between HbA1c measured by the InView device and the reference laboratory, as well as our own laboratory, was 0.96. The coefficient of variation was 2.71%. Conclusions Results of this study confirm the accuracy and precision of the InView capillary HbA1c monitor. However, the feasibility, reproducibility, and cost-effectiveness of this promising device in the real-life settings of physicians' offices must be verified by prospective clinical studies. PMID:19885160

  1. Measurement and simulation of deformation and stresses in steel casting

    NASA Astrophysics Data System (ADS)

    Galles, D.; Monroe, C. A.; Beckermann, C.

    2012-07-01

    Experiments are conducted to measure displacements and forces during casting of a steel bar in a sand mold. In some experiments the bar is allowed to contract freely, while in others the bar is manually strained using embedded rods connected to a frame. Solidification and cooling of the experimental castings are simulated using a commercial code, and good agreement between measured and predicted temperatures is obtained. The deformations and stresses in the experiments are simulated using an elasto-viscoplastic finite-element model. The high temperature mechanical properties are estimated from data available in the literature. The mush is modeled using porous metal plasticity theory, where the coherency and coalescence solid fraction are taken into account. Good agreement is obtained between measured and predicted displacements and forces. The results shed considerable light on the modeling of stresses in steel casting and help in developing more accurate models for predicting hot tears and casting distortions.

  2. Report: Office of Research and Development Needs to Improve Its Method of Measuring Administrative Savings

    EPA Pesticide Factsheets

    Report #11-P-0333, July 14, 2011. ORD’s efforts to reduce its administrative costs are noteworthy, but ORD needs to improve its measurement mechanism for assessing the effectiveness of its initiatives to reduce administrative costs.

  3. A Simulation Model for Measuring Customer Satisfaction through Employee Satisfaction

    NASA Astrophysics Data System (ADS)

    Zondiros, Dimitris; Konstantopoulos, Nikolaos; Tomaras, Petros

    2007-12-01

    Customer satisfaction is defined as a measure of how a firm's product or service performs compared to customers' expectations. It has long been a subject of research due to its importance for measuring marketing and business performance, and many models have been developed for its measurement. This paper proposes a simulation model using employee satisfaction as one of the most important factors leading to customer satisfaction (the others being expectations and disconfirmation of expectations). Data obtained from a two-year survey of bank customers in Greece were used. The application of three approaches to employee satisfaction showed that customer satisfaction is greater when there is a serious effort to keep employees satisfied.

  4. Measurement and simulation of apertures on Z hohlraums

    SciTech Connect

    Chrien, R.E.; Matuska, W. Jr.; Swenson, F.J.

    1998-12-01

    The authors have performed aperture measurements and simulations for vacuum hohlraums heated by wire array implosions. A low-Z plastic coating is often applied to the aperture to create a high ablation pressure which retards the expansion of the gold hohlraum wall. However, this interface is unstable and may be subject to the development of highly nonlinear perturbations (jets) as a result of shocks converging near the edge of the aperture. These experiments have been simulated using Lagrangian and Eulerian radiation hydrodynamics codes.

  5. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    SciTech Connect

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  6. Electron Beam Lifetime in SPEAR3: Measurement and Simulation

    SciTech Connect

    Corbett, J.; Huang, X.; Lee, M.; Lui, P.; Sayyar-Rodsari, B.; /Pavilon Tech., Austin

    2007-12-19

    In this paper we report on electron beam lifetime measurements as a function of scraper position, RF voltage and bunch fill pattern in SPEAR3. We then outline development of an empirical, macroscopic model using the beam-loss rate equation. By identifying the dependence of loss coefficients on accelerator and beam parameters, a numerically-integrating simulator can be constructed to compute beam decay with time. In a companion paper, the simulator is used to train a parametric, non-linear dynamics model for the system [1].
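
    A minimal numerically-integrating sketch in the spirit of the loss-rate model described above; the functional form and coefficients here are placeholders, not the fitted SPEAR3 model.

      from scipy.integrate import solve_ivp

      def beam_loss(t, I, a, b):
          # Illustrative macroscopic loss-rate form dI/dt = -a*I - b*I**2 (a single-particle term
          # plus a density-dependent term); the coefficients are placeholders, not fitted values.
          return -a * I - b * I ** 2

      solution = solve_ivp(beam_loss, (0.0, 3600.0), [500.0], args=(1e-5, 2e-8))
      print(f"beam current after one hour: {solution.y[0, -1]:.1f} mA (illustrative)")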

  7. Simulating Scintillator Light Collection Using Measured Optical Reflectance

    SciTech Connect

    Janecek, Martin; Moses, William

    2010-01-28

    To accurately predict the light collection from a scintillating crystal through Monte Carlo simulations, it is crucial to know the angular distribution from the surface reflectance. Current Monte Carlo codes allow the user to set the optical reflectance to a linear combination of backscatter spike, specular spike, specular lobe, and Lambertian reflections. However, not all light distributions can be expressed in this way. In addition, the user seldom has the detailed knowledge about the surfaces that is required for accurate modeling. We have previously measured the angular distributions within BGO crystals and now incorporate these data as look-up-tables (LUTs) into modified Geant4 and GATE Monte Carlo codes. The modified codes allow the user to specify the surface treatment (ground, etched, or polished), the attached reflector (Lumirror(R), Teflon(R), ESR film, Tyvek(R), or TiO paint), and the bonding type (air-coupled or glued). Each LUT consists of measured angular distributions with 4° by 5° resolution in theta and phi, respectively, for incidence angles from 0° to 90°, in 1° steps. We compared the new codes to the original codes by running simulations with a 3 × 10 × 30 mm³ BGO crystal coupled to a PMT. The simulations were then compared to measurements. Light output was measured by counting the photons detected by the PMT with the 3 × 10, 3 × 30, or 10 × 30 mm² side coupled to the PMT, respectively. Our new code shows better agreement with the measured data than the current Geant4 code. The new code can also simulate reflector materials that are not pure specular or Lambertian reflectors, as was previously required. Our code is also more user friendly, as no detailed knowledge about the surfaces or light distributions is required from the user.
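
    A toy sketch of how a measured angular-distribution look-up table can be sampled inside a photon-tracking loop; the table contents and binning below are placeholders, not the measured BGO data.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical look-up table: rows = incidence angle (1 degree steps, 0-90), columns =
      # outgoing theta bins (4 degrees wide); entries stand in for the measured distributions.
      lut = rng.random((91, 23))
      lut /= lut.sum(axis=1, keepdims=True)        # normalise each row to a probability distribution

      def sample_outgoing_theta(incidence_deg):
          """Draw an outgoing polar angle (degrees) from the LUT row for this incidence angle."""
          row = lut[int(round(incidence_deg))]
          bin_index = rng.choice(len(row), p=row)
          return (bin_index + 0.5) * 4.0           # centre of the chosen 4-degree theta bin

      print(sample_outgoing_theta(30.0))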

  8. Thermal crosstalk simulation and measurement of linear terahertz detector arrays

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Huang, Zehua; Wang, Jun; Li, Mingyu; Gou, Jun; Jiang, Yadong

    2015-11-01

    Thermal simulation of differently structured linear terahertz detector arrays (TDAs) based on lithium tantalate was performed by finite element analysis (FEA). Simulation results revealed that a relatively simple TDA structure can have good thermal insulation, i.e., low thermal crosstalk effect (TCE), between adjacent pixels, which was thus selected for the real fabrication of TDA sample. Current responsivity (Ri) of the sample for a 2.52 THz source was measured to be 6.66 × 10⁻⁶ A/W and non-uniformity (NU) of Ri was 4.1%, showing good performance of the sample. TCE test result demonstrated that small TCE existed in the sample, which was in good agreement with the simulation results.

  9. PINS Measurements of Explosive Simulants for Cargo Screening

    SciTech Connect

    E.H. Seabury

    2008-06-01

    As part of its efforts to prevent the introduction of explosive threats on commercial flights, the Transportation Security Administration (TSL) is evaluating new explosives detection systems (EDSs) for use in air cargo inspection. The TSL has contracted Battelle to develop a new type of explosives simulant to assist in this development. These are designed to mimic the elemental profile (C, H, N, O, etc.) of explosives as well as their densities. Several "neutron in, gamma out" (n,γ) techniques have been considered to quantify the elemental profile in these new simulants and the respective explosives. The method chosen by Battelle is Portable Isotopic Neutron Spectroscopy (PINS), developed by Idaho National Laboratory (INL). Battelle wishes to validate that the simulants behave like the explosive threats with this technology. The results of the validation measurements are presented in this report.

  10. Simulations of infrared atmospheric transmittance based on measured data

    NASA Astrophysics Data System (ADS)

    Song, Fu-yin; Lu, Yuan; Qiao, Ya; Tao, Hui-feng; Tang, Cong; Ling, Yong-shun

    2016-10-01

    There are two common methods for calculating infrared atmospheric transmittance: empirical formulas and professional software. Empirical formulas, however, can show large deviations, while professional software is complicated to use and difficult to embed in other infrared simulation systems. Therefore, based on atmospheric data measured in a given area over many years, this article uses the molecular single-absorption method to calculate the absorption coefficients of water vapor and carbon dioxide at different temperatures. Temperatures, pressures, and the resulting scattering coefficients at different heights were fitted with analytical formulas for each month. A simulation model of the atmospheric transmittance of infrared radiation was then built. The simulated results are very close to the accurate results calculated with a user-defined MODTRAN model. The method is simple and convenient to use and has practical reference value for engineering applications.
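
    A minimal sketch of the band-averaged transmittance calculation implied above, using the Beer-Lambert law with fitted absorption and scattering coefficients; the coefficient values here are placeholders, not the fitted ones.

      import math

      def transmittance(absorption_per_km, scattering_per_km, path_km):
          """Beer-Lambert transmittance along a homogeneous path for one spectral band."""
          return math.exp(-(absorption_per_km + scattering_per_km) * path_km)

      # Placeholder band-averaged coefficients (per km); not the fitted values from the paper
      print(transmittance(absorption_per_km=0.12, scattering_per_km=0.05, path_km=3.0))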

  11. Operation SUN BEAM. Shot Small Boy, Project Officers’ Report. Project 2. 1. Initial Radiation Measurements

    DTIC Science & Technology

    1981-05-01

    overload properties of the VCO, but some is of external origin. The source of this latter component has not been identified. (U) The traces from the...needs more attention. (U) The collimated measurement at 468 meters shows that the gamma-ray rate from the device itself falls below the level expected

  12. Measurement and simulation of thermoelectric efficiency for single leg

    SciTech Connect

    Hu, Xiaokai; Yamamoto, Atsushi; Ohta, Michihiro; Nishiate, Hirotaka

    2015-04-15

    Thermoelectric efficiency measurements were carried out on n-type bismuth telluride legs with the hot-side temperature at 100 and 150°C. The electric power and heat flow were measured individually. Water coolant was utilized to maintain the cold-side temperature and to measure heat flow out of the cold side. Leg length and vacuum pressure were studied in terms of temperature difference across the leg, open-circuit voltage, internal resistance, and heat flow. Finite-element simulation on thermoelectric generation was performed in COMSOL Multiphysics, by inputting two-side temperatures and thermoelectric material properties. The open-circuit voltage and resistance were in good agreement between the measurement and simulation. Much larger heat flows were found in measurements, since they were comprised of conductive, convective, and radiative contributions. Parasitic heat flow was measured in the absence of bismuth telluride leg, and the conductive heat flow was then available. Finally, the maximum thermoelectric efficiency was derived in accordance with the electric power and the conductive heat flow.
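
    One plausible reading of the efficiency derivation described above, sketched below; the subtraction of the parasitic heat flow follows the abstract, and all numbers are illustrative only.

      def thermoelectric_efficiency(electric_power_w, measured_heat_flow_w, parasitic_heat_flow_w):
          """Efficiency from the electric output and the conductive part of the measured heat flow.

          The parasitic (convective plus radiative) contribution, measured with the leg removed,
          is subtracted first, as described in the abstract. All numbers below are illustrative.
          """
          conductive_heat_flow_w = measured_heat_flow_w - parasitic_heat_flow_w
          return electric_power_w / conductive_heat_flow_w

      print(f"{100.0 * thermoelectric_efficiency(0.020, 0.60, 0.15):.1f} % (illustrative)")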

  13. Operation Sun Beam, Shot Small Boy. Project Officers report. Project 1. 9. Crater measurements

    SciTech Connect

    Rooke, A.D.; Davis, L.K.; Strange, J.N.

    1985-09-01

    The objectives of Project 1.9 were to obtain the dimensions of the apparent and true craters formed by the Small Boy event and to measure the permanent earth deformation occurring beyond the true crater boundary. Measurements were made of the apparent crater by aerial stereophotography and ground survey and of the true crater and subsurface zones of residual deformation by the excavation and mapping of an array of vertical, colored sand columns which were placed along one crater diameter prior to the shot. The results of the crater exploration are discussed, particularly the permanent compression of the medium beneath the true crater which was responsible for the major portion of the apparent and true crater volumes. Apparent and true crater dimensions are compared with those of previous cratering events.

  14. Measurement of human pilot dynamic characteristics in flight simulation

    NASA Technical Reports Server (NTRS)

    Reedy, James T.

    1987-01-01

    Fast Fourier Transform (FFT) and Least Square Error (LSE) estimation techniques were applied to the problem of identifying pilot-vehicle dynamic characteristics in flight simulation. A brief investigation of the effects of noise, input bandwidth and system delay upon the FFT and LSE techniques was undertaken using synthetic data. Data from a piloted simulation conducted at NASA Ames Research Center was then analyzed. The simulation was performed in the NASA Ames Research Center Variable Stability CH-47B helicopter operating in fixed-base simulator mode. The piloting task consisted of maintaining the simulated vehicle over a moving hover pad whose motion was described by a random-appearing sum of sinusoids. The two test subjects used a head-down, color cathode ray tube (CRT) display for guidance and control information. Test configurations differed in the number of axes being controlled by the pilot (longitudinal only versus longitudinal and lateral), and in the presence or absence of an important display indicator called an 'acceleration ball'. A number of different pilot-vehicle transfer functions were measured, and where appropriate, qualitatively compared with theoretical pilot-vehicle models. Some indirect evidence suggesting pursuit behavior on the part of the test subjects is discussed.
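
    A bare-bones sketch of an FFT-based frequency-response estimate of the kind mentioned above (single record, no windowing or averaging, synthetic data); this is not the study's actual identification code.

      import numpy as np

      def estimate_transfer_function(u, y, dt):
          """Single-record FFT estimate of the frequency response from input u to output y.

          H(f) = Y(f)/U(f); with a sum-of-sinusoids forcing function this is read off at the
          input frequencies. The windowing/averaging of the full FFT and LSE methods is omitted.
          """
          U = np.fft.rfft(u)
          Y = np.fft.rfft(y)
          freqs = np.fft.rfftfreq(len(u), dt)
          return freqs, Y / (U + 1e-12)      # small offset avoids divide-by-zero off the forcing lines

      # Synthetic check: output is a delayed, scaled copy of the input (gain 2, delay 0.2 s)
      dt = 0.01
      t = np.arange(0.0, 60.0, dt)
      u = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.1 * t)
      y = 2.0 * np.interp(t - 0.2, t, u, left=0.0)
      freqs, H = estimate_transfer_function(u, y, dt)
      print(abs(H[np.argmin(abs(freqs - 0.3))]))   # approximately 2 at 0.3 Hz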

  15. Development and Validation of Measures for Selecting Soldiers for the Officer Candidate School

    DTIC Science & Technology

    2011-08-01

    Class — Longitudinal, see Knapp & Heffner, 2009; for Select21, see Knapp & Tremble, 2007; for Profile of American Youth, see Moore, Pedlow, Krishnamurty...organization: A meta-analysis of antecedents, correlates, and consequences. Journal of Vocational Behavior, 61, 20-52. Moore, W., Pedlow, S., Krishnamurty, P... Basil Blackwell, Inc. Van Iddekinge, C.H., Putka, D.J., & Sager, C.E. (2005). Person-environment fit measures. In D.J. Knapp, C.E. Sager, & T.R

  16. Office blood pressure measurement practices among community health providers (medical and paramedical) in northern district of India

    PubMed Central

    Mohan, Bishav; Aslam, Naved; Ralhan, Upma; Sharma, Sarit; Gupta, Naveen; Singh, Vivudh Pratap; Takkar, Shibba; Wander, G.S.

    2014-01-01

    Introduction Hypertension is directly responsible for 57% of all stroke deaths and 24% of all coronary heart disease deaths in India. Appropriate blood pressure measurement technique is the cornerstone of clinical acumen. Despite clear guidelines on BP measurement technique, there seem to be large inter-observer variations. Aim & methods A prospective, observational study was done to assess the knowledge and current practices of office BP measurement among 400 medical and paramedical staff working in various hospitals of a northern district of India. A single observer, under the supervision of the investigators, observed all the participants and filled in a proforma based on AHA guidelines. After observing the BP measurement technique, scoring was done (≤8 questions correct = inaccurate practices, ≥9 questions correct = accurate practices). Similarly, knowledge was assessed using a pretested questionnaire. Results 5.85% of the medical staff had excellent knowledge, and 80% of the doctors and 62% of the paramedical staff had good knowledge about BP measurement. Only 1.47% (3 doctors) and 0.5% (1 nurse) had accurate practices. There was no correlation between knowledge and practices. Conclusions We conclude that the right technique and knowledge of blood pressure measurement among community health providers is inadequate and warrants further interventions to improve. PMID:25173197

  17. A Framework for the Measurement of Simulated Behavior Performance

    DTIC Science & Technology

    2011-03-24

    takes professional soccer team recordings to aid RoboCup team tactics development, measuring success through goal tracking [5]. With interest in...Cloning for Simulator Validation”. Ubbo Visser, Fernando Ribeiro, Takeshi Ohashi, and Frank Dellaert (editors), RoboCup 2007: Robot Soccer World Cup XI, volume 5001 of Lecture Notes in Computer Science, 329–336. Springer Berlin / Heidelberg, 2008. URL http://dx.doi.org/10.1007/978-3-540-68847-1_32

  18. Technical Performance Measures and Distributed-Simulation Training Systems

    DTIC Science & Technology

    2000-01-01

    Winter 2000 32 Piplani, L. K., Mercer, J. G., & Roop, R. O. (1994). Systems acquisition manager's guide for the use of models and simulations. Fort...tasks, moving on to learn advanced unit tasks, reinforcement of previously learned tasks, and, finally, integration of various combinations... Piplani, Mercer, and Roop, 1994), identifies numerous outcome-oriented, technical performance measures for use by acquisition managers of combat systems

  19. Microelectronics mounted on a piezoelectric transducer: method, simulations, and measurements.

    PubMed

    Johansson, Jonny; Delsing, Jerker

    2006-01-01

    This paper describes the design of a highly integrated ultrasound sensor where the piezoelectric ceramic transducer is used as the carrier for the driver electronics. Intended as one part in a complete portable, battery operated ultrasound sensor system, focus has been to achieve small size and low power consumption. An optimized ASIC driver stage is mounted directly on the piezoelectric transducer and connected using wire bond technology. The absence of wiring between driver and transducer provides excellent pulse control possibilities and eliminates the need for broad band matching networks. Estimates of the sensor power consumption are made based on the capacitive behavior of the piezoelectric transducer. System behavior and power consumption are simulated using SPICE models of the ultrasound transducer together with transistor level modelling of the driver stage. Measurements and simulations are presented of system power consumption and echo energy in a pulse echo setup. It is shown that the power consumption varies with the excitation pulse width, which also affects the received ultrasound energy in a pulse echo setup. The measured power consumption for a 16 mm diameter 4.4 MHz piezoelectric transducer varies between 95 μW and 130 μW at a repetition frequency of 1 kHz. As a lower repetition frequency gives a linearly lower power consumption, very long battery operating times can be achieved. The measured results come very close to simulations as well as estimated ideal minimum power consumption.
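
    A back-of-the-envelope sketch of the capacitive power estimate mentioned above; the capacitance, drive voltage and repetition rate below are assumptions, not the transducer's measured values.

      def capacitive_drive_power(capacitance_f, drive_voltage_v, repetition_hz, pulses_per_burst=1):
          """Rough driver power for charging and discharging a capacitive transducer.

          Each full charge/discharge cycle dissipates roughly C*V**2 in a resistive driver;
          the capacitance, voltage and repetition rate used here are illustrative assumptions.
          """
          return capacitance_f * drive_voltage_v ** 2 * repetition_hz * pulses_per_burst

      # e.g. a 2 nF transducer driven with 10 V pulses at a 1 kHz repetition rate
      print(f"{capacitive_drive_power(2e-9, 10.0, 1000.0) * 1e6:.0f} microwatts")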

  20. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.

    2012-06-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  1. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well constrained and well known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in engineering, industrial and life sciences.

  2. Long-term vital sign measurement using a non-contact vital sign sensor inside an office cubicle setting.

    PubMed

    Hall, T; Malone, N A; Tsay, J; Lopez, J; Nguyen, T; Banister, R E; Lie, D Y C

    2016-08-01

    Heart and respiration rates can be wirelessly measured by extracting the phase shift caused by the periodic displacement of a patient's chest wall. We have developed a phased-array Doppler-based non-contact vital sign (NCVS) sensor capable of long-term vital signs monitoring using an automatic patient tracking and movement detection algorithm. Our NCVS sensor achieves non-contact heart rate monitoring with accuracies of over 90% (i.e., within ±5 beats per minute of a reference sensor) across a large number of data points collected over various days of the week inside a typical office cubicle setting at a distance of 1.5 meters.
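
    A simplified sketch of recovering heart rate from the demodulated Doppler phase, as described above; the carrier wavelength (5.8 GHz assumed), the band limits and the synthetic motion are assumptions, not the sensor's documented parameters.

      import numpy as np

      def heart_rate_from_phase(phase_rad, fs_hz, wavelength_m=0.0517):
          """Heart rate (beats per minute) from the demodulated Doppler phase of an NCVS sensor.

          Chest displacement is x(t) = wavelength * phase / (4*pi); the heart rate is taken as
          the strongest spectral peak in the 0.8-3 Hz band. The 5.8 GHz carrier wavelength and
          the band limits are assumptions for this sketch.
          """
          displacement = wavelength_m * np.asarray(phase_rad) / (4.0 * np.pi)
          spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
          freqs = np.fft.rfftfreq(displacement.size, 1.0 / fs_hz)
          band = (freqs > 0.8) & (freqs < 3.0)
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      # Synthetic test: 0.1 mm of cardiac chest motion at 1.2 Hz (72 BPM), sampled at 100 Hz
      fs_hz = 100.0
      t = np.arange(0.0, 30.0, 1.0 / fs_hz)
      phase = 4.0 * np.pi * (1e-4 * np.sin(2.0 * np.pi * 1.2 * t)) / 0.0517
      print(heart_rate_from_phase(phase, fs_hz))   # approximately 72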

  3. High performance surface plasmon sensors: Simulations and measurements

    NASA Astrophysics Data System (ADS)

    Tiwari, Kunal; Sharma, Suresh C.; Hozhabri, Nader

    2015-09-01

    Through computer simulations and surface plasmon resonance (SPR) measurements, we establish optimum parameters for the design and fabrication of SPR sensors of high sensitivity, resolution, stability, and long decay-length evanescent fields. We present simulations and experimental SPR data for a variety of sensors fabricated using bimetal (Ag/Au) and multilayer waveguide-coupled Ag/Si3N4/Au structures. The simulations were carried out using the transfer matrix method in the MATLAB environment. Results are presented as functions of the thickness of the metal (Ag or Au) and the waveguide dielectric used in Ag/Si3N4/Au structures. Excellent agreement is observed between the simulations and experiments. For the optimized thickness of the Si3N4 waveguide (150 nm), the sensor exhibits very high sensitivity to changes in the refractive index of analytes, Sn ≈ 52°/RIU, extremely high resolution (FWHM ≤ 0.28°), and a long penetration depth of the evanescent fields (δ ≥ 305 nm).
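
    A generic transfer-matrix reflectance sketch (written in Python rather than MATLAB, and not the authors' code); the layer stack, gold refractive index and wavelength in the example are illustrative assumptions.

      import numpy as np

      def reflectance_p(wavelength_nm, angles_deg, n_layers, d_layers_nm):
          """Transfer-matrix reflectance for p-polarised light through a planar layer stack.

          n_layers = [incident medium, layer 1, ..., exit medium] (complex indices);
          d_layers_nm gives the thicknesses of the intermediate layers only.
          Generic textbook formulation, not the code used in the paper.
          """
          k0 = 2 * np.pi / wavelength_nm
          n0 = n_layers[0].real
          reflectance = []
          for ang in np.deg2rad(angles_deg):
              kx2 = (n0 * np.sin(ang)) ** 2
              def q_of(n):
                  eps = n ** 2
                  return np.sqrt(eps - kx2 + 0j) / eps
              M = np.eye(2, dtype=complex)
              for n, d in zip(n_layers[1:-1], d_layers_nm):
                  eps = n ** 2
                  kz = np.sqrt(eps - kx2 + 0j)        # normalised to k0
                  beta = k0 * d * kz
                  qj = kz / eps
                  M = M @ np.array([[np.cos(beta), -1j * np.sin(beta) / qj],
                                    [-1j * qj * np.sin(beta), np.cos(beta)]])
              q1, qN = q_of(n_layers[0]), q_of(n_layers[-1])
              num = q1 * (M[0, 0] + M[0, 1] * qN) - (M[1, 0] + M[1, 1] * qN)
              den = q1 * (M[0, 0] + M[0, 1] * qN) + (M[1, 0] + M[1, 1] * qN)
              reflectance.append(abs(num / den) ** 2)
          return np.array(reflectance)

      # Illustrative Kretschmann stack at 633 nm: BK7 prism / 50 nm gold / water
      angles = np.linspace(60.0, 85.0, 251)
      R = reflectance_p(633.0, angles, [1.515 + 0j, 0.18 + 3.4j, 1.33 + 0j], [50.0])
      print(f"resonance dip near {angles[np.argmin(R)]:.1f} degrees")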

  4. Diffuse photon density wave measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  5. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal–noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  6. In-flight and simulated aircraft fuel temperature measurements

    NASA Technical Reports Server (NTRS)

    Svehla, Roger A.

    1990-01-01

    Fuel tank measurements from ten flights of an L1011 commercial aircraft are reported for the first time. The flights were conducted from 1981 to 1983. A thermocouple rake was installed in an inboard wing tank and another in an outboard tank. During the test periods of either 2 or 5 hr, at altitudes of 10,700 m (35,000 ft) or higher, either the inboard or the outboard tank remained full. Fuel temperature profiles generally developed in the expected manner. The bulk fuel was mixed by natural convection to a nearly uniform temperature, especially in the outboard tank, and a gradient existed at the bottom conduction zone. The data indicated that when full, the upper surface of the inboard tank was wetted and the outboard tank was unwetted. Companion NASA Lewis Research Center tests were conducted in a 0.20 cubic meter (52 gal) tank simulator of the outboard tank, chilled on the top and bottom, and insulated on the sides. Even though the simulator tank had no internal components corresponding to the wing tank, temperatures agreed with the flight measurements for wetted upper surface conditions, but not for unwetted conditions. It was concluded that if boundary conditions are carefully controlled, simulators are a useful way of evaluating actual flight temperatures.

  7. Organ radiation exposure with EOS: GATE simulations versus TLD measurements

    NASA Astrophysics Data System (ADS)

    Clavel, A. H.; Thevenard-Berger, P.; Verdun, F. R.; Létang, J. M.; Darbon, A.

    2016-03-01

    EOS® is an innovative X-ray imaging system allowing the acquisition of two simultaneous images of a patient in the standing position during the vertical scan of two orthogonal fan beams. This study aimed to compute the organ radiation exposure of a patient in the particular geometry of this system. Two different positions of the patient in the machine were studied, corresponding to postero-anterior plus left lateral projections (PA-LLAT) and antero-posterior plus right lateral projections (AP-RLAT). To achieve this goal, a Monte-Carlo simulation was developed in a GATE environment. To model the physical properties of the patient, a computational phantom was produced based on computed tomography scan data of an anthropomorphic phantom. The simulations provided several organ doses, which were compared to previously published dose results measured with Thermo Luminescent Detectors (TLD) in the same conditions and with the same phantom. The simulation results showed good agreement with measured doses at the TLD locations for both AP-RLAT and PA-LLAT projections. This study also showed that assessing the organ dose from only a sample of locations, rather than considering the whole organ, introduced significant bias, depending on the organ and projection.

  8. Simulation and measurement of transcranial near infrared light penetration

    NASA Astrophysics Data System (ADS)

    Yue, Lan; Monge, Manuel; Ozgur, Mehmet H.; Murphy, Kevin; Louie, Stan; Miller, Carol A.; Emami, Azita; Humayun, Mark S.

    2015-03-01

    We are studying the transmission of LED array-emitted near-infrared (NIR) light through human tissues. Herein, we simulated and measured transcranial NIR penetration in highly scattering human head tissues. Using finite element analysis, we simulated photon diffusion in a multilayered 3D human head model that consists of scalp, skull, cerebral spinal fluid, gray matter and white matter. The optical properties of each layer, namely scattering and absorption coefficient, correspond to the 850 nm NIR light. The geometry of the model is minimally modified from the IEEE standard and the multiple LED emitters in an array were evenly distributed on the scalp. Our results show that photon distribution produced by the array exhibits little variation at similar brain depth, suggesting that due to strong scattering effects of the tissues, discrete spatial arrangements of LED emitters in an array has the potential to create a quasi-radially symmetrical illumination field. Measurements on cadaveric human head tissues excised from occipital, parietal, frontal and temporal regions show that illumination with an 850 nm LED emitter rendered a photon flux that closely follows simulation results. In addition, prolonged illumination of LED emitted NIR showed minimal thermal effects on the brain.

  9. Evaluation of Intersection Traffic Control Measures through Simulation

    NASA Astrophysics Data System (ADS)

    Asaithambi, Gowri; Sivanandan, R.

    2015-12-01

    Traffic flow is stochastic in nature due to randomness in variables such as vehicle arrivals and speeds. Because of this, and because of complex vehicular interactions and manoeuvres, it is extremely difficult to model traffic flow through analytical methods. To study this type of complex traffic system and its vehicle interactions, simulation is considered an effective tool. Applying homogeneous traffic models to heterogeneous traffic may not capture the complex manoeuvres and interactions in such flows. Hence, a microscopic simulation model for heterogeneous traffic was developed using object-oriented concepts. This simulation model acts as a tool for evaluating various control measures at signalized intersections. The present study focuses on the evaluation of a Right Turn Lane (RTL) and a Channelised Left Turn Lane (CLTL). A sensitivity analysis was performed to evaluate RTL and CLTL by varying the approach volumes, turn proportions and turn lane lengths. RTL is found to be advantageous only up to certain approach volumes and right-turn proportions, beyond which it is counter-productive. CLTL is found to be advantageous at lower approach volumes for all turn proportions, signifying its benefits, but counter-productive for higher approach volumes and lower turn proportions. This study pinpoints the break-even points for various scenarios. The developed simulation model can be used as an intersection lane control tool for enhancing the efficiency of flow at intersections. It can also be employed for scenario analysis and can be valuable to field traffic engineers in implementing vehicle-type-based and lane-based traffic control measures.

  10. Measurement and simulation of the RHIC abort kicker longitudinal impedance

    SciTech Connect

    Abreu, N.P.; Hahn, H.; Choi, E.

    2009-09-01

    In the face of new upgrades for RHIC, the longitudinal impedance of the machine plays an important role in setting the threshold for instabilities and the efficacy of some systems. In this paper we describe the measurement of the longitudinal impedance of the abort kicker for RHIC as well as computer simulations of the structure. The impedance measurement was done by the S21 wire method covering the frequency range from 9 kHz to 2.5 GHz. We observed a sharp resonance peak around 10 MHz and a broader peak around 20 MHz in both the real and imaginary parts of Z/n. These two peaks account for a maximum imaginary longitudinal impedance of j15 Ω, a value an order of magnitude larger than the estimated value of j0.2 Ω, which indicates that the kicker is one of the main sources of longitudinal impedance in the machine. A computer model was constructed for simulations in the CST MWS program. Results for the magnet input and also the beam impedance are compared to the measurements. A more detailed study of the system properties and possible changes to reduce the coupling impedance are presented.

  11. Numerical simulation in alternating current field measurement inducer design

    NASA Astrophysics Data System (ADS)

    Zhou, Zhixiong; Zheng, Wenpei

    2017-02-01

    The present work develops a numerical simulation model to evaluate the magnetic field perturbation of a twin coil alternating current field measurement (ACFM) inducer passing above a surface-breaking crack for the purpose of enhanced crack detection. Model predictions show good agreement with experimental data, verifying the accuracy of the model. The model includes the influence of various parameters, such as core dimensions and core positions on the perturbed magnetic field above a crack. Optimized design parameters for a twin coil inducer are given according to the analysis results, which provide for a greatly improved detection effect.

  12. Parametric analysis of open plan offices

    NASA Astrophysics Data System (ADS)

    Nogueira, Flavia F.; Viveiros, Elvira B.

    2002-11-01

    The workspace has been undergoing many changes. Open plan offices are being favored instead of ones of traditional design. In such offices, workstations are separated by partial height barriers, which allow a certain degree of visual privacy and some sound insulation. The challenge in these offices is to provide acoustic privacy for the workstations. Computer simulation was used as a tool for this investigation. Two simple models were generated and their results compared to experimental data measured in two real offices. After validating the approach, models with increasing complexity were generated. Lastly, an ideal office with 64 workstations was created and a parametric survey performed. Nine design parameters were taken as variables and the results are discussed in terms of sound pressure level, in octave bands, and intelligibility index.

  13. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for managing diabetes and preventing its complications. At present, clinical blood glucose measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the source-detector distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit the tissue. There is a problem, however: the epidermal layer contains no arteries, veins or capillary vessels, so photons that interact only with tissue in the epidermal layer carry no glucose information. A new criterion, named the effective path length in this paper, is proposed to determine the optimal distance. The path length of each photon travelling in the dermis is recorded while running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each exit point is calculated, and the detector should be placed at the point with the greatest total effective path length. This determines the optimal measuring distance between the transmitting fiber and the detector.
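
    A highly simplified sketch of the effective-path-length criterion described above (isotropic scattering, a single epidermis/dermis boundary, placeholder optical coefficients); this is not the study's Monte Carlo code.

      import numpy as np

      rng = np.random.default_rng(0)

      # Placeholder optical properties (1/mm) and epidermis thickness (mm); not the paper's values
      MU_S, MU_A, EPIDERMIS = 10.0, 0.1, 0.1
      N_PHOTONS, N_BINS, BIN_MM = 20000, 30, 0.1

      def launch_photon():
          """Isotropic-scattering random walk; returns (exit radius, path length in dermis) or None."""
          pos = np.zeros(3)
          direction = np.array([0.0, 0.0, 1.0])        # launched straight down (+z into the tissue)
          dermis_path = 0.0
          for _ in range(1000):
              step = -np.log(1.0 - rng.random()) / (MU_S + MU_A)
              new_pos = pos + step * direction
              z_lo, z_hi = sorted((pos[2], new_pos[2]))
              overlap = max(0.0, z_hi - max(z_lo, EPIDERMIS))   # z-extent of the step below the epidermis
              dermis_path += step * overlap / max(z_hi - z_lo, 1e-12)  # purely horizontal steps are ignored
              pos = new_pos
              if pos[2] < 0.0:                          # photon re-emerged at the surface
                  return np.hypot(pos[0], pos[1]), dermis_path
              if rng.random() < MU_A / (MU_S + MU_A):   # absorbed
                  return None
              direction = rng.normal(size=3)            # new isotropic direction
              direction /= np.linalg.norm(direction)
          return None

      effective = np.zeros(N_BINS)                      # summed dermis path length per exit-radius bin
      for _ in range(N_PHOTONS):
          result = launch_photon()
          if result is not None and result[0] < N_BINS * BIN_MM:
              effective[int(result[0] / BIN_MM)] += result[1]

      best_bin = int(np.argmax(effective))
      print(f"place the detector near r = {(best_bin + 0.5) * BIN_MM:.2f} mm from the source fibre")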

  14. Measurements and simulation on the comfort of forklifts

    NASA Astrophysics Data System (ADS)

    Verschoore, R.; Pieters, J. G.; Pollet, I. V.

    2003-09-01

    In order to determine the influence of some parameters of a forklift such as the road profile, the tyre characteristics, the riding comfort, etc., measurements carried out on a forklift with different tyres and seats were evaluated using different standards and methods. In addition, a simulation model was developed and used to investigate the influence of these parameters. Simulations and test run results showed good agreement. The comparison of the results obtained with several methods of comfort evaluation and a series of tests showed that they nearly all resulted in the same classification. However, the results obtained with different methods could not always be compared among themselves. Solid tyres were found to be more comfortable than pneumatic ones because of their high damping. The negative influence of higher stiffness was smaller than the positive influence of higher damping. The simulations pointed out that for a global general investigation about comfort, the influence of the horizontal tyre stiffness and damping can be neglected. Also the seat characteristics could be linearized. When the stability of the forklift has to be investigated, the horizontal forces must also be considered.

  15. Preliminary Effects of Real-World Factors on the Recovery and Exploitation of Forensic Impurity Profiles of a Nerve-Agent Simulant from Office Media

    SciTech Connect

    Fraga, Carlos G.; Sego, Landon H.; Hoggard, Jamin C.; Perez Acosta, Gabriel A.; Viglino, Emilie A.; Wahl, Jon H.; Synovec, Robert E.

    2012-12-28

    Dimethyl methylphosphonate (DMMP) was used as a chemical threat agent (CTA) simulant for a first look at the effects of real-world factors on the recovery and exploitation of a CTA’s impurity profile for source matching. Four stocks of DMMP having different impurity profiles were disseminated as aerosols onto cotton, painted wall board, and nylon coupons according to a thorough experimental design. The DMMP-exposed coupons were then solvent extracted and analyzed for DMMP impurities by comprehensive 2-D gas chromatography/mass spectrometry (GC×GC/MS). The similarities between the coupon DMMP impurity profiles and the known (reference) DMMP profiles were measured by dot products of the coupon profiles and known profiles and by score values obtained from principal component analysis. One stock, with a high impurity-profile selectivity value of 0.9 out of 1, had 100% of its respective coupons correctly classified and no false positives from other coupons. Coupons from the other three stocks with low selectivity values (0.0073, 0.012, and 0.018) could not be sufficiently distinguished from one another for reliable matching to their respective stocks. The results from this work support that: (1) extraction solvents, if not appropriately selected, can have some of the same impurities present in a CTA reducing a CTA’s useable impurity profile, (2) low selectivity among a CTA’s known impurity profiles will likely make definitive source matching impossible in some real-world conditions, (3) no detrimental chemical-matrix interference was encountered during the analysis of actual office media, (4) a short elapsed time between release and sample storage is advantageous for the recovery of the impurity profile because it minimizes volatilization of forensic impurities, and (5) forensic impurity profiles weighted towards higher volatility impurities are more likely to be altered by volatilization following CTA exposure.
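
    A minimal sketch of the normalised dot-product (cosine) similarity used for profile matching, with made-up peak-area vectors; the PCA scoring step is omitted.

      import numpy as np

      def profile_similarity(coupon, reference):
          """Normalised dot product (cosine similarity) between two impurity-peak-area profiles.

          1.0 means identical relative composition, 0.0 means no shared impurities.
          """
          coupon = np.asarray(coupon, dtype=float)
          reference = np.asarray(reference, dtype=float)
          return float(coupon @ reference / (np.linalg.norm(coupon) * np.linalg.norm(reference)))

      # Made-up peak-area vectors for one coupon and two candidate stocks
      coupon_profile = [0.0, 3.1, 0.4, 7.9, 1.2]
      references = {"stock A": [0.0, 3.0, 0.5, 8.0, 1.0], "stock B": [5.0, 0.1, 2.0, 0.2, 0.0]}
      scores = {name: profile_similarity(coupon_profile, ref) for name, ref in references.items()}
      print(max(scores, key=scores.get), scores)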

  16. Preliminary effects of real-world factors on the recovery and exploitation of forensic impurity profiles of a nerve-agent simulant from office media.

    PubMed

    Fraga, Carlos G; Sego, Landon H; Hoggard, Jamin C; Acosta, Gabriel A Pérez; Viglino, Emilie A; Wahl, Jon H; Synovec, Robert E

    2012-12-28

    Dimethyl methylphosphonate (DMMP) was used as a chemical threat agent (CTA) simulant for a first look at the effects of real-world factors on the recovery and exploitation of a CTA's impurity profile for source matching. Four stocks of DMMP having different impurity profiles were disseminated as aerosols onto cotton, painted wall board, and nylon coupons according to a thorough experimental design. The DMMP-exposed coupons were then solvent extracted and analyzed for DMMP impurities by comprehensive 2D gas chromatography/mass spectrometry (GC×GC/MS). The similarities between the coupon DMMP impurity profiles and the known (reference) DMMP profiles were measured by dot products of the coupon profiles and known profiles and by score values obtained from principal component analysis. One stock, with a high impurity-profile selectivity value of 0.9 out of 1, had 100% of its respective coupons correctly classified and no false positives from other coupons. Coupons from the other three stocks with low selectivity values (0.0073, 0.012, and 0.018) could not be sufficiently distinguished from one another for reliable matching to their respective stocks. The results from this work support that: (1) extraction solvents, if not appropriately selected, can have some of the same impurities present in a CTA reducing a CTA's useable impurity profile, (2) low selectivity among a CTA's known impurity profiles will likely make definitive source matching impossible in some real-world conditions, (3) no detrimental chemical-matrix interference was encountered during the analysis of actual office media, (4) a short elapsed time between release and sample storage is advantageous for the recovery of the impurity profile because it minimizes volatilization of forensic impurities, and (5) forensic impurity profiles weighted toward higher volatility impurities are more likely to be altered by volatilization following CTA exposure.

  17. Measurements of contrast sensitivity by an adaptive optics visual simulator

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tatsuo; Ucikawa, Keiji

    2015-08-01

    We developed an adaptive optics visual simulator (AOVS) to study the relationship between the contrast sensitivity and higher-order wavefront aberrations of human eyes. A desired synthetic aberration was virtually generated on a subject eye by the AOVS, and red laser light was used to measure the aberrations. The contrast sensitivity was measured in a psychophysical experiment using visual stimulus patterns provided by a large-contrast-range imaging system, which included two liquid crystal displays illuminated by red light emitting diodes from the backside. The diameter of the pupil was set to 4 mm by an artificial aperture, and the retinal illuminance of the stimulus image was controlled to 10 Td. Experiments conducted with four normal subjects revealed that their contrast sensitivity to a high-spatial-frequency vertical sinusoidal grating pattern was lower in the presence of a horizontal coma aberration than in the presence of a vertical coma or no aberrations ( p < 0.02, Nagai method).

  18. Hanford Sludge Simulant Selection for Soil Mechanics Property Measurement

    SciTech Connect

    Wells, Beric E.; Russell, Renee L.; Mahoney, Lenna A.; Brown, Garrett N.; Rinehart, Donald E.; Buchmiller, William C.; Golovich, Elizabeth C.; Crum, Jarrod V.

    2010-03-23

    The current System Plan for the Hanford Tank Farms uses relaxed buoyant displacement gas release event (BDGRE) controls for deep sludge (i.e., high level waste [HLW]) tanks, which allows the tank farms to use more storage space, i.e., increase the sediment depth, in some of the double-shell tanks (DSTs). The relaxed BDGRE controls are based on preliminary analysis of a gas release model from van Kessel and van Kesteren. Application of the van Kessel and van Kesteren model requires parametric information for the sediment, including the lateral earth pressure at rest and shear modulus. No lateral earth pressure at rest and shear modulus in situ measurements for Hanford sludge are currently available. The two chemical sludge simulants will be used in follow-on work to experimentally measure the van Kessel and van Kesteren model parameters, lateral earth pressure at rest, and shear modulus.

  19. Upper trapezius muscle activity in healthy office workers: reliability and sensitivity of occupational exposure measures to differences in sex and hand dominance.

    PubMed

    Marker, Ryan J; Balter, Jaclyn E; Nofsinger, Micaela L; Anton, Dan; Fethke, Nathan B; Maluf, Katrina S

    2016-09-01

    Patterns of cervical muscle activity may contribute to overuse injuries in office workers. The purpose of this investigation was to characterise patterns of upper trapezius muscle activity in pain-free office workers using traditional occupational exposure measures and a modified Active Amplitude Probability Distribution Function (APDF), which considers only periods of active muscle contraction. Bilateral trapezius muscle activity was recorded in 77 pain-free office workers for 1-2 full days in their natural work environment. Mean amplitude, gap frequency, muscular rest and Traditional and Active APDF amplitudes were calculated. All measures demonstrated fair to substantial reliability. Dominant muscles demonstrated higher amplitudes of activity and less muscular rest compared to non-dominant, and women demonstrated less muscular rest with no significant difference in amplitude assessed by Active APDF compared to men. These findings provide normative data to identify atypical motor patterns that may contribute to persistence or recurrence of neck pain in office workers. Practitioner Summary: Upper trapezius muscle activity was characterised in a large cohort of pain-free workers using electromyographic recordings from office environments. Dominant muscles demonstrated higher activity and less rest than non-dominant, and women demonstrated less rest than men. Results may be used to identify atypical trapezius muscle activity in office workers.
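
    For readers unfamiliar with the APDF summary used above, the sketch below computes percentile summaries of a synthetic EMG amplitude envelope, with an "active-only" variant that drops samples below a rest threshold as a stand-in for the modified Active APDF. The threshold, percentiles and synthetic signal are assumptions for illustration, not the study's definitions.

        import numpy as np

        def apdf(emg_amplitude, percentiles=(10, 50, 90), active_only=False, rest_threshold=0.5):
            """Amplitude Probability Distribution Function summary of an EMG envelope.

            emg_amplitude: amplitude samples (e.g., %MVC; illustrative units).
            active_only:   if True, drop samples below a rest threshold before
                           computing percentiles (stand-in for an 'Active APDF').
            """
            x = np.asarray(emg_amplitude, dtype=float)
            if active_only:
                x = x[x >= rest_threshold]
            return dict(zip(percentiles, np.percentile(x, percentiles)))

        # Synthetic work-day envelope: mostly low-level activity with occasional bursts.
        rng = np.random.default_rng(0)
        envelope = rng.gamma(shape=1.5, scale=2.0, size=10_000)

        print("Traditional APDF:", apdf(envelope))
        print("Active APDF:     ", apdf(envelope, active_only=True))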

  20. Quantitative Morphology Measures in Galaxies: Ground-Truthing from Simulations

    NASA Astrophysics Data System (ADS)

    Narayanan, Desika T.; Abruzzo, Matthew W.; Dave, Romeel; Thompson, Robert

    2017-01-01

    How galaxies assemble is a central question in astronomy; a variety of potentially important processes contribute, including baryonic accretion from the intergalactic medium as well as major galaxy mergers. Recent years have ushered in the development of quantitative measures of morphology such as the Gini coefficient (G), the second-order moment of the brightest quintile of a galaxy's light (M20), and the concentration (C), asymmetry (A), and clumpiness (S) of galaxies. To investigate the efficacy of these observational methods at identifying major mergers, we have run a series of very high resolution cosmological zoom simulations and coupled these with 3D Monte Carlo dust radiative transfer. Our methodology is powerful in that it allows us to "observe" the simulation as an observer would, while maintaining detailed knowledge of the true merger history of the galaxy. In this presentation, we will present our main results from our analysis of these quantitative morphology measures, with a particular focus on high-redshift (z>2) systems.
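
    As an illustration of one of the morphology statistics named above, the sketch below computes the Gini coefficient of an image's pixel fluxes in the usual sorted-flux form. The toy images are synthetic, and in practice the statistic is evaluated only over pixels inside a segmentation map.

        import numpy as np

        def gini(pixel_fluxes):
            """Gini coefficient of a pixel-flux distribution (0 = uniform light,
            1 = all flux in a single pixel), using the sorted-flux formula."""
            f = np.sort(np.abs(np.asarray(pixel_fluxes, dtype=float).ravel()))
            n = f.size
            i = np.arange(1, n + 1)
            return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

        # Toy example: a smooth, nearly uniform light profile vs. a concentrated source.
        rng = np.random.default_rng(1)
        smooth = rng.uniform(0.8, 1.2, size=(64, 64))
        concentrated = np.zeros((64, 64))
        concentrated[32, 32] = 1.0

        print("Gini (smooth):      ", round(gini(smooth), 3))
        print("Gini (concentrated):", round(gini(concentrated), 3))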

  1. Cosmic-ray neutron simulations and measurements in Taiwan.

    PubMed

    Chen, Wei-Lin; Jiang, Shiang-Huei; Sheu, Rong-Jiun

    2014-10-01

    This study used simulations of galactic cosmic rays in the atmosphere to investigate the neutron background environment in Taiwan, emphasising its altitude dependence and spectrum variation near interfaces. The calculated results were analysed and compared with two measurements. The first measurement was a mobile neutron survey from sea level up to 3275 m in altitude conducted using a car-mounted high-sensitivity neutron detector. The second was a previously measured result focusing on the changes in neutron spectra near air/ground and air/water interfaces. The attenuation length of cosmic-ray neutrons in the lower atmosphere was estimated to be 163 g cm⁻² in Taiwan. Cosmic-ray neutron spectra vary with altitude and especially near interfaces. The determined spectra near the air/ground and air/water interfaces agree well with measurements for neutrons below 10 MeV. However, the high-energy portion of the spectra was observed to be much higher than our previous estimation. Because high-energy neutrons contribute substantially to a dose evaluation, revising the annual sea-level effective dose from cosmic-ray neutrons at ground level in Taiwan to 35 μSv, which corresponds to a neutron flux of 5.30 × 10⁻³ n cm⁻² s⁻¹, was suggested.
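
    To make the quoted attenuation length concrete, the sketch below applies the simple exponential depth dependence N(x) ≈ N0 exp(-x/L) with L = 163 g cm⁻²; the atmospheric depths used for sea level and for roughly 3275 m are rough illustrative values, not measurements from the survey.

        import math

        # Cosmic-ray neutron flux in the lower atmosphere roughly follows
        # N(x) = N0 * exp(-x / L), with x the atmospheric depth in g cm^-2 and
        # L ~= 163 g cm^-2 (value quoted in the abstract).
        L = 163.0                  # attenuation length, g cm^-2
        sea_level_depth = 1033.0   # nominal atmospheric depth at sea level, g cm^-2 (assumption)
        depth_3275m = 690.0        # rough atmospheric depth near 3275 m, g cm^-2 (assumption)

        ratio = math.exp((sea_level_depth - depth_3275m) / L)
        print(f"Estimated neutron flux at ~3275 m relative to sea level: {ratio:.1f}x")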

  2. Photolysis frequency measurements in a sunlit simulation chamber

    NASA Astrophysics Data System (ADS)

    Bohn, B.; Rohrer, F.; Brauers, T.; Wahner, A.

    2003-04-01

    The simulation chamber SAPHIR at Forschungszentrum Jülich provides a unique tool to investigate atmospheric photochemistry under realistic ambient conditions. However, while transport processes and chemical composition are controlled more easily than in field measurements, the radiation field within the chamber is more complex. Construction elements produce shaded areas, while the Teflon walls and the chamber floor scatter and reflect light. On the other hand, actinic flux or photolysis frequency measurements with a spectral radiometer or filter radiometers can only be made at selected points, where the measured quantities are not representative of the chamber as a whole. In this work we describe a method to derive mean photolysis frequencies for SAPHIR based on solar actinic flux measurements outside the chamber. The calculation is based on a distinction between direct and diffuse solar radiation, a numerical model describing the illumination, and calibrations using the whole chamber as a chemical actinometer by observing the photochemical NO2-NO-O3 equilibrium under various external conditions.
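
    The chemical-actinometry calibration mentioned above relies on the NO2-NO-O3 photostationary state, in which j(NO2) ≈ k(NO+O3)[NO][O3]/[NO2]. The sketch below evaluates that relation with a textbook Arrhenius rate expression and illustrative mixing ratios; none of the numbers are SAPHIR calibration data.

        import math

        def k_no_o3(temp_k):
            """NO + O3 -> NO2 + O2 rate constant (cm^3 molecule^-1 s^-1), Arrhenius form."""
            return 3.0e-12 * math.exp(-1500.0 / temp_k)

        def j_no2_from_pss(no, o3, no2, temp_k=298.0):
            """Photolysis frequency (s^-1) inferred from photostationary-state concentrations."""
            return k_no_o3(temp_k) * no * o3 / no2

        ppb = 2.46e10  # molecules cm^-3 per ppb at ~298 K and 1 atm (approximate)
        j = j_no2_from_pss(no=1.0 * ppb, o3=40.0 * ppb, no2=5.0 * ppb)
        print(f"Inferred j(NO2) ~ {j:.2e} s^-1")  # order of a few 1e-3 s^-1 in daylight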

  3. Measurement and simulation of the TRR BNCT beam parameters

    NASA Astrophysics Data System (ADS)

    Bavarnegin, Elham; Sadremomtaz, Alireza; Khalafi, Hossein; Kasesaz, Yaser; Golshanian, Mohadeseh; Ghods, Hossein; Ezzati, Arsalan; Keyvani, Mehdi; Haddadi, Mohammad

    2016-09-01

    Recently, the configuration of the Tehran Research Reactor (TRR) thermal column has been modified and a suitable thermal neutron beam for preclinical Boron Neutron Capture Therapy (BNCT) has been obtained. In this study, simulations and experimental measurements have been carried out to identify the BNCT beam parameters, including the beam uniformity, the distributions of the thermal neutron dose, boron dose and gamma dose in a phantom, and the Therapeutic Gain (TG). To do this, the entire TRR structure, including the reactor core, pool, thermal column and beam tubes, has been modeled using the MCNPX Monte Carlo code. To measure the in-phantom dose distribution, a special head phantom was constructed, and foil activation techniques and TLD700 dosimeters were used. The results show that the TRR thermal BNCT beam is sufficiently uniform. The TG parameter has a maximum value of 5.7 at a depth of 1 cm from the surface of the phantom, confirming that the TRR thermal neutron beam has potential for use in the treatment of superficial brain tumors. For a clinical trial, further modifications are needed at the reactor, such as the design and construction of a treatment room at the beam exit, which is planned for the future. To date, the beam is usable for biological studies and animal trials. There is relatively good agreement between simulation and measurement, especially within a diameter of 10 cm, which is the dimension of typical BNCT beam ports. This agreement enables a more precise prediction of the irradiation conditions needed for future experiments.

  4. Simulation of n-qubit quantum systems. V. Quantum measurements

    NASA Astrophysics Data System (ADS)

    Radtke, T.; Fritzsche, S.

    2010-02-01

    The FEYNMAN program has been developed during the last years to support case studies on the dynamics and entanglement of n-qubit quantum registers. Apart from basic transformations and (gate) operations, it currently supports a good number of separability criteria and entanglement measures, quantum channels, as well as the parametrizations of various frequently applied objects in quantum information theory, such as (pure and mixed) quantum states, hermitian and unitary matrices, or classical probability distributions. With the present update of the FEYNMAN program, we provide simple access to (the simulation of) quantum measurements. This includes not only the widely applied projective measurements upon the eigenspaces of some given operator but also single-qubit measurements in various pre- and user-defined bases as well as support for two-qubit Bell measurements. In addition, generalized and POVM measurements are supported. Knowing the importance of measurements for many quantum information protocols, e.g., one-way computing, we hope that this update makes the FEYNMAN code an attractive and versatile tool for both research and education. New version program summary: Program title: FEYNMAN; Catalogue identifier: ADWE_v5_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v5_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 27 210; No. of bytes in distributed program, including test data, etc.: 1 960 471; Distribution format: tar.gz; Programming language: Maple 12; Computer: Any computer with Maple software installed; Operating system: Any system that supports Maple; the program has been tested under Microsoft Windows XP and Linux; Classification: 4.15; Catalogue identifier of previous version: ADWE_v4_0; Journal reference of previous version: Comput. Phys. Commun
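
    As a minimal illustration of what a projective measurement involves (in NumPy, not the Maple-based FEYNMAN code), the sketch below projects one qubit of a two-qubit pure state onto the computational basis and reports the outcome probabilities and post-measurement states.

        import numpy as np

        def measure_qubit(state, qubit, n_qubits=2):
            """Projective Z-measurement of `qubit` in an n-qubit pure state vector."""
            probs, post_states = [], []
            for outcome in (0, 1):
                # Build the projector |outcome><outcome| acting on the chosen qubit.
                p_single = np.zeros((2, 2))
                p_single[outcome, outcome] = 1.0
                ops = [p_single if q == qubit else np.eye(2) for q in range(n_qubits)]
                projector = ops[0]
                for op in ops[1:]:
                    projector = np.kron(projector, op)
                projected = projector @ state
                p = float(np.vdot(projected, projected).real)
                probs.append(p)
                post_states.append(projected / np.sqrt(p) if p > 0 else projected)
            return probs, post_states

        bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
        probs, _ = measure_qubit(bell, qubit=0)
        print("P(0), P(1) =", probs)  # 0.5, 0.5 for a Bell state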

  5. Measuring Impact of U.S. DOE Geothermal Technologies Office Funding: Considerations for Development of a Geothermal Resource Reporting Metric

    SciTech Connect

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.; Bennett, Mitchell; Segneri, Brittany

    2015-04-25

    This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by the progression of projects or the geological potential for development.

  6. Measurements and simulations of focused beam for orthovoltage therapy

    SciTech Connect

    Abbas, Hassan; Mahato, Dip N.; Satti, Jahangir; MacDonald, C. A.

    2014-04-15

    Purpose: Megavoltage photon beams are typically used for therapy because of their skin-sparing effect. However, a focused low-energy x-ray beam would also be skin sparing, and would have a higher dose concentration at the focal spot. Such a beam can be produced with polycapillary optics. MCNP5 was used to model dose profiles for a scanned focused beam, using measured beam parameters. The potential of low energy focused x-ray beams for radiation therapy was assessed. Methods: A polycapillary optic was used to focus the x-ray beam from a tungsten source. The optic was characterized and measurements were performed at 50 kV. PMMA blocks of varying thicknesses were placed between optic and the focal spot to observe any variation in the focusing of the beam after passing through the tissue-equivalent material. The measured energy spectrum was used to model the focused beam in MCNP5. A source card (SDEF) in MCNP5 was used to simulate the converging x-ray beam. Dose calculations were performed inside a breast tissue phantom. Results: The measured focal spot size for the polycapillary optic was 0.2 mm with a depth of field of 5 mm. The measured focal spot remained unchanged through 40 mm of phantom thickness. The calculated depth dose curve inside the breast tissue showed a dose peak several centimeters below the skin with a sharp dose fall off around the focus. The percent dose falls below 10% within 5 mm of the focus. It was shown that rotating the optic during scanning would preserve the skin-sparing effect of the focused beam. Conclusions: Low energy focused x-ray beams could be used to irradiate tumors inside soft tissue within 5 cm of the surface.

  7. Heat transfer measurements and CFD simulations of an impinging jet

    NASA Astrophysics Data System (ADS)

    Petera, Karel; Dostál, Martin

    2016-03-01

    Heat transport in impinging jets is the subject of many experimental and numerical studies because a pure impinging jet shares similarities with industrial processes such as heat transfer at the bottom of an agitated vessel. In this paper, experimental results based on measuring the response to heat flux oscillations applied to the heat transfer surface are compared with CFD simulations. Because the computational cost of an LES-based approach is usually too high, a comparison with less computationally expensive RANS-based turbulence models is made, and a possible improvement from implementing an anisotropic explicit algebraic model for the turbulent heat flux is evaluated.

  8. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    PubMed

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.
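
    As a cross-check of the quantity the Swap estimator targets, the second Renyi entropy of a subregion can also be computed directly from the reduced density matrix of a small chain, since S2(A) = -ln Tr(ρ_A²) = -ln⟨Swap_A⟩. The sketch below uses exact diagonalization of a 4-site Heisenberg chain in NumPy; it illustrates the measured quantity and is not the valence-bond quantum Monte Carlo procedure itself.

        import numpy as np

        sx = np.array([[0, 1], [1, 0]]) / 2
        sy = np.array([[0, -1j], [1j, 0]]) / 2
        sz = np.array([[1, 0], [0, -1]]) / 2

        def bond_term(op, i, n):
            """S_i . S_{i+1} contribution (one spin component) embedded in an n-site chain."""
            mats = [np.eye(2)] * n
            mats[i], mats[i + 1] = op, op
            out = mats[0]
            for m in mats[1:]:
                out = np.kron(out, m)
            return out

        n = 4
        H = sum(bond_term(op, i, n) for i in range(n - 1) for op in (sx, sy, sz))
        w, v = np.linalg.eigh(H)
        ground = v[:, 0]

        # Reduced density matrix of the first two sites, then the second Renyi entropy.
        psi = ground.reshape(4, 4)          # rows: subregion A, columns: rest of chain
        rho_A = psi @ psi.conj().T
        S2 = -np.log(np.trace(rho_A @ rho_A).real)
        print("Renyi S2 of the half chain:", round(S2, 4))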

  9. Measures Of Diffusion Regions Applied To PIC Reconnection Simulations

    NASA Astrophysics Data System (ADS)

    Goldman, M. V.; Newman, D. L.; Lapenta, G.

    2015-12-01

    The primary goal of the current NASA-MMS mission is to "identify and study diffusion regions during magnetic reconnection in Earth's magnetopause and magnetotail." Yet the term "diffusion region" is often misunderstood and can be ambiguous. Different conditions for a region to be a "diffusion region" are interpreted theoretically, related to each other, and applied to PIC simulations of tail reconnection(a) (and to MMS measurements, if possible, at the time of AGU). None of the conditions is both necessary and sufficient for topological reconnection to occur. During magnetic reconnection in a kinetic plasma, key differences exist between the locations of diffusion regions in the electron fluid, the ion fluid and a single (MHD) fluid. (a) M.V. Goldman, D.L. Newman and G. Lapenta, Space Science Reviews, 2015

  10. Aerodynamic measurements on a finite wing with simulated ice

    NASA Technical Reports Server (NTRS)

    Bragg, M. B.; Khodadoust, A.; Soltani, R.; Wells, S.; Kerho, M.

    1991-01-01

    The effect of a simulated glaze ice accretion on the aerodynamic performance of a three-dimensional straight and swept wing is studied experimentally. A semispan wing of effective aspect ratio five was mounted from the sidewall of the UIUC subsonic wind tunnel. The model uses an NACA 0012 airfoil section on a rectangular planform with interchangeable tip and root sections to allow for 0- and 30-deg sweep. A sidewall suction system is used to minimize the tunnel boundary-layer interaction with the model. A three-component sidewall balance has been designed, built and used to measure lift, drag and pitching moment on the clean and iced model. Fluorescent oil flow visualization has been performed on the iced model and reveals extensive spanwise flow in the separation bubble aft of the upper surface horn. These results are compared to computational results for the surface pressures, span loads and surface oil flow.

  11. Observing System Simulations for ASCENDS: Synthesizing Science Measurement Requirements (Invited)

    NASA Astrophysics Data System (ADS)

    Kawa, S. R.; Baker, D. F.; Schuh, A. E.; Crowell, S.; Rayner, P. J.; Hammerling, D.; Michalak, A. M.; Wang, J. S.; Eluszkiewicz, J.; Ott, L.; Zaccheo, T.; Abshire, J. B.; Browell, E. V.; Moore, B.; Crisp, D.

    2013-12-01

    The measurement of atmospheric CO2 from space using active (lidar) sensing techniques has several potentially significant advantages in comparison to current and planned passive CO2 instruments. Application of this new technology aims to advance CO2 measurement capability and carbon cycle science into the next decade. The NASA Active Sensing of Carbon Emissions, Nights, Days, and Seasons (ASCENDS) mission has been recommended by the US National Academy of Sciences Decadal Survey for the next generation of space-based CO2 observing systems. ASCENDS is currently planned for launch in 2022. Several possible lidar instrument approaches have been demonstrated in airborne campaigns and the results indicate that such sensors are quite feasible. Studies are now underway to evaluate performance requirements for space mission implementation. Satellite CO2 observations must be highly precise and unbiased in order to accurately infer global carbon source/sink fluxes. Measurement demands are likely to further increase in the wake of GOSAT, OCO-2, and enhanced ground-based in situ and remote sensing CO2 data. The objective of our work is to quantitatively and consistently evaluate the measurement capabilities and requirements for ASCENDS in the context of advancing our knowledge of carbon flux distributions and their dependence on underlying physical processes. Considerations include requirements for precision, relative accuracy, spatial/temporal coverage and resolution, vertical information content, interferences, and possibly the tradeoffs among these parameters, while at the same time framing a mission that can be implemented within a constrained budget. Here, we attempt to synthesize the results of observing system simulation studies, commissioned by the ASCENDS Science Requirements Definition Team, into a coherent set of mission performance guidelines. A variety of forward and inverse model frameworks are employed to reduce the potential dependence of the results on model

  12. Simulations & Measurements of Airframe Noise: A BANC Workshops Perspective

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Lockard, David

    2016-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate computational fluid dynamics, computational aeroacoustics, and in-depth measurements targeting a selected set of canonical yet realistic configurations that advance the current state of the art in multiple respects. Unique features of the BANC workshops include: an intrinsically multi-disciplinary focus involving both fluid dynamics and aeroacoustics; a holistic rather than predictive emphasis; the concurrent, long-term evolution of experiments and simulations with a powerful interplay between the two; and a strongly integrative nature by virtue of multi-team, multi-facility, multiple-entry measurements. This paper illustrates these features in the context of the BANC problem categories and outlines some of the challenges involved and how they were addressed. A brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far, is also included.

  13. Simulated O VI Doppler dimming measurements of coronal outflow velocities

    NASA Technical Reports Server (NTRS)

    Strachan, Leonard; Gardner, L. D.; Kohl, John L.

    1992-01-01

    The possibility of determining O(5+) outflow velocities by using a Doppler dimming analysis of the resonantly scattered intensities of O VI λ1031.9 and λ1037.6 is addressed. The technique is sensitive to outflow velocities W in the range 30 km/s < W < 250 km/s and can be used for probing regions of the inner solar corona, where significant coronal heating and solar wind acceleration may be occurring. These velocity measurements, when combined with measurements of other plasma parameters (temperatures and densities of ions and electrons), can be used to estimate the energy and mass flux of O(5+). In particular, it may be possible to locate where the flow changes from subsonic to supersonic and to identify source regions for the high and low speed solar wind. The velocity diagnostic technique is discussed with emphasis placed on the requirements needed for accurate outflow velocity determinations. Model determinations of outflow velocities based on simulated Doppler observations are presented.

  14. Electrophysiological measurement of interest during walking in a simulated environment.

    PubMed

    Takeda, Yuji; Okuma, Takashi; Kimura, Motohiro; Kurata, Takeshi; Takenaka, Takeshi; Iwaki, Sunao

    2014-09-01

    A reliable neuroscientific technique for objectively estimating the degree of interest in a real environment is currently required in the research fields of neuroergonomics and neuroeconomics. Toward the development of such a technique, the present study explored electrophysiological measures that reflect an observer's interest in a nearly-real visual environment. Participants were asked to walk through a simulated shopping mall and the attractiveness of the shopping mall was manipulated by opening and closing the shutters of stores. During the walking task, participants were exposed to task-irrelevant auditory probes (two-stimulus oddball sequence). The results showed a smaller P2/early P3a component of task-irrelevant auditory event-related potentials and a larger lambda response of eye-fixation-related potentials in an interesting environment (i.e., open-shutter condition) than in a boring environment (i.e., closed-shutter condition); these findings can be reasonably explained by supposing that participants allocated more attentional resources to visual information in an interesting environment than in a boring environment, and thus residual attentional resources that could be allocated to task-irrelevant auditory probes were reduced. The P2/early P3a component and the lambda response may be useful measures of interest in a real visual environment.

  15. Development of a model for the simulation of Farinograph measurements

    NASA Astrophysics Data System (ADS)

    Hermannseder, Bernhard; Ahmad, Muhammad Haseeb; Kügler, Philip; Hitzmann, Bernd

    2016-10-01

    Based upon kneading curves obtained from eight wheat flours of different cultivars, a mathematical model was developed to simulate the middle curve of Farinograph measurements. The model considers the different states of the protein polymers during the kneading process. Altogether, five different states of protein polymer fractions are presumed: 1. non-hydrated, 2. unstretched, 3. stretched, 4. intermediate, and 5. broken protein polymer fractions, which are represented by their corresponding state variables. The model consists of five connected ordinary differential equations with first- and second-order kinetics, which describe the dynamic behavior of the state variables using four kinetic parameters. Four state variables are used in a weighted sum (four parameters) to calculate the Farinograph middle curve. Using the dynamic process model, the eight different Farinograph measurements are fitted individually by eight parameters in total. The system of differential equations was solved with an implicit finite difference method. Each step of the fit was done by a Quasi-Newton method. The overall fits were very good, with an average R² of 0.996 ± 0.003 and an average sum of squared errors of 5,000 ± 3,000 BU².
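
    The sketch below shows the general shape of such a state model: five protein-polymer fractions coupled by first- and second-order kinetics, with the Farinograph middle curve taken as a weighted sum of four state variables. It uses an off-the-shelf explicit integrator rather than the implicit finite-difference scheme of the paper, and the rate constants, weights and kinetic assignments are illustrative placeholders rather than the fitted values.

        import numpy as np
        from scipy.integrate import solve_ivp

        def kneading_rhs(t, y, k1, k2, k3, k4):
            # y = [non-hydrated, unstretched, stretched, intermediate, broken] fractions
            n, u, s, i, b = y
            return [-k1 * n,                # hydration (first order)
                    k1 * n - k2 * u,        # stretching (first order)
                    k2 * u - k3 * s**2,     # breakdown to intermediate (second order)
                    k3 * s**2 - k4 * i,     # intermediate to broken (first order)
                    k4 * i]

        def middle_curve(t, params):
            params = np.asarray(params, dtype=float)
            k, w = params[:4], params[4:8]
            sol = solve_ivp(kneading_rhs, (t[0], t[-1]), [1, 0, 0, 0, 0],
                            t_eval=t, args=tuple(k))
            # Weighted sum of four state variables approximates the Farinograph trace (BU).
            return w @ sol.y[1:5]

        t = np.linspace(0, 20, 200)  # kneading time (min)
        trace = middle_curve(t, [0.8, 0.5, 0.3, 0.1, 300, 500, 200, 50])
        print(np.round(trace[:5], 1))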

  16. OPERATION SUN BEAM, SHOT SMALL BOY. Project Officer’s Report - Project 7.1.4. Transient Radiation Effects Measurements on Guidance System Circuits

    DTIC Science & Technology

    1985-09-01

    This is an extract of POR-2239 (WT-2239), Operation SUN BEAM, Shot Small Boy, Project Officer's Report, Project 7.1.4: Transient Radiation Effects Measurements on Guidance System Circuits. Approved for public release; distribution is unlimited.

  17. Measuring Virtual Simulations Value in Training Exercises - USMC Use Case

    DTIC Science & Technology

    2015-12-04

    This report addresses the integration of previously stove-piped virtual and constructive Training Aids, Devices, Simulators and Simulations (TADSS) at I MEF in support of the I MEF/First Marine Expeditionary Brigade (1st MEB) Large Scale Exercise, including exercise coordination and information flow considerations intended to aid the refinement and stability of virtual integration.

  18. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurements of precipitation are essential for deriving correct trends. The problem is that precipitation measurements are affected by systematic errors that lead to an underestimation of actual precipitation; these errors vary by precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: use of the results and conclusions of international precipitation measurement intercomparisons, or building standard reference gauges (DFIR, pit gauge) and carrying out one's own investigation. In 1999 the Hungarian Meteorological Service attempted its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor: several winters passed without a significant amount of snow, while the condition of the DFIR continuously deteriorated. This problem motivated a new approach, namely modelling carried out by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a featured fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: How do the different types of gauges deform the airflow around themselves? Can a quantitative estimate of the wind-induced error be given? How does the use

  19. Measuring aniseikonia using scattering filters to simulate cataract

    NASA Astrophysics Data System (ADS)

    Wilson, Jason

    2011-12-01

    The relationship between anisometropia and aniseikonia (ANK) is not well understood. Ametropic cataract patients provide a unique opportunity to study this relationship after undergoing emmetropizing lens extraction. Because light scatter may affect ANK measurement in cataract patients, its effect should also be evaluated. The Basic Aniseikonia Test (BAT) was evaluated using afocal size lenses to produce specific changes in retinal height. Several light scattering devices were then evaluated to determine which produced effects most similar to cataract. Contrast sensitivity and visual acuity (VA) losses were measured with each device and compared to those reported in cataract. After determining the most appropriate light scattering device, twenty healthy patients with normal visual function were recruited to perform the BAT using the filters to simulate cataract. Cataract patients were recruited from Vision America and the University of Alabama at Birmingham School of Optometry. Patients between 20 and 75 years of age with at least 20/80 VA in each eye, ≥ 2D ametropia, and normal binocular function were recruited. Stereopsis and ANK were tested and each patient completed a symptom questionnaire. ANK measurements using afocal size lenses indicated that the BAT underestimates ANK, although the effect was minimal for vertical targets and darkened surroundings, as previously reported. Based on VA and contrast sensitivity loss, Vistech scattering filters produced changes most similar to cataract. Results of the BAT using Vistech filters demonstrated that a moderate cataract but not a mild cataract may affect the ANK measurement. ANK measurements on cataract patients indicated that those with ≥ 2 D ametropia in each eye may suffer from induced ANK after the first cataract extraction. With upcoming healthcare reform, unilateral cataract extraction may be covered, but not necessarily bilateral, depending on patient VA in each eye. However, a questionnaire about symptoms

  20. An assessment of discriminatory power of office blood pressure measurements in predicting optimal ambulatory blood pressure control in people with type 2 diabetes

    PubMed Central

    Kengne, Andre Pascal; Libend, Christelle Nong; Dzudie, Anastase; Menanga, Alain; Dehayem, Mesmin Yefou; Kingue, Samuel; Sobngwi, Eugene

    2014-01-01

    Introduction Ambulatory blood pressure (BP) measurements (ABPM) predict health outcomes better than office BP, and are recommended for assessing BP control, particularly in high-risk patients. We assessed the performance of office BP in predicting optimal ambulatory BP control in sub-Saharan Africans with type 2 diabetes (T2DM). Methods Participants were a random sample of 51 T2DM patients (25 men) drug-treated for hypertension, receiving care in a referral diabetes clinic in Yaounde, Cameroon. A quality control group included 46 non-diabetic individuals with hypertension. Targets for BP control were systolic (and diastolic) BP. Results Mean age of diabetic participants was 60 years (standard deviation: 10) and median duration of diabetes was 6 years (min-max: 0-29). Correlation coefficients between each office-based variable and the 24-h ABPM equivalent (diabetic vs. non-diabetic participants) were 0.571 and 0.601 for systolic (SBP), 0.520 and 0.539 for diastolic (DBP), 0.631 and 0.549 for pulse pressure (PP), and 0.522 and 0.583 for mean arterial pressure (MAP). The c-statistic for the prediction of optimal ambulatory control from office-BP in diabetic participants was 0.717 for SBP, 0.494 for DBP, 0.712 for PP, 0.582 for MAP, and 0.721 for either SBP + DBP or PP + MAP. Equivalents in diabetes-free participants were 0.805, 0.763, 0.695, 0.801 and 0.813. Conclusion Office DBP was ineffective in discriminating optimal ambulatory BP control in diabetic patients, and did not improve predictions based on office SBP alone. Targeting ABPM to those T2DM patients who are already at optimal office-based SBP would likely be more cost effective in this setting. PMID:25838859
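
    For readers unfamiliar with the c-statistic quoted above, it is the area under the ROC curve for a continuous predictor against a binary outcome. The snippet below is a minimal sketch using synthetic data and an illustrative control cut-off; the numbers, threshold and variable names are not taken from the study.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 51
        ambulatory_sbp = rng.normal(135, 15, n)              # synthetic 24-h mean SBP
        office_sbp = ambulatory_sbp + rng.normal(5, 12, n)   # noisier office reading

        # 1 = optimal ambulatory control (illustrative cut-off, not the study's target).
        optimal_control = (ambulatory_sbp < 130).astype(int)

        # Higher office SBP should predict *lack* of control, so score with -office_sbp.
        c_statistic = roc_auc_score(optimal_control, -office_sbp)
        print(f"c-statistic for office SBP predicting ambulatory control: {c_statistic:.2f}")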

  1. Comparison of the simulated performance of a VSAT satellite link with measurements

    NASA Astrophysics Data System (ADS)

    Mwanakatwe, M.; Willis, M. J.; Evans, B. G.

    1991-06-01

    The transmission performance of a Ka-band VSAT system (CODE) has been simulated to verify the system design and to demonstrate the adequacy of the implementation margin and phase noise performance. A detailed simulation of phase noise effects on VSAT system design is also presented. Hardware measurements and BOSS simulations for the test set-up show good agreement for values of Eb/N0 up to 7 dB. The simulated results indicate an increased error when the TWTA is operated in the nonlinear region, with the simulations indicating larger degradation than the measurements. The phase noise performance of the digital TRL modem is found to be consistently better than that of the simulated model. There appears to be closer agreement with the BOSS simulations than with the TOPSIM III simulations. The discrepancy between the TOPSIM III and BOSS phase noise simulations was only resolved by measurements taken using the Olympus satellite and the BTI satellite simulator.

  2. New simulation and measurement results on gateable DEPFET devices

    NASA Astrophysics Data System (ADS)

    Bähr, Alexander; Aschauer, Stefan; Hermenau, Katrin; Herrmann, Sven; Lechner, Peter H.; Lutz, Gerhard; Majewski, Petra; Miessner, Danilo; Porro, Matteo; Richter, Rainer H.; Schaller, Gerhard; Sandow, Christian; Schnecke, Martina; Schopper, Florian; Stefanescu, Alexander; Strüder, Lothar; Treis, Johannes

    2012-07-01

    To improve the signal-to-noise level, devices for optical and x-ray astronomy use techniques to suppress background events; well-known examples are shutters and frame-store Charge Coupled Devices (CCDs). Based on the DEpleted P-channel Field Effect Transistor (DEPFET) principle, a so-called Gateable DEPFET detector can be built. Such devices combine the DEPFET principle with a fast built-in electronic shutter usable for optical and x-ray applications. The DEPFET itself is the basic cell of an active pixel sensor built on a fully depleted bulk. It combines internal amplification, readout on demand, analog storage of the signal charge and low readout noise with full sensitivity over the whole bulk thickness. A Gateable DEPFET has all these benefits and obviates the need for an external shutter. Two concepts of Gateable DEPFET layouts providing a built-in shutter are introduced. Furthermore, proof-of-principle measurements for both concepts are presented. Using recently produced prototypes, a shielding of the collection anode of up to 1 × 10⁻⁴ was achieved. As predicted by simulations, an optimized geometry should result in values of 1 × 10⁻⁵ or better. With the switching electronics currently in use, a timing evaluation of the shutter opening and closing yielded rise and fall times of 100 ns.

  3. Measurement of impact force, simulation of fall and hip fracture.

    PubMed

    Gardner, T N; Simpson, A H; Booth, C; Sprukkelhorst, P; Evans, M; Kenwright, J; Evans, J G

    1998-01-01

    It has been shown that the incidence of hip fracture in the elderly may be influenced by the type of floor covering commonly used in homes for the elderly. This study describes the development of a method for modelling a fall during a hip fracture event, to examine the influence of different floors on impact force. An impact transducer is dropped in free fall through a smooth plastic tube. The impactor nose of the transducer models the curvature of the greater trochanter, and a steel spring is used to simulate the compliance of the skeletal structure. A weight, which corresponds to one-sixteenth of average body mass, compresses the spring and applies force to the impactor nose on striking the floor. The temporal variation in the force of impact with the floor is measured by the transducer to within 0.41 percent (SD = 0.63%, n = 10). Five common floor coverings were tested over a concrete floor slab: vinyl, and loop and pile carpet, each with and without underpad. ANOVA showed that the differences between mean forces for each floor covering were highly significant (p < 0.001), with the thicker coverings producing 7 percent lower forces. The transducer may be used to examine the correlation between impact force and fracture incidence for a variety of different floors in homes for the elderly.

  4. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    SciTech Connect

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    2015-01-26

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a web browser based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS Benchmark problem is

  5. Simulation and Analysis of Cosmic Microwave Background Anisotropy Measurements

    NASA Astrophysics Data System (ADS)

    Delabrouille, Jacques H.

    This thesis work is devoted to the analysis of optimal methods allowing the reprojection of the data streams obtained by next-generation experiments for CMB mapping (and in particular the European space mission Planck Surveyor) onto two-dimensional maps free of instrument-induced artifacts. After a short introduction to the field of CMB anisotropy measurements, I calculate in Chapter 2 the cosmological signal expected on a circular scan as a function of the two dimensional anisotropy spectrum. This allows a comparison of the noise and signal spectra, and permits the optimization of several instrumental parameters. In addition to being useful for the optimized design of future CMB missions, this analysis is useful for Planck data reduction. Chapter 3 is dedicated to the modeling of the measurement of CMB anisotropies with the Planck High Frequency Instrument (HFI), and serves as an introduction to the next two chapters, which deal with two specific instrumental effects. The first is that of the presence of low-frequency drifts in the data streams, which can generate striping on the maps. In answer to the suggestion that the observing strategy of Planck might induce such an effect, I show in Chapter 4, with the help of numerical simulations and analytic calculations, that the low-frequency drifts can be corrected for. The constraints implied on the payload are easily satisfied. The second, which I address in Chapter 5, is the problem of straylight in the far sidelobes of the antenna pattern of Planck. Because of the high sensitivity goals, the rejection of the signals from the Sun, the Earth and the Galaxy below the noise level put stringent requirements on the payload, which are very difficult to fulfill. I show how an adequate processing of the data can lead to the identification of the signals from various sources, and to the subtraction of unwanted contributions from the signal. The last part of this work (Chapter 6) is devoted to the DIABOLO experiment and to

  6. Experimental ship fire measurements with simulated radioactive cargo

    SciTech Connect

    Koski, J.A.; Arviso, M.; Bobbe, J.G.; Wix, S.D.; Cole, J.K.; Hohnstreiter, G.F.; Beene, D.E. Jr.; Keane, M.P.

    1997-10-01

    Results from a series of eight test fires ranging in size from 2.2 to 18.8 MW conducted aboard the Coast Guard fire test ship Mayo Lykes at Mobile, Alabama are presented and discussed. Tests aboard the break bulk type cargo ship consisted of heptane spray fires simulating engine room and galley fires, wood crib fires simulating cargo hold fires, and pool fires staged for comparison to land based regulatory fire results. Primary instrumentation for the tests consisted of two pipe calorimeters that simulated a typical package shape for radioactive materials packages.

  7. Measurement of Ground Level Muon Charge Ratio Using ECRS Simulation

    NASA Astrophysics Data System (ADS)

    Sanjeewa, Hakmana; He, Xiaochun; Cleven, Christopher

    2006-11-01

    The muon charge ratio at the Earth's surface has been studied with a Geant4-based simulation for two different geomagnetic locations: Atlanta and Lynn Lake. The simulation results are in excellent agreement with the data from the NMSU-WIZARD/CAPRICE and BESS experiments at Lynn Lake. At low momentum, the simulated ground-level muon charge ratios show latitude-dependent geomagnetic effects for both Atlanta and Lynn Lake. The simulated charge ratio is 1.20 ± 0.05 (without geomagnetic field) and 1.12 ± 0.05 (with geomagnetic field) for Atlanta, and 1.22 ± 0.04 (with geomagnetic field) for Lynn Lake. Such studies are important for analyzing the secondary cosmic-ray muon flux distribution at the Earth's surface and can be used to evaluate the parameters of atmospheric neutrino oscillations.

  8. Comparison of Hydrocode Simulations with Measured Shock Wave Velocities

    SciTech Connect

    Hixson, R. S.; Veeser, L. R.

    2014-11-30

    We have conducted detailed 1- and 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly made to understand various shock processes in a sample and to design shock experiments. We began with relatively simple shock experiments, where we examined the effects of the equation of state and the viscoplastic strength models. Eventually we included spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations.

  9. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  10. Development and reliability testing of a self-report instrument to measure the office layout as a correlate of occupational sitting

    PubMed Central

    2013-01-01

    Background Spatial configurations of office environments assessed by Space Syntax methodologies are related to employee movement patterns. These methods require analysis of floor plans, which are not readily available in large population-based studies or are otherwise unavailable. Therefore a self-report instrument to assess spatial configurations of office environments using four scales was developed. Methods The scales are: local connectivity (16 items), overall connectivity (11 items), visibility of co-workers (10 items), and proximity of co-workers (5 items). A panel cohort (N = 1154) completed an online survey; only data from individuals employed in office-based occupations (n = 307) were used to assess scale measurement properties. To assess test-retest reliability, a separate sample of 37 office-based workers completed the survey on two occasions 7.7 (±3.2) days apart. Redundant scale items were eliminated using factor analysis; Cronbach's α was used to evaluate internal consistency and test-retest reliability (retest-ICC). ANOVA was employed to examine differences between office types (Private, Shared, Open) as a measure of construct validity. Generalized Linear Models were used to examine relationships between spatial configuration scales and the duration of and frequency of breaks in occupational sitting. Results The number of items on all scales was reduced; Cronbach's α and ICCs indicated good scale internal consistency and test-retest reliability: local connectivity (5 items; α = 0.70; retest-ICC = 0.84), overall connectivity (6 items; α = 0.86; retest-ICC = 0.87), visibility of co-workers (4 items; α = 0.78; retest-ICC = 0.86), and proximity of co-workers (3 items; α = 0.85; retest-ICC = 0.70). Significant (p ≤ 0.001) differences, in theoretically expected directions, were observed for all scales between office types, except overall connectivity. Significant associations were observed between
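
    For reference, the internal-consistency statistic reported above (Cronbach's α) can be computed directly from a respondents-by-items score matrix. The sketch below uses synthetic item scores; the sample size and item count are illustrative, not the study's data.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array of shape (n_respondents, k_items) holding item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1).sum()
            total_variance = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_variances / total_variance)

        rng = np.random.default_rng(7)
        latent = rng.normal(size=(100, 1))                          # shared trait
        responses = latent + rng.normal(scale=0.8, size=(100, 5))   # 5 correlated items
        print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")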

  11. Metrology target design simulations for accurate and robust scatterometry overlay measurements

    NASA Astrophysics Data System (ADS)

    Ben-Dov, Guy; Tarshish-Shapir, Inna; Gready, David; Ghinovker, Mark; Adel, Mike; Herzel, Eitan; Oh, Soonho; Choi, DongSub; Han, Sang Hyun; El Kodadi, Mohamed; Hwang, Chan; Lee, Jeongjin; Lee, Seung Yoon; Lee, Kuntack

    2016-03-01

    Overlay metrology target design is an essential step prior to performing overlay measurements. This step is done through the optimization of target parameters for a given process stack; a simulation tool is therefore used to improve measurement performance. This work shows how our Metrology Target Design (MTD) simulator helps significantly in the target design process. We show the role of film and optical CD measurements in significantly improving the fidelity of the simulations. We demonstrate that for various target design parameters we are capable of predicting measured performance metrics by simulation and correctly ranking the performance of various designs.

  12. Evaluating the impact of distance measures on deforestation simulations in the fluvial landscapes of amazonia.

    PubMed

    Salonen, Maria; Maeda, Eduardo Eiji; Toivonen, Tuuli

    2014-10-01

    Land use and land cover change (LUCC) models frequently employ different accessibility measures as a proxy for human influence on land change processes. Here, we simulate deforestation in Peruvian Amazonia and evaluate different accessibility measures as LUCC model inputs. We demonstrate how the selection, and different combinations, of accessibility measures impact simulation results. Out of the individual measures, time distance to market center catches the essential aspects of accessibility in our study area. The most accurate simulation is achieved when time distance to market center is used in association with distance to transport network and additional landscape variables. Although traditional Euclidean measures result in clearly lower simulation accuracy when used separately, the combination of two complementary Euclidean measures enhances simulation accuracy significantly. Our results highlight the need for site and context sensitive selection of accessibility variables. More sophisticated accessibility measures can potentially improve LUCC models' spatial accuracy, which often remains low.

  13. Bistatic GPR Measurements in the Egyptian Western Desert - Measured and Simulated data

    NASA Astrophysics Data System (ADS)

    Ciarletti, V.; Le Gall, A.; Berthelier, J.; Ney, R.; Corbel, C.; Dolon, F.

    2006-12-01

    The TAPIR (Terrestrial And Planetary Investigation Radar) instrument has been designed at CETP (Centre d'etude des Environnements Terrestre et Planetaires) to explore the deep Martian subsurface (down to a few kilometers) and to detect liquid water reservoirs. TAPIR is an impulse ground penetrating radar operating from the surface at central frequencies ranging from 2 to 4 MHz. In November 2005, an updated version of the instrument, working either in monostatic or in bistatic mode, was tested in the Egyptian Western Desert. The work presented here focuses on the bistatic measurements performed on the Abou Saied plateau, which has a horizontally layered sub-surface. The electromagnetic signal was transmitted using one of the two orthogonal 70 m loaded electrical dipole antennas of the transmitting GPR. A second GPR, located 50 or 100 meters away, was dedicated to signal reception. The received waves were characterized by a set of 5 measurements performed on the receiving GPR: the two horizontal components of the electric field and the three components of the magnetic field. These were used to compute the direction of arrival of the incoming waves, to retrieve their propagation path more accurately and, especially, to discriminate between waves reflected by sub-surface structures and those due to interaction with the surface clutter. A very efficient synchronization between the two radars enabled us to perform coherent additions up to 2^31, which dramatically improves the obtained signal-to-noise ratio. Complementary electromagnetic measurements were conducted on the same site by the LPI (Lunar and Planetary Institute) and the SwRI (Southwest Research Institute). They provided independent information which helped the interpretation of the TAPIR data. Accurate FDTD simulations taking into account the available information are presented and used both for the interpretation of the measured data and for the validation of the instrument.

  14. Movement Characteristics Analysis and Dynamic Simulation of Collaborative Measuring Robot

    NASA Astrophysics Data System (ADS)

    guoqing, MA; li, LIU; zhenglin, YU; guohua, CAO; yanbin, ZHENG

    2017-03-01

    Human-machine collaboration is becoming increasingly necessary, so collaborative robot applications are in high demand. We selected a UR10 robot as the research subject for this study. First, we applied the D-H coordinate transformation to establish the robot's link coordinate system, and we then used the inverse transformation to solve the robot's inverse kinematics and obtain all joint angles. The Lagrange method was used to analyse the dynamics of the UR robot, and the ADAMS multibody dynamics simulation software was used for dynamic simulation, verifying the correctness of the derived dynamic models.
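
    As an illustration of the D-H formulation referred to above, the sketch below builds the standard Denavit-Hartenberg link transform and chains it into a forward-kinematics routine. It is a generic sketch: the (d, a, alpha) table is a placeholder and should not be taken as the authoritative UR10 parameter set.

        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """Homogeneous transform from link i-1 to link i (standard D-H convention)."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([
                [ct, -st * ca,  st * sa, a * ct],
                [st,  ct * ca, -ct * sa, a * st],
                [0.0,      sa,       ca,      d],
                [0.0,     0.0,      0.0,    1.0],
            ])

        def forward_kinematics(joint_angles, dh_table):
            """Chain the per-link transforms to obtain the end-effector pose."""
            T = np.eye(4)
            for theta, (d, a, alpha) in zip(joint_angles, dh_table):
                T = T @ dh_transform(theta, d, a, alpha)
            return T

        # Illustrative 6-row (d, a, alpha) table; substitute the real UR10 parameters.
        dh_table = [(0.128, 0.0, np.pi / 2), (0.0, -0.612, 0.0), (0.0, -0.572, 0.0),
                    (0.164, 0.0, np.pi / 2), (0.116, 0.0, -np.pi / 2), (0.092, 0.0, 0.0)]
        print(forward_kinematics([0.0] * 6, dh_table))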

  15. Ability of College Students to Simulate ADHD on Objective Measures of Attention

    ERIC Educational Resources Information Center

    Booksh, Randee Lee; Pella, Russell D.; Singh, Ashvind N.; Gouvier, William Drew

    2010-01-01

    Objective: The authors examined the ability of college students to simulate ADHD symptoms on objective and self-report measures and the relationship between knowledge of ADHD and ability to simulate ADHD. Method: Undergraduate students were assigned to a control or a simulated ADHD malingering condition and compared with a clinical AD/HD group.…

  16. Effects of a Simulated Tennis Match on Lymphocyte Subset Measurements

    ERIC Educational Resources Information Center

    Schafer, Mark; Kell, Holly; Navalta, James; Tibana, Ramires; Lyons, Scott; Arnett, Scott

    2014-01-01

    Tennis is an activity requiring both endurance and anaerobic components, which could have immunosuppressive effects postexercise. Purpose: The purpose of this investigation was to determine the effect of a simulated tennis match on apoptotic and migratory markers on lymphocyte subsets. Method: Male high school (n = 5) and college (n = 3) tennis…

  17. Simulation of Space Shuttle neutron measurements with FLUKA.

    PubMed

    Pinsky, L; Carminati, F; Ferrari, A

    2001-06-01

    FLUKA is an integrated particle transport code that has enhanced multigroup low-energy neutron transport capability similar to the well-known MORSE transport code. Gammas are produced in groups but many important individual lines are specifically included, and subsequently transported by the main FLUKA routines which use a modified version of EGS4 for electromagnetic (EM) transport. Recoil protons are also transported by the primary FLUKA transport simulation. The neutron cross-section libraries employed within FLUKA were supplied by Giancarlo Panini (ENEA, Italy) based upon the most recent data from JEF-1, JEF-2.2, ENDF/B-VI, JENDL-3, etc. More than 60 different materials are included in the FLUKA databases with temperature ranges including down to cryogenic temperatures. This code has been used extensively to model the neutron environments near high-energy physics experiment shielding. A simulation of the Space Shuttle based upon a spherical aluminum equivalent shielding distribution has been performed with reasonable results. There are good prospects for extending this calculation to a more realistic 3-D geometrical representation of the Shuttle including an accurate representation of its composition, which is an essential ingredient for the improvement of the predictions. A proposed project to develop a combined analysis and simulation package based upon FLUKA and the analysis infrastructure provided by the ROOT software is under active consideration. The code to be developed for this project will be of direct application to the problem of simulating the neutron environment in space, including the albedo effects.

  18. Measures for simulator evaluation of a helicopter obstacle avoidance system

    NASA Technical Reports Server (NTRS)

    Demaio, Joe; Sharkey, Thomas J.; Kennedy, David; Hughes, Micheal; Meade, Perry

    1993-01-01

    The U.S. Army Aeroflightdynamics Directorate (AFDD) has developed a high-fidelity, full-mission simulation facility for the demonstration and evaluation of advanced helicopter mission equipment. The Crew Station Research and Development Facility (CSRDF) provides the capability to conduct one- or two-crew full-mission simulations in a state-of-the-art helicopter simulator. The CSRDF provides a realistic, full field-of-regard visual environment with simulation of state-of-the-art weapons, sensors, and flight control systems. We are using the CSRDF to evaluate the ability of an obstacle avoidance system (OASYS) to support low altitude flight in cluttered terrain using night vision goggles (NVG). The OASYS uses a laser radar to locate obstacles to safe flight in the aircraft's flight path. A major concern is the detection of wires, which can be difficult to see with NVG, but other obstacles--such as trees, poles or the ground--are also a concern. The OASYS symbology is presented to the pilot on a head-up display mounted on the NVG (NVG-HUD). The NVG-HUD presents head-stabilized symbology to the pilot while allowing him to view the image intensified, out-the-window scene through the HUD. Since interference with viewing through the display is a major concern, OASYS symbology must be designed to present usable obstacle clearance information with a minimum of clutter.

  19. Office automation.

    PubMed

    Arenson, R L

    1986-03-01

    By now, the term "office automation" should have more meaning for those readers who are not intimately familiar with the subject. Not all of the preceding material pertains to every department or practice, but certainly, word processing and simple telephone management are key items. The size and complexity of the organization will dictate the usefulness of electronic mail and calendar management, and the individual radiologist's personal needs and habits will determine the usefulness of the home computer. Perhaps the most important ingredient for success in the office automation arena relates to the ability to integrate information from various systems in a simple and flexible manner. Unfortunately, this is perhaps the one area that most office automation systems have ignored or handled poorly. In the personal computer world, there has been much emphasis recently on integration of packages such as spreadsheet, database management, word processing, graphics, time management, and communications. This same philosophy of integration has been applied to a few office automation systems, but these are generally vendor-specific and do not allow for a mixture of foreign subsystems. During the next few years, it is likely that a few vendors will emerge as dominant in this integrated office automation field and will stress simplicity and flexibility as major components.

  20. Quantum Dynamics Simulations for Modeling Experimental Pump-Probe Measurements

    NASA Astrophysics Data System (ADS)

    Pearson, Brett; Nayyar, Sahil; Liss, Kyle; Weinacht, Thomas

    2016-05-01

    Time-resolved studies of quantum dynamics have benefited greatly from developments in ultrafast table-top and free electron lasers. Advances in computer software and hardware have lowered the barrier for performing calculations such that relatively simple simulations allow for direct comparison with experimental results. We describe here a set of quantum dynamics calculations in low-dimensional molecular systems. The calculations incorporate coupled electronic-nuclear dynamics, including two interactions with an applied field and nuclear wave packet propagation. The simulations were written and carried out by undergraduates as part of a senior research project, with the specific goal of allowing for detailed interpretation of experimental pump-probe data (in addition to the pedagogical value).

  1. Simulation of fluorescent measurements in the human skin

    NASA Astrophysics Data System (ADS)

    Meglinski, Igor V.; Sinichkin, Yurii P.; Utz, Sergei R.; Pilipenko, Helena A.

    1995-05-01

    Reflectance and fluorescence spectroscopy are successfully used for skin disease diagnostics. The optical parameters of human skin are defined by its turbid, scattering properties with nonuniform distributions of absorbing and fluorescing chromophores, its multilayered structure, and its variability under different physiological and pathological conditions. Theoretical modeling of light propagation in skin could improve the understanding of these conditions and may be useful in the interpretation of in vivo reflectance and autofluorescence (AF) spectra. Laser applications in medical optical tomography, tissue spectroscopy, and phototherapy stimulate the development of optical and mathematical light-tissue interaction models that account for the specific features of the laser beam and tissue inhomogeneities. This paper presents a version of the Monte Carlo method for simulating optical radiation propagation in biotissue and other highly scattering media that allows for the 3D geometry of the medium. The simulation is based on the Green's function of the medium response to a single external pulse. Radiation propagation is studied in a region with given boundary conditions, taking into account reflection and refraction at the boundaries of layers inside the medium under study. The Monte Carlo simulation results were compared with experimental investigations and demonstrated good agreement.
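
    As a toy illustration of the Monte Carlo photon-transport idea described above (a heavily simplified sketch, not the authors' 3D multilayer model), the following Python snippet tracks weighted photons along the depth coordinate of a semi-infinite scattering medium and tallies diffuse reflectance; the optical coefficients are arbitrary example values.

        import numpy as np

        rng = np.random.default_rng(0)

        def diffuse_reflectance(n_photons, mu_a, mu_s):
            # Toy Monte Carlo: 1D random walk in depth z for a semi-infinite medium.
            reflected = 0.0
            for _ in range(n_photons):
                z, w, cos_t = 0.0, 1.0, 1.0            # depth, photon weight, direction cosine
                while w > 1e-3:
                    step = -np.log(1.0 - rng.random()) / (mu_a + mu_s)  # free path length
                    z += cos_t * step
                    if z < 0.0:                        # photon escaped back through the surface
                        reflected += w
                        break
                    w *= mu_s / (mu_a + mu_s)          # deposit the absorbed fraction of the weight
                    cos_t = 2.0 * rng.random() - 1.0   # crude isotropic re-scattering
            return reflected / n_photons

        # Arbitrary example coefficients (per mm), loosely in the range used for tissue models.
        print(diffuse_reflectance(20000, mu_a=0.1, mu_s=10.0))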

  2. A Structured-Grid Quality Measure for Simulated Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A structured-grid quality measure is proposed, combining three traditional measurements: intersection angles, stretching, and curvature. Quality assesses whether the grid generated provides the best possible tradeoffs in grid stretching and skewness that enable accurate flow predictions, whereas the grid density is assumed to be a constraint imposed by the available computational resources and the desired resolution of the flow field. The usefulness of this quality measure is assessed by comparing heat transfer predictions from grid convergence studies for grids of varying quality in the range of [0.6-0.8] on an 8° half-angle sphere-cone, at laminar, perfect gas, Mach 10 wind tunnel conditions.

  3. Electrochemical concentration measurements for multianalyte mixtures in simulated electrorefiner salt

    NASA Astrophysics Data System (ADS)

    Rappleye, Devin Spencer

    The development of electroanalytical techniques in multianalyte molten salt mixtures, such as those found in used nuclear fuel electrorefiners, would enable in situ, real-time concentration measurements. Such measurements are beneficial for process monitoring, optimization and control, as well as for international safeguards and nuclear material accountancy. Electroanalytical work in molten salts has been limited to single-analyte mixtures with a few exceptions. This work builds upon the knowledge of molten salt electrochemistry by performing electrochemical measurements on molten eutectic LiCl-KCl salt mixture containing two analytes, developing techniques for quantitatively analyzing the measured signals even with an additional signal from another analyte, correlating signals to concentration and identifying improvements in experimental and analytical methodologies. (Abstract shortened by ProQuest.).

  4. Sphericity estimation bias for repeated measures designs in simulation studies.

    PubMed

    Bono, Roser; Arnau, Jaume; Blanca, María J; Alarcón, Rafael

    2016-12-01

    In this study, we explored the accuracy of sphericity estimation and analyzed how the sphericity of covariance matrices may be affected when the latter are derived from simulated data. We analyzed the consequences that normal and nonnormal data generated from an unstructured population covariance matrix-with low (ε = .57) and high (ε = .75) sphericity-can have on the sphericity of the matrix that is fitted to these data. To this end, data were generated for four types of distributions (normal, slightly skewed, moderately skewed, and severely skewed or log-normal), four sample sizes (very small, small, medium, and large), and four values of the within-subjects factor (K = 4, 6, 8, and 10). Normal data were generated using the Cholesky decomposition of the correlation matrix, whereas the Vale-Maurelli method was used to generate nonnormal data. The results indicate the extent to which sphericity is altered by recalculating the covariance matrix on the basis of simulated data. We concluded that bias is greater with spherical covariance matrices, nonnormal distributions, and small sample sizes, and that it increases in line with the value of K. An interaction was also observed between sample size and K: With very small samples, the observed bias was greater as the value of K increased.
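
    For context, sphericity in such studies is usually summarized by the Greenhouse-Geisser estimate ε̂; the short Python sketch below (not the authors' code) computes ε̂ from a sample covariance matrix of simulated repeated-measures data, using the eigenvalues of the double-centered matrix.

        import numpy as np

        def gg_epsilon(S):
            # Greenhouse-Geisser sphericity estimate from a k x k covariance matrix S:
            # epsilon = (sum(lam))^2 / ((k - 1) * sum(lam^2)), where lam are the nonzero
            # eigenvalues of the double-centered matrix J S J.
            k = S.shape[0]
            J = np.eye(k) - np.ones((k, k)) / k
            lam = np.linalg.eigvalsh(J @ S @ J)
            lam = lam[lam > 1e-10 * lam.max()]        # drop the numerically null eigenvalue
            return lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum())

        # Example: K = 4 levels, compound-symmetric population (true epsilon = 1), n = 30.
        rng = np.random.default_rng(1)
        data = rng.multivariate_normal(np.zeros(4), np.eye(4) + 0.3, size=30)
        print(gg_epsilon(np.cov(data, rowvar=False)))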

  5. Radio Occultation Measurements of the Lower Troposphere: A Simulation Study

    NASA Astrophysics Data System (ADS)

    Hurst, K. J.; Ao, C. O.; Mannucci, A. J.

    2011-12-01

    We use simulations to investigate the ability of the Radio Occultation technique to capture the vertical refractivity structure within the Atmospheric Boundary Layer (ABL). We first generate a suite of atmospheric profiles of pressure, temperature, water content, and boundary layer height, compute forward models to obtain the phase variations, and then run these through standard Abel transform-based inversion methods to retrieve the input parameters. We are interested in whether the structure between the bottom and top of the ABL can be resolved in spite of the well-known negative bias caused by the large refractivity gradients at the top of the ABL. This study can be used as a basis for comparison with other experimental radio occultation inversion methods.
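
    The core of the retrieval mentioned above is the inverse Abel transform, ln n(x) = (1/π) ∫_x^∞ α(a)/√(a² − x²) da, which maps the bending angle α as a function of impact parameter a to the refractive index n. The Python sketch below is a crude trapezoidal quadrature meant only to illustrate the step, not the authors' processing chain, and the synthetic bending-angle profile is made up.

        import numpy as np

        def abel_invert(impact_params, bending_angles):
            # Inverse Abel transform: n(x) = exp( (1/pi) * int_x^inf alpha(a)/sqrt(a^2 - x^2) da ).
            # impact_params must be sorted in increasing order; the singular endpoint a = x is
            # simply skipped, which is acceptable only for a rough illustration.
            a = np.asarray(impact_params, dtype=float)
            alpha = np.asarray(bending_angles, dtype=float)
            n = np.empty_like(a)
            for i, x in enumerate(a):
                m = a > x
                vals = alpha[m] / np.sqrt(a[m] ** 2 - x ** 2)
                integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(a[m]))
                n[i] = np.exp(integral / np.pi)
            return n

        # Synthetic example: exponentially decaying bending angle (rad) vs impact parameter (km).
        a_km = np.linspace(6371.0, 6411.0, 400)
        alpha_rad = 0.02 * np.exp(-(a_km - a_km[0]) / 8.0)
        print(abel_invert(a_km, alpha_rad)[:3])   # n near the bottom of the profile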

  6. Simulation and measurement of a Ka-band HTS MMIC Josephson junction mixer

    NASA Astrophysics Data System (ADS)

    Zhang, Ting; Pegrum, Colin; Du, Jia; Guo, Yingjie Jay

    2017-01-01

    We report modeling and simulation results for a Ka band high-temperature superconducting (HTS) monolithic microwave integrated circuit (MMIC) Josephson junction mixer. A Verilog-A model of a Josephson junction is established and imported into the system simulator to realize a full HTS MMIC circuit simulation containing the HTS passive circuit models. Impedance matching optimization between the junction and passive devices is investigated. Junction DC I-V characteristics, current and local oscillator bias conditions and mixing performance are simulated and compared with the experimental results. Good agreement is obtained between the simulation and measurement results.

  7. Transient water stress in a vegetation canopy - Simulations and measurements

    NASA Technical Reports Server (NTRS)

    Carlson, Toby N.; Belles, James E.; Gillies, Robert R.

    1991-01-01

    Consideration is given to observational and modeling evidence of transient water stress, the effects of the transpiration plateau on the canopy radiometric temperature, and the factors responsible for the onset of the transpiration plateau, such as soil moisture. Attention is also given to the point at which the transient stress can be detected by remote measurement of surface temperature.

  8. Gamma Efficiency Simulations towards Coincidence Measurements for Fusion Cross Sections

    NASA Astrophysics Data System (ADS)

    Heine, M.; Courtin, S.; Fruet, G.; Jenkins, D. G.; Montanari, D.; Morris, L.; Regan, P. H.; Rudigier, M.; Symochko, D.

    2016-10-01

    With the experimental station STELLA (STELlar LAboratory) we will measure fusion cross sections of astrophysical relevance, making use of the coincident detection of charged particles and gamma rays for background reduction. For the measurement of gamma rays from the de-excitation of fusion products, a compact array of 36 UK FATIMA LaBr3 detectors was designed based on efficiency studies with Geant4. The photopeak efficiency in the region of interest is comparable to that of other gamma detection systems used in this field. The features of the internal decay of 138La are used in a background study to obtain an online calibration of the gamma detectors. Background data are fitted to the Monte Carlo model of the self-activity, assuming a crude exponential behavior of the external background. In this first study, the calibration accuracy in the region of interest is of the order of a few keV.

  9. Measurement and simulation of high voltage-wake interactions

    SciTech Connect

    Hardy, D.A.; Olsen, D.; Burke, W.J.; Ginet, G.; Gough, P.; Huang, C.; James, H.G.

    1996-12-31

    Oedipus C was a tethered mother-son payload with a 50 kHz to 8.0 MHz stepped-frequency transmitter on the forward payload and an HF receiver on the aft payload. In the course of the upleg of the flight the tether was deployed to a distance of approximately 1 km. At apogee the tether was cut. As part of the complement of environmental sensors, multiangular electrostatic analyzers were flown on both the forward and aft payloads. These detectors compiled ten 32-point electron spectra per second over the energy range from 20 eV to 20 keV and in 8 angular zones defining a detection fan of 1,440° by 100°. The HF transmitter was normally swept through 165 frequency steps every 0.5 seconds. The stepping of the electrostatic analyzer was synchronized to the stepping of the transmitter such that the analyzer measured at a fixed frequency at each energy step. The output pulses of the electrostatic analyzers were also processed by an on-board particle correlator that measured bunching in the electron flux produced by coherent wave-particle interactions. Throughout the flight the analyzer in the forward payload measured large increases in the electron flux at energies up to several keV and over a wide angular range whenever the transmitter was emitting at approximately the local electron gyrofrequency. Similar effects were seen on the aft payload for separations up to several hundred meters. In addition, weaker electron flux enhancements were also seen at sub-harmonics of the gyrofrequency at low altitudes. The correlator measured clear MHz modulation of the electrons at harmonics of the gyrofrequency. The data indicate strong heating of the local plasma due to the HF wave injection.

  10. Simulated Measurements of Cooling in Muon Ionization Cooling Experiment

    SciTech Connect

    Mohayai, Tanaz; Rogers, Chris; Snopok, Pavel

    2016-06-01

    Cooled muon beams set the basis for the exploration of the physics of flavour at a Neutrino Factory and for multi-TeV collisions at a Muon Collider. The international Muon Ionization Cooling Experiment (MICE) measures beam emittance before and after an ionization cooling cell and aims to demonstrate emittance reduction in muon beams. In the current MICE Step IV configuration, the MICE muon beam passes through low-Z absorber material to reduce its transverse emittance through ionization energy loss. Two scintillating-fiber tracking detectors, housed in spectrometer solenoid modules upstream and downstream of the absorber, are used to reconstruct the position and momentum of individual muons for calculating the transverse emittance reduction. However, due to the existence of non-linear effects in the beam optics, transverse emittance growth can be observed. Therefore, it is crucial to develop algorithms that are insensitive to this apparent emittance growth. We describe a different figure of merit for measuring muon cooling: the direct measurement of the phase-space density.
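
    For orientation, the conventional figure of merit that the phase-space-density approach is being compared against is the 4D normalized RMS emittance, ε_N = det(Σ)^(1/4) / m_μ, with Σ the covariance matrix of (x, p_x, y, p_y) in mm and MeV/c. A minimal Python sketch is given below; it is not MICE analysis code, and the Gaussian beam parameters are illustrative.

        import numpy as np

        MUON_MASS = 105.658  # MeV/c^2

        def normalized_emittance_4d(x, px, y, py):
            # 4D normalized RMS emittance from reconstructed tracks:
            # x, y in mm and px, py in MeV/c give an emittance in mm.
            cov = np.cov(np.vstack([x, px, y, py]))
            return np.linalg.det(cov) ** 0.25 / MUON_MASS

        # Illustrative uncorrelated Gaussian beam: expect roughly 30 * 22 / 105.658 ~ 6 mm.
        rng = np.random.default_rng(3)
        n = 10_000
        x, y = rng.normal(0, 30, n), rng.normal(0, 30, n)      # mm
        px, py = rng.normal(0, 22, n), rng.normal(0, 22, n)    # MeV/c
        print(normalized_emittance_4d(x, px, y, py))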

  11. Simulation of the BSDF measurement capabilities for various materials with GCMS-4 gonio-spectrophotometer

    NASA Astrophysics Data System (ADS)

    Zhdanov, Dmitry D.; Potemin, Igor S.; Sokolov, Vadim G.; Garbul, Alexey A.; Voloboy, Alexey G.; Galaktionov, Vladimir A.

    2016-10-01

    Physically accurate lighting simulation requires a precise account of the optical properties (BSDF), which are usually measured with a gonio-spectrophotometer. In this paper, the authors analyze the accuracy of the measured BSDF shape for later use of the measurements in specialized software for photorealistic visualization and virtual prototyping. Both visual and numerical analyses were performed. In the first case, we render an image of the sample under specified lighting conditions with its properties replaced by the measurement results and visually estimate the similarity (or difference). In the second case, we compare the simulated spatial or angular radiance distributions with the results of corresponding radiometric measurements.

  12. Estimating and Measuring Application Latency of Typical Distributed Interactive Simulation (DIS)-Based Simulation Architecture

    DTIC Science & Technology

    2013-03-01

    Record excerpt (the source abstract is fragmentary): Hodson (2009) analyzed state-space consistency using a Petri net model of producer, network, and consumer components, simulating a consumer/producer system to characterize the age of the state data. This research effort uses actual measurements of a multithreaded software architecture that is based on Hodson's analysis.

  13. Measurement and Simulation of Volatile Particle Emissions from Military Aircraft

    DTIC Science & Technology

    2011-12-01

    Record excerpt (the source abstract is fragmentary): the Carnegie Mellon University team (WP-1626) measured PM, HCs, and sulfate in engine exhaust and studied aging in a smog chamber, sampling a Stratotanker CFM56-2B engine through a rake inlet, heated transfer line, and mobile laboratory using dilution sampler, thermodenuder, and smog chamber techniques. Archival papers: 7 published, 4 near submission, others in process.

  14. Simulations of Convection Zone Flows and Measurements from Multiple Viewing Angles

    NASA Technical Reports Server (NTRS)

    Duvall, Thomas L.; Hanasoge, Shravan

    2011-01-01

    A deep-focusing time-distance measurement technique has been applied to linear acoustic simulations of a solar interior perturbed by convective flows. The simulations are for the full sphere for r/R greater than 0.2. From these it is straightforward to simulate the observations from different viewing angles and to test how multiple viewing angles enhance detectability. Some initial results will be presented.

  15. Simulation of Callisto's exosphere as measured by JUICE/NIM

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Galli, A.; Mousis, O.; Barabash, S.; Lammer, H.

    2014-04-01

    Whereas Callisto's surface was mapped as early as 1980 by the two Voyager missions, Callisto's tenuous atmosphere, actually an exosphere, was not directly observed for almost another two decades. In 1999, during the Galileo mission, the Near-Infrared Mapping Spectrometer finally conducted the first, and so far only, direct measurement of a constituent of Callisto's exosphere: a layer of CO2 molecules reaching up to 100 km above the surface [2]. During the same mission, an ionospheric layer was discovered above Callisto's sunlit trailing hemisphere [5]. Photo-ionization of the observed neutral CO2 atmosphere is, however, insufficient to produce the observed electron densities. The existence of a neutral exosphere consisting primarily of O2 was thus proposed, models of which agree well with O2 upper limits derived from Hubble Space Telescope measurements [8]. The Neutral Ion Mass Spectrometer (NIM) of the Particle Environment Package on board the planned JUpiter ICy moons Explorer (JUICE) mission will conduct the first-ever direct sampling of the exospheres of Europa, Ganymede, and Callisto. We present here density profiles of all primary constituents expected to be present in Callisto's exosphere, together with mass spectra as we expect them to be recorded by NIM.

  16. Operation Sun Beam, Shots Little Feller I, II and Johnie Boy. Project officers report. Project 6. 6. Electromagnetic measurements

    SciTech Connect

    Henderson, W.D.; Livingston, P.M.; Rutter, R.L.

    1985-09-01

    Of considerable interest from both a physical and practical viewpoint is the coupling of electromagnetic energy from a nuclear explosion into various electrical systems in the vicinity of the burst. A series of electromagnetic measurements were made on Shots Little Feller I, Little Feller II, and Johnie Boy. It is clear from the records that radiation shielding must be given closer consideration in future tests. Due to equipment failure and radiation inactivation, only the Johnie Boy dynamic current measurement and the passive peak current indicators on all three events are interpretable.

  17. Measurements and simulation of the flow around a poppet valve

    NASA Astrophysics Data System (ADS)

    Lilek, Z.; Nadarajah, S.; Peric, M.; Tindal, M. J.; Yianneskis, M.

    The flow through an axisymmetric inlet port was investigated experimentally and numerically. Laser-Doppler anemometry was used to measure the three ensemble-averaged mean and rms velocity components for two valve lifts, 6 and 10 mm. Numerical calculations of the flows were carried out using a finite volume multigrid method and a standard k-epsilon turbulence model. Comparison of the predictions with the experimental results shows good agreement for the mean velocities for the 10 mm lift case. However, for the 6 mm lift case the predicted flow differs substantially from the experimental results. This indicates the extreme sensitivity of the flow to the valve lift and the need for more sophisticated turbulence modeling when predicting such flows.

  18. Fluorescence cross section measurements of biological agent simulants

    SciTech Connect

    Stephens, J.R.

    1996-11-01

    Fluorescence is a powerful technique that has potential uses in the detection and characterization of biological aerosols both on the battlefield and in civilian environments. Fluorescence techniques can be used with ultraviolet (UV) light detection and ranging (LIDAR) equipment to detect biological aerosol clouds at a distance, to provide early warning of a biological attack, and to track a potentially noxious cloud. Fluorescence can also be used for detection in a point sensor to monitor biological materials and to distinguish agents from benign aerosols. This work is part of a continuing program by the Army's Chemical and Biological Defense Command to characterize the optical properties of biological agents. Reported here are ultraviolet fluorescence measurements of Bacillus megaterium and Bacillus globigii aerosols suspended in an electrodynamic particle trap. Fluorescence spectra of a common atmospheric aerosol, pine pollen, are also presented.

  19. A three-axis flight simulator. [for testing and evaluating inertial measuring units, and flight platforms

    NASA Technical Reports Server (NTRS)

    Mason, M. G.

    1975-01-01

    A simulator designed for testing and evaluating inertial measuring units and flight platforms is described. Mechanical and electrical specifications for the outer, middle, and inner axes are presented. Test results are included.

  20. Experimental measurement of investment shell properties and use of the data in casting simulation software

    SciTech Connect

    Browne, D.J.; Sayers, K.

    1995-12-31

    This paper describes the development of a systematic program of experimental measurement of relevant properties of mould materials, conducted with the express purpose of generating data for use in casting (filling and solidification) simulation software. In particular, the thermophysical properties of the ceramic shell built up for the investment casting process are measured. These properties include specific heat capacity, thermal conductivity, gas permeability, density and surface emissivity. Most of the experimental measurements are made as a function of temperature, up to the temperature at which moulds are typically fired or preheated. Typical results are presented. The data so generated are then used in a casting simulation model to simulate the investment casting of a prosthetic device. The results of the simulation are presented, and comparisons are made with measurements and observations from an experimental casting of the same part. In this way both the reliability of the data and the accuracy of the filling and solidification model are validated.

  1. Advanced Simulator for Pilot Training: Design of Automated Performance Measurement System

    DTIC Science & Technology

    1980-08-01

    Record excerpt (the source abstract is fragmentary): this report documents the development of an automated pilot performance measurement system for the Advanced Simulator for Pilot Training (ASPT) and describes the current status of that system. To date, the scenarios implemented on the ASPT include transition tasks such as straight-and-level flight, airspeed changes, and turns.

  2. Soil moisture at local scale: Measurements and simulations

    NASA Astrophysics Data System (ADS)

    Romano, Nunzio

    2014-08-01

    Soil moisture refers to the water present in the uppermost part of a field soil and is a state variable controlling a wide array of ecological, hydrological, geotechnical, and meteorological processes. The literature on soil moisture is very extensive and is developing so rapidly that it might be considered ambitious to seek to present the state of the art concerning research into this key variable. Even when covering investigations about only one aspect of the problem, there is a risk of some inevitable omission. A specific feature of the present essay, which may make this overview if not comprehensive at least of particular interest, is that the reader is guided through the various traditional and more up-to-date methods by the central thread of techniques developed to measure soil moisture interwoven with applications of modeling tools that exploit the observed datasets. This paper restricts its analysis to the evolution of soil moisture at the local (spatial) scale. Though a somewhat loosely defined term, it is linked here to a characteristic length of the soil volume investigated by the soil moisture sensing probe. After presenting the most common concepts and definitions about the amount of water stored in a certain volume of soil close to the land surface, this paper proceeds to review ground-based methods for monitoring soil moisture and evaluates modeling tools for the analysis of the gathered information in various applications. Concluding remarks address questions of monitoring and modeling of soil moisture at scales larger than the local scale with the related issue of data aggregation. An extensive, but not exhaustive, list of references is provided, enabling the reader to gain further insights into this subject.

  3. Combining Disparate Measures of Metabolic Rate During Simulated Spacewalks

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Kuznetz, Larry; Nguyen, Dan

    2009-01-01

    Scientists from NASA's Extravehicular Activities (EVA) Physiology Systems and Performance Project help design space suits for future missions, during which astronauts are expected to perform EVA activities on the Lunar or Martian surface. During an EVA, an astronaut's integrated metabolic rate is used to predict how much longer the activity can continue and still provide a safe margin of remaining consumables. For EVAs in the Apollo era, NASA physicians monitored live data feeds of heart rate, O2 consumption, and liquid cooled garment (LCG) temperatures, which were subjectively combined or compared to produce an estimate of metabolic rate. But these multiple data feeds sometimes provided conflicting estimates of metabolic rate, making real-time calculations of remaining time difficult for physician/monitors. Currently, designs planned for the Constellation Program EVAs utilize an automated, but largely heuristic, methodology for incorporating the above three measurements plus an additional one (CO2 production), ignoring data that appear in conflict; however, a more rigorous model-based approach is desirable. In this study, we show how principal axis factor analysis, in combination with OLS regression and LOWESS smoothing, can be used to estimate metabolic rate as a data-driven weighted average of heart rate, O2 consumption, LCG temperature data, and CO2 production. Preliminary results suggest less sensitivity to occasional spikes in observed data feeds, and reasonable within-subject reproducibility when applied to subsequent tasks. These methods do not require physician monitoring and as such can be automated in the electronic components of future space suits. With additional validation, our models show promise for increasing astronaut safety, while reducing the need for and potential errors associated with human monitoring of multiple systems.
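
    A rough Python sketch of the weighting idea described above is given below; it is not the authors' implementation. It standardizes the four channels, uses the magnitudes of the first principal-component loadings as data-driven weights, and rescales the composite to a reference metabolic rate by OLS; the LOWESS smoothing step the authors also apply is omitted, and the synthetic data are made up.

        import numpy as np

        def composite_metabolic_rate(channels, reference):
            # channels: (n_samples, 4) array of heart rate, O2 consumption, LCG temperature,
            # and CO2 production; reference: (n_samples,) metabolic rate used only to set the scale.
            Z = (channels - channels.mean(0)) / channels.std(0)     # standardize each channel
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            w = np.abs(Vt[0])                                       # first-PC loadings as weights
            w /= w.sum()
            composite = Z @ w
            X = np.column_stack([np.ones_like(composite), composite])
            beta, *_ = np.linalg.lstsq(X, reference, rcond=None)    # OLS rescaling
            return X @ beta, w

        # Tiny synthetic example: four noisy channels driven by one underlying rate.
        rng = np.random.default_rng(7)
        true_rate = 300 + 200 * rng.random(120)
        channels = np.column_stack([true_rate * k * (1 + rng.normal(0, 0.05, 120))
                                    for k in (0.3, 0.002, 0.05, 0.0025)])
        estimate, weights = composite_metabolic_rate(channels, true_rate)
        print(weights, float(np.corrcoef(estimate, true_rate)[0, 1]))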

  4. Comparison of Analysis, Simulation, and Measurement of Wire-to-Wire Crosstalk. Part 1

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur T.; Yavoich, Brian James; Hodson, Shame M.; Godley, Richard Franklin

    2010-01-01

    In this investigation, we compare crosstalk analysis, simulation, and measurement results for electrically short configurations. Methods include hand calculations, PSPICE simulations, Microstripes transient field solver, and empirical measurement. In total, four representative physical configurations are examined, including a single wire over a ground plane, a twisted pair over a ground plane, generator plus receptor wires inside a cylindrical conduit, and a single receptor wire inside a cylindrical conduit. Part 1 addresses the first two cases, and Part 2 addresses the final two. Agreement between the analysis, simulation, and test data is shown to be very good.

  5. Measurements and simulations of rail vehicle dynamics with respect to overturning risk

    NASA Astrophysics Data System (ADS)

    Thomas, Dirk; Berg, Mats; Stichel, Sebastian

    2010-01-01

    Rail vehicles are exposed to strong lateral influences through curves, track imperfections and crosswind leading to large deflections of the vehicle suspension systems and carbody displacements. In turn, this increases the risk of vehicle overturning. In the present work, multibody simulations are performed in order to study the motion in the secondary suspension. Suspension deflection measurements on a fast test train were carried out and used for validation of the simulations. The simulations show good agreement with the measurements and represent a good tool to predict the motion in the secondary suspension.

  6. In-vehicle CO ingression: validation through field measurements and mass balance simulations.

    PubMed

    Esber, Layale Abi; El-Fadel, Mutasem

    2008-05-01

    In this study, a mass balance modeling approach, with measured out-vehicle carbon monoxide (CO) levels and a trip-specific movement record as boundary conditions, was used to simulate in-vehicle CO concentration profiles. The simulation results were coupled with field measurements to demonstrate the occurrence of CO ingression into the vehicle compartment from the engine combustion and/or exhaust return of the test vehicle. Agreement between field and simulation results was obtained for variable amounts of infiltrated CO, equivalent to an in-vehicle emission rate of 250 to 1250 mg/h of CO depending on the vehicle ventilation settings.
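
    A minimal sketch of a single-compartment (well-mixed) mass balance of the kind described is shown below; it is not the authors' model code, and the parameter values are assumptions chosen only for illustration (the 750 mg/h source sits inside the 250-1250 mg/h range reported above).

        import numpy as np

        def in_vehicle_co(c_out, aer=20.0, emission_mg_h=750.0, volume_m3=3.0, dt_h=10.0 / 3600.0):
            # Well-mixed cabin mass balance, forward-Euler integration:
            #   dC_in/dt = AER * (C_out - C_in) + E / V
            # c_out: outside CO concentration (mg/m^3) sampled every dt_h hours;
            # aer: air exchange rate (1/h); emission_mg_h: in-cabin CO source; volume_m3: cabin volume.
            c_in = np.zeros_like(c_out, dtype=float)
            for t in range(1, len(c_out)):
                dcdt = aer * (c_out[t - 1] - c_in[t - 1]) + emission_mg_h / volume_m3
                c_in[t] = c_in[t - 1] + dcdt * dt_h
            return c_in

        # Constant 5 mg/m^3 outside: the cabin approaches 5 + 750 / (3 * 20) = 17.5 mg/m^3.
        print(in_vehicle_co(np.full(720, 5.0))[-1])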

  7. RSRM top hat cover simulator lightning test, volume 2. Appendix A: Resistance measurements. Appendix B: Lightning test data plots

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Resistance measurements are given in graphical form for the case in which a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was to measure the electric and magnetic fields induced inside a redesigned solid rocket motor case.

  8. Operation Dominic, Fish Bowl Series. Project Officer's report. Project 9. 1b. Ionospheric wind and diffusion measurements

    SciTech Connect

    Champion, K.; Manring, E.R.

    1985-09-01

    The aim of this project was to measure high-altitude wind velocities and diffusion coefficients in the altitude region between 60 and 150 km. The method involved the ejection of a sodium vapor trail from a Cajun rocket at dusk or dawn twilight. The sodium was sunlit and, as a result of emission of resonance radiation, was visible against a darkened background for about 20 minutes. The trail was photographed simultaneously from four different sites, allowing for subsequent triangulation to determine the altitude of various parts of the cloud. A major application of these wind and diffusion data, taken at dusk and dawn following the high-altitude nuclear tests, was to aid in determining the disposition of the nuclear debris.

  9. Cost estimation of hypertension management based on home blood pressure monitoring alone or combined office and ambulatory blood pressure measurements.

    PubMed

    Boubouchairopoulou, Nadia; Karpettas, Nikos; Athanasakis, Kostas; Kollias, Anastasios; Protogerou, Athanase D; Achimastos, Apostolos; Stergiou, George S

    2014-10-01

    This study aims at estimating the resources consumed and subsequent costs for hypertension management, using home blood pressure (BP) monitoring (HBPM) alone versus combined clinic measurements and ambulatory blood pressure monitoring (C/ABPM). One hundred sixteen untreated hypertensive subjects were randomized to use HBPM or C/ABPM for antihypertensive treatment initiation and titration. Health resources utilized within 12-months follow-up, their respective costs, and hypertension control were assessed. The total cost of the first year of hypertension management was lower in HBPM than C/ABPM arm (€1336.0 vs. €1473.5 per subject, respectively; P < .001). Laboratory tests' cost was identical in both arms. There was no difference in achieved BP control and drug expenditure (HBPM: €233.1 per subject; C/ABPM: €247.6 per subject; P = not significant), whereas the cost of BP measurements and/or visits was higher in C/ABPM arm (€393.9 vs. €516.9, per patient, respectively P < .001). The cost for subsequent years (>1) was €348.9 and €440.2 per subject, respectively for HBPM and C/ABPM arm and €2731.4 versus €3234.3 per subject, respectively (P < .001) for a 5-year projection. HBPM used alone for the first year of hypertension management presents lower cost than C/ABPM, and the same trend is observed in 5-year projection. The results on the resources consumption can be used to make cost estimates for other health-care systems.
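
    The 5-year figures quoted above follow from a simple projection (the first-year cost plus four maintenance years), which the short Python check below reproduces from the per-subject numbers in the abstract; the small HBPM difference is rounding.

        def multi_year_cost(first_year, subsequent_year, years=5):
            # Per-subject cost: one initiation/titration year plus (years - 1) maintenance years.
            return first_year + (years - 1) * subsequent_year

        print(multi_year_cost(1336.0, 348.9))   # HBPM:   ~2731.6 EUR (abstract reports 2731.4)
        print(multi_year_cost(1473.5, 440.2))   # C/ABPM:  3234.3 EUR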

  10. Evidence-based ergonomics. A comparison of Japanese and American office layouts.

    PubMed

    Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke

    2003-01-01

    There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.

  11. Quantitative analyses of spectral measurement error based on Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Ma, Congcong; Zhang, Qi; Lu, Junsheng; Xu, Kexin

    2015-03-01

    The spectral measurement error is controlled by the resolution and sensitivity of the spectroscopic instrument and by the instability of the measurement environment. In this talk, the spectral measurement error is analyzed quantitatively using Monte Carlo (MC) simulation. Taking the floating reference point measurement as an example, there is unavoidably a deviation between the measuring position and the theoretical position due to various influencing factors. In order to determine the error caused by the positioning accuracy of the measuring device, an MC simulation was carried out at a wavelength of 1310 nm for a 2% Intralipid solution. The MC simulation was performed with 10^10 photons and a ring sampling interval of 1 μm. The data from the MC simulation are analyzed on the basis of the thinning and calculating method (TCM) proposed in this talk. The results indicate that the TCM can be used to quantitatively analyze the spectral measurement error caused by positioning inaccuracy.

  12. Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Ely, Jeffry W.

    2012-01-01

    A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
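
    As a reminder of how an uncertainty budget of this kind is combined (GUM-style root sum of squares of sensitivity-weighted components), here is a minimal Python sketch; the component values are hypothetical and do not come from the study.

        import numpy as np

        def combined_standard_uncertainty(components):
            # components: iterable of (sensitivity_coefficient, standard_uncertainty) pairs.
            # u_c = sqrt( sum_i (c_i * u_i)^2 )
            return float(np.sqrt(sum((c * u) ** 2 for c, u in components)))

        # Hypothetical budget entries in dB (exterior playback, interior rattle, door-opening pressure).
        budget = [(1.0, 0.30), (1.0, 0.20), (1.0, 0.15)]
        print(combined_standard_uncertainty(budget))   # ~0.39 dB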

  13. MCNPX simulation of influence of cosmic rays on low-activity spectrometric measurements

    NASA Astrophysics Data System (ADS)

    Šolc, Jaroslav; Kovář, Petr; Dryák, Pavel

    2014-02-01

    Germanium gamma spectrometers are effective instruments for low-activity measurement of a mixture of radionuclides in environmental samples, food samples, materials released from nuclear facilities to the environment, etc. In such measurements, cosmic rays contribute significantly to the background signal. The Monte Carlo code MCNPX was used to calculate the pulse-height spectra of a coaxial high-purity germanium (HPGe) detector caused by cosmic rays penetrating through shielding made of concrete and lead. Simulations were compared to two different measurements, one performed inside a 10 cm thick lead shielding and another inside a larger chamber made of low-activity concrete with several ceiling thicknesses. In the first experiment, a discrepancy of up to a factor of 4 was found between the simulated and measured spectra at 2.62 MeV, slowly decreasing to unity at 13 MeV. It is assumed that this discrepancy is caused by the simplified treatment of muon energy losses, resulting in an underestimation of the count rate in the simulated pulse-height spectrum. Good agreement was obtained between simulation and measurement of the differences in detector count rates in the 662 keV and 1332 keV energy windows inside the concrete chamber with varying ceiling thickness. It is assumed that, due to the lower effective Z of concrete, delta-electron bremsstrahlung has a lower yield and the radiative energy losses of muons become important at higher energies than in lead. As a result, the total contribution of these effects to the outputs of the MCNPX simulations of the concrete chamber is not dominant in the investigated energy windows, and the simulation results are in close agreement with the measurement.

  14. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    PubMed

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-02-21

    A stochastic model of the processes involved in the measurement of the activity of the (222)Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the (222)Rn decay products concentrations in the air are realistically evaluated.
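
    The propagation step can be pictured with the toy Python sketch below: each uncertain input is sampled from its distribution and pushed through a simple measurement equation, and the spread of the outputs gives the result distribution. The model form and all numbers are illustrative assumptions, not the authors' model.

        import numpy as np

        rng = np.random.default_rng(42)

        def propagate(n=100_000):
            # Hypothetical measurement equation: C = counts / (eff * flow * t * coll).
            counts = rng.poisson(1200, n)              # counting statistics
            eff = rng.normal(0.25, 0.01, n)            # detection efficiency
            flow = rng.normal(1.5, 0.05, n)            # sampling flow rate (L/min)
            coll = rng.normal(0.95, 0.02, n)           # collection efficiency of the filter
            t = 10.0                                   # sampling time (min), taken as exact
            c = counts / (eff * flow * t * coll)
            return c.mean(), c.std()                   # mean result and its standard uncertainty

        print(propagate())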

  15. Determining minimum alarm activities of orphan sources in scrap loads; Monte Carlo simulations, validated with measurements

    NASA Astrophysics Data System (ADS)

    Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.

    2010-02-01

    Portal monitoring radiation detectors are commonly used by steel industries in the probing and detection of radioactivity contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMT), an electronic circuit, a controller that handles data output and manipulation linking the system to a display or a computer with appropriate software and usually, a light guide. Such a portal used by the steel industry was opened and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently an experiment with a uniform cargo along with two sets of experiments with different scrap loads and radioactive sources ( 137Cs, 152Eu) were performed and simulated. Simulated and measured results suggested that the nature of scrap is crucial when simulating scrap load-detector experiments. Using the same simulating configuration, a series of runs were performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities as well as the positions in which they were recorded are presented and discussed.

  16. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    NASA Astrophysics Data System (ADS)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  17. Coupling impedance of an in-vacuum undulator. Measurement, simulation, and analytical estimation

    SciTech Connect

    Simaluk, Victor; Blednykh, Alexei; Fielder, Richard; Rehm, Guenther; Bartolini, Riccardo

    2014-07-25

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. In order to get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. Moreover, the impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  18. An Evaluation of Monte Carlo Simulations of Neutron Multiplicity Measurements of Plutonium Metal

    SciTech Connect

    Mattingly, John; Miller, Eric; Solomon, Clell J. Jr.; Dennis, Ben; Meldrum, Amy; Clarke, Shaun; Pozzi, Sara

    2012-06-21

    In January 2009, Sandia National Laboratories conducted neutron multiplicity measurements of a polyethylene-reflected plutonium metal sphere. Over the past 3 years, those experiments have been collaboratively analyzed using Monte Carlo simulations conducted by University of Michigan (UM), Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and North Carolina State University (NCSU). Monte Carlo simulations of the experiments consistently overpredict the mean and variance of the measured neutron multiplicity distribution. This paper presents a sensitivity study conducted to evaluate the potential sources of the observed errors. MCNPX-PoliMi simulations of plutonium neutron multiplicity measurements exhibited systematic over-prediction of the neutron multiplicity distribution. The over-prediction tended to increase with increasing multiplication. MCNPX-PoliMi had previously been validated against only very low multiplication benchmarks. We conducted sensitivity studies to try to identify the cause(s) of the simulation errors; we eliminated the potential causes we identified, except for Pu-239 ν̄. A very small change (-1.1%) in the Pu-239 ν̄ dramatically improved the accuracy of the MCNPX-PoliMi simulation for all 6 measurements. This observation is consistent with the trend observed in the bias exhibited by the MCNPX-PoliMi simulations: a very small error in ν̄ is 'magnified' by increasing multiplication. We applied a scalar adjustment to Pu-239 ν̄ (independent of neutron energy); an adjustment that depends on energy is probably more appropriate.

  19. Final Report - From Measurements to Models: Cross-Comparison of Measured and Simulated Behavioral States of the Atmosphere

    SciTech Connect

    Del Genio, Anthony D; Hoffman, Forrest M; Hargrove, Jr, William W

    2007-10-22

    The ARM sites and the ARM Mobile Facility (AMF) were constructed to make measurements of the atmosphere and radiation system in order to quantify deficiencies in the simulation of clouds within models and to make improvements in those models. While the measurement infrastructure of ARM is well-developed and a model parameterization testbed capability has been established, additional effort is needed to develop statistical techniques which permit the comparison of simulation output from atmospheric models with actual measurements. Our project establishes a new methodology for objectively comparing ARM measurements to the outputs of leading global climate models and reanalysis data. The quantitative basis for this comparison is provided by a statistical procedure which establishes an exhaustive set of mutually-exclusive, recurring states of the atmosphere from sets of multivariate atmospheric and cloud conditions, and then classifies multivariate measurements or simulation outputs into those states. Whether measurements and models classify the atmosphere into the same states at specific locations through time provides an unequivocal comparison result. Times and locations in both geographic and state space of model-measurement agreement and disagreement will suggest directions for the collection of additional measurements at existing sites, provide insight into the global representativeness of the current ARM sites (suggesting locations and times for use of the AMF), and provide a basis for improvement of models. Two different analyses were conducted: One, using the Parallel Climate Model, focused on an IPCC climate change scenario and clusters that characterize long-term changes in the hydrologic cycle. The other, using the GISS Model E GCM and the ARM Active Remotely Sensed Cloud Layers product, explored current climate cloud regimes in the Tropical West Pacific.

  20. Simulation method for interference fringe patterns in measuring gear tooth flanks by laser interferometry.

    PubMed

    Fang, Suping; Wang, Leijie; Komori, Masaharu; Kubo, Aizoh

    2010-11-20

    We present a ray-tracing-based method for simulation of interference fringe patterns (IFPs) for measuring gear tooth flanks with a two-path interferometer. This simulation method involves two steps. In the first step, the profile of an IFP is achieved by means of ray tracing within the object path of the interferometer. In the second step, the profile of an IFP is filled with interference fringes, according to a set of functions from an optical path length to a fringe gray level. To examine the correctness of this simulation method, simulations are performed for two spur involute gears, and the simulated IFPs are verified by experiments using the actual two-path interferometer built on an optical platform.

  1. Simulation System for a Rebreathing Technique To Measure Multiple Cardiopulmonary Function Parameters

    PubMed Central

    Yilmaz, Cuneyt; Chance, William W.; Johnson, Robert L.; Hsia, Connie C. W.

    2009-01-01

    Background: We developed a simple method for simulating a rebreathing maneuver to test the accuracy of the apparatus for simultaneous measurement of lung volume, diffusing capacity of the lung for carbon monoxide (Dlco), diffusing capacity of the lung for nitric oxide (Dlno), and pulmonary blood flow (Q̇c). Methods: A test gas mixture containing 0.3% methane, 0.3% CO, 0.8% acetylene, 30% O2, and 40 ppm nitric oxide in balance of nitrogen was sequentially diluted with a rebreathing gas mixture containing 0.3% acetylene, 0.3% methane, and 21% O2 in balance of nitrogen in order to simulate the in vivo end-tidal disappearance of the test gas mixture. Simulation of one rebreathing maneuver consisted of at least four serial dilution steps with a performance time of < 5 min. Using this technique, we estimated functional residual capacity, Q̇c, Dlco, and Dlno at various flow rates and dilution ratios (0.95 to 4.04 L, 3.54 to 6.83 L/min, 7.27 to 15.12 mL/min/mm Hg, and 6.51 to 12.00 mL/min/mm Hg, respectively) and verified simulation results against nominal values. The same apparatus also could simulate a single-breath procedure. Results: Compared to nominal values, errors in measured values by rebreathing and single-breath Dlco simulation remained < 5% and 7%, respectively. Slopes of the correlations were close to 1.0 (within ± 5% and ± 6.4% in rebreathing and single-breath Dlco simulation studies, respectively). Conclusion: The results demonstrate the feasibility of this simulation method for standardizing the experimental measurements obtained by rebreathing and single-breath techniques. Incorporation of these simulation steps enhances the noninvasive assessment of cardiopulmonary function. PMID:19420198
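
    The lung-volume part of such a rebreathing measurement rests on dilution of the insoluble tracer (methane): C0·V_rb = Ceq·(V_rb + FRC). The one-line Python check below illustrates that relation with made-up example numbers, not values from the study.

        def frc_from_methane_dilution(v_rebreathe_l, ch4_initial, ch4_equilibrated):
            # C0 * V_rb = Ceq * (V_rb + FRC)  =>  FRC = V_rb * (C0 / Ceq - 1)
            return v_rebreathe_l * (ch4_initial / ch4_equilibrated - 1.0)

        print(frc_from_methane_dilution(1.5, 0.30, 0.18))   # example: 1.0 L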

  2. Myocardial physiology measurements using contrast enhanced dynamic computed tomography: simulation of beam hardening effect

    NASA Astrophysics Data System (ADS)

    Cao, Minsong; Stantz, Keith M.; Liang, Yun

    2006-03-01

    An initial animal study on quantifying myocardial physiology through contrast-enhanced dynamic x-ray CT suggested that beam hardening is one of the limiting factors for accurate regional physiology measurement. In this study, a series of simulations was performed to investigate its deteriorating effects, and two correction algorithms were adapted and evaluated for their efficiency in improving the measurements. The simulation tool consists of a module simulating data acquisition by a real polyenergetic scanner system and a heart phantom consisting of simple geometric objects representing the ventricles and myocardium. Each phantom component was modeled with time-varying attenuation coefficients determined by ideal iodine contrast dynamic curves obtained from experimental data or simulation. A compartment model was used to generate the ideal myocardium contrast curve using physiological parameters consistent with measured values. Projection data of the phantom were simulated and reconstructed to produce a sequence of simulated CT images. Simulated contrast dynamic curves were fitted to the compartmental model, and the resultant physiological parameters were compared with the ideal values to estimate the errors induced by beam hardening artifacts. The simulations yielded deterioration patterns of the contrast dynamic curves similar to those observed in the initial study. Significant underestimation of the left ventricle curves and corruption of the regional myocardium curves result in systematic errors of regional perfusion up to approximately 24% and overestimates of fractional blood volume (f_iv) up to 13%. The correction algorithms lead to significant improvement, with errors of perfusion reduced to 7% and errors of f_iv within 2%, which shows promise for more robust myocardial physiology measurement.

  3. Measurement of metabolic responses to an orbital-extravehicular work-simulation exercise

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Webbon, Bruce

    1988-01-01

    This paper describes a new system designed to simulate orbital EVA work and measure metabolic responses to these space-work exercises. The system incorporates an experimental protocol, a controlled-atmosphere chamber, an EVA-work exercise device, the instrumentation, and a data acquisition system. Engineering issues associated with the design of the proposed system are discussed. This EVA-work simulating system can be used with various types of upper-body work, including task boards, rope pulling, and arm ergometry. Design diagrams and diagrams of various types of work simulation are included.

  4. Temperature Dependent Measurement And Simulation Of Fresnel Lenses For Concentrating Photovoltaics

    NASA Astrophysics Data System (ADS)

    Hornung, Thorsten; Bachmaier, Andreas; Nitz, Peter; Gombert, Andreas

    2010-10-01

    Concentrating photovoltaics (CPV) require large areas of optical components that concentrate incident sunlight effectively onto a solar cell. Fresnel lenses are often used as primary optical component providing this concentration. When applied in the field, varying conditions during operation lead to variations in lens temperature which has a strong impact on the optical efficiency of the lenses. A setup for indoor characterization with the ability to heat lens plates allows for the assessment of the quality of Fresnel lenses by means of their irradiance profiles in the focal plane. To analyze the measured temperature dependency we simulate thermal deformations of the lens geometry with finite element method (FEM) tools and use the resulting lens geometry as an input to ray tracing simulations. We performed high accuracy measurements of the temperature and wavelength dependent refractive indices of relevant lens materials to obtain additional input data for computer simulations. A close match between computer simulations and measurements of the irradiance in the focal plane could be achieved, validating our simulation approach. This allows us to judge and optimize the temperature dependence of new lens designs before building and testing prototypes. The simulations themselves allow us to analyze and understand all superimposed effects in detail. The developed tools in combination with detailed solar resource data and knowledge of the CPV system will be the basis for future assessment of overall performance and further optimization of optics for CPV applications.

  5. Simulations for the PHENIX Muon Piston Calorimeter Measurement of Transverse Energy

    NASA Astrophysics Data System (ADS)

    Zumberge, Christopher

    2012-10-01

    The PHENIX detector's Muon Piston Calorimeter measures the energies of photons (most of which are the products of pion decay) in the collisions of particles at the Relativistic Heavy Ion Collider (RHIC). The data acquired from the collisions of gold ions at √s_NN = 200 GeV will be used to measure the transverse energy over the kinematic acceptance of the detector. Corrections for the detector's hadronic response are needed to complete a measurement of the transverse energy and estimate systematic error. The PHENIX Integrated Simulation Application (PISA) is a software package that integrates both a GEANT3 simulation of the entire PHENIX detector and an event generator. In this case HIJING is being used as the event generator. Progress on the production of these simulations will be reported.

  6. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
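
    The spectrum-derivation step described above can be illustrated with a generic least-squares unfolding: the measured percent depth dose is modeled as a weighted sum of monoenergetic depth-dose kernels, and the weights are fitted with a Levenberg-Marquardt solver. The kernels, energy bins, and weights below are hypothetical stand-ins, not the authors' data or code:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical monoenergetic PDD "kernels": rows = depths, columns = energy bins.
    depths = np.linspace(0.0, 20.0, 40)                      # cm
    energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])    # keV bins (illustrative)
    kernels = np.exp(-np.outer(depths, 40.0 / energies))     # crude attenuation-like shapes

    true_w = np.array([0.1, 0.3, 0.35, 0.2, 0.05])
    measured_pdd = kernels @ true_w                          # stand-in for the measured PDD

    def residual(w):
        return kernels @ w - measured_pdd

    # Levenberg-Marquardt ('lm') as named in the abstract; non-negativity constraints
    # are omitted to keep the sketch short.
    fit = least_squares(residual, x0=np.full(5, 0.2), method="lm")
    weights = fit.x / fit.x.sum()
    print("fitted spectral weights:", np.round(weights, 3))
    ```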

  7. Assessment of simulation fidelity using measurements of piloting technique in flight

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Cleveland, W. B.; Key, D. L.

    1984-01-01

    The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. Nap-of-the-Earth (NOE) piloting tasks which were investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring the pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.

  8. Molecular dynamics simulations as a complement to nuclear magnetic resonance and X-ray diffraction measurements.

    PubMed

    Feller, Scott E

    2007-01-01

    Advances in the field of atomic-level membrane simulations are being driven by continued growth in computing power, improvements in the available potential energy functions for lipids, and new algorithms that implement advanced sampling techniques. These developments are allowing simulations to assess time- and length scales wherein meaningful comparisons with experimental measurements on macroscopic systems can be made. Such comparisons provide stringent tests of the simulation methodologies and force fields, and thus, advance the simulation field by pointing out shortcomings of the models. Extensive testing against available experimental data suggests that for many properties modern simulations have achieved a level of accuracy that provides substantial predictive power and can aid in the interpretation of experimental data. This combination of closely coupled laboratory experiments and molecular dynamics simulations holds great promise for the understanding of membrane systems. In the following, the molecular dynamics method is described with particular attention to those aspects critical for simulating membrane systems and to the calculation of experimental observables from the simulation trajectory.

  9. A SOFTWARE TOOL TO COMPARE MEASURED AND SIMULATED BUILDING ENERGY PERFORMANCE DATA

    SciTech Connect

    Maile, Tobias; Bazjanac, Vladimir; O'Donnell, James; Garr, Matthew

    2011-11-01

    Building energy performance is often inadequate when compared to design goals. To link design goals to actual operation one can compare measured with simulated energy performance data. Our previously developed comparison approach is the Energy Performance Comparison Methodology (EPCM), which enables the identification of performance problems based on a comparison of measured and simulated performance data. In the context of this method, we developed a software tool that provides graphing and data processing capabilities for the two performance data sets. The software tool, called SEE IT (Stanford Energy Efficiency Information Tool), eliminates the need for manual generation of data plots and data reformatting. SEE IT makes the generation of time series, scatter and carpet plots independent of the source of data (measured or simulated) and provides a valuable tool for comparing measurements with simulation results. SEE IT also allows assigning data points to a predefined building object hierarchy and supports different versions of simulated performance data. This paper briefly introduces the EPCM, describes the SEE IT tool and illustrates its use in the context of a building case study.
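
    SEE IT itself is not reproduced here; the sketch below is a minimal, hypothetical pandas/matplotlib stand-in for the kind of measured-versus-simulated time-series plot the tool automates:

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical hourly energy data for one building object (e.g. a chiller), in kWh.
    idx = pd.date_range("2011-07-01", periods=48, freq="h")
    measured = pd.Series(range(48), index=idx, dtype=float) % 24
    simulated = measured * 0.9 + 1.5

    df = pd.DataFrame({"measured": measured, "simulated": simulated})

    ax = df.plot(title="Chiller energy: measured vs. simulated")
    ax.set_xlabel("time")
    ax.set_ylabel("kWh")
    plt.tight_layout()
    plt.savefig("comparison_timeseries.png")   # stand-in for SEE IT's automated plotting
    ```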

  10. Infrared measurements and simulations of metal meshes in a focused beam

    SciTech Connect

    Stewart, K. P.; Möller, K. D.; Grebel, H.

    2014-02-07

    Infrared transmittance measurements of quasioptical filters are often restricted to a focused beam due to the optical design of the spectrometer. In contrast, numerical simulations assume an incident plane wave, which makes it difficult to compare theory with experimental data. We compare transmittance measurements with numerical simulations of square arrays of circular holes in 3-μm thick Cu sheets at angles of incidence from 0° to 20° for both s and p polarizations. These simple structures allow detailed tests of our electromagnetic simulation methods and show excellent agreement between theory and measurement. Measurements in a focused beam are accurately simulated by combining plane wave calculations over a range of angles that correspond to the focal ratio of the incident beam. Similar screens have been used as components of narrow bandpass filters for far-infrared astronomy, but these results show that the transmittance variations with angle of incidence and polarization limit their use to collimated beams at near normal incidence. The simulations are accurate enough to eliminate a costly trial-and-error approach to the design of more complex and useful quasioptical infrared filters and to predict their in-band performance and out-of-band blocking in focused beams.
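
    The focused-beam treatment described above amounts to averaging plane-wave results over the cone of incidence angles set by the focal ratio. A hedged numerical sketch of that weighting, with a hypothetical plane-wave transmittance function standing in for the electromagnetic solver output, is:

    ```python
    import numpy as np

    def focused_beam_transmittance(plane_wave_T, f_number, n_angles=200):
        """Approximate the transmittance seen by a focused beam by averaging
        plane-wave transmittances T(theta) over the cone of incidence angles.

        The cone half-angle is taken as arctan(1/(2*F#)) and each incidence
        angle is weighted by sin(theta), i.e. by the solid angle of its annulus.
        This mirrors the combination-of-plane-waves idea in the abstract; the
        transmittance function itself is a hypothetical stand-in.
        """
        theta_max = np.arctan(1.0 / (2.0 * f_number))
        theta = np.linspace(0.0, theta_max, n_angles)
        weights = np.sin(theta)
        values = np.array([plane_wave_T(t) for t in theta])
        return float((weights * values).sum() / weights.sum())

    # Hypothetical plane-wave transmittance that falls off with angle of incidence
    example_T = lambda th: 0.8 * np.cos(th) ** 2
    print(round(focused_beam_transmittance(example_T, f_number=2.0), 4))
    ```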

  11. Flow visualizations, velocity measurements, and surface convection measurements in simulated 20.8-cm Nova box amplifier cavities

    SciTech Connect

    Julien, J.L.; Molishever, E.L.

    1983-10-31

    Reported are fluid mechanics experiments performed in models of the 20.8-cm Nova amplifier lamp and disk cavities. Lamp cavity nitrogen flows are shown, by both flow visualization and velocity measurements, to be acceptably uniform and parallel to the flashlamps. In contrast, the nitrogen flows in the disk cavity are shown to be disordered. Even though disk cavity flows are disordered, the simplest of three proposed nitrogen introduction systems for the disk cavity was found to be acceptable based on convection measurements made at the surfaces of simulated laser disks.

  12. Electric field simulation and measurement of a pulse line ion accelerator

    NASA Astrophysics Data System (ADS)

    Shen, Xiao-Kang; Zhang, Zi-Min; Cao, Shu-Chun; Zhao, Hong-Wei; Wang, Bo; Shen, Xiao-Li; Zhao, Quan-Tang; Liu, Ming; Jing, Yi

    2012-07-01

    An oil dielectric helical pulse line to demonstrate the principles of a Pulse Line Ion Accelerator (PLIA) has been designed and fabricated. The simulation of the axial electric field of the accelerator with the CST code has been completed, and the simulation results show complete agreement with the theoretical calculations. To fully understand the real value of the electric field excited by the helical line in the PLIA, an integrated optical electric-field measurement system was adopted. The measurement result shows that the real magnitude of the axial electric field is smaller than that calculated, probably because the actual pitch of the resistor column is much less than that of the helix.

  13. Diffuse photon density wave measurements in comparison with the Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kuzmin, V. L.; Neidrauer, M. T.; Diaz, D.; Zubkov, L. A.

    2015-03-01

    The Diffuse Photon Density Wave (DPDW) methodology is widely used in a number of biomedical applications. Here we present results of Monte Carlo simulations that employ an effective numerical procedure, based upon a description of radiative transfer in terms of the Bethe-Salpeter equation, and compare them with measurements from Intralipid aqueous solutions. In our scheme every act of scattering contributes to the signal. We find the Monte Carlo simulations and measurements to be in very good agreement for a wide range of source-detector separations.
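
    For reference, the diffusion-approximation counterpart of such measurements is the infinite-medium photon-density-wave solution, whose amplitude and phase versus source-detector separation follow from a complex wave number. The optical properties in the sketch below are assumed, Intralipid-like values, not those used in the paper:

    ```python
    import numpy as np

    # Illustrative optical properties resembling an Intralipid phantom (assumed values)
    mu_a = 0.05          # absorption coefficient, 1/cm
    mu_s_prime = 10.0    # reduced scattering coefficient, 1/cm
    n = 1.33
    v = 3e10 / n         # speed of light in the medium, cm/s
    f_mod = 100e6        # source modulation frequency, Hz
    omega = 2 * np.pi * f_mod

    D = 1.0 / (3.0 * (mu_a + mu_s_prime))          # diffusion coefficient, cm
    k = np.sqrt((mu_a - 1j * omega / v) / D)       # complex DPDW wave number, 1/cm

    for r in (1.0, 2.0, 3.0):                      # source-detector separations, cm
        amplitude = np.exp(-k.real * r) / (4 * np.pi * D * r)
        phase_deg = np.degrees(abs(k.imag) * r)
        print(f"r = {r:.1f} cm  amplitude ~ {amplitude:.3e}  phase lag ~ {phase_deg:.1f} deg")
    ```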

  14. Comparison of Analysis, Simulation, and Measurement of Wire-to-Wire Crosstalk. Part 2

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur T.; Yavoich, Brian James; Hodson, Shane M.; Godley, Franklin

    2010-01-01

    In this investigation, we compare crosstalk analysis, simulation, and measurement results for electrically short configurations. Methods include hand calculations, PSPICE simulations, Microstripes transient field solver, and empirical measurement. In total, four representative physical configurations are examined, including a single wire over a ground plane, a twisted pair over a ground plane, generator plus receptor wires inside a cylindrical conduit, and a single receptor wire inside a cylindrical conduit. Part 1 addresses the first two cases, and Part 2 addresses the final two. Agreement between the analysis methods and test data is shown to be very good.
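
    The hand calculations mentioned above are typically based on the low-frequency inductive-plus-capacitive coupling model for electrically short, weakly coupled lines. The sketch below implements that generic model; the per-unit-length parameters and terminations in the example are assumed, not taken from the paper:

    ```python
    import numpy as np

    def short_line_crosstalk(freq_hz, lm_per_m, cm_per_m, length_m,
                             r_ne, r_fe, v_gen, r_s, r_l):
        """Near-end / far-end crosstalk for electrically short, weakly coupled lines.

        Uses the common low-frequency inductive + capacitive coupling approximation
        (per-unit-length mutual inductance lm and mutual capacitance cm). All values
        passed in the example below are illustrative, not from the paper.
        """
        w = 2 * np.pi * freq_hz
        lm, cm = lm_per_m * length_m, cm_per_m * length_m
        i_g = v_gen / (r_s + r_l)              # generator-circuit current
        v_g = v_gen * r_l / (r_s + r_l)        # generator-line voltage near the receptor
        ind = 1j * w * lm * i_g
        cap = 1j * w * cm * v_g * (r_ne * r_fe / (r_ne + r_fe))
        v_ne = (r_ne / (r_ne + r_fe)) * ind + cap
        v_fe = -(r_fe / (r_ne + r_fe)) * ind + cap
        return abs(v_ne), abs(v_fe)

    # Example: 1 m wire pair over a ground plane, 1 MHz, 1 V drive (all values assumed)
    ne, fe = short_line_crosstalk(1e6, 0.5e-6, 20e-12, 1.0, 50.0, 50.0, 1.0, 50.0, 50.0)
    print(f"|V_NE| = {ne*1e3:.2f} mV, |V_FE| = {fe*1e3:.2f} mV")
    ```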

  15. Simulation of Feynman-alpha measurements from SILENE reactor using a discrete ordinates code

    SciTech Connect

    Humbert, P.; Mechitoua, B.; Verrey, B.

    2006-07-01

    In this paper we present the simulation of Feynman-α measurements from SILENE reactor using the discrete ordinates code PANDA. A 2-D cylindrical model of SILENE reactor is designed for computer simulations. Two methods are implemented for variance to mean calculation. In the first method we used the Feynman point reactor formula where the parameters (Diven factor, reactivity, detector efficiency and alpha eigenvalue) are obtained by 2-D PANDA calculations. In the second method the time dependent adjoint equations for the first two moments are solved. The calculated results are compared to the measurements. Both methods are in excellent agreement with the experimental data. (authors)
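
    The variance-to-mean quantity underlying such measurements, the Feynman Y, is straightforward to form from gated counts and to fit with the point-kinetics shape Y(T) = Y_inf [1 - (1 - e^(-αT))/(αT)]. The sketch below uses synthetic data with assumed parameters, not SILENE measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def feynman_y(counts_per_gate):
        """Variance-to-mean ratio minus one (the Feynman Y) for one gate width."""
        c = np.asarray(counts_per_gate, dtype=float)
        return c.var(ddof=1) / c.mean() - 1.0

    def y_model(T, y_inf, alpha):
        """Point-kinetics Feynman-alpha shape: Y(T) = Y_inf*(1 - (1 - exp(-a*T))/(a*T))."""
        return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

    # Uncorrelated (Poisson) counts give Y ~ 0; correlated fission chains give Y > 0.
    print("Poisson check, Y =", round(feynman_y(rng.poisson(3.0, 5000)), 3))

    # Synthetic illustration: generate Y(T) from assumed parameters and refit alpha.
    gate_widths = np.logspace(-4, -1, 15)                                # s
    y_obs = y_model(gate_widths, 0.8, 250.0) + rng.normal(0, 0.01, 15)   # assumed, not SILENE
    popt, _ = curve_fit(y_model, gate_widths, y_obs, p0=(1.0, 100.0))
    print(f"fitted Y_inf = {popt[0]:.2f}, alpha = {popt[1]:.1f} 1/s")
    ```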

  16. Measurement of flowfield in a simulated solid-propellant ducted rocket combustor using laser Doppler velocimetry

    SciTech Connect

    Hsieh, W.H.; Yang, V.; Chuang, C.L.; Yang, A.S.; Cherng, D.L.

    1989-01-01

    A two-component LDV system was used to obtain detailed flow velocity and turbulence measurements in order to study the flow characteristics in a simulated solid-propellant ducted rocket combustor. The vortical structures near the dome region, the size of the recirculation zone, and the location of the reattachment point are all shown to be strongly affected by the jet momentum of both ram air and fuel streams. It is found that the turbulence intensity is anisotropic throughout the front portion of the simulated combustor, and that the measured Reynolds stress component distribution is well correlated with the local mean velocity vector distribution. 25 refs.

  17. Occupational exposure of personnel operating military radio equipment: measurements and simulation.

    PubMed

    Paljanos, Annamaria; Miclaus, Simona; Munteanu, Calin

    2015-09-01

    The technical literature provides numerous studies concerning radiofrequency exposure measurements for various radio communication devices, but there are few studies related to the exposure of personnel operating military radio equipment. In order to evaluate exposure and identify cases when safety requirements are not entirely met, both measurements and simulations are needed for accurate results. Moreover, given the technical characteristics of the radio devices used in the military, personnel mainly operate in the near-field region, so both measurement and simulation become more complex. Measurements were made in situ using a broadband personal exposimeter equipped with two isotropic probes for both the electric and magnetic components of the field. The experiment was designed for three different operating frequencies of the same radio equipment, while simulations were made in the FEKO software using hybrid numerical methods to solve complex electromagnetic field problems. The paper aims to discuss the comparative results of the measurements and simulations, and to compare them to the reference levels specified in military and civilian radiofrequency exposure standards.

  18. Design of an Orthodontic Torque Simulator for Measurement of Bracket Deformation

    NASA Astrophysics Data System (ADS)

    Melenka, G. W.; Nobes, D. S.; Major, P. W.; Carey, J. P.

    2013-12-01

    The design and testing of an orthodontic torque simulator that reproduces the effect of archwire rotation on orthodontic brackets is described. This unique device is capable of simultaneously measuring the deformation of and the loads applied to an orthodontic bracket due to archwire rotation. Archwire rotation is used by orthodontists to correct the inclination of teeth within the mouth. This orthodontic torque simulator will provide knowledge of the deformation and loads applied to orthodontic brackets that will aid clinicians by describing the effect of archwire rotation on brackets. It will also influence the design of new archwire-bracket systems by providing an assessment of performance. Deformation of the orthodontic bracket tie wings is measured using a digital image correlation process to capture elastic and plastic deformation. The magnitude of the forces and moments applied to the bracket through the archwire is also measured using a six-axis load cell. Initial tests have been performed on two orthodontic brackets of varying geometry to demonstrate the measurement capability of the orthodontic torque simulator. The demonstration experiment shows that a Damon Q bracket had a final plastic deformation after a single loading of 0.022 mm, while the Speed bracket deformed 0.071 mm. This indicates that the Speed bracket plastically deforms 3.2 times more than the Damon Q bracket for a similar magnitude of applied moment. The demonstration experiment also shows that bracket geometry affects the deformation of orthodontic brackets and that this difference can be detected using the orthodontic torque simulator.

  19. Design and Development of Virtual Reality Simulation for Teaching High-Risk Low-Volume Problem-Prone Office-Based Medical Emergencies

    ERIC Educational Resources Information Center

    Lemheney, Alexander J.

    2014-01-01

    Physicians' offices are not the usual place where emergencies occur; thus how staff remains prepared and current regarding medical emergencies presents an ongoing challenge for private practitioners. The very nature of low-volume, high-risk, and problem-prone medical emergencies is that they occur with such infrequency it is difficult for staff to…

  20. Simulation of boreal black spruce chronosequences: Comparison to field measurements and model evaluation

    NASA Astrophysics Data System (ADS)

    Bond-Lamberty, Ben; Gower, Stith T.; Goulden, Michael L.; McMillan, Andrew

    2006-06-01

    This study used the Biome Biogeochemical Cycles (Biome-BGC) process model to simulate boreal forest dynamics, compared the results with a variety of measured carbon content and flux data from two boreal chronosequences in northern Manitoba, Canada, and examined how model output was affected by water and nitrogen limitations on simulated plant production and decomposition. Vascular and nonvascular plant growth were modeled over 151 years in well-drained and poorly drained forests, using as many site-specific model parameters as possible. Measured data included (1) leaf area and carbon content from site-specific allometry data, (2) aboveground and belowground net primary production from allometry and root cores, and (3) flux data, including biometry-based net ecosystem production and tower-based net ecosystem exchange. The simulation used three vegetation types or functional groups (evergreen needleleaf trees, deciduous broadleaf trees, and bryophytes). Model output matched some of the observed data well, with net primary production, biomass, and net ecosystem production (NEP) values usually (50-80% of data) within the errors of observed values. Leaf area was generally underpredicted. In the simulation, nitrogen limitation increased with stand age, while soil anoxia limited vascular plant growth in the poorly drained simulation. NEP was most sensitive to climate variability in the poorly drained stands. Simulation results are discussed with respect to conceptual issues in, and parameterization of, the Biome-BGC model.

  1. Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction

    SciTech Connect

    Aaltonen, T.; Adelman, J.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.; Apresyan, A.; /Purdue U. /Waseda U.

    2010-04-01

    The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper the authors present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.

  2. Measurement of the B- lifetime using a simulation free approach for trigger bias correction

    NASA Astrophysics Data System (ADS)

    Aaltonen, T.; Adelman, J.; Álvarez González, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.; Apresyan, A.; Arisawa, T.; Artikov, A.; Asaadi, J.; Ashmanskas, W.; Attal, A.; Aurisano, A.; Azfar, F.; Badgett, W.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Barria, P.; Bartos, P.; Bauer, G.; Beauchemin, P.-H.; Bedeschi, F.; Beecher, D.; Behari, S.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Beretvas, A.; Bhatti, A.; Binkley, M.; Bisello, D.; Bizjak, I.; Blair, R. E.; Blocker, C.; Blumenfeld, B.; Bocci, A.; Bodek, A.; Boisvert, V.; Bortoletto, D.; Boudreau, J.; Boveia, A.; Brau, B.; Bridgeman, A.; Brigliadori, L.; Bromberg, C.; Brubaker, E.; Budagov, J.; Budd, H. S.; Budd, S.; Burkett, K.; Busetto, G.; Bussey, P.; Buzatu, A.; Byrum, K. L.; Cabrera, S.; Calancha, C.; Camarda, S.; Campanelli, M.; Campbell, M.; Canelli, F.; Canepa, A.; Carls, B.; Carlsmith, D.; Carosi, R.; Carrillo, S.; Carron, S.; Casal, B.; Casarsa, M.; Castro, A.; Catastini, P.; Cauz, D.; Cavaliere, V.; Cavalli-Sforza, M.; Cerri, A.; Cerrito, L.; Chang, S. H.; Chen, Y. C.; Chertok, M.; Chiarelli, G.; Chlachidze, G.; Chlebana, F.; Cho, K.; Chokheli, D.; Chou, J. P.; Chung, K.; Chung, W. H.; Chung, Y. S.; Chwalek, T.; Ciobanu, C. I.; Ciocci, M. A.; Clark, A.; Clark, D.; Compostella, G.; Convery, M. E.; Conway, J.; Corbo, M.; Cordelli, M.; Cox, C. A.; Cox, D. J.; Crescioli, F.; Cuenca Almenar, C.; Cuevas, J.; Culbertson, R.; Cully, J. C.; Dagenhart, D.; D'Ascenzo, N.; Datta, M.; Davies, T.; de Barbaro, P.; de Cecco, S.; Deisher, A.; de Lorenzo, G.; Dell'Orso, M.; Deluca, C.; Demortier, L.; Deng, J.; Deninno, M.; D'Errico, M.; di Canto, A.; di Ruzza, B.; Dittmann, J. R.; D'Onofrio, M.; Donati, S.; Dong, P.; Dorigo, T.; Dube, S.; Ebina, K.; Elagin, A.; Erbacher, R.; Errede, D.; Errede, S.; Ershaidat, N.; Eusebi, R.; Fang, H. C.; Farrington, S.; Fedorko, W. T.; Feild, R. G.; Feindt, M.; Fernandez, J. P.; Ferrazza, C.; Field, R.; Flanagan, G.; Forrest, R.; Frank, M. J.; Franklin, M.; Freeman, J. C.; Furic, I.; Gallinaro, M.; Galyardt, J.; Garberson, F.; Garcia, J. E.; Garfinkel, A. F.; Garosi, P.; Gerberich, H.; Gerdes, D.; Gessler, A.; Giagu, S.; Giakoumopoulou, V.; Giannetti, P.; Gibson, K.; Gimmell, J. L.; Ginsburg, C. M.; Giokaris, N.; Giordani, M.; Giromini, P.; Giunta, M.; Giurgiu, G.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldschmidt, N.; Golossanov, A.; Gomez, G.; Gomez-Ceballos, G.; Goncharov, M.; González, O.; Gorelov, I.; Goshaw, A. T.; Goulianos, K.; Gresele, A.; Grinstein, S.; Grosso-Pilcher, C.; Group, R. C.; Grundler, U.; Guimaraes da Costa, J.; Gunay-Unalan, Z.; Haber, C.; Hahn, S. R.; Halkiadakis, E.; Han, B.-Y.; Han, J. Y.; Happacher, F.; Hara, K.; Hare, D.; Hare, M.; Harr, R. F.; Hartz, M.; Hatakeyama, K.; Hays, C.; Heck, M.; Heinrich, J.; Herndon, M.; Heuser, J.; Hewamanage, S.; Hidas, D.; Hill, C. S.; Hirschbuehl, D.; Hocker, A.; Hou, S.; Houlden, M.; Hsu, S.-C.; Hughes, R. E.; Huffman, B. T.; Hurwitz, M.; Husemann, U.; Hussein, M.; Huston, J.; Incandela, J.; Introzzi, G.; Iori, M.; Ivanov, A.; James, E.; Jang, D.; Jayatilaka, B.; Jeon, E. J.; Jha, M. K.; Jindariani, S.; Johnson, W.; Jones, M.; Joo, K. K.; Jun, S. Y.; Jung, J. E.; Junk, T. R.; Kamon, T.; Kar, D.; Karchin, P. E.; Kato, Y.; Kephart, R.; Ketchum, W.; Keung, J.; Khotilovich, V.; Kilminster, B.; Kim, D. H.; Kim, H. S.; Kim, H. W.; Kim, J. E.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kirsch, L.; Klimenko, S.; Kondo, K.; Kong, D. J.; Konigsberg, J.; Korytov, A.; Kotwal, A. 
V.; Kreps, M.; Kroll, J.; Krop, D.; Krumnack, N.; Kruse, M.; Krutelyov, V.; Kuhr, T.; Kulkarni, N. P.; Kurata, M.; Kwang, S.; Laasanen, A. T.; Lami, S.; Lammel, S.; Lancaster, M.; Lander, R. L.; Lannon, K.; Lath, A.; Latino, G.; Lazzizzera, I.; Lecompte, T.; Lee, E.; Lee, H. S.; Lee, J. S.; Lee, S. W.; Leone, S.; Lewis, J. D.; Lin, C.-J.; Linacre, J.; Lindgren, M.; Lipeles, E.; Lister, A.; Litvintsev, D. O.; Liu, C.; Liu, T.; Lockyer, N. S.; Loginov, A.; Lovas, L.; Lucchesi, D.; Lueck, J.; Lujan, P.; Lukens, P.; Lungu, G.; Lyons, L.; Lys, J.; Lysak, R.; MacQueen, D.; Madrak, R.; Maeshima, K.; Makhoul, K.; Maksimovic, P.; Malde, S.; Malik, S.; Manca, G.; Manousakis-Katsikakis, A.; Margaroli, F.; Marino, C.; Marino, C. P.; Martin, A.; Martin, V.; Martínez, M.; Martínez-Ballarín, R.; Mastrandrea, P.; Mathis, M.; Mattson, M. E.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; McNulty, R.; Mehta, A.; Mehtala, P.; Menzione, A.; Mesropian, C.; Miao, T.; Mietlicki, D.; Miladinovic, N.; Miller, R.; Mills, C.; Milnik, M.; Mitra, A.; Mitselmakher, G.; Miyake, H.; Moed, S.; Moggi, N.; Mondragon, M. N.; Moon, C. S.; Moore, R.; Morello, M. J.; Morlock, J.; Movilla Fernandez, P.; Mülmenstädt, J.; Mukherjee, A.; Muller, Th.; Murat, P.; Mussini, M.; Nachtman, J.; Nagai, Y.; Naganoma, J.; Nakamura, K.; Nakano, I.; Napier, A.; Nett, J.; Neu, C.; Neubauer, M. S.; Neubauer, S.; Nielsen, J.; Nodulman, L.; Norman, M.; Norniella, O.; Nurse, E.; Oakes, L.; Oh, S. H.; Oh, Y. D.; Oksuzian, I.; Okusawa, T.; Orava, R.; Osterberg, K.; Pagan Griso, S.; Pagliarone, C.; Palencia, E.; Papadimitriou, V.; Papaikonomou, A.; Paramanov, A. A.; Parks, B.; Pashapour, S.; Patrick, J.; Pauletta, G.; Paulini, M.; Paus, C.; Peiffer, T.; Pellett, D. E.; Penzo, A.; Phillips, T. J.; Piacentino, G.; Pianori, E.; Pinera, L.; Pitts, K.; Plager, C.; Pondrom, L.; Potamianos, K.; Poukhov, O.; Pounder, N. L.; Prokoshin, F.; Pronko, A.; Ptohos, F.; Pueschel, E.; Punzi, G.; Pursley, J.; Rademacker, J.; Rahaman, A.; Ramakrishnan, V.; Ranjan, N.; Redondo, I.; Renton, P.; Renz, M.; Rescigno, M.; Richter, S.; Rimondi, F.; Ristori, L.; Robson, A.; Rodrigo, T.; Rodriguez, T.; Rogers, E.; Rolli, S.; Roser, R.; Rossi, M.; Rossin, R.; Roy, P.; Ruiz, A.; Russ, J.; Rusu, V.; Rutherford, B.; Saarikko, H.; Safonov, A.; Sakumoto, W. K.; Santi, L.; Sartori, L.; Sato, K.; Saveliev, V.; Savoy-Navarro, A.; Schlabach, P.; Schmidt, A.; Schmidt, E. E.; Schmidt, M. A.; Schmidt, M. P.; Schmitt, M.; Schwarz, T.; Scodellaro, L.; Scribano, A.; Scuri, F.; Sedov, A.; Seidel, S.; Seiya, Y.; Semenov, A.; Sexton-Kennedy, L.; Sforza, F.; Sfyrla, A.; Shalhout, S. Z.; Shears, T.; Shepard, P. F.; Shimojima, M.; Shiraishi, S.; Shochet, M.; Shon, Y.; Shreyber, I.; Simonenko, A.; Sinervo, P.; Sisakyan, A.; Slaughter, A. J.; Slaunwhite, J.; Sliwa, K.; Smith, J. R.; Snider, F. D.; Snihur, R.; Soha, A.; Somalwar, S.; Sorin, V.; Squillacioti, P.; Stanitzki, M.; St. Denis, R.; Stelzer, B.; Stelzer-Chilton, O.; Stentz, D.; Strologas, J.; Strycker, G. L.; Suh, J. S.; Sukhanov, A.; Suslov, I.; Taffard, A.; Takashima, R.; Takeuchi, Y.; Tanaka, R.; Tang, J.; Tecchio, M.; Teng, P. K.; Thom, J.; Thome, J.; Thompson, G. 
A.; Thomson, E.; Tipton, P.; Ttito-Guzmán, P.; Tkaczyk, S.; Toback, D.; Tokar, S.; Tollefson, K.; Tomura, T.; Tonelli, D.; Torre, S.; Torretta, D.; Totaro, P.; Trovato, M.; Tsai, S.-Y.; Tu, Y.; Turini, N.; Ukegawa, F.; Uozumi, S.; van Remortel, N.; Varganov, A.; Vataga, E.; Vázquez, F.; Velev, G.; Vellidis, C.; Vidal, M.; Vila, I.; Vilar, R.; Vogel, M.; Volobouev, I.; Volpi, G.; Wagner, P.; Wagner, R. G.; Wagner, R. L.; Wagner, W.; Wagner-Kuhr, J.; Wakisaka, T.; Wallny, R.; Wang, S. M.; Warburton, A.; Waters, D.; Weinberger, M.; Weinelt, J.; Wester, W. C., III; Whitehouse, B.; Whiteson, D.; Wicklund, A. B.; Wicklund, E.; Wilbur, S.; Williams, G.; Williams, H. H.; Wilson, P.; Winer, B. L.; Wittich, P.; Wolbers, S.; Wolfe, C.; Wolfe, H.; Wright, T.; Wu, X.; Würthwein, F.; Yagil, A.; Yamamoto, K.; Yamaoka, J.; Yang, U. K.; Yang, Y. C.; Yao, W. M.; Yeh, G. P.; Yi, K.; Yoh, J.; Yorita, K.; Yoshida, T.; Yu, G. B.; Yu, I.; Yu, S. S.; Yun, J. C.; Zanetti, A.; Zeng, Y.; Zhang, X.; Zheng, Y.; Zucchelli, S.

    2011-02-01

    The collection of a large number of B-hadron decays to hadronic final states at the CDF II Detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper-decay-time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper, we present an analytic method for bias correction without using simulation, thereby removing any uncertainty due to the differences between data and simulation. This method is presented in the form of a measurement of the lifetime of the B⁻ using the mode B⁻ → D⁰π⁻. The B⁻ lifetime is measured as τ(B⁻) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
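
    The core of the simulation-free approach described in the two records above is that each candidate carries its own acceptance function, which enters the per-event likelihood normalization in place of a Monte Carlo correction. The toy sketch below illustrates the idea with a simple step-function acceptance (an event-specific minimum decay time); the thresholds and the sample are invented, and only the quoted lifetime value is reused to generate toy data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    tau_true = 1.663          # ps (value from the abstract, used only to make toy data)
    n = 20000

    # Toy data: true decay times plus an event-by-event minimum decay time imposed
    # by an impact-parameter-style trigger (the thresholds here are invented).
    t_min = rng.uniform(0.2, 1.0, n)                 # per-event acceptance turn-on, ps
    t_true = rng.exponential(tau_true, n)
    accepted = t_true > t_min
    t_obs, t_cut = t_true[accepted], t_min[accepted]

    # Per-event likelihood: f(t | accepted) = exp(-t/tau) / integral_{t_cut}^{inf} exp(-t'/tau) dt'.
    # For a step-function acceptance the normalization is tau*exp(-t_cut/tau), and the
    # maximum-likelihood estimate reduces to the mean of (t - t_cut) -- no simulation needed.
    tau_hat = np.mean(t_obs - t_cut)
    err = tau_hat / np.sqrt(t_obs.size)
    print(f"tau = {tau_hat:.3f} +/- {err:.3f} ps (true {tau_true} ps)")
    ```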

  3. Optimized goniometer for determination of the scattering phase function of suspended particles: simulations and measurements

    NASA Astrophysics Data System (ADS)

    Foschum, Florian; Kienle, Alwin

    2013-08-01

    We present simulations and measurements with an optimized goniometer for determination of the scattering phase function of suspended particles. We applied the Monte Carlo method, using a radially layered cylindrical geometry and mismatched boundary conditions, in order to investigate the influence of reflections caused by the interfaces of the glass cuvette and the scatterer concentration on the accurate determination of the scattering phase function. Based on these simulations we built an apparatus which allows direct measurement of the phase function from ϑ=7 deg to ϑ=172 deg without any need for correction algorithms. Goniometric measurements on polystyrene and SiO2 spheres proved this concept. Using the validated goniometer, we measured the phase function of yeast cells, demonstrating the improvement of the new system compared to standard goniometers. Furthermore, the scattering phase function of different fat emulsions, like Intralipid, was determined precisely.

  4. Evaluation of ride quality measurement procedures by subjective experiments using simulators

    NASA Technical Reports Server (NTRS)

    Klauder, L. T., Jr.; Clevenson, S. A.

    1975-01-01

    Since ride quality is, by definition, a matter of passenger response, there is a need for a qualification procedure (QP) for establishing the degree to which any particular ride quality measurement procedure (RQMP) correlates with passenger responses. Once established, such a QP will provide very useful guidance for the optimal adjustment of the various parameters which any given RQMP contains. A QP is proposed based on the use of a ride motion simulator and on test subject responses to recordings of actual vehicle motions. Test subject responses are used to determine simulator gain settings for the individual recordings such that all of the simulated rides are equally uncomfortable to the test subjects. Simulator platform accelerations vs. time are recorded with each ride at its equal-discomfort gain setting. The equal-discomfort platform acceleration recordings are then digitized.

  5. Integrated P1 Hohlraum/Capsule Simulations with Comparison to Neutron and X-Ray Measurements

    NASA Astrophysics Data System (ADS)

    Eder, D. C.; Spears, B. K.; Town, R. P.; Jones, O. S.; Munro, D. H.; Peterson, J. L.; Ma, T.; Pak, A. K.; Benedetti, L. R.; Hatchett, S. P.; Knauer, J. P.; MacKinnon, A. J.; Yeamans, C. B.; McNaney, J. M.; Casey, D. T.; NIF Team

    2013-10-01

    We discuss integrated hohlraum/capsule simulations that drive a DT symcap capsule downward in a NIF experiment by increasing/decreasing the peak power in the upper/lower laser beams by 8%. This laser asymmetry results in a radiation drive P1/P0 at the capsule ablation surface of 2% and a downward capsule velocity of 125 microns/ns. The simulation shows small (<1%) changes in the P2 and P4 moments of the x-ray self-emission as compared to a simulation with no laser asymmetry. The calculated reduction in yield due to the induced P1 is 20%. Simulations of DT layered capsules at comparable velocities give yields an order of magnitude lower than simulations with stationary capsules. The velocity is measured by comparing the arrival times of DD and DT neutrons at detectors at different locations. Preliminary data from a recent shot gives a downward velocity of order 100 microns/ns, consistent with simulations. We also compare pre- and post-shot simulations with x-ray images at different energies. The ability to correct for capsule velocity, e.g., due to different upper/lower crossbeam transfer energies, is another tool in the quest for ignition. This work performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-640047.

  6. Measurement and Simulation of Signal Fluctuations Caused by Propagation through Trees

    NASA Technical Reports Server (NTRS)

    Durden, Stephen L.; Klein, Jeffrey D.; Zebker, Howard A.

    1993-01-01

    We present measured magnitude and phase fluctuations of UHF, L band, and C band signals that were transmitted from the ground through a forest canopy to an airborne radar. We find that the measured fluctuations are similar to those calculated by a simple Monte Carlo simulation. Both observed and calculated RMS fluctuations are typically several decibels in magnitude and tens of degrees in phase at all three frequencies.

  7. Rain simulator as a standardized laboratory measurement of soil structural stability

    NASA Astrophysics Data System (ADS)

    Iglesias, Luz; Cancelo González, Javier; Benito, Elena; Álvarez, Manuel; Barral, Maria Teresa; Díaz-Fierros, Francisco

    2010-05-01

    Rainfall simulations have been used since the 1930s by scientists and technicians to study soil erosion and soil hydrology. The premise of rainfall simulation is that it can reproduce natural soil degradation processes more accurately than the traditional methods used to determine structural stability. A rainfall simulator was built in 2006, based on those made by Guitián and Méndez (1961) and Morin (1967), to obtain standardized laboratory measurements of soil structural stability; a final modification added an intermittent fan-like water-jet system with four 250-micrometre sieves on which the soil samples are placed, allowing simultaneous measurement of soil losses in the samples. Data obtained with the rainfall simulator, using different soils from the study basins, are related to the Ig Henin index and to the results of the Emerson structural stability test. In parallel with the laboratory tests, 10 water sampling surveys were carried out during the hydrological years 2004/05 and 2005/06 in two basins located in the humid region of NW Spain belonging to the Anllons River basin, one of the main basins of Galicia-Costa, which has been the subject of detailed hydrological studies since 2000 (Rial, M., 2007 and Devesa, R., 2009) and has continuous streamflow records. The selected subbasins cover 57.62 and 50.05 square kilometres respectively and show significant geological differences: one is formed mainly of schists with a lower area of granites, and the other mainly of gabbros. The suspended sediments in the samples were separated by centrifugation and weighed in the laboratory to study the possible relationship between soil losses in the rainfall simulations and the sediment fluxes in the river. The analysis revealed a good relationship between the sediment delivery to the streams and the soil losses measured in the rainfall simulations.

  8. SIMULATIONS AND MEASUREMENTS OF A HEAVILY HOM-DAMPED MULTI-CELL SRF CAVITY

    SciTech Connect

    Haipeng Wang; Robert Rimmer; Frank Marhauser

    2007-07-02

    After an initial cavity shape optimization [1] and cryomodule development [2] for an Ampere-class FEL ERL, we have simulated a complete 5-cell high-current (HC) cavity structure with six waveguide (WG) couplers for Higher Order Mode (HOM) damping and fundamental power coupling. The time-domain wakefield simulations of the MAFIA codes have been used to calculate the cavity's broadband HOM impedance spectrum. Microwave Studio (MWS) has also been used to evaluate the external Q of the fundamental power coupler (FPC) and the R/Qs of the HOMs. A half-scale 1497 MHz single-cell model cavity and a 5-cell copper cavity including dummy HOM WG loads were fabricated to bench-measure and confirm the design performance. Details of the multi-beam wakefield simulations, the HOM damping measurements and multi-peak data fitting analysis techniques are presented.

  9. Comparisons between GRNTRN simulations and beam measurements of proton lateral broadening distributions

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher; Moyers, Michael; Walker, Steven; Tweed, John

    Recent developments in NASA's High Charge and Energy Transport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. The new version of HZETRN based on Green function methods, GRNTRN, is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral scattering distributions with beam measurements taken at Loma Linda Medical University. The simulated and measured lateral proton distributions will be compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone, iron, and lead target materials.
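
    A rough, generic estimate of the lateral-broadening scale discussed above comes from the Highland/PDG multiple-Coulomb-scattering formula; the sketch below applies it to 250 MeV protons in a water-like slab with illustrative numbers, and is not the coupled GRNTRN transport itself:

    ```python
    import math

    def highland_theta0(kinetic_mev, thickness_cm, rad_length_cm, mass_mev=938.272, charge=1):
        """RMS plane multiple-Coulomb-scattering angle from the Highland/PDG formula.

        theta0 = 13.6 MeV / (beta*c*p) * z * sqrt(x/X0) * [1 + 0.038 ln(x/X0)]
        This is a generic estimate, not the coupled GRNTRN treatment in the paper.
        """
        e_total = kinetic_mev + mass_mev
        p = math.sqrt(e_total**2 - mass_mev**2)       # MeV/c
        beta = p / e_total
        x_over_x0 = thickness_cm / rad_length_cm
        return (13.6 / (beta * p)) * charge * math.sqrt(x_over_x0) * (1 + 0.038 * math.log(x_over_x0))

    # 250 MeV protons on a few cm of water-like material (X0 ~ 36 cm; illustrative numbers)
    theta0 = highland_theta0(250.0, 5.0, 36.1)
    print(f"theta0 ~ {theta0*1e3:.2f} mrad")
    ```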

  10. Large-Eddy Simulations and Lidar Measurements of Vortex-Pair Breakup in Aircraft Wakes

    NASA Technical Reports Server (NTRS)

    Lewellen, D. C.; Lewellen, W. S.; Poole, L. R.; DeCoursey, R. J.; Hansen, G. M.; Hostetler, C. A.; Kent, G. S.

    1998-01-01

    Results of large-eddy simulations of an aircraft wake are compared with results from ground-based lidar measurements made at NASA Langley Research Center during the Subsonic Assessment Near-Field Interaction Flight Experiment field tests. Brief reviews of the design of the field test for obtaining the evolution of wake dispersion behind a Boeing 737 and of the model developed for simulating such wakes are given. Both the measurements and the simulations concentrate on the period from a few seconds to a few minutes after the wake is generated, during which the essentially two-dimensional vortex pair is broken up into a variety of three-dimensional eddies. The model and experiment show similar distinctive breakup eddies induced by the mutual interactions of the vortices, after perturbation by the atmospheric motions.

  11. Automated office blood pressure.

    PubMed

    Myers, Martin G; Godwin, Marshall

    2012-05-01

    Manual blood pressure (BP) is gradually disappearing from clinical practice with the mercury sphygmomanometer now considered to be an environmental hazard. Manual BP is also subject to measurement error on the part of the physician/nurse and patient-related anxiety which can result in poor quality BP measurements and office-induced (white coat) hypertension. Automated office (AO) BP with devices such as the BpTRU (BpTRU Medical Devices, Coquitlam, BC) has already replaced conventional manual BP in many primary care practices in Canada and has also attracted interest in other countries where research studies using AOBP have been undertaken. The basic principles of AOBP include multiple readings taken with a fully automated recorder with the patient resting alone in a quiet room. When these principles are followed, office-induced hypertension is eliminated and AOBP exhibits a much stronger correlation with the awake ambulatory BP as compared with routine manual BP measurements. Unlike routine manual BP, AOBP correlates as well with left ventricular mass as does the awake ambulatory BP. AOBP also simplifies the definition of hypertension in that the cut point for a normal AOBP (< 135/85 mm Hg) is the same as for the awake ambulatory BP and home BP. This article summarizes the currently available evidence supporting the use of AOBP in routine clinical practice and proposes an algorithm in which AOBP replaces manual BP for the diagnosis and management of hypertension.

  12. Life Span as the Measure of Performance and Learning in a Business Gaming Simulation

    ERIC Educational Resources Information Center

    Thavikulwat, Precha

    2012-01-01

    This study applies the learning curve method of measuring learning to participants of a computer-assisted business gaming simulation that includes a multiple-life-cycle feature. The study involved 249 participants. It verified the workability of the feature and estimated the participants' rate of learning at 17.4% for every doubling of experience.…
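
    For readers unfamiliar with the learning curve method, the standard log-linear model implies that a 17.4% improvement per doubling of experience corresponds to performance growing as n raised to log2(1.174). The sketch below only illustrates that standard functional form, with an arbitrary starting score:

    ```python
    import math

    def learning_curve(first_trial_score, n_trials, rate_per_doubling=0.174):
        """Classic log-linear learning curve: performance improves by a fixed
        fraction (here 17.4%, the rate reported in the abstract) each time
        cumulative experience doubles:  y(n) = y1 * n**log2(1 + rate)."""
        exponent = math.log2(1.0 + rate_per_doubling)
        return first_trial_score * n_trials ** exponent

    for n in (1, 2, 4, 8):
        print(n, round(learning_curve(100.0, n), 1))
    ```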

  13. Scale issues in soil hydrology related to measurement and simulation: A case study in Colorado

    Technology Transfer Automated Retrieval System (TEKTRAN)

    State variables, such as soil water content (SWC), are typically measured or inferred at very small scales while being simulated at larger scales relevant to spatial management or hillslope areas. Thus there is an implicit spatial disparity that is often ignored. Surface runoff, on the other hand, ...

  14. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements

    NASA Astrophysics Data System (ADS)

    Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David

    2015-07-01

    Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of various homogeneous tissue phantoms were measured with a spectrometer under several illumination and optical settings. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that absorption of the dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.

  15. A Simulation Study of Rater Agreement Measures with 2x2 Contingency Tables

    ERIC Educational Resources Information Center

    Ato, Manuel; Lopez, Juan Jose; Benavente, Ana

    2011-01-01

    A comparison between six rater agreement measures obtained using three different approaches was achieved by means of a simulation study. Rater coefficients suggested by Bennet's σ (1954), Scott's π (1955), Cohen's κ (1960) and Gwet's γ (2008) were selected to represent the classical, descriptive approach, α agreement…
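
    As a concrete reference point for one of the coefficients compared above, Cohen's kappa for a 2x2 agreement table can be computed in a few lines; the table in the example is invented for illustration:

    ```python
    def cohens_kappa(table):
        """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]],
        where rows are rater 1's categories and columns are rater 2's."""
        a, b = table[0]
        c, d = table[1]
        n = a + b + c + d
        p_obs = (a + d) / n
        p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
        return (p_obs - p_exp) / (1 - p_exp)

    # Illustrative table: both raters agree on 40 "positive" and 45 "negative" cases
    print(round(cohens_kappa([[40, 5], [10, 45]]), 3))   # -> 0.7
    ```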

  16. Monte Carlo simulation of near infrared autofluorescence measurements of in vivo skin.

    PubMed

    Wang, Shuang; Zhao, Jianhua; Lui, Harvey; He, Qingli; Zeng, Haishan

    2011-12-02

    The autofluorescence properties of normal human skin in the near-infrared (NIR) spectral range were studied using Monte Carlo simulation. The light-tissue interactions, including scattering, absorption and anisotropic propagation of the regenerated autofluorescence photons in the skin tissue, were taken into account in the theoretical modeling. Skin was represented as a turbid seven-layered medium. To facilitate the simulation, ex vivo NIR autofluorescence spectra and images from different skin layers were measured from frozen skin vertical sections to define the intrinsic fluorescence properties. Monte Carlo simulation was then used to study how the intrinsic fluorescence spectra were distorted by tissue reabsorption and scattering during in vivo measurements. We found that the reconstructed model skin spectra were in good agreement with the measured in vivo skin spectra from the same anatomical site as the ex vivo tissue sections, demonstrating the usefulness of this modeling. We also found that a difference exists over the melanin fluorescence wavelength range (880-910 nm) between the simulated spectrum and the measured in vivo skin spectrum from a different anatomical site. This difference suggests that melanin content may affect in vivo skin autofluorescence properties, which deserves further investigation.
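
    Two building blocks of such layered-tissue Monte Carlo models are exponential sampling of the free path length and Henyey-Greenstein sampling of the scattering angle. The sketch below shows both, with assumed skin-like NIR optical properties rather than the paper's seven-layer parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def sample_step(mu_t):
        """Free path length from the Beer-Lambert law: s = -ln(xi)/mu_t."""
        return -np.log(rng.random()) / mu_t

    def sample_hg_cos(g):
        """Cosine of the scattering angle sampled from the Henyey-Greenstein phase function."""
        if abs(g) < 1e-6:
            return 2.0 * rng.random() - 1.0
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    # Illustrative (assumed) NIR optical properties for a single skin-like layer
    mu_a, mu_s, g = 0.1, 20.0, 0.85          # 1/mm, 1/mm, anisotropy
    mu_t = mu_a + mu_s
    steps = [sample_step(mu_t) for _ in range(5)]
    coses = [sample_hg_cos(g) for _ in range(5)]
    print("step lengths (mm):", np.round(steps, 4))
    print("scattering cosines:", np.round(coses, 3))
    ```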

  17. Backscatter and depolarization measurements of aerosolized biological simulants using a chamber lidar system

    NASA Astrophysics Data System (ADS)

    Brown, David M.; Thrush, Evan P.; Thomas, Michael E.; Santarpia, Josh; Quizon, Jason; Carter, Christopher C.

    2010-04-01

    To ensure that agent optical cross sections are well understood from the UV to the LWIR, volume-integrated measurements of aerosolized agent material at a few key wavelengths are required to validate existing simulations. Ultimately these simulations will be used to assess the detection performance of various classes of lidar technology spanning the entire range of the optical spectrum. The present work demonstrates an optical measurement architecture based on lidar that allows the measurement of backscatter and depolarization ratio from biological aerosols released in a refereed, 1-m cubic chamber. During 2009, various upgrades were made to the chamber LIDAR system, which operates at 1.064 μm with sub-nanosecond pulses at a 120 Hz repetition rate. The first build of the system demonstrated a sensitivity to aerosolized Bacillus atrophaeus (BG) on the order of 5×10⁵ ppl with 1 GHz InGaAs detectors. To increase the sensitivity and reduce noise, the InGaAs detectors were replaced with larger-area silicon avalanche photodiodes for the second build of the system. In addition, computer-controlled step variable neutral density filters are now incorporated to facilitate calibrating the system for absolute backscatter measurements. Calibrated hard-target measurements will be combined with data from the ground truth instruments for cross-section determination of the material aerosolized in the chamber. Measured results are compared to theoretical simulations of cross sections.
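
    The backscatter measurements described above are governed by the single-scattering elastic lidar equation; the sketch below evaluates it with entirely illustrative system and aerosol parameters, not the chamber system's actual specifications:

    ```python
    import numpy as np

    def lidar_return_power(p0_w, pulse_len_s, area_m2, beta_m_sr, alpha_m, r_m, efficiency=0.5):
        """Single-scattering elastic lidar equation:
        P(R) = P0 * eta * (c*tau/2) * (A/R^2) * beta(R) * exp(-2 * integral of alpha).
        Alpha is taken constant along the path purely for illustration; the aerosol
        and system parameters used below are assumed, not chamber measurements.
        """
        c = 3.0e8
        transmission = np.exp(-2.0 * alpha_m * r_m)
        return p0_w * efficiency * (c * pulse_len_s / 2.0) * (area_m2 / r_m**2) * beta_m_sr * transmission

    # 1064 nm, sub-ns pulse viewing a bioaerosol cloud ~1 m away (all values illustrative)
    p = lidar_return_power(p0_w=1e3, pulse_len_s=0.5e-9, area_m2=5e-3,
                           beta_m_sr=1e-6, alpha_m=1e-4, r_m=1.0)
    print(f"return power ~ {p:.3e} W")
    ```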

  18. Measurement with microscopic MRI and simulation of flow in different aneurysm models

    SciTech Connect

    Edelhoff, Daniel; Frank, Frauke; Heil, Marvin; Suter, Dieter; Walczak, Lars; Weichert, Frank; Schmitz, Inge

    2015-10-15

    Purpose: The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid-exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Methods: Magnetic resonance flow imaging was used to visualize fluid-exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time of flight magnetic resonance imaging. The whole experiment was simulated using fast graphics processing unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin–lattice relaxation. Results: The results of both presented methods compared well for the used aneurysm models and the chosen flow distributions. The results from the fluid-exchange analysis showed comparable characteristics concerning measurement and simulation. Similar symmetry behavior was observed. Based on these results, the amount of fluid-exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. Conclusions: The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid-exchange between vessel and aneurysm was well-predicted. Hence, the results obtained by simulation could be validated by the experiment. The

  19. Post-Shot Simulations of NIC Experiments with Comparison to X-ray Measurements

    NASA Astrophysics Data System (ADS)

    Eder, David; Jones, Oggie; Suter, Larry; Moore, Alastair; Schneider, Marilyn

    2012-10-01

    National Ignition Campaign experiments at NIF are ongoing and post-shot simulations play an important role in understanding the physical processes occurring in the quest for demonstrating fusion burn. In particular, it is important to understand the x-ray environment inside the hohlraum targets, which is studied using various x-ray diagnostics. The Dante instrument measures the time dependent x-ray emission escaping out of the hohlraum laser entrance holes (LEHs) and the SXI instrument provides a time-integrated image of both soft and hard x-rays. We compare calculated total x-ray emission with Dante data as well as the relative high energy M-band emission that contributes to capsule preheat. We correct our calculated x-ray emission to account for differences between simulation and data on LEH closure using SXI data. We provide results for both "standard candle" simulation with no added multipliers and for simulations with time-dependent multipliers that are used to obtain agreement with shock timing and implosion velocity data. The physics justification for the use of multipliers is to account for potential missing energy or incorrect ablation modeling. The relative importance of these two effects can be studied through comparison of post-shot simulations with x-ray measurements.

  20. Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates

    NASA Astrophysics Data System (ADS)

    Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.

    2016-03-01

    We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high-performance power semiconductor modules using finite-element simulations and transient plane source measurements. For the simulations, several physical modules can be used, which are suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer we found non-isothermal simulations to be closest to reality for the temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and, based on these results, evaluated the thermal resistance of a typical power module using calculation-time-optimized steady-state simulations. Here we investigated the influence of the thermal contact conductance (TCC) between the metal matrix and the inserts on the heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplate we achieved local improvements in heat dissipation compared to the plain metal baseplate.

  1. Temperature and wavelength dependent measurement and simulation of Fresnel lenses for concentrating photovoltaics

    NASA Astrophysics Data System (ADS)

    Hornung, Thorsten; Bachmaier, Andreas; Nitz, Peter; Gombert, Andreas

    2010-05-01

    Fresnel lenses are often used as primary optical components in concentrating photovoltaics (CPV). When applied in the field, varying conditions during operation lead to variations in lens temperature which has a strong impact on the optical efficiency of the lenses. A setup for indoor characterization with the ability to heat lens plates allows for the assessment of the quality of Fresnel lenses by means of their irradiance profiles in the focal plane. To analyze the measured temperature dependency we simulate thermal deformations of the lens geometry with finite element method (FEM) tools and use the resulting lens geometry as an input to ray tracing simulations. A close match between computer simulations and measurements of the irradiance profile in the focal plane is achieved, validating our simulation approach. This allows us to judge and optimize the temperature dependence of new lens designs before building and testing prototypes. The simulation enables us to analyze and understand all superimposed effects in detail. The developed tools in combination with detailed solar resource data and knowledge of the CPV system will be the basis for future assessment of overall performance and further optimization of optics for CPV applications.

  2. Qualification of concentrating mirror systems with the Hermes measurement system and the Helios simulation program

    NASA Astrophysics Data System (ADS)

    Kleih, Juergen

    1991-02-01

    An overview is given of direct and indirect methods for measuring highly concentrated solar radiation, as used for qualifying solar power plants ranging from parabolic mirrors up to tower plants. In particular, the Hermes measuring system is described, which was used to measure two membrane mirrors (17 m and 7.5 m in diameter, respectively). Maximum radiant flux densities of more than 2 MW/sq m were measured for the 17 m mirror and of more than 9 MW/sq m for the 7.5 m mirror. The HELIOS simulation program was used to check the measurement results. The agreement between measurement and calculation was satisfactory overall.

  3. A new satellite simulator tool for global model-measurements intercomparisons

    NASA Astrophysics Data System (ADS)

    Khlystova, Iryna; Schreier, Mathias; Bovensmann, Heinrich; Sausen, Robert; Burrows, John P.

    A new satellite simulation tool has been developed at the University of Bremen in cooperation with DLR IPA in Munich. The original objective of this tool was to simplify and unify the typical comparison steps performed repeatedly by different research groups when comparing global measurements of an atmospheric trace species with corresponding model fields. To meet the main requirements, the SatSim tool was designed to be extendable (based on concepts of object-oriented programming) and flexible with respect to the format of the input data. The latter allows SatSim to be integrated into a chemistry-transport model facility as a post-processing routine as well as used independently. Additionally, as became clear during development, SatSim can also be used as a validation tool for different satellite measurements. Being independent of the retrieval procedure required to obtain trace-species information from satellite radiometric measurements, the tool allows comparisons of the modelled fields of several atmospheric trace species as if they were measured by satellite instruments. Such an approach provides insight into differences in instrumental measurement precision caused only by differences in ground-track geometry and the related differences in cloud coverage of the observed scenes. An example of simulated SCIAMACHY and MOPITT CO observations based on ECHAM5/Messy1-simulated global CO fields will be presented.
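
    The central operation of such a simulator is sampling a gridded model field at the locations of individual satellite observations before any instrument-specific averaging or cloud screening is applied. A minimal, hypothetical nearest-neighbour sketch of that step is:

    ```python
    import numpy as np

    def sample_model_along_track(model_field, model_lats, model_lons, obs_lats, obs_lons):
        """Nearest-neighbour sampling of a gridded model field (lat x lon) at the
        locations of individual satellite observations -- the basic operation a
        satellite simulator performs before applying averaging kernels, cloud
        screening, etc. (those further steps are omitted from this sketch)."""
        out = np.empty(len(obs_lats))
        for k, (la, lo) in enumerate(zip(obs_lats, obs_lons)):
            i = np.abs(model_lats - la).argmin()
            j = np.abs(model_lons - lo).argmin()
            out[k] = model_field[i, j]
        return out

    # Illustrative 2-degree global CO column field and a short piece of ground track
    lats = np.arange(-89.0, 90.0, 2.0)
    lons = np.arange(-179.0, 180.0, 2.0)
    field = np.random.default_rng(3).uniform(1.0e18, 2.5e18, (lats.size, lons.size))
    track_lats = np.linspace(-30.0, 30.0, 10)
    track_lons = np.linspace(5.0, 15.0, 10)
    print(sample_model_along_track(field, lats, lons, track_lats, track_lons))
    ```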

  4. Comparison of Different Measurement Techniques and a CFD Simulation in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Schulz, Christoph; Hofsäß, Martin; Anger, Jan; Rautenberg, Alexander; Lutz, Thorsten; Cheng, Po Wen; Bange, Jens

    2016-09-01

    This paper deals with a comparison of data collected by measurements and a simulation for a complex terrain test site in southern Germany. Lidar, met mast, and unmanned aerial vehicle (UAV) measurements of wind speed and direction and Computational Fluid Dynamics (CFD) data are compared to each other. The site is characterised regarding its flow features and its suitability for a wind turbine test field. A Delayed Detached-Eddy Simulation (DES) was employed using measurement data to generate generic turbulent inflow. Good agreement of the wind profiles between the different approaches was reached. The terrain slope leads to a speed-up, a change of turbulence intensity as well as to flow angle variations.

  5. Simulation of interface states effect on the scanning capacitance microscopy measurement of p-n junctions

    NASA Astrophysics Data System (ADS)

    Yang, J.; Kong, F. C. J.

    2002-12-01

    A two-dimensional numerical simulation model of interface states in scanning capacitance microscopy (SCM) measurements of p-n junctions is presented. In the model, amphoteric interface states with two transition energies in the Si band gap are represented as fixed charges to account for their behavior in SCM measurements. The interface states are shown to cause a stretch-out and a parallel shift of the capacitance-voltage characteristics in the depletion and neutral regions of p-n junctions, respectively. This explains the discrepancy between the SCM measurement and simulation near p-n junctions, and thus modeling interface states is crucial for SCM dopant profiling of p-n junctions.

  6. A virtual reality endoscopic simulator augments general surgery resident cancer education as measured by performance improvement.

    PubMed

    White, Ian; Buchberg, Brian; Tsikitis, V Liana; Herzig, Daniel O; Vetto, John T; Lu, Kim C

    2014-06-01

    Colorectal cancer is the second most common cause of cancer death in the USA. The need for screening colonoscopies, and thus adequately trained endoscopists, particularly in rural areas, is on the rise. Recent increases in the endoscopic cases required for surgical resident graduation by the Surgery Residency Review Committee (RRC) further emphasize the need for more effective endoscopic training during residency. The aim of this study was to determine whether a virtual reality colonoscopy simulator enhances surgical resident endoscopic education, as measured by improvement in colonoscopy skills before and after 6 weeks of formal clinical endoscopic training. We conducted a retrospective review of prospectively collected surgery resident data on an endoscopy simulator. Residents performed four different clinical scenarios on the endoscopic simulator before and after a 6-week endoscopic training course. Data were collected over a 5-year period from 94 different residents performing a total of 795 colonoscopic simulation scenarios. Main outcome measures included time to cecal intubation, "red out" time, and severity of simulated patient discomfort (mild, moderate, severe, extreme) during colonoscopy scenarios. Average time to intubation of the cecum was 6.8 min for residents who had not undergone endoscopic training versus 4.4 min for those who had (p < 0.001). For residents who could be compared against themselves (pre- vs. post-training), cecal intubation times decreased from 7.1 to 4.3 min (p < 0.001). Post-endoscopy rotation residents caused less severe discomfort during simulated colonoscopy than pre-endoscopy rotation residents (4 vs. 10%; p = 0.004). Virtual reality endoscopic simulation is an effective tool for both augmenting surgical resident endoscopy cancer education and measuring improvement in resident performance after formal clinical endoscopic training.
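
    The pre- versus post-training comparison reported above is a paired design; a minimal sketch of that kind of analysis on fabricated cecal-intubation times (not the study's data) is shown below, assuming SciPy is available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_residents = 30
# Fabricated cecal-intubation times (minutes), loosely echoing the reported means.
pre = rng.normal(7.1, 2.0, n_residents).clip(min=1.0)
post = (pre - rng.normal(2.8, 1.0, n_residents)).clip(min=1.0)

# Paired (repeated-measures) t-test: each resident is compared against themselves.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"pre mean = {pre.mean():.1f} min, post mean = {post.mean():.1f} min")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```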

  7. Contributions of numerical simulation data bases to the physics, modeling and measurement of turbulence

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Spalart, Philippe R.

    1987-01-01

    Simulation data bases are an effective research tool for the examination of turbulent flows. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers with limited field of view and contamination caused by time-history effects. Computed flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation data bases also provide much-needed information for phenomenological turbulence modeling. Three-dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few, geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulations does not fully address current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation data bases in studies of the mechanics of turbulent flows.
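
    Since the abstract notes that three-dimensional velocity fields from direct simulation can be used to evaluate terms of the Reynolds-stress transport equations, a minimal sketch of the first step, forming the Reynolds-stress tensor from a stored velocity field, follows; the array layout and the choice of homogeneous averaging directions are assumptions for illustration.

```python
import numpy as np

def reynolds_stresses(u, v, w, axes=(0, 2)):
    """Compute <u_i' u_j'> from instantaneous velocity components.

    u, v, w : 3-D arrays (e.g. x, y, z); averaging is taken over the
    statistically homogeneous directions in `axes` (assumed here to be
    the streamwise and spanwise directions of a channel flow)."""
    comps = {"u": u, "v": v, "w": w}
    means = {k: c.mean(axis=axes, keepdims=True) for k, c in comps.items()}
    fluct = {k: comps[k] - means[k] for k in comps}
    stresses = {}
    for a in "uvw":
        for b in "uvw":
            stresses[a + b] = (fluct[a] * fluct[b]).mean(axis=axes)
    return stresses  # each entry is a profile along the inhomogeneous direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u, v, w = (rng.normal(size=(64, 33, 64)) for _ in range(3))  # synthetic field
    R = reynolds_stresses(u, v, w)
    print(R["uv"].shape)  # (33,) -> wall-normal profile of <u'v'>
```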

  8. Double-Pulse Two-Micron IPDA Lidar Simulation for Airborne Carbon Dioxide Measurements

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta

    2015-01-01

    An advanced double-pulsed 2-micron integrated path differential absorption lidar has been developed at NASA Langley Research Center for measuring atmospheric carbon dioxide. The instrument utilizes a state-of-the-art 2-micron laser transmitter with tunable on-line wavelength and advanced receiver. Instrument modeling and airborne simulations are presented in this paper. Focusing on random errors, results demonstrate instrument capabilities of performing precise carbon dioxide differential optical depth measurement with less than 3% random error for single-shot operation from up to 11 km altitude. This study is useful for defining CO2 measurement weighting, instrument setting, validation and sensitivity trade-offs.
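
    The retrieval at the core of such an instrument is the differential absorption optical depth (DAOD) formed from on-line and off-line returns; the sketch below shows the commonly used estimator with energy-monitor normalization and a simple shot-to-shot random-error estimate. The variable names, noise level, and signal values are illustrative assumptions, not the instrument's actual processing chain.

```python
import numpy as np

def daod(p_on, p_off, e_on, e_off):
    """Differential absorption optical depth from energy-normalized returns
    (standard IPDA form); p_* are received powers, e_* transmitted energies."""
    return 0.5 * np.log((p_off / e_off) / (p_on / e_on))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_shots = 500
    # Synthetic single-shot returns with 5 % relative noise (assumed values).
    p_on = 1.0e-9 * (1 + 0.05 * rng.standard_normal(n_shots))
    p_off = 2.2e-9 * (1 + 0.05 * rng.standard_normal(n_shots))
    e_on = e_off = np.ones(n_shots)
    tau = daod(p_on, p_off, e_on, e_off)
    print(f"mean DAOD = {tau.mean():.4f}, "
          f"single-shot random error = {100 * tau.std() / tau.mean():.1f} %")
```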

  9. Measurement of transient strain and surface temperature on simulated turbine blades using noncontacting techniques

    NASA Technical Reports Server (NTRS)

    Calfo, F. D.; Pollack, F. G.

    1978-01-01

    Noncontacting techniques were used to measure strain and temperature in thermally cycled simulated turbine blades. An electro-optical extensometer was used to measure the displacement between parallel targets mounted on the leading edge of the blades throughout a complete heating and cooling cycle. An infrared photographic pyrometry method was used to measure blade steady state surface temperature. The blade was cyclically heated and cooled by moving it into and out of a Mach 1 hot-gas stream. Transient leading edge strain and steady state surface temperature distributions are presented for blades of three different configurations.

  10. Fission prompt gamma-ray multiplicity distribution measurements and simulations at DANCE

    SciTech Connect

    Chyzh, A; Wu, C Y; Ullmann, J; Jandel, M; Bredeweg, T; Couture, A; Norman, E

    2010-08-24

    The near energy independence of the DANCE efficiency and multiplicity response to γ rays makes it possible to measure the prompt γ-ray multiplicity distribution in fission. We demonstrate this unique capability of DANCE through a comparison of γ-ray energy and multiplicity distributions between measurement and numerical simulation for three radioactive sources, ²²Na, ⁶⁰Co, and ⁸⁸Y. The prospect of measuring the γ-ray multiplicity distribution for both spontaneous and neutron-induced fission is discussed.

  11. Flow dynamics at a river confluence on Mississippi River: field measurement and large eddy simulation

    NASA Astrophysics Data System (ADS)

    Le, Trung; Khosronejad, Ali; Bartelt, Nicole; Woldeamlak, Solomon; Peterson, Bonnie; Dewall, Petronella; Sotiropoulos, Fotis; Saint Anthony Falls Laboratory, University of Minnesota Team; Minnesota Department of Transportation Team

    2015-11-01

    We study the dynamics of a river confluence on a Mississippi River branch in the city of Minneapolis, Minnesota, United States. Field measurements with an Acoustic Doppler Current Profiler using on-board GPS tracking were carried out in five campaigns in the summers of 2014 and 2015 to collect both river-bed elevation data and flow fields. Large Eddy Simulation was carried out to simulate the flow field with a total of 100 million grid points over a domain length of 3.2 km. The simulation results agree well with field measurements at the measured cross-sections. The results show the existence of a wake mode on the mixing interface of the two branches near the upstream junction corner. The mutual interaction between the shear layers emanating from the river banks leads to the formation of large-scale energetic structures that cause the coherent flow structures to ``switch'' sides. This work is a feasibility study for the use of eddy-resolving simulations in predicting complex flow dynamics in medium-size natural rivers. This work is funded by the Minnesota Department of Transportation and the Minnesota Institute of Supercomputing.

  12. Metabolic rate control during extravehicular activity simulations and measurement techniques during actual EVAS

    NASA Technical Reports Server (NTRS)

    Horrigan, D. J.

    1975-01-01

    A description of the methods used to control and measure metabolic rate during ground simulations is given. Work levels attained at the Space Environment Simulation Laboratory are presented. The techniques and data acquired during ground simulations are described and compared with inflight procedures. Data from both the Skylab and Apollo Program were utilized and emphasis is given to the methodology, both in simulation and during flight. The basic techniques of work rate assessment are described. They include oxygen consumption, which was useful for averages over long time periods, heart rate correlations based on laboratory calibrations, and liquid cooling garment temperature changes. The relative accuracy of these methods as well as the methods of real-time monitoring at the Mission Control Center are discussed. The advantages and disadvantages of each of the metabolic measurement techniques are discussed. Particular emphasis is given to the problem of utilizing oxygen decrement for short time periods and heart rate at low work levels. A summary is given of the effectiveness of work rate control and measurements; and current plans for future EVA monitoring are discussed.

  13. Ionic diffusion in quartz studied by transport measurements, SIMS and atomistic simulations

    NASA Astrophysics Data System (ADS)

    Sartbaeva, Asel; Wells, Stephen A.; Redfern, Simon A. T.; Hinton, Richard W.; Reed, Stephen J. B.

    2005-02-01

    Ionic diffusion in the quartz-β-eucryptite system is studied by DC transport measurements, SIMS and atomistic simulations. Transport data show a large transient increase in ionic current at the α-β phase transition of quartz (the Hedvall effect). The SIMS data indicate two diffusion processes, one involving rapid Li+ motion and the other involving penetration of Al and Li atoms into quartz at the phase transition. Atomistic simulations explain why the fine microstructure of twin domain walls in quartz near the transition does not hinder Li+ diffusion.

  14. Comparison of CFD simulations and measurements of flow affected by coanda effect

    NASA Astrophysics Data System (ADS)

    Fišer, Jan; Jedelský, Jan; Vach, Tomáš; Forman, Matěj; Jícha, Miroslav

    2012-04-01

    The article deals with experimental research and numerical simulations of a specific phenomenon in fluid flows called the Coanda effect (CE), which has numerous important engineering applications. Although many researchers have studied wall jets, the physics of this flow is still not well understood. This study is focused on analysis of the behaviour of a jet flow close to a wall and the influence of its inclination. The flow was visualized using smoke, and velocity was measured by means of Hot Wire Anemometry (HWA). CFD simulations were performed on the same geometry and compared with experiments in order to find a tool for correct prediction of the CE.

  15. Nonadiabatic molecular dynamics simulation: An approach based on quantum measurement picture

    SciTech Connect

    Feng, Wei; Xu, Luting; Li, Xin-Qi; Fang, Weihai; Yan, YiJing

    2014-07-15

    Mixed quantum-classical molecular dynamics simulation implies an effective quantum measurement on the electronic states by the classical motion of the atoms. Based on this insight, we propose a quantum trajectory mean-field approach for nonadiabatic molecular dynamics simulations. The new protocol provides a natural interface between the separate quantum and classical treatments, without invoking an artificial surface-hopping algorithm. Moreover, it also bridges two widely adopted nonadiabatic dynamics methods, the Ehrenfest mean-field theory and the trajectory surface-hopping method. Excellent agreement with exact results is illustrated with representative model systems, including ones that are challenging for traditional methods.
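
    For reference on the mean-field limit that the proposed protocol builds on, here is a minimal Ehrenfest propagation of a classical coordinate coupled to a two-state electronic system on a generic avoided-crossing model. This is not the authors' quantum-trajectory method, and all model parameters are arbitrary values chosen for illustration (atomic units, SciPy assumed available).

```python
import numpy as np
from scipy.linalg import expm

# Two-state linear avoided-crossing model (arbitrary parameters, atomic units).
a, c, mass = 0.01, 0.005, 2000.0

def H(x):
    """Diabatic electronic Hamiltonian at nuclear position x."""
    return np.array([[a * x, c], [c, -a * x]])

dHdx = np.array([[a, 0.0], [0.0, -a]])

def ehrenfest(x=-10.0, p=20.0, dt=1.0, steps=4000):
    coeff = np.array([1.0 + 0j, 0.0 + 0j])              # start on diabatic state 1
    for _ in range(steps):
        force = -np.real(coeff.conj() @ dHdx @ coeff)    # mean-field force
        p_half = p + 0.5 * dt * force                    # velocity-Verlet half kick
        x = x + dt * p_half / mass
        coeff = expm(-1j * H(x) * dt) @ coeff            # electronic propagation
        force = -np.real(coeff.conj() @ dHdx @ coeff)
        p = p_half + 0.5 * dt * force
    return x, p, np.abs(coeff) ** 2

if __name__ == "__main__":
    x, p, pops = ehrenfest()
    print(f"final x = {x:.1f}, diabatic populations = {pops.round(3)}")
```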

  16. Comparison of beam transport simulations to measurements at the Los Alamos Proton Storage Ring

    SciTech Connect

    Wilkinson, C.; Neri, F.; Fitzgerald, D.H.; Blind, B.; Macek, R.; Plum, M.; Sander, O.; Thiessen, H.A.

    1997-10-01

    The ability to model and simulate beam behavior in the Proton Storage Ring (PSR) of the Los Alamos Neutron Science Center (LANSCE) is an important diagnostic and predictive tool. This paper gives the results of an effort to model the ring apertures and lattice and use beam simulation programs to track the beam. The results are then compared to measured activation levels from beam loss in the ring. The success of the method determines its usefulness in evaluating the effects of planned upgrades to the Proton Storage Ring.

  17. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model on the simulation of measurement-device independent quantum key distribution (MDI-QKD) with phase randomized general sources. It can be used to predict experimental observations of an MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  18. Arrayed waveguide grating interrogator for fiber Bragg grating sensors: measurement and simulation.

    PubMed

    Koch, Jan; Angelmahr, Martin; Schade, Wolfgang

    2012-11-01

    A fiber Bragg grating (FBG) interrogation system based on intensity demodulation and demultiplexing with an arrayed waveguide grating (AWG) module is examined in detail. The influence of the spectral line shape of the FBG on the signal obtained from the AWG device is discussed by carrying out both measurement and simulation of the system. The simulation of the system helps to create calibration functions quickly and precisely for nonsymmetric, tilted, or nonapodized FBGs. Experiments show that even small sidebands of nonapodized FBGs have a strong influence on the signal produced by an AWG device with a Gaussian profile.
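
    A minimal sketch of the intensity-demodulation principle described above: an FBG reflection spectrum (taken as Gaussian here) is overlapped with two adjacent Gaussian AWG channel passbands, and the log-ratio of the two channel powers gives a monotonic calibration function of the Bragg wavelength. The channel spacing, bandwidths, and line shapes below are illustrative assumptions, not the parameters of the examined device.

```python
import numpy as np

def gaussian(lmbda, center, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((lmbda - center) / sigma) ** 2)

def channel_powers(bragg, ch1=1549.8, ch2=1550.6, ch_fwhm=0.6, fbg_fwhm=0.2):
    """Overlap integrals of an FBG reflection peak with two AWG passbands (nm)."""
    lmbda = np.linspace(1548.0, 1552.0, 4001)           # wavelength grid, nm
    fbg = gaussian(lmbda, bragg, fbg_fwhm)
    p1 = np.trapz(fbg * gaussian(lmbda, ch1, ch_fwhm), lmbda)
    p2 = np.trapz(fbg * gaussian(lmbda, ch2, ch_fwhm), lmbda)
    return p1, p2

if __name__ == "__main__":
    for bragg in np.arange(1549.9, 1550.55, 0.1):
        p1, p2 = channel_powers(bragg)
        print(f"lambda_B = {bragg:.2f} nm -> log ratio = {np.log(p2 / p1):+.3f}")
```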

  19. Numerical simulations to assess the tracer dilution method for measurement of landfill methane emissions.

    PubMed

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2016-10-01

    Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data.
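
    The tracer dilution method being assessed rests on a simple ratio of integrated downwind plume concentrations; the sketch below applies the standard form of that relation to synthetic transect data. The choice of acetylene as tracer, the molar masses, and the concentration profiles are assumptions for illustration, not values from the study.

```python
import numpy as np

def tracer_dilution(q_tracer_kg_h, ch4_ppb, tracer_ppb, x_m,
                    mw_ch4=16.04, mw_tracer=26.04):
    """Standard tracer dilution estimate of a methane emission rate.

    q_tracer_kg_h : known tracer (here acetylene, assumed) release rate
    ch4_ppb, tracer_ppb : background-subtracted mole fractions along a
    downwind transect at positions x_m (metres)."""
    ratio = np.trapz(ch4_ppb, x_m) / np.trapz(tracer_ppb, x_m)
    return q_tracer_kg_h * ratio * (mw_ch4 / mw_tracer)

if __name__ == "__main__":
    x = np.linspace(-500, 500, 201)                      # transect positions, m
    ch4 = 120.0 * np.exp(-0.5 * (x / 150.0) ** 2)        # synthetic plumes, ppb
    tracer = 8.0 * np.exp(-0.5 * ((x - 40.0) / 150.0) ** 2)
    print(f"estimated CH4 emission: {tracer_dilution(2.0, ch4, tracer, x):.1f} kg/h")
```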

  20. Atomic force microscope adhesion measurements and atomistic molecular dynamics simulations at different humidities

    NASA Astrophysics Data System (ADS)

    Seppä, Jeremias; Reischl, Bernhard; Sairanen, Hannu; Korpelainen, Virpi; Husu, Hannu; Heinonen, Martti; Raiteri, Paolo; Rohl, Andrew L.; Nordlund, Kai; Lassila, Antti

    2017-03-01

    Due to their operation principle atomic force microscopes (AFMs) are sensitive to all factors affecting the detected force between the probe and the sample. Relative humidity is an important and often neglected—both in experiments and simulations—factor in the interaction force between AFM probe and sample in air. This paper describes the humidity control system designed and built for the interferometrically traceable metrology AFM (IT-MAFM) at VTT MIKES. The humidity control is based on circulating the air of the AFM enclosure via dryer and humidifier paths with adjustable flow and mixing ratio of dry and humid air. The design humidity range of the system is 20–60 %rh. Force–distance adhesion studies at humidity levels between 25 %rh and 53 %rh are presented and compared to an atomistic molecular dynamics (MD) simulation. The uncertainty level of the thermal noise method implementation used for force constant calibration of the AFM cantilevers is 10 %, being the dominant component of the interaction force measurement uncertainty. Comparing the simulation and the experiment, the primary uncertainties are related to the nominally 7 nm radius and shape of measurement probe apex, possible wear and contamination, and the atomistic simulation technique details. The interaction forces are of the same order of magnitude in simulation and measurement (5 nN). An elongation of a few nanometres of the water meniscus between probe tip and sample, before its rupture, is seen in simulation upon retraction of the tip in higher humidity. This behaviour is also supported by the presented experimental measurement data but the data is insufficient to conclusively verify the quantitative meniscus elongation.
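
    The force-constant calibration mentioned above (the thermal noise method) reduces, in its simplest form, to the equipartition relation k = k_B T / <x²>; the sketch below applies it to a synthetic deflection time series. The first-mode correction factor is the commonly quoted value for a rectangular cantilever, and the noise amplitude is fabricated for illustration; the authors' actual implementation may differ.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_spring_constant(deflection_m, temperature_k, beta=0.971):
    """Equipartition estimate k = beta * k_B * T / <x^2>; beta ~ 0.971 is the
    commonly quoted first-mode correction for a rectangular cantilever."""
    variance = np.var(deflection_m)
    return beta * K_B * temperature_k / variance

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic thermal deflection noise of ~0.1 nm RMS (assumed), T = 295 K.
    x = 0.1e-9 * rng.standard_normal(200_000)
    k = thermal_noise_spring_constant(x, 295.0)
    print(f"estimated spring constant: {k:.3f} N/m")
```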

  1. Dose reduction in CT using bismuth shielding: measurements and Monte Carlo simulations.

    PubMed

    Chang, Kyung-Hwan; Lee, Wonho; Choo, Dong-Myung; Lee, Choon-Sik; Kim, Youhyun

    2010-03-01

    In this research, using direct measurements and Monte Carlo calculations, the potential dose reduction achieved by bismuth shielding in computed tomography was evaluated. The patient dose was measured using an ionisation chamber in a polymethylmethacrylate (PMMA) phantom that had five measurement points at the centre and periphery. Simulations were performed using the MCNPX code. For both the bare and the bismuth-shielded phantom, the differences in dose values between experiment and simulation were within 9%. The dose reductions due to the bismuth shielding were 1.2-55% depending on the measurement points, X-ray tube voltage and the type of shielding. The amount of dose reduction was significant for the positions covered by the bismuth shielding (34-46% for the head and 41-55% for the body phantom on average) and negligible for other peripheral positions. The artefacts on the reconstructed images were minimal when the distance between the shielding and the organs was >1 cm, and hence the shielding should be selectively located to protect critical organs such as the eye lens, thyroid and breast. The simulation results using the PMMA phantom were compared with those using a realistically voxelised phantom (KTMAN-2). For the eye and breast, the simulation results using the PMMA and KTMAN-2 phantoms were similar to each other, while for the thyroid the simulation results were different due to the discrepancy in the locations and sizes of the phantoms. The dose reductions achieved by bismuth and lead shielding were compared with each other and the results showed that the difference in the dose reductions achieved by the two materials was less than 2-3%.

  2. Dose reduction in CT using bismuth shielding: measurements and Monte Carlo simulations

    PubMed Central

    Chang, Kyung-Hwan; Lee, Wonho; Choo, Dong-Myung; Lee, Choon-Sik; Kim, Youhyun

    2010-01-01

    In this research, using direct measurements and Monte Carlo calculations, the potential dose reduction achieved by bismuth shielding in computed tomography was evaluated. The patient dose was measured using an ionisation chamber in a polymethylmethacrylate (PMMA) phantom that had five measurement points at the centre and periphery. Simulations were performed using the MCNPX code. For both the bare and the bismuth-shielded phantom, the differences in dose values between experiment and simulation were within 9 %. The dose reductions due to the bismuth shielding were 1.2–55 % depending on the measurement points, X-ray tube voltage and the type of shielding. The amount of dose reduction was significant for the positions covered by the bismuth shielding (34–46 % for the head and 41–55 % for the body phantom on average) and negligible for other peripheral positions. The artefacts on the reconstructed images were minimal when the distance between the shielding and the organs was >1 cm, and hence the shielding should be selectively located to protect critical organs such as the eye lens, thyroid and breast. The simulation results using the PMMA phantom were compared with those using a realistically voxelised phantom (KTMAN-2). For the eye and breast, the simulation results using the PMMA and KTMAN-2 phantoms were similar to each other, while for the thyroid the simulation results were different due to the discrepancy in the locations and sizes of the phantoms. The dose reductions achieved by bismuth and lead shielding were compared with each other and the results showed that the difference in the dose reductions achieved by the two materials was less than 2–3 %. PMID:19959602

  3. A Thorax Simulator for Complex Dynamic Bioimpedance Measurements With Textile Electrodes.

    PubMed

    Ulbrich, Mark; Muhlsteff, Jens; Teichmann, Daniel; Leonhardt, Steffen; Walter, Marian

    2015-06-01

    Bioimpedance measurements on the human thorax are suitable for assessment of body composition or hemodynamic parameters, such as stroke volume; they are non-invasive, easy in application and inexpensive. When targeting personal healthcare scenarios, the technology can be integrated into textiles to increase ease, comfort and coverage of measurements. Bioimpedance is generally measured using two electrodes injecting low alternating currents (0.5-10 mA) and two additional electrodes to measure the corresponding voltage drop. The impedance is measured either spectroscopically (bioimpedance spectroscopy, BIS) between 5 kHz and 1 MHz or continuously at a fixed frequency around 100 kHz (impedance cardiography, ICG). A thorax simulator is being developed for testing and calibration of bioimpedance devices and other new developments. For the first time, it is possible to mimic the complete time-variant properties of the thorax during an impedance measurement. This includes the dynamic real part and dynamic imaginary part of the impedance with a peak-to-peak value of 0.2 Ω and an adjustable base impedance (24.6 Ω ≤ Z0 ≤ 51.6 Ω). Another novelty is adjustable complex electrode-skin contact impedances for up to 8 electrodes to evaluate bioimpedance devices in combination with textile electrodes. In addition, an electrocardiographic signal is provided for cardiographic measurements as used in ICG devices. This provides the possibility to generate physiologic impedance changes, and in combination with an ECG, all parameters of interest such as stroke volume (SV), pre-ejection period (PEP) or extracellular resistance (Re) can be simulated. The speed of all dynamic signals can be altered. The simulator was successfully tested with commercially available BIS and ICG devices and the preset signals are measured with high correlation (r = 0.996).
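
    To illustrate the kind of spectroscopic thorax impedance such a simulator must reproduce, the sketch below evaluates a generic Cole model over the 5 kHz to 1 MHz BIS range; the Cole parameters are illustrative textbook-scale values, not those of the described hardware.

```python
import numpy as np

def cole_impedance(freq_hz, r0=51.0, r_inf=25.0, tau=1.2e-6, alpha=0.7):
    """Cole model Z(f) = R_inf + (R0 - R_inf) / (1 + (j*2*pi*f*tau)**alpha).
    Parameter values are generic illustrative choices (ohms, seconds)."""
    w = 2.0 * np.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1.0 + (1j * w * tau) ** alpha)

if __name__ == "__main__":
    freqs = np.logspace(np.log10(5e3), 6, 7)      # 5 kHz ... 1 MHz sweep
    for f, z in zip(freqs, cole_impedance(freqs)):
        print(f"{f / 1e3:8.1f} kHz : |Z| = {abs(z):5.1f} ohm, "
              f"phase = {np.degrees(np.angle(z)):6.2f} deg")
```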

  4. An assessment of a software simulation tool for lidar atmosphere and ocean measurements

    NASA Astrophysics Data System (ADS)

    Powell, K. A.; Vaughan, M.; Burton, S. P.; Hair, J. W.; Hostetler, C. A.; Kowch, R. S.

    2014-12-01

    A high-fidelity lidar simulation tool is used to generate synthetic lidar backscatter data that closely matches the expected performance of various lidars, including the noise characteristics inherent to analog detection and uncertainties related to the measurement environment. This tool supports performance trade studies and scientific investigations for both the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), which flies aboard Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and the NASA Langley Research Center airborne High Spectral Resolution Lidar (HSRL). CALIOP measures profiles of attenuated backscatter coefficients (532 and 1064 nm) and volume depolarization ratios at 532 nm. HSRL measures the same profiles plus volume depolarization at 1064 nm and a molecular-only profile which allows for the direct retrieval of aerosol extinction and backscatter profiles at 532 nm. The simulation tool models both the fundamental physics of the lidar instruments and the signals generated from aerosols, clouds, and the ocean surface and subsurface. This work presents the results of a study conducted to verify the accuracy of the simulated data using data from both HSRL and CALIOP. The tool was tuned to CALIOP instrument settings and the model atmosphere was defined using profiles of attenuated backscatter and depolarization obtained by HSRL during underflights of CALIPSO. The validated HSRL data provide highly accurate measurements of the particulate intensive and extensive optical properties and thus were considered as the truth atmosphere. The resulting simulated data were processed through the CALIPSO data analysis system. Comparisons showed good agreement between the simulated and CALIOP data. This verifies the accuracy of the tool to support studies involving the characterization of instrument components and advanced data analysis techniques. The capability of the tool to simulate ocean surface scattering and subsurface

  5. Sensitivity study of large-scale particle image velocimetry measurement of river discharge using numerical simulation

    NASA Astrophysics Data System (ADS)

    Hauet, Alexandre; Creutin, Jean-Dominique; Belleudy, Philippe

    2008-01-01

    This study deals with the uncertainty of large-scale particle image velocimetry (LSPIV) measurements in rivers. LSPIV belongs to the methods of local remote sensing of rivers, like Radar- and Lidar-based techniques. These methods have many potential advantages in comparison with classical river gauging, but they have a fundamental drawback: they are indirect measurements. As such, they need to be assessed in reference to direct measurements. A first validation method consists of comparing LSPIV measurements with classic gauging results in field and laboratory experiments. Unfortunately, in both cases it is impossible in practice to control all the parameters and to distinguish the impact of the various error sources. In the present study we propose a more theoretical assessment of LSPIV potential through numerical simulation. The idea is simply to formulate mathematically the present state of knowledge of the measurement, including both the physics of the phenomenon (the illuminated river) and the physics of the sensor (the camera and the PIV tracking). The dilemma about when to start this type of simulation is the following: the simulation is satisfactory if it can be validated, which means being able to compare simulations and observations over a wide range of conditions, and the simulation is useful for getting preliminary insights about the most important measurement conditions in order to organize validation studies. Our simulator is composed of three blocks: the river block represents the unidirectional river flow by the association of the EDM model and a theoretical vertical velocity profile giving a 3D velocity distribution. This hydraulic model is complemented by features representing free-surface tracers, the illumination of the free surface (shadows and sun reflection) and the effect of the wind. The camera block transforms the river state parameters into raster images according to the intrinsic and extrinsic parameters of the camera. The LSPIV analysis
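
    The PIV-tracking block of such a simulator ultimately comes down to cross-correlating interrogation windows from successive frames; a minimal FFT-based sketch on a synthetic particle-image pair is given below. The window size, seeding density, and imposed shift are arbitrary choices, and no sub-pixel peak fitting is included, so this is only an illustration of the principle rather than the simulator's actual LSPIV algorithm.

```python
import numpy as np

def displacement_fft(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via the peak of
    the FFT-based circular cross-correlation (no sub-pixel refinement)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices back to signed shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frame_a = (rng.random((64, 64)) > 0.97).astype(float)    # sparse synthetic tracers
    frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))   # imposed motion
    print("recovered displacement:", displacement_fft(frame_a, frame_b))
```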

  6. Design Office within the Classroom.

    ERIC Educational Resources Information Center

    Campbell, Kumari

    1980-01-01

    To help architectural students adapt to the realities of the work environment, Gerard Campbell of Holland College has set up his classroom as a design office. Working as a team, the students prepare a complete set of working drawings and construction documents, simulating an actual design process. (JOW)

  7. Calibrated simulations of Z opacity experiments that reproduce the experimentally measured plasma conditions

    NASA Astrophysics Data System (ADS)

    Nagayama, T.; Bailey, J. E.; Loisel, G.; Rochau, G. A.; MacFarlane, J. J.; Golovkin, I.

    2016-02-01

    Recently, frequency-resolved iron opacity measurements at electron temperatures of 170-200 eV and electron densities of (0.7–4.0) × 10²² cm⁻³ revealed a 30–400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015), 10.1038/nature14048]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. These simulations bridge the static-uniform picture of the data

  8. Calibrated simulations of Z opacity experiments that reproduce the experimentally measured plasma conditions.

    PubMed

    Nagayama, T; Bailey, J E; Loisel, G; Rochau, G A; MacFarlane, J J; Golovkin, I

    2016-02-01

    Recently, frequency-resolved iron opacity measurements at electron temperatures of 170-200 eV and electron densities of (0.7–4.0) × 10²² cm⁻³ revealed a 30-400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015)]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. These simulations bridge the static-uniform picture of the data interpretation and the

  9. Calibrated simulations of Z opacity experiments that reproduce the experimentally measured plasma conditions

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G.; ...

    2016-02-05

    Recently, frequency-resolved iron opacity measurements at electron temperatures of 170–200 eV and electron densities of (0.7–4.0) × 10²² cm⁻³ revealed a 30–400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015)]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. Furthermore, these simulations bridge the static-uniform picture of the

  10. Patient-specific simulations and measurements of the magneto-hemodynamic effect in human primary vessels.

    PubMed

    Kyriakou, Adamos; Neufeld, Esra; Szczerba, Dominik; Kainz, Wolfgang; Luechinger, Roger; Kozerke, Sebastian; McGregor, Robert; Kuster, Niels

    2012-02-01

    This paper investigates the main characteristics of the magneto-hemodynamic (MHD) response for application as a biomarker of vascular blood flow. The induced surface potential changes of a volunteer exposed to a 3 T static B0 field of a magnetic resonance imaging (MRI) magnet were measured over time at multiple locations by an electrocardiogram device and compared to simulation results. The flow simulations were based on boundary conditions derived from MRI flow measurements restricted to the aorta and vena cava. A dedicated and validated low-frequency electromagnetic solver was applied to determine the induced temporal surface potential change from the obtained 4D flow distribution using a detailed whole-body model of the volunteer. The simulated MHD signal agreed with major characteristics of the measured signal (temporal location of main peak, magnitude, variation across chest and along torso) except in the vicinity of the heart. The MHD signal is mostly influenced by the aorta; however, more vessels and better boundary conditions are needed to analyze the finer details of the response. The results show that the MHD signal is strongly position dependent with highly variable but reproducibly measurable distinguished characteristics. Additional investigations are necessary before determining whether the MHD effect is a reliable reference for location-specific information on blood flow.

  11. Digital design and fabrication of simulation model for measuring orthodontic force.

    PubMed

    Liu, Yun-Feng; Zhang, Peng-Yuan; Zhang, Qiao-Fang; Zhang, Jian-Xing; Chen, Jie

    2014-01-01

    Three-dimensional (3D) forces are the key factors determining the movement of teeth during orthodontic treatment. Designing precise forces and torques on a tooth before treatment can produce accurate tooth movements, but this is difficult to realize. In orthodontic biomechanical systems, the periodontal tissues, including bone, teeth, and periodontal ligaments (PDL), are affected by braces, and measuring the forces applied to the teeth by braces should be based on a simulated model composed of these three types of tissues. This study explores the design and fabrication of a simulated oral model for 3D orthodontic force measurements. Based on medical image processing, tissue reconstruction, 3D printing, and PDL simulation and testing, a model for measuring force was designed and fabricated, which can potentially be used for force prediction, design of treatment plans, and precise clinical operation. The experiment showed that bi-component silicones at a 2:8 ratio had mechanical properties similar to those of the PDL, and that with a positioning guide the teeth were assembled accurately in the mandible sockets; a customized oral model for 3D orthodontic force measurement was thus created.

  12. Validation of MTF measurement for CBCT system using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hao, Ting; Gao, Feng; Zhao, Huijuan; Zhou, Zhongxing

    2016-03-01

    To evaluate the spatial resolution performance of a cone beam computed tomography (CBCT) system, accurate measurement of the modulation transfer function (MTF) is required. This accuracy depends on the MTF measurement method and the CBCT reconstruction algorithm. In this work, the accuracy of MTF measurement of a CBCT system using a wire phantom is validated by Monte Carlo simulation. The Monte Carlo simulation software BEAMnrc/EGSnrc was employed to model X-ray radiation beams and transport. Tungsten wires were simulated with different diameters and radial distances from the axis of rotation. We adopted the filtered back projection technique to reconstruct images from a 360° acquisition. The MTFs for four reconstruction kernels were measured from the corresponding reconstructed wire images; the ram-lak kernel increased the MTF relative to the cosine, hamming and hann kernels. The results demonstrated that the MTF degraded radially from the axis of rotation. This study suggests that an increase in the MTF of the CBCT system is possible by optimizing scanning settings and reconstruction parameters.
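
    A wire-phantom MTF measurement of this kind amounts, in essence, to extracting a line spread function from the reconstructed wire image and taking the normalized magnitude of its Fourier transform; a minimal sketch on a synthetic line spread function follows. The Gaussian LSF width and pixel size are arbitrary, and the actual analysis in the study may involve additional steps such as oversampling or background correction.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """Normalized MTF as |FFT| of a background-free line spread function."""
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)   # spatial frequency, cycles/mm
    return freqs, spectrum / spectrum[0]

if __name__ == "__main__":
    pixel_mm = 0.2
    x = (np.arange(256) - 128) * pixel_mm
    lsf = np.exp(-0.5 * (x / 0.35) ** 2)            # synthetic Gaussian LSF
    freqs, mtf = mtf_from_lsf(lsf, pixel_mm)
    # Frequency at which the MTF drops to 10 % (a common resolution metric):
    f10 = freqs[np.argmax(mtf < 0.1)]
    print(f"MTF10 = {f10:.2f} cycles/mm")
```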

  13. Measurements and time-domain simulations of multiphonics in the trombone.

    PubMed

    Velut, Lionel; Vergez, Christophe; Gilbert, Joël

    2016-10-01

    Multiphonic sounds of brass instruments are studied in this article. They are produced by playing a note on a brass instrument while simultaneously singing another note in the mouthpiece. This results in a peculiar sound, heard as a chord or a cluster of more than two notes in most cases. This effect is used in different artistic contexts. Measurements of the mouth pressure, the pressure inside the mouthpiece, and the radiated sound are recorded while a trombone player performs a multiphonic, first by playing an F3 and singing a C4, then playing an F3 and singing a note with a decreasing pitch. Results highlight the quasi-periodic nature of the multiphonic sound and the appearance of combination tones due to intermodulation between the played and the sung sounds. To assess the ability of a given brass instrument physical model to reproduce the measured phenomenon, time-domain simulations of multiphonics are carried out. A trombone model consisting in an exciter and a resonator nonlinearly coupled is forced while self-oscillating to reproduce simultaneous singing and playing. Comparison between simulated and measured signals is discussed. Spectral content of the simulated pressure match very well with the measured one, at the cost of a high forcing pressure.
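
    The combination tones mentioned above arise at sums and differences of integer multiples of the played and sung frequencies; the short sketch below lists the lowest-order ones for the F3/C4 example. Equal-tempered pitches with A4 = 440 Hz are assumed purely for illustration.

```python
import itertools

f_played, f_sung = 174.61, 261.63   # F3 and C4, equal temperament (assumed)

# Lowest-order combination (intermodulation) tones |m*f_played +/- n*f_sung|.
tones = set()
for m, n in itertools.product(range(1, 3), repeat=2):
    tones.add(round(abs(m * f_played - n * f_sung), 1))
    tones.add(round(m * f_played + n * f_sung, 1))

print(sorted(tones))
# e.g. 87.0 Hz (f_sung - f_played) and 436.2 Hz (f_sung + f_played) appear
# alongside the played and sung partials, giving the chord-like percept.
```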

  14. A Comparative Study of Simulated and Measured Gear-Flap Flow Interaction

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Mineck, Raymond E.; Yao, Chungsheng; Jenkins, Luther N.; Fares, Ehab

    2015-01-01

    The ability of two CFD solvers to accurately characterize the transient, complex, interacting flowfield associated with a realistic gear-flap configuration is assessed via comparison of simulated flow with experimental measurements. The simulated results, obtained with NASA's FUN3D and Exa's PowerFLOW® for a high-fidelity, 18% scale semi-span model of a Gulfstream aircraft in landing configuration (39 deg flap deflection, main landing gear on and off) are compared to two-dimensional and stereo particle image velocimetry measurements taken within the gear-flap flow interaction region during wind tunnel tests of the model. As part of the benchmarking process, direct comparisons of the mean and fluctuating velocity fields are presented in the form of planar contour plots and extracted line profiles at measurement planes in various orientations stationed in the main gear wake. The measurement planes in the vicinity of the flap side edge and downstream of the flap trailing edge are used to highlight the effects of gear presence on tip vortex development and the ability of the computational tools to accurately capture such effects. The present study indicates that both computed datasets contain enough detail to construct a relatively accurate depiction of gear-flap flow interaction. Such a finding increases confidence in using the simulated volumetric flow solutions to examine the behavior of pertinent aerodynamic mechanisms within the gear-flap interaction zone.

  15. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is crucial for the measurement of oxygen saturation in functional photoacoustic imaging, and it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels, and the nonlinear saturation effect of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissues and obtain photoacoustic signals reaching a simulated focused surface detector, in order to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic imaging of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This processing of photoacoustic images can improve the smoothness and accuracy of oxygen saturation results.
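
    At its core, the oxygen-saturation estimate underlying such measurements is a two-wavelength linear unmixing of oxy- and deoxyhemoglobin absorption; the sketch below solves that 2x2 system for synthetic photoacoustic amplitudes. The molar extinction coefficients are rounded literature-scale values used only for illustration, wavelength-dependent fluence is ignored, and the wavelengths are assumptions rather than those used in the study.

```python
import numpy as np

# Rounded molar extinction coefficients (cm^-1 / M), illustrative values only.
#                 HbO2     Hb
EPS = np.array([[ 518.0, 1405.0],   # 750 nm (assumed wavelength)
                [1058.0,  691.0]])  # 850 nm (assumed wavelength)

def so2_from_pa(pa_750, pa_850):
    """Two-wavelength unmixing: PA amplitude ~ eps_HbO2*C_HbO2 + eps_Hb*C_Hb
    at each wavelength (fluence differences ignored for simplicity)."""
    c_hbo2, c_hb = np.linalg.solve(EPS, np.array([pa_750, pa_850]))
    return c_hbo2 / (c_hbo2 + c_hb)

if __name__ == "__main__":
    # Synthetic amplitudes generated from a "true" sO2 of 0.80:
    true_amplitudes = EPS @ np.array([0.80, 0.20])
    print(f"recovered sO2 = {so2_from_pa(*true_amplitudes):.2f}")
```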

  16. Comparison of Numerically Simulated and Experimentally Measured Performance of a Rotating Detonation Engine

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Fotia, Matthew L.; Hoke, John; Schauer, Fred

    2015-01-01

    A quasi-two-dimensional, computational fluid dynamic (CFD) simulation of a rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction and other simplifications yield rapidly converging, steady solutions. Viscous effects, and heat transfer effects are modeled using source terms. The effects of potential inlet flow reversals are modeled using boundary conditions. Results from the simulation are compared to measured data from an experimental RDE rig with a converging-diverging nozzle added. The comparison is favorable for the two operating points examined. The utility of the code as a performance optimization tool and a diagnostic tool are discussed.

  17. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    SciTech Connect

    Kohler, Christian

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular non light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole building annual energy programs, or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model based predictive controls, where detailed computer models are run in real-time, receiving data for an actual building and providing control input to building elements such as shades.

  18. Driver steering dynamics measured in car simulator under a range of visibility and road marking conditions

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Mcruer, D. T.

    1977-01-01

    A simulation experiment was conducted to determine the effect of reduced visibility on driver lateral (steering) control. The simulator included a real car cab and a single lane road image projected on a screen six feet in front of the driver. Simulated equations of motion controlled apparent car lane position in response to driver steering actions, wind gusts, and road curvature. Six drivers experienced a range of visibility conditions at various speeds with assorted road marking configurations (mark and gap lengths). Driver describing functions were measured and detailed parametric model fits were determined. A pursuit model employing a road curvature feedforward was very effective in explaining driver behavior in following randomly curving roads. Sampled-data concepts were also effective in explaining the combined effects of reduced visibility and intermittent road markings on the driver's dynamic time delay. The results indicate the relative importance of various perceptual variables as the visual input to the driver's steering control process is changed.

  19. Application of Numerical Simulation and Vibration Measurements for Seismic Damage Assessment of Railway Structures

    NASA Astrophysics Data System (ADS)

    Uehan, Fumiaki; Meguro, Kimiro

    In this study, the authors discuss methods to assess future or actual damage to RC structures by using numerical simulations and vibration measurements. First, the applicability of the Applied Element Method (AEM) is examined as an assessment tool for the seismic performance of RC structures with and without retrofit. Cyclic loading tests and the seismic response of RC structures are simulated. Next, a method to improve the accuracy of vibration diagnoses of earthquake-damaged RC structures is discussed, using damage assessment criteria calculated with the AEM. The AEM could simulate the damage behavior of RC columns, jacketed RC columns and an actual railway viaduct. The changes in natural frequencies due to damage to RC columns and to an actual railway viaduct with a steel jacket were also correctly estimated. Seismic performance checks of structures and development of assessment criteria for damage inspection can be done effectively with the AEM.

  20. Measurement of contact angles in a simulated microgravity environment generated by a large gradient magnetic field

    NASA Astrophysics Data System (ADS)

    Liu, Yong-Ming; Chen, Rui-Qing; Wu, Zi-Qing; Zhu, Jing; Shi, Jian-Yu; Lu, Hui-Meng; Shang, Peng; Yin, Da-Chuan

    2016-09-01

    The contact angle is an important parameter that is essential for studying interfacial phenomena. The contact angle can be measured using commercially available instruments. However, these well-developed instruments may not function or may be unsuitable for use in some special environments. A simulated microgravity generated by a large gradient magnetic field is such an environment in which the current measurement instruments cannot be installed. To measure the contact angle in this environment, new tools must be designed and manufactured to be compatible with the size and physical environment. In this study, we report the development and construction of a new setup that was specifically designed for use in a strong magnetic field to measure the contact angle between a levitated droplet and a solid surface. The application of the setup in a large gradient magnetic field was tested, and the contact angles were readily measured.

  1. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. The Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

  2. Validity and reliability of a three-dimensional dental cast simulator for arch dimension measurements

    PubMed Central

    Nouri, Mahtab; Asefi, Sohrab; Baghban, Alireza Akbarzadeh; Aminian, Amin; Shamsa, Mohammad; Massudi, Reza

    2014-01-01

    Background: The accuracy and reproducibility of measurements in a locally made three-dimensional (3D) simulator were assessed and compared with manual caliper measurements. Materials and Methods: A total of 20 casts were scanned by our laser scanner. Software capabilities included dimensional measurements, transformation and rotation of the cast as a whole, separation and rotation of each tooth and clip far. Two orthodontists measured the intercanine width, intermolar width and canine, molar and arch depth on the casts and in the 3D simulator. For calculating the reliability coefficient and comparing random and systematic errors between the two methods, the intra-class correlation coefficient of reliability (ICC), Dahlberg's formula and the paired t-test were used, respectively. The ICC and Dahlberg's formula were also applied to assess intra-examiner and inter-examiner reliability of measurements on the casts and in the simulator (P < 0.05). Results: Canine and molar depth measurements had low reliability on the casts. Reliability between methods for the remaining three variables was 0.87, 0.98 and 0.98 in the maxilla and 0.92, 0.77 and 0.94 in the mandible, respectively. The method error was between 0.31 and 0.48 mm. The mean intra-observer differences were 0.086 and 0.23 mm in the 3D method and with the caliper. The inter-observer differences were 0.21 and 0.42 mm, respectively. Conclusion: The maximum average absolute difference between the two methods was <0.5 mm, indicating that the new system is indeed clinically acceptable. The examiner reliability was higher in 3D measurements. PMID:25540660
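
    The Dahlberg method error quoted above has a simple closed form, d = sqrt(sum(d_i^2) / (2n)) over paired duplicate measurements; a minimal sketch computing it, together with the mean paired difference, on synthetic data follows (the measurement values are fabricated for illustration).

```python
import numpy as np

def dahlberg_error(first, second):
    """Dahlberg's formula d = sqrt(sum(d_i^2) / (2n)) for duplicate measurements."""
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    caliper = 35.0 + rng.normal(0.0, 1.5, size=20)        # synthetic arch widths, mm
    scanner = caliper + rng.normal(0.1, 0.4, size=20)     # same items, second method
    print(f"mean difference : {np.mean(scanner - caliper):.2f} mm")
    print(f"Dahlberg error  : {dahlberg_error(caliper, scanner):.2f} mm")
```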

  3. Extraction of cilium beat parameters by the combined application of photoelectric measurements and computer simulation.

    PubMed

    Gheber, L; Priel, Z

    1997-01-01

    Photoelectric signals were created and used to investigate the features of the signals as a function of the ciliary beat parameters. Moreover, correlation between the simulated and the measured signals permitted measurement of the cilium beat parameters. The simulations of the signals were based on generation of a series of time-frozen top-view frames of an active ciliary area and determination of the amount of light passing through an observation area in each of these frames. All the factors that might contribute to the shape of the signals, namely, partial ciliary transmittance of light, three-dimensional ciliary beat (composed of recovery, effective, and pause parts), phase distribution on the ciliary surface, and the large number of cilia that contribute to the photoelectric signal, were taken into account in generation of the signals. Changes in the ciliary parameters influenced the shape of the photoelectric signals, and the different phases of the beat could not be directly and unequivocally identified in the signals. The degree of temporal asymmetry of the beat and the portion of the cycle occupied by the pause significantly influenced the shapes of both the lower and the upper parts of the signal and the slopes of the signal. Increases in the angle of the arc swept by the cilium during the effective stroke smoothed the signals and increased the duration of the upper part of the signal. The angle of the arc projected by the cilium onto the cell surface during the recovery stroke had minor effects on the signal's shape. Characteristics of the metachronal wave also influenced the signal's shape markedly. Decreases in ciliary spacing smoothed the signals, whereas ciliary length had a minor influence on the simulated photoelectric signals. Comparison of the simulated and the measured signals showed that the beat parameters of the best-fitting simulated signals converged to values that agree well with the accepted range of beat parameters in mucociliary systems.

  4. Extraction of cilium beat parameters by the combined application of photoelectric measurements and computer simulation.

    PubMed Central

    Gheber, L; Priel, Z

    1997-01-01

    Photoelectric signals were created and used to investigate the features of the signals as a function of the ciliary beat parameters. Moreover, correlation between the simulated and the measured signals permitted measurement of the cilium beat parameters. The simulations of the signals were based on generation of a series of time-frozen top-view frames of an active ciliary area and determination of the amount of light passing through an observation area in each of these frames. All the factors that might contribute to the shape of the signals, namely, partial ciliary transmittance of light, three-dimensional ciliary beat (composed of recovery, effective, and pause parts), phase distribution on the ciliary surface, and the large number of cilia that contribute to the photoelectric signal, were taken into account in generation of the signals. Changes in the ciliary parameters influenced the shape of the photoelectric signals, and the different phases of the beat could not be directly and unequivocally identified in the signals. The degree of temporal asymmetry of the beat and the portion of the cycle occupied by the pause significantly influenced the shapes of both the lower and the upper parts of the signal and the slopes of the signal. Increases in the angle of the arc swept by the cilium during the effective stroke smoothed the signals and increased the duration of the upper part of the signal. The angle of the arc projected by the cilium onto the cell surface during the recovery stroke had minor effects on the signal's shape. Characteristics of the metachronal wave also influenced the signal's shape markedly. Decreases in ciliary spacing smoothed the signals, whereas ciliary length had a minor influence on the simulated photoelectric signals. Comparison of the simulated and the measured signals showed that the beat parameters of the best-fitting simulated signals converged to values that agree well with the accepted range of beat parameters in mucociliary systems

  5. Analogue Materials Measured Under Simulated Lunar and Asteroid Environments: Application to Thermal Infrared Measurements of Airless Bodies

    NASA Astrophysics Data System (ADS)

    Donaldson Hanna, K. L.; Pieters, C. M.; Patterson, W., III; Moriarty, D.

    2012-12-01

    Remote sensing observations provide key insights into the composition and evolution of planetary surfaces. A fundamentally important component to any remote sensing study of planetary surfaces is laboratory measurements of well-characterized samples measured under the appropriate environmental conditions. The near-surface vacuum environment of airless bodies like the Moon and asteroids creates a thermal gradient in the upper hundred microns of regolith. Lab studies of particulate rocks and minerals as well as selected lunar soils under vacuum and lunar-like conditions have identified significant effects of this thermal gradient on thermal infrared (TIR) spectral measurements [e.g. Logan et al. 1973, Salisbury and Walter 1989, Thomas et al. 2010, Donaldson Hanna et al. 2012]. Compared to ambient conditions, these effects include: (1) the Christiansen feature (CF), an emissivity maximum diagnostic of mineralogy and average composition, shifts to higher wavenumbers and (2) an increase in spectral contrast of the CF relative to the Reststrahlen bands (RB), the fundamental molecular vibration bands due to Si-O stretching and bending. Such lab studies demonstrate the high sensitivity of TIR emissivity spectra to environmental conditions under which they are measured. The Asteroid and Lunar Environment Chamber (ALEC) is the newest addition to the RELAB at Brown University. The vacuum chamber simulates the space environment experienced by the near-surface soils of the Moon and asteroids. The internal rotation stage allows for six samples and two blackbodies to be measured without breaking vacuum (<10-4 mbar). Liquid nitrogen is used to cool the interior of the chamber, creating a cold, low emission environment (mimicking the space environment) for heated samples to radiate into. Sample cups can be heated in one of three configurations: (1) from below using heaters embedded in the base of the sample cup, (2) from above using a solar-like radiant heat source, and (3) from

  6. Simulated and measured Hp(10) response of the personal dosemeter Seibersdorf.

    PubMed

    Hranitzky, C; Stadtmann, H

    2007-01-01

    The Hp(10) energy response of the personal dosemeter Seibersdorf and its two differently filtered LiF:Mg,Ti (TLD-100) thermoluminescence (TL) detectors is investigated. A close-to-reality simulation model of the personal dosemeter badge, including the wrapped detector card, was implemented with the MCNP Monte Carlo N-particle transport code. The comparison of measured and calculated response using a semi-empirical TL efficiency function is carried out to provide information about the quality of the results of both methods, experiment and simulation. Similar to the experimental calibration conditions, the irradiation of dosemeters centred on the front surface of the International Organization for Standardization (ISO) water slab phantom is simulated using ISO-4037 reference photon radiation qualities with mean energies between 24 keV and 1.25 MeV and corresponding ISO conversion coefficients. The comparison of the simulated and measured relative Hp(10) energy responses showed good agreement to within a few per cent, except for the filtered TL element at lower photon energies.

  7. Macromolecular Crowding Studies of Amino Acids Using NMR Diffusion Measurements and Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Virk, Amninder; Stait-Gardner, Timothy; Willis, Scott; Torres, Allan; Price, William

    2015-02-01

    Molecular crowding occurs when the total concentration of macromolecular species in a solution is so high that a considerable proportion of the volume is physically occupied and therefore not accessible to other molecules. This results in significant changes in the solution properties of the molecules in such systems. Macromolecular crowding is ubiquitous in biological systems due to the generally high intracellular protein concentrations. The major hindrance to understanding crowding is the lack of direct comparison of experimental data with theoretical or simulated data. Self-diffusion is sensitive to changes in the molecular weight and shape of the diffusing species, and the available diffusion space (i.e., diffusive obstruction). Consequently, diffusion measurements are a direct means for probing crowded systems including the self-association of molecules. In this work, nuclear magnetic resonance measurements of the self-diffusion of four amino acids (glycine, alanine, valine and phenylalanine) up to their solubility limit in water were compared directly with molecular dynamics simulations. The experimental data were then analyzed using various models of aggregation and obstruction. Both experimental and simulated data revealed that the diffusion of both water and the amino acids were sensitive to the amino acid concentration. The direct comparison of the simulated and experimental data afforded greater insights into the aggregation and obstruction properties of each amino acid.

  8. Adsorption of acetaldehyde on ice as seen from computer simulation and infrared spectroscopy measurements.

    PubMed

    Darvas, Mária; Lasne, Jérôme; Laffon, Carine; Parent, Philippe; Picaud, Sylvain; Jedlovszky, Pál

    2012-03-06

    Detailed investigation of the adsorption of acetaldehyde on I(h) ice is performed under tropospheric conditions by means of grand canonical Monte Carlo computer simulations and compared to infrared spectroscopy measurements. The experimental and simulation results are in clear accordance with each other. The simulations indicate that the adsorption process follows Langmuir behavior in the entire pressure range of the vapor phase of acetaldehyde. Further, it was found that the adsorption layer is strictly monomolecular, and the adsorbed acetaldehyde molecules are bound to the ice surface by only one hydrogen bond, typically formed with the dangling H atoms at the ice surface, in agreement with the experimental results. Besides this hydrogen bonding, at high surface coverages dipolar attraction between neighboring acetaldehyde molecules also contributes considerably to the energy gain of the adsorption. The acetaldehyde molecules adopt strongly tilted orientations relative to the ice surface, the tilt angle being scattered between 50° and 90° (i.e., perpendicular orientation). The range of the preferred tilt angles narrows, and the preference for perpendicular orientation becomes stronger upon saturation of the adsorption layer. The CH(3) group of the acetaldehyde molecules points as directly away from the ice surface as the constraint imposed by the tilt angle adopted by the molecule allows. The heat of adsorption at infinitely low coverage is found to be -36 ± 2 kJ/mol from the infrared spectroscopy measurement, which is in excellent agreement with the computer simulation value of -34.1 kJ/mol.
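
    The Langmuir behaviour reported here corresponds to a fractional surface coverage θ = Kp/(1 + Kp). A minimal Python sketch of that relation, with an arbitrary equilibrium constant and pressure grid chosen only for illustration (they are not fitted values from the paper):

      import numpy as np

      def langmuir_coverage(p, K):
          """Fractional monolayer coverage of a Langmuir isotherm.

          p : adsorbate partial pressure (arbitrary units)
          K : equilibrium adsorption constant (inverse pressure units)
          """
          return K * p / (1.0 + K * p)

      K = 2.5                               # hypothetical equilibrium constant
      pressures = np.linspace(0.0, 5.0, 11)
      for p, theta in zip(pressures, langmuir_coverage(pressures, K)):
          print(f"p = {p:4.1f}   coverage = {theta:.3f}")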

  9. The method of infrared image simulation based on the measured image

    NASA Astrophysics Data System (ADS)

    Lou, Shuli; Liu, Liang; Ren, Jiancun

    2015-10-01

    The development of infrared imaging guidance technology has promoted the research of infrared imaging simulation technology and the key of infrared imaging simulation is the generation of IR image. The generation of IR image is worthful in military and economy. In order to solve the problem of credibility and economy of infrared scene generation, a method of infrared scene generation based on the measured image is proposed. Through researching on optical properties of ship-target and sea background, ship-target images with various gestures are extracted from recorded images based on digital image processing technology. The ship-target image is zoomed in and out to simulate the relative motion between the viewpoint and the target according to field of view and the distance between the target and the sensor. The gray scale of ship-target image is adjusted to simulate the radiation change of the ship-target according to the distance between the viewpoint and the target and the atmospheric transmission. Frames of recorded infrared images without target are interpolated to simulate high frame rate of missile. Processed ship-target images and sea-background infrared images are synthetized to obtain infrared scenes according to different viewpoints. Experiments proved that this method is flexible and applicable, and the fidelity and the reliability of synthesis infrared images can be guaranteed.

  10. Three-dimensional simulation of ultrasound propagation through trabecular bone structures measured by synchrotron microtomography.

    PubMed

    Bossy, Emmanuel; Padilla, Frédéric; Peyrin, Françoise; Laugier, Pascal

    2005-12-07

    Three-dimensional numerical simulations of ultrasound transmission were performed through 31 trabecular bone samples measured by synchrotron microtomography. The synchrotron microtomography provided high resolution 3D mappings of bone structures, which were used as the input geometry in the simulation software developed in our laboratory. While absorption (i.e. the absorption of ultrasound through dissipative mechanisms) was not taken into account in the algorithm, the simulations reproduced major phenomena observed in real through-transmission experiments in trabecular bone. The simulated attenuation (i.e. the decrease of the transmitted ultrasonic energy) varies linearly with frequency in the MHz frequency range. Both the speed of sound (SOS) and the slope of the normalized frequency-dependent attenuation (nBUA) increase with the bone volume fraction. Twenty-five out of the thirty-one samples exhibited negative velocity dispersion. One sample was rotated to align the main orientation of the trabecular structure with the direction of ultrasonic propagation, leading to the observation of a fast and a slow wave. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures. As an illustration, comparison between results obtained on bone modelled either as a fluid or a solid structure suggested the major role of mode conversion of the incident acoustic wave to shear waves in bone to explain the large contribution of scattering to the overall attenuation.
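
    The slope of the normalized frequency-dependent attenuation (nBUA) referred to above is conventionally obtained as the slope of a straight-line fit of attenuation versus frequency, normalized by sample thickness. A short Python sketch under that assumption, using a synthetic attenuation spectrum rather than output of the simulations:

      import numpy as np

      # Synthetic attenuation spectrum (dB) versus frequency (MHz), roughly linear.
      freq_mhz = np.linspace(0.2, 1.2, 11)
      atten_db = 1.5 + 12.0 * freq_mhz + np.random.normal(0.0, 0.3, freq_mhz.size)

      thickness_cm = 1.0                       # assumed sample thickness

      # Linear fit; the slope normalized by thickness gives nBUA in dB/cm/MHz.
      slope, intercept = np.polyfit(freq_mhz, atten_db, 1)
      nbua = slope / thickness_cm
      print(f"nBUA ~ {nbua:.1f} dB/cm/MHz")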

  11. Three-dimensional simulation of ultrasound propagation through trabecular bone structures measured by synchrotron microtomography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Padilla, Frédéric; Peyrin, Françoise; Laugier, Pascal

    2005-12-01

    Three-dimensional numerical simulations of ultrasound transmission were performed through 31 trabecular bone samples measured by synchrotron microtomography. The synchrotron microtomography provided high resolution 3D mappings of bone structures, which were used as the input geometry in the simulation software developed in our laboratory. While absorption (i.e. the absorption of ultrasound through dissipative mechanisms) was not taken into account in the algorithm, the simulations reproduced major phenomena observed in real through-transmission experiments in trabecular bone. The simulated attenuation (i.e. the decrease of the transmitted ultrasonic energy) varies linearly with frequency in the MHz frequency range. Both the speed of sound (SOS) and the slope of the normalized frequency-dependent attenuation (nBUA) increase with the bone volume fraction. Twenty-five out of the thirty-one samples exhibited negative velocity dispersion. One sample was rotated to align the main orientation of the trabecular structure with the direction of ultrasonic propagation, leading to the observation of a fast and a slow wave. Coupling numerical simulation with real bone architecture therefore provides a powerful tool to investigate the physics of ultrasound propagation in trabecular structures. As an illustration, comparison between results obtained on bone modelled either as a fluid or a solid structure suggested the major role of mode conversion of the incident acoustic wave to shear waves in bone to explain the large contribution of scattering to the overall attenuation.

  12. Ground return signal simulation and retrieval algorithm of spaceborne integrated path DIAL for CO2 measurements

    NASA Astrophysics Data System (ADS)

    Liu, Bing-Yi; Wang, Jun-Yang; Liu, Zhi-Shen

    2014-11-01

    Spaceborne integrated path differential absorption (IPDA) lidar is an active-detection system which is able to perform global CO2 measurement with a high accuracy of 1 ppmv, day and night, over ground and clouds. To evaluate the detection performance of the system, simulation of the ground return signal and a retrieval algorithm for CO2 concentration are presented in this paper. Ground return signals of the spaceborne IPDA lidar under various ground surface reflectivities and atmospheric aerosol optical depths are simulated using given system parameters, standard atmosphere profiles and the HITRAN database, and can be used as a reference for determining system parameters. The simulated signals are further applied to the research on the retrieval algorithm for CO2 concentration. The column-weighted dry air mixing ratio of CO2, denoted by XCO2, is obtained. As the deviations of XCO2 between the initial values for simulation and the results from the retrieval algorithm are within the expected error ranges, the simulation and retrieval algorithm are shown to be reliable.
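
    In an IPDA retrieval of this kind, a two-way differential absorption optical depth (DAOD) is formed from energy-normalized on-line and off-line ground returns and converted to XCO2 through an integrated weighting function. The sketch below assumes that standard formulation; the return powers and the weighting-function value are placeholders, not parameters of the system described here.

      import math

      # Hypothetical energy-normalized returns (arbitrary units).
      P_on, E_on = 0.62, 1.00     # on-line return power, transmitted energy
      P_off, E_off = 1.35, 1.00   # off-line return power, transmitted energy

      # Two-way differential absorption optical depth.
      daod = 0.5 * math.log((P_off / E_off) / (P_on / E_on))

      # Placeholder integrated weighting function (DAOD per ppmv of XCO2).
      iwf_per_ppmv = 1.0e-3

      xco2_ppmv = daod / iwf_per_ppmv
      print(f"DAOD = {daod:.4f}, retrieved XCO2 ~ {xco2_ppmv:.1f} ppmv")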

  13. Simulation of decay processes and radiation transport times in radioactivity measurements

    NASA Astrophysics Data System (ADS)

    García-Toraño, E.; Peyres, V.; Bé, M.-M.; Dulieu, C.; Lépy, M.-C.; Salvat, F.

    2017-04-01

    The Fortran subroutine package PENNUC, which simulates random decay pathways of radioactive nuclides, is described. The decay scheme of the active nuclide is obtained from the NUCLEIDE database, whose web application has been complemented with the option of exporting nuclear decay data (possible nuclear transitions, branching ratios, type and energy of emitted particles) in a format that is readable by the simulation subroutines. In the case of beta emitters, the initial energy of the electron or positron is sampled from the theoretical Fermi spectrum. De-excitation of the atomic electron cloud following electron capture and internal conversion is described using transition probabilities from the LLNL Evaluated Atomic Data Library and empirical or calculated energies of released X rays and Auger electrons. The time evolution of radiation showers is determined by considering the lifetimes of nuclear and atomic levels, as well as radiation propagation times. Although PENNUC is designed to operate independently, here it is used in conjunction with the electron-photon transport code PENELOPE, and both together allow the simulation of experiments with radioactive sources in complex material structures consisting of homogeneous bodies limited by quadric surfaces. The reliability of these simulation tools is demonstrated through comparisons of simulated and measured energy spectra from radionuclides with complex multi-gamma spectra, nuclides with metastable levels in their decay pathways, nuclides with two daughters, and beta plus emitters.
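
    The core step of a decay-pathway sampler like the one described is the random selection of a branch according to its branching ratio. A minimal Python sketch of that step with a made-up decay scheme (the branches and probabilities are illustrative, not NUCLEIDE data):

      import random

      # Hypothetical decay scheme: branch label -> branching ratio (sums to 1).
      branches = {
          "beta- to excited level 1 (gamma follows)": 0.55,
          "beta- to excited level 2 (gamma cascade)": 0.40,
          "beta- to ground state":                    0.05,
      }

      def sample_branch(scheme, rng=random):
          """Pick one branch by inverting the cumulative branching ratios."""
          u = rng.random()
          cumulative = 0.0
          for label, prob in scheme.items():
              cumulative += prob
              if u <= cumulative:
                  return label
          return label   # guard against floating-point round-off

      random.seed(0)
      for _ in range(5):
          print(sample_branch(branches))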

  14. Measurement and numerical simulation of a small centrifugal compressor characteristics at small or negative flow rate

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Kaname; Okada, Mizuki; Inokuchi, Yuzo; Yamasaki, Nobuhiko; Yamagata, Akihiro

    2017-04-01

    For centrifugal compressors used in automotive turbochargers, an extended surge margin is demanded because of lower engine speeds. To estimate the surge line exactly, the compressor characteristics at small or negative flow rates must be acquired. In this paper, measurement and numerical simulation of the characteristics at small or negative flow rates are carried out. In the measurements, an experimental facility with a valve immediately downstream of the compressor is used to suppress surge. In the numerical work, a new boundary condition that specifies the mass flow rate at the outlet boundary is used to simulate the characteristics around the zero-flow-rate region. Furthermore, flow field analyses at small or negative flow rates are performed with the numerical results. The separated and recirculated flow fields are investigated by visualization to identify the origin of the losses.

  15. Diagnostics of the Solar corona from Comparison Between Faraday Rotation Measurements and MHD Simulations

    NASA Astrophysics Data System (ADS)

    LE CHAT, G.; Kasper, J. C.; Cohen, O.; Spangler, S.

    2013-05-01

    Faraday rotation observations of natural radio sources allow remote diagnostics of the density and magnetic field of the solar corona. We use linear polarization observations made with the NRAO Very Large Array at frequencies of 1465 and 1665 MHz of 33 polarized radio sources occulted by the solar corona within 5 to 14 solar radii. The measurements were made during May 1997 (Mancuso and Spangler, 2000), March 2005 and April 2005 (Ingleby et al., 2005), corresponding to Carrington rotation numbers 1922, 1923, 2027 and 2028. We compare the observed Faraday rotation values with values extracted from MHD steady-state simulations of the solar corona using the BATS-R-US model. The simulations are driven by magnetogram data taken at the same time as the observed data. We present the agreement between the model and the Faraday rotation measurements, and we discuss the constraints imposed on models of the quiet corona and CMEs by these observations.
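
    The quantity compared in such studies is the rotation measure, RM = (e^3 / 8π²ε0m²c³) ∫ n_e B_parallel dl, with the polarization angle rotating by RM·λ². A Python sketch of the line-of-sight integral with toy density and field profiles (these profiles are illustrative, not the BATS-R-US model output):

      import numpy as np

      RM_CONST = 2.63e-13   # rad m^-2 per (m^-3 * T * m); SI prefactor e^3/(8 pi^2 eps0 me^2 c^3)
      R_SUN = 6.96e8        # m

      # Toy coronal profiles versus heliocentric distance (in solar radii).
      def n_e(r):
          return 1.0e12 * r**-2.0      # electron density, m^-3

      def b_par(r):
          return 1.0e-4 * r**-2.0      # line-of-sight field, tesla

      # Integrate along a ray sampled between 5 and 14 solar radii.
      r = np.linspace(5.0, 14.0, 500)
      dl = np.gradient(r) * R_SUN      # path element, m
      rm = RM_CONST * np.sum(n_e(r) * b_par(r) * dl)

      wavelength = 3.0e8 / 1465e6      # m, for the 1465 MHz observations
      print(f"RM ~ {rm:.1f} rad/m^2, rotation ~ {rm * wavelength**2:.2f} rad")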

  16. Comparison of Simulated and Measured Fluid-Surface Oscillation Frequencies in a Channel

    NASA Astrophysics Data System (ADS)

    Trapuzzano, Matthew; Pierre, Kiesha; Tufekcioglu, Emre; Guldiken, Rasim; Tejada-Martinez, Andres; Crane, Nathan

    2016-11-01

    Many important processes from agriculture to manufacturing depend on the wetting of fluids on rough or textured surfaces. This has traditionally been studied from a macro-perspective. The effects of these surface features can be dramatically altered by vibrations that overcome energy barriers to contact line motion caused by surface roughness. In order to study these effects in confined geometries and at different length scales, a validated model is required. This presentation will compare the measured and simulated frequencies of capillary vibrations in a cylindrical glass tube. Fluid surface vibrations are excited externally through deformation of the interface. The resulting surface oscillations are observed with a high speed video camera and the dominant oscillation frequencies are calculated. The measured oscillation frequencies are compared to predictions from transient CFD simulations across a range of interface diameters from 400 μm to 1.5 mm. These results may be used to inform studies of wetting under vibration. NSF CMMI-1361919.

  17. Simulation and Experimental Measurements of Inductively Coupled CF4 and CF4/Ar Plasmas

    NASA Technical Reports Server (NTRS)

    Hash, D. B.; Bose, D.; Rao, M. V. V. S.; Cruden, B. A.; Meyyappan, M.; Sharma, S. P.; Arnold, James O. (Technical Monitor)

    2000-01-01

    The recently developed code SEMS (semiconductor equipment modeling software) is applied to the simulation of CF4 and CF4/Ar inductively coupled plasmas (ICP). This work builds upon the earlier nitrogen, transformer coupled plasma (TCP) SEMS research by demonstrating its accuracy for more complex reactive mixtures, moving closer to the realization of a virtual plasma reactor. Attention is given to the etching of and/or formation of carbonaceous films on the quartz dielectric window and diagnostic apertures. The simulations are validated through comparisons with experimental measurements using FTIR (Fourier Transform Infrared) and UV absorption spectroscopy for CFx and SiFx neutral radicals, QMS (quadrupole mass spectrometry) for the ions, and Langmuir probe measurements of electron number density and temperature in an ICP GEC reference cell.

  18. On the Use of Integrated Daylighting and Energy Simulations to Drive the Design of a Large Net-Zero Energy Office Building: Preprint

    SciTech Connect

    Guglielmetti, R.; Pless, S.; Torcellini, P.

    2010-08-01

    This paper illustrates the challenges of integrating rigorous daylight and electric lighting simulation data with whole-building energy models, and defends the need for such integration to achieve aggressive energy savings. Through a case study example, we examine the ways daylighting -- and daylighting simulation -- drove the design of a large net-zero energy project. We give a detailed review of the daylighting and electric lighting design process for the National Renewable Energy Laboratory's Research Support Facility (RSF), a 220,000 ft2 net-zero energy project the author worked on as a daylighting consultant. A review of the issues involved in simulating and validating the daylighting performance of the RSF will be detailed, including daylighting simulation, electric lighting control response, and integration of Radiance simulation data into the building energy model. Daylighting was a key strategy in reaching the contractual energy use goals for the RSF project; the building's program, layout, orientation and interior/furniture design were all influenced by the daylighting design, and simulation was critical in ensuring these many design components worked together in an integrated fashion, and would perform as required to meet a very aggressive energy performance goal, as expressed in a target energy use intensity.

  19. GPS Radiation Measurements: Instrument Modeling and Simulation (Project w14_gpsradiation)

    SciTech Connect

    Sullivan, John P.

    2016-11-29

    The following topics are covered: electron response simulations and typical calculated response. Monte Carlo calculations of the response of future charged particle instruments (dosimeters) intended to measure the flux of charged particles in space were performed. The electron channels are called E1–E11, each of which is intended to detect a different range of electron energies. These instruments are on current and future GPS satellites.

  20. Laser spectroscopic real time measurements of methanogenic activity under simulated Martian subsurface analog conditions

    NASA Astrophysics Data System (ADS)

    Schirmack, Janosch; Böhm, Michael; Brauer, Chris; Löhmannsröben, Hans-Gerd; de Vera, Jean-Pierre; Möhlmann, Diedrich; Wagner, Dirk

    2014-08-01

    On Earth, chemolithoautotrophic and anaerobic microorganisms such as methanogenic archaea are regarded as model organisms for possible subsurface life on Mars. For this reason, the methanogenic strain Methanosarcina soligelidi (formerly called Methanosarcina spec. SMA-21), isolated from permafrost-affected soil in northeast Siberia, has been tested under Martian thermo-physical conditions. In previous studies under simulated Martian conditions, high survival rates of these microorganisms were observed. In our study we present a method to measure methane production as a first attempt to study the metabolic activity of methanogenic archaea under simulated conditions approaching those of Mars-like environments. To determine methanogenic activity, a measurement technique capable of measuring the produced methane concentration with high precision and high temporal resolution is needed. Although there are several methods to detect methane, only a few fulfill all the requirements needed to work within simulated extraterrestrial environments. We have chosen laser spectroscopy, which is a non-destructive technique that measures the methane concentration without sample taking and can also be run continuously. In our simulation, we detected methane production at temperatures down to -5 °C, which would be found on Mars either temporarily in the shallow subsurface or continually in the deep subsurface. The pressure of 50 kPa used in our experiments corresponds to the expected pressure in the Martian near subsurface. Our new device proved to be fully functional, and the results indicate that the possible existence of methanogenic archaea in Martian subsurface habitats cannot be ruled out.
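
    A laser-spectroscopic methane measurement of this kind ultimately rests on the Beer-Lambert law, I = I0·exp(−σNL), inverted for the methane number density N. The sketch below illustrates the inversion with placeholder values for the cross-section, path length and intensities (they are not the parameters of the instrument used in the study):

      import math

      I0 = 1.000        # intensity without methane (a.u.)
      I = 0.948         # intensity through the chamber (a.u.)
      sigma = 1.4e-23   # assumed line-centre absorption cross-section, m^2/molecule
      L = 20.0          # assumed optical path length, m

      # Invert Beer-Lambert: N = ln(I0/I) / (sigma * L).
      N = math.log(I0 / I) / (sigma * L)

      # Convert to a mixing ratio at the stated chamber conditions (50 kPa, -5 degC).
      k_B, T, p = 1.380649e-23, 268.15, 50e3
      n_total = p / (k_B * T)
      print(f"CH4 ~ {N:.2e} m^-3, mixing ratio ~ {N / n_total * 1e6:.1f} ppm")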

  1. Performance simulation of a spaceborne infrared coherent lidar for measuring tropospheric wind profiles.

    NASA Astrophysics Data System (ADS)

    Baron, Philippe; Ishii, Shoken; Kyoka, Gamo; Mizutani, Kohei; Chikako, Takahashi; Itabe, Toshikazu; Iwasaki, Toshiki; Kubota, Takuji; Okamoto, Kozo; Oki, Riko; Satoh, Masaki; Satoh, Yohei

    2014-05-01

    An effort has begun in Japan to develop a spaceborne instrument for measuring tropospheric winds. This project is a collaboration between the Japan Aerospace Exploration Agency (JAXA), the Meteorological Research Institute (MRI, Japan) and the National Institute of Information and Communications Technology (NICT, Japan) [1,2]. The aim is to measure the horizontal wind field in the troposphere on a global scale with a precision better than 3 m s-1, and a vertical and horizontal (along the satellite ground track) resolution better than 1 km and 100 km, respectively. In order to support the definition and the development of the instrument, an end-to-end simulator has been implemented including modules for (i) simulating the time-dependent laser shot return power, (ii) averaging the spectral power of several returns and (iii) estimating the line-of-sight wind from the Doppler shift of the averaged spectra. The simulations take into account the satellite position and motion along the orbit track, the observational and instrumental characteristics, a 3-D representation of the relevant atmospheric parameters (i.e. wind field, cloud coverage and aerosol distribution) and the Earth surface characteristics. The simulator and the method for estimating the line-of-sight wind will be presented. We will show the results obtained for a payload composed of two 2-μm coherent LIDARs looking in orthogonal directions, and for a satellite moving on a low orbit. The precision, accuracy and the vertical and horizontal resolution of the wind estimates will be discussed. References: [1] S. Ishii, T. Iwasaki, M. Sato, R. Oki, K. Okamoto, T. Ishibashi, P. Baron, and T. Nishizawa, Future Doppler lidar wind measurement from space in Japan, Proc. of SPIE Vol. 8529, 2012 [2] S. Ishii, H. Iwai, K. Mizutani, P. Baron, T. Itabe, H. Fukuoka, T. Ishikawa, A. Sato and A. Asai, 2-μm coherent LIDAR for CO2 and wind measurements, Proc. of SPIE Vol. 8872, 2013
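
    For a coherent Doppler lidar the line-of-sight wind follows directly from the measured Doppler shift, v_los = λΔf/2, the factor of two accounting for the round trip of the backscattered light. A minimal Python sketch under that assumption (the shift value is invented for illustration):

      wavelength = 2.05e-6      # m, nominal 2-um coherent lidar wavelength
      doppler_shift = 9.76e6    # Hz, hypothetical measured frequency shift

      v_los = wavelength * doppler_shift / 2.0
      print(f"Line-of-sight wind ~ {v_los:.2f} m/s")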

  2. Electro-optic and holographic measurement techniques for the atmospheric sciences. [considering spacecraft simulation applications

    NASA Technical Reports Server (NTRS)

    Moore, W. W., Jr.; Lemons, J. F.; Kurtz, R. L.; Liu, H.-K.

    1977-01-01

    A comprehensive examination is made of recent advanced research directions in the applications of electro-optical and holographic instrumentations and methods to atmospheric sciences problems. In addition, an overview is given of the in-house research program for environmental and atmospheric measurements with emphasis on particulates systems. Special treatment is made of the instrument methods and applications work in the areas of laser scattering spectrometers and pulsed holography sizing systems. Selected engineering tests data on space simulation chamber programs are discussed.

  3. Nano-scale simulative measuring model for tapping mode atomic force microscopy and analysis for measuring a nano-scale ladder-shape standard sample.

    PubMed

    Lin, Zone-Ching; Chou, Ming-Ho

    2010-07-01

    This study constructs a nano-scale simulative measuring model of tapping-mode atomic force microscopy (TM-AFM) and compares the edge effect seen in the simulated and experimental results. The model combines the Morse potential with vibration theory to calculate the tip-sample atomic interaction force between the probe and the sample. Silicon (Si) atoms are arranged to form the rectangular cantilever probe and the atomic model of a nano-scale ladder-shaped standard sample. The simulated measurements are compared with the experimental measurements. It is found that the scan rate and the bevel angle of the probe tip are the two causes of the surface error and the edge effect when measuring the nano-scale ladder-shaped standard sample by TM-AFM. Both the simulated and the experimental results on the vertical section of the sample edge show that the measured bevel angle is approximately equal to the bevel angle of the probe tip. The discrepancy in the edge effect between the simulation and the experimental measurement is small, verifying that the simulative measuring model for TM-AFM constructed in this article is reasonable.
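
    The tip-sample interaction described above is commonly taken from the Morse potential V(r) = D[1 − exp(−a(r − r0))]², whose negative gradient gives the force fed into the cantilever vibration model. A short Python sketch with generic parameter values (D, a and r0 below are placeholders, not the silicon parameters used in the paper):

      import numpy as np

      def morse_force(r, D=3.0e-19, a=2.0e10, r0=3.0e-10):
          """Force (N) from V(r) = D*(1 - exp(-a*(r - r0)))**2.

          F = -dV/dr = 2*D*a*(exp(-2*a*(r - r0)) - exp(-a*(r - r0)))
          D (J), a (1/m) and r0 (m) are illustrative placeholder values.
          """
          x = r - r0
          return 2.0 * D * a * (np.exp(-2.0 * a * x) - np.exp(-a * x))

      for r in np.linspace(2.5e-10, 8.0e-10, 6):
          print(f"r = {r*1e9:.2f} nm   F = {morse_force(r):+.3e} N")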

  4. Influence of Assimilation of Subsurface Temperature Measurements on Simulations of Equatorial Undercurrent and South Equatorial Current Along the Pacific Equator

    NASA Technical Reports Server (NTRS)

    Halpern, David; Leetmaan, Ants; Reynolds, Richard W.; Ji, Ming

    1997-01-01

    Equatorial Pacific current and temperature fields were simulated with and without assimilation of subsurface temperature measurements for April 1992 - March 1995, and compared with moored buoy and research vessel current measurements.

  5. Direct measurement of ammonia in simulated human breath using an inkjet-printed polyaniline nanoparticle sensor.

    PubMed

    Hibbard, Troy; Crowley, Karl; Killard, Anthony J

    2013-05-24

    A sensor fabricated from the inkjet-printed deposition of polyaniline nanoparticles onto a screen-printed silver interdigitated electrode was developed for the detection of ammonia in simulated human breath samples. Impedance analysis showed that exposure to ammonia gas could be measured at 962 Hz at which changes in resistance dominate due to the deprotonation of the polymer film. Sensors required minimal calibration and demonstrated excellent intra-electrode baseline drift (≤1.67%). Gases typically present in breath did not interfere with the sensor. Temperature and humidity were shown to have characteristic impedimetric and temporal effects on the sensor that could be distinguished from the response to ammonia. While impedance responses to ammonia could be detected from a single simulated breath, quantification was improved after the cumulative measurement of multiple breaths. The measurement of ammonia after 16 simulated breaths was linear in the range of 40-2175 ppbv (27-1514 μg m⁻³) (r² = 0.9963) with a theoretical limit of detection of 6.2 ppbv (4.1 μg m⁻³) (S/N = 3).
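
    The theoretical detection limit quoted (S/N = 3) follows from the slope of the linear calibration and the blank noise, LOD = 3·σ_blank/slope. A minimal Python sketch with synthetic calibration points (these are not the sensor data of the study):

      import numpy as np

      # Synthetic calibration: ammonia concentration (ppbv) vs sensor response (a.u.).
      conc_ppbv = np.array([40, 250, 500, 1000, 1500, 2175], dtype=float)
      response = 0.004 * conc_ppbv + np.random.normal(0.0, 0.05, conc_ppbv.size)

      slope, intercept = np.polyfit(conc_ppbv, response, 1)

      sigma_blank = 0.008     # assumed standard deviation of the blank response (a.u.)
      lod_ppbv = 3.0 * sigma_blank / slope
      print(f"slope = {slope:.4f} a.u./ppbv, LOD ~ {lod_ppbv:.1f} ppbv")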

  6. Comparison of CFD Simulations with Experimental Measurements of Nozzle Clogging in Continuous Casting of Steels

    NASA Astrophysics Data System (ADS)

    Mohammadi-Ghaleni, Mahdi; Asle Zaeem, Mohsen; Smith, Jeffrey D.; O'Malley, Ronald

    2016-12-01

    Measurements of clog deposit thickness on the interior surfaces of a commercial continuous casting nozzle are compared with computational fluid dynamics (CFD) predictions of melt flow patterns and particle-wall interactions to identify the mechanisms of nozzle clogging. A submerged entry nozzle received from industry was encased in epoxy and carefully sectioned to allow measurement of the deposit thickness on the internal surfaces of the nozzle. CFD simulations of melt flow patterns and particle behavior inside the nozzle were performed by combining the Eulerian-Lagrangian approach and detached eddy simulation turbulent model, matching the geometry and operating conditions of the industrial test. The CFD results indicated that convergent areas of the interior cross section of the nozzle increased the velocity and turbulence of the flowing steel inside the nozzle and decreased the clog deposit thickness locally in these areas. CFD simulations also predicted a higher rate of attachment of particles in the divergent area between two convergent sections of the nozzle, which matched the observations made in the industrial nozzle measurements.

  7. Mission Simulation of Space Lidar Measurements for Seasonal and Regional CO2 Variations

    NASA Technical Reports Server (NTRS)

    Kawa, Stephan; Collatz, G. J.; Mao, J.; Abshire, J. B.; Sun, X.; Weaver, C. J.

    2010-01-01

    Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) recommended by the US National Academy of Sciences Decadal Survey of Earth Science and Applications from Space. One prerequisite for meaningful quantitative sensor evaluation is realistic CO2 process modeling across a wide range of scales, i.e., does the model have representative spatial and temporal gradients? Examples of model comparison with data will be shown. Another requirement is a relatively complete description of the atmospheric and surface state, which we have obtained from meteorological data assimilation and satellite measurements from MODIS and CALIPSO. We use radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach to complete the cycle from "nature" run to "pseudo-data" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. We use the simulation framework to demonstrate that within reasonable technological assumptions for the system performance, relatively high measurement precision can be obtained, but errors depend strongly on environmental conditions as well as instrument specifications. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.

  8. Self-report measures of distractibility as correlates of simulated driving performance.

    PubMed

    Kass, Steven J; Beede, Kristen E; Vodanovich, Stephen J

    2010-05-01

    The present study investigated the relationship between self-reported measures pertaining to attention difficulties and simulated driving performance while distracted. Thirty-six licensed drivers participated in a simulator driving task while engaged in a cell phone conversation. The participants completed questionnaires assessing their tendency toward boredom, cognitive failures, and behaviors associated with attention deficit and hyperactivity. Scores on these measures were significantly correlated with various driving outcomes (e.g., speed, lane maintenance, reaction time). Significant relationships were also found between one aspect of boredom proneness (i.e., inability to generate interest or concentrate) and self-reports of past driving behavior (moving violations). The current study may aid in the understanding of how individual differences in driver distractibility may contribute to unsafe driving behaviors and accident involvement. Additionally, such measures may assist in the identification of individuals at risk for committing driving errors due to being easily distracted. The benefits and limitations of conducting and interpreting simulation research are discussed.

  9. Microdosimetric Monte-Carlo Simulations and Measurements of Heavy Ion Irradiation of a TEPC

    NASA Astrophysics Data System (ADS)

    Rollet, S.; Beck, P.; Bock, F.; Ferrari, A.; Latocha, M.; Uchihori, Y.; Wind, M.

    Microdosimetric methods are well suited for systematic study and quantification of the spatial and temporal distribution of absorbed energy in irradiated matter. A standard instrument used to measure the energy dissipated in microscopic sites by individual ionizing events is the Tissue Equivalent Proportional Counter (TEPC). The main focus of this work is to examine interactions of heavy ions with tissue using both experimental and numerical methods. Measurements with a TEPC instrument were carried out recently in heavy ion radiation fields at the Heavy Ion Medical Accelerator (HIMAC) facility in Chiba, which belongs to the National Institute of Radiological Sciences (NIRS) in Japan. The instrument was exposed to two kinds of heavy ions under different irradiation geometries and beam parameters. The heavy ions used were oxygen with an energy of 400 MeV/u and iron at 300 MeV/u. For the simulation of the irradiation experiments, two Monte Carlo codes are used, namely FLUKA and GEANT4. Both codes are widely used for basic research and applications in radiation protection and dosimetry, radiobiology, radiotherapy and space. Besides scoring average quantities, both Monte Carlo codes have the capability to score energy deposition on an event-by-event basis. Thus, together with the total energy deposition, a simulation of microdosimetric spectra is possible. The comparison of measured and simulated lineal energy distributions shows a satisfactory agreement both for irradiation with oxygen ions of 400 MeV/u and with iron ions of 300 MeV/u. We will discuss in detail the

  10. Estimation of primary pH measurement uncertainty using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Damasceno, J. C.; Borges, R. M. H.; Couto, P. R. G.; Ordine, A. P.; Getrouw, M. A.; Borges, P. P.; Fraga, I. C. S.

    2006-06-01

    pH is a widely used control parameter for several industrial processes. Thus, its correct determination and uncertainty estimation are extremely important. The Guide to the Expression of Uncertainty in Measurement (ISO-GUM) has been extensively used for pH uncertainty estimation. This work uses Monte Carlo simulation to estimate pH uncertainty in a primary pH system for the measurements of a regional comparison (SIM 8.11P-1) in which INMETRO has participated. The results are compared with the ISO-GUM analytical estimation approach and good agreement was found.
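
    The Monte Carlo approach referred to propagates input uncertainties by repeatedly sampling every input quantity and recomputing the measurand; the spread of the results gives the standard uncertainty. The sketch below applies this to a generic Nernst-type pH relation with invented input values and uncertainties, purely to show the mechanics (it is not the measurement model of the comparison):

      import numpy as np

      rng = np.random.default_rng(42)
      N = 200_000                                # Monte Carlo trials

      # Illustrative inputs: mean and standard uncertainty (not comparison data).
      pH_S = rng.normal(7.000, 0.003, N)         # pH of the reference standard
      E_S  = rng.normal(0.41240, 0.00005, N)     # cell potential, standard (V)
      E_X  = rng.normal(0.40950, 0.00005, N)     # cell potential, sample (V)
      T    = rng.normal(298.15, 0.05, N)         # temperature (K)

      F, R = 96485.332, 8.314462                 # Faraday and gas constants

      # Generic operational measurement model.
      pH_X = pH_S + (E_S - E_X) * F / (R * T * np.log(10.0))

      print(f"pH = {pH_X.mean():.4f}, u(pH) = {pH_X.std(ddof=1):.4f}")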

  11. Application of Geant4 simulation for analysis of soil carbon inelastic neutron scattering measurements.

    PubMed

    Yakubova, Galina; Kavetskiy, Aleksandr; Prior, Stephen A; Torbert, H Allen

    2016-07-01

    Inelastic neutron scattering (INS) was applied to determine soil carbon content. Due to the non-uniform soil carbon depth distribution, the correlation of INS signals with a single soil carbon content parameter is not obvious; however, a proportionality between INS signals and the average carbon weight percent in a ~10 cm layer, for any carbon depth profile, is demonstrated using Monte-Carlo simulation (Geant4). Comparison of INS and dry combustion measurements confirms this conclusion. Thus, INS measurements give the value of this soil carbon parameter.

  12. Simulation and Measurement of Absorbed Dose from 137 Cs Gammas Using a Si Timepix Detector

    NASA Technical Reports Server (NTRS)

    Stoffle, Nicholas; Pinsky, Lawrence; Empl, Anton; Semones, Edward

    2011-01-01

    The TimePix readout chip is a hybrid pixel detector with over 65k independent pixel elements. Each pixel contains its own circuitry for charge collection, counting logic, and readout. When coupled with a Silicon detector layer, the Timepix chip is capable of measuring the charge, and thus energy, deposited in the Silicon. Measurements using a NIST traceable 137Cs gamma source have been made at Johnson Space Center using such a Si Timepix detector, and this data is compared to simulations of energy deposition in the Si layer carried out using FLUKA.

  13. Simulations of an airborne laser absorption spectrometer for atmospheric CO2 measurements

    NASA Astrophysics Data System (ADS)

    Lin, B.; Ismail, S.; Harrison, F. W.; Browell, E. V.; Dobler, J. T.; Refaat, T.; Kooi, S. A.

    2012-12-01

    Atmospheric column amount of carbon dioxide (CO2), a major greenhouse gas of the atmosphere, has significantly increased from a preindustrial value of about 280 parts per million (ppm) to more than 390 ppm at present. Our knowledge about the spatiotemporal change and variability of the greenhouse gas, however, is limited. Thus, a near-term space mission of the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) is crucial to increase our understanding of global sources and sinks of CO2. Currently, NASA Langley Research Center (LaRC) and ITT Exelis are jointly developing and testing an airborne laser absorption spectrometer (LAS) as a prototype instrument for the mission. To assess the space capability of accurate atmospheric CO2 measurements, accurate modeling of the instrument and practical evaluation of space applications are the keys for the success of the ASCENDS mission. This study discusses the simulations of the performance of the airborne instrument and its CO2 measurements. The LAS is a multi-wavelength spectrometer operating on a 1.57 um CO2 absorption line. The Intensity-Modulated Continuous-Wave (IM-CW) approach is implemented in the instrument. To reach accurate CO2 measurements, transmitted signals are monitored internally as reference channels. A model of this kind of instrument includes all major components of the spectrometer, such as modulation generator, fiber amplifier, telescope, detector, transimpedance amplifier, matched filter, and other signal processors. The characteristics of these components are based on actual laboratory tests, product specifications, and general understanding of the functionality of the components. For simulations of atmospheric CO2 measurements, environmental conditions related to surface reflection, atmospheric CO2 and H2O profiles, thin clouds, and aerosol layers, are introduced into the model. Furthermore, all major noise sources such as those from detectors, background radiation, speckle, and

  14. Temperature distribution during RF ablation on ex vivo liver tissue: IR measurements and simulations

    NASA Astrophysics Data System (ADS)

    Macchi, Edoardo Gino; Gallati, Mario; Braschi, Giovanni; Cigada, Alfredo; Comolli, Lorenzo

    2015-05-01

    Radiofrequency thermal ablation is the first therapeutic option for the minimally invasive treatment of liver tumors. This medical procedure employs the Joule heat produced by a RF electromagnetic field to kill tumor cells. The outcome of the procedure is strongly affected by the temperature distribution near the RF applicator; however, the measurement of this distribution, even in ex vivo experiments, is not straightforward, since most traditional local temperature measurement techniques are not well suited, due to both electromagnetic interference and the sensor heat sink effect. Given the importance of knowing the temperature field, in this paper special care was devoted to its measurement, employing both infrared thermal imaging and NTC thermistors. Several RF ablation tests on ex vivo porcine liver tissue were carried out, measuring the space-time evolution of temperature during the procedure (with spatial resolution ≤1 mm) and producing useful data for the design and the calibration of a numerical model. Electro-thermal numerical simulations of the experimental tests were performed using a mathematical model suitable for the heating phase of the procedure (up to 95 °C). The simulation results made it possible to check the physical consistency of the measured data and suggested that a constant thermal conductivity is satisfactory for modeling the temperature evolution during RF ablation.

  15. Temperature distribution during RF ablation on ex vivo liver tissue: IR measurements and simulations

    NASA Astrophysics Data System (ADS)

    Macchi, Edoardo Gino; Gallati, Mario; Braschi, Giovanni; Cigada, Alfredo; Comolli, Lorenzo

    2014-09-01

    Radiofrequency thermal ablation is the first therapeutic option for the minimally invasive treatment of liver tumors. This medical procedure employs the Joule heat produced by a RF electromagnetic field to kill tumor cells. The outcome of the procedure is strongly affected by the temperature distribution near the RF applicator; however, the measurement of this distribution, even in ex vivo experiments, is not straightforward, since most traditional local temperature measurement techniques are not well suited, due to both electromagnetic interference and the sensor heat sink effect. Given the importance of knowing the temperature field, in this paper special care was devoted to its measurement, employing both infrared thermal imaging and NTC thermistors. Several RF ablation tests on ex vivo porcine liver tissue were carried out, measuring the space-time evolution of temperature during the procedure (with spatial resolution ≤1 mm) and producing useful data for the design and the calibration of a numerical model. Electro-thermal numerical simulations of the experimental tests were performed using a mathematical model suitable for the heating phase of the procedure (up to 95 °C). The simulation results made it possible to check the physical consistency of the measured data and suggested that a constant thermal conductivity is satisfactory for modeling the temperature evolution during RF ablation.

  16. SAR measurement due to mobile phone exposure in a simulated biological media.

    PubMed

    Behari, J; Nirala, Jay Prakash

    2012-09-01

    Specific absorption rate (SAR) measurements were carried out for compliance testing of a personal 3G mobile phone. The accuracy of this experimental setup has been checked by comparing the SAR in 10 g of simulated tissue and an arbitrarily shaped box. This has been carried out using a 3G mobile phone at 1718.5 MHz in a medium simulating a brain and muscle phantom. The SAR measurement system consists of a stepper motor to move a monopole E-field probe in two dimensions inside an arbitrarily shaped box. The phantom is filled with appropriate frequency-specific fluids with measured electrical properties (dielectric constant and conductivity) that are close to the average for the gray and white matter of the brain at the frequency of interest (1718.5 MHz). Induced fields are measured using a specially designed monopole probe in the close vicinity of the phone. The probe is immersed in the phantom material. The measured data for the induced fields are used to compute SAR values at various locations with respect to the mobile phone location. It is concluded that these SAR values are position dependent and well below the safety criteria prescribed for human exposure.
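
    The SAR values discussed above are obtained from the induced field through SAR = σ|E|²/ρ, with σ the conductivity and ρ the mass density of the tissue-simulating liquid. A minimal Python sketch using representative (not measured) property values:

      # Representative tissue-simulant properties near 1800 MHz (illustrative only).
      sigma = 1.4        # conductivity, S/m
      rho = 1000.0       # mass density, kg/m^3

      def sar(e_rms, conductivity=sigma, density=rho):
          """Local SAR (W/kg) from the RMS induced electric field (V/m)."""
          return conductivity * e_rms**2 / density

      for e_field in (5.0, 10.0, 20.0):     # hypothetical RMS field values, V/m
          print(f"E = {e_field:5.1f} V/m   SAR = {sar(e_field):.3f} W/kg")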

  17. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In a frequency range below 200 MHz, radiation from cables is often the dominant emission factor. In higher frequency ranges, radiation from PCBs and their housings becomes more relevant. The main sources of this emission are the conducting traces. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need to use large anechoic chambers. Furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be performed. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  18. Quantitative comparisons between experimentally measured 2-D carbon radiation and Monte Carlo impurity (MCI) code simulations

    SciTech Connect

    Evans, T.E.; Leonard, A.W.; West, W.P.; Finkenthal, D.F.; Fenstermacher, M.E.; Porter, G.D.

    1998-08-01

    Experimentally measured carbon line emissions and total radiated power distributions from the DIII-D divertor and Scrape-Off Layer (SOL) are compared to those calculated with the Monte Carlo Impurity (MCI) model. A UEDGE background plasma is used in MCI with the Roth and Garcia-Rosales (RG-R) chemical sputtering model and/or one of six physical sputtering models. While results from these simulations do not reproduce all of the features seen in the experimentally measured radiation patterns, the total radiated power calculated in MCI is in relatively good agreement with that measured by the DIII-D bolometric system when the Smith78 physical sputtering model is coupled to RG-R chemical sputtering in an unaltered UEDGE plasma. Alternatively, MCI simulations done with UEDGE background ion temperatures along the divertor target plates adjusted to better match those measured in the experiment resulted in three physical sputtering models which when coupled to the RG-R model gave a total radiated power that was within 10% of measured value.

  19. Measurement of Primary Ejecta From Normal Incident Hypervelocity Impact on Lunar Regolith Simulant

    NASA Technical Reports Server (NTRS)

    Edwards, David L.; Cooke, William; Moser, Danielle; Swift, Wesley

    2007-01-01

    The National Aeronautics and Space Administration (NASA) continues to make progress toward long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment. A subject for further definition is the lunar primary ejecta environment. The document NASA SP-8013 was developed for the Apollo program and is the latest definition of the primary ejecta environment. There is concern that NASA SP-8013 may over-estimate the lunar primary ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface primary ejecta environment. This paper reports the results of experiments on projectile impact into pumice targets, simulating lunar regolith. The Ames Vertical Gun Range (AVGR) was used to accelerate spherical Pyrex projectiles of 0.29g to velocities ranging between 2.5 km/s and 5.18 km/s. Impact on the pumice target occurred at normal incidence. The ejected particles were detected by thin aluminum foil targets placed around the pumice target in a 0.5 Torr vacuum. A simplistic technique to characterize the ejected particles was formulated. Improvements to this technique will be discussed for implementation in future tests.

  20. Investigation of photospheric temperature gradient variations using limb darkening measurements and simulations

    NASA Astrophysics Data System (ADS)

    Criscuoli, Serena; Foukal, Peter V.

    2016-05-01

    The temperature stratifications of magnetic elements and unmagnetized plasma are different, so that changes of the facular and network filling factor over the cycle modify the average temperature gradient in the photosphere. Such variations have been suggested to explain irradiance measurements obtained by the SIM spectrometers in the visible and infrared spectral ranges. On the other hand, limb darkening measurements show no dependence upon activity level. We investigate the sensitivity of limb darkening to changes in the network area filling factor using a 3-D MHD model of the magnetized photosphere. We find that the expected limb darkening change due to the measured 11-yr variation in filling factor lies outside the formal 99% confidence limit of the limb darkening measurements. This places important constraints on the observational validation of 3-D MHD simulations.
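
    Limb darkening of the kind measured here is often summarized by a simple linear law, I(μ)/I(1) = 1 − u(1 − μ), where μ is the cosine of the heliocentric viewing angle; a change in the photospheric temperature gradient appears as a change in the coefficient u. A short Python sketch of evaluating and fitting such a law (the coefficient and noise level are illustrative, not results of the 3-D MHD model):

      import numpy as np

      def linear_limb_darkening(mu, u):
          """Normalized intensity I(mu)/I(1) for a linear limb-darkening law."""
          return 1.0 - u * (1.0 - mu)

      # Synthetic 'observed' profile from an assumed coefficient plus noise.
      mu = np.linspace(0.1, 1.0, 30)
      u_true = 0.56
      obs = linear_limb_darkening(mu, u_true) + np.random.normal(0.0, 0.002, mu.size)

      # Closed-form least-squares fit of u to (1 - obs) = u * (1 - mu).
      u_fit = np.sum((1.0 - obs) * (1.0 - mu)) / np.sum((1.0 - mu) ** 2)
      print(f"fitted u = {u_fit:.3f} (input {u_true})")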

  1. Simulation of Statistical Fluctuations in the Spin Precession Measurements at RHIC

    SciTech Connect

    Poblaguev, A. A.

    2014-02-25

    Measurements of the driven coherent spin precession Sx(t) = Sx(0) − Sx(1) sin(ωt + φ0) were initiated in RHIC Run13. The expected value of the precession amplitude, Sx(1) ~ 2 × 10⁻⁴, is comparable to the statistical error of a single measurement, so a data fit gives a biased estimate of Sx(1). For a proper statistical interpretation of the results of the several measurements, the statistical fluctuations were studied using Monte Carlo simulation. Preliminary results of the spin precession measurements in RHIC Run13 are presented.

  2. Infiltration and Runoff Measurements on Steep Burned Hillslopes Using a Rainfall Simulator with Variable Rain Intensities

    USGS Publications Warehouse

    Kinner, David A.; Moody, John A.

    2008-01-01

    Multiple rainfall intensities were used in rainfall-simulation experiments designed to investigate the infiltration and runoff from 1-square-meter plots on burned hillslopes covered by an ash layer of varying thickness. The 1-square-meter plots were on north- and south-facing hillslopes in an area burned by the Overland fire northwest of Boulder near Jamestown on the Front Range of Colorado. A single-nozzle, wide-angle, multi-intensity rain simulator was developed to investigate the infiltration and runoff on steep (30- to 40-percent gradient) burned hillslopes covered with ash. The simulated rainfall was evaluated for spatial variability, drop size, and kinetic energy. Fourteen rainfall simulations, at three intensities (about 20 millimeters per hour [mm/h], 35 mm/h, and 50 mm/h), were conducted on four plots. Measurements during and after the simulations included runoff, rainfall, suspended-sediment concentrations, surface ash layer thickness, soil moisture, soil grain size, soil lost on ignition, and plot topography. Runoff discharge reached a steady state within 7 to 26 minutes. Steady infiltration rates with the 50-mm/h application rainfall intensity approached 20-35 mm/h. If these rates are projected to rainfall application intensities used in many studies of burned area runoff production (about 80 mm/h), the steady discharge rates are on the lower end of measurements from other studies. Experiments using multiple rainfall intensities (three) suggest that runoff begins at rainfall intensities around 20 mm/h at the 1-square-meter scale, an observation consistent with a 10-mm/h rainfall intensity threshold needed for runoff initiation that has been reported in the literature.

  3. Monte Carlo simulation of time-dependent, transport-limited fluorescent boundary measurements in frequency domain.

    PubMed

    Pan, Tianshu; Rasmussen, John C; Lee, Jae Hoon; Sevick-Muraca, Eva M

    2007-04-01

    Recently, we have presented and experimentally validated a unique numerical solver of the coupled radiative transfer equations (RTEs) for rapidly computing time-dependent excitation and fluorescent light propagation in small animal tomography. Herein, we present a time-dependent Monte Carlo algorithm to validate the forward RTE solver and investigate the impact of physical parameters upon transport-limited measurements in order to best direct the development of the RTE solver for optical tomography. Experimentally, the Monte Carlo simulations for both transport-limited and diffusion-limited propagation are validated using frequency domain photon migration measurements for 1.0%, 0.5%, and 0.2% intralipid solutions containing 1 microM indocyanine green in a 49 cm3 cylindrical phantom corresponding to the small volume employed in small animal tomography. The comparisons between Monte Carlo simulations and the numerical solutions give mean percent errors in amplitude and phase shift of less than 5.0% and 0.7 degrees, respectively, at excitation and emission wavelengths for varying anisotropic factors, lifetimes, and modulation frequencies. Monte Carlo simulations indicate that the accuracy of the forward model is enhanced using (i) suitable source models of photon delivery, (ii) accurate anisotropic factors, and (iii) accurate acceptance angles of collected photons. Monte Carlo simulations also show that the accuracy of the diffusion approximation in the small phantom depends upon (i) the ratio d(phantom)/l(tr), where d(phantom) is the phantom diameter and l(tr) is the transport mean free path; and (ii) the anisotropic factor of the medium. The Monte Carlo simulations validate and guide the future development of an appropriate RTE solver for deployment in small animal optical tomography.

  4. Effects of inflow velocity profile on two-dimensional hemodynamic analysis by ordinary and ultrasonic-measurement-integrated simulations.

    PubMed

    Kato, Takaumi; Sone, Shusaku; Funamoto, Kenichi; Hayase, Toshiyuki; Kadowaki, Hiroko; Taniguchi, Nobuyuki

    2016-09-01

    Two-dimensional ultrasonic-measurement-integrated (2D-UMI) simulation correctly reproduces hemodynamics even with an inexact inflow velocity distribution. This study aimed to investigate which is superior, a two-dimensional ordinary (2D-O) simulation with an accurate inflow velocity distribution or a 2D-UMI simulation with an inaccurate one. 2D-O and 2D-UMI simulations were performed for blood flow in a carotid artery with four upstream velocity boundary conditions: a velocity profile with backprojected measured Doppler velocities (condition A), and velocity profiles with a measured Doppler velocity distribution, a parabolic one, and a uniform one, magnitude being obtained by inflow velocity estimation (conditions B, C, and D, respectively). The error of Doppler velocity against the measurement data was sensitive to the inflow velocity distribution in the 2D-O simulation, but not in the 2D-UMI simulation with the inflow velocity estimation. Among the results in conditions B, C, and D, the error in the worst 2D-UMI simulation with condition D was 31 % of that in the best 2D-O simulation with condition B, implying the superiority of the 2D-UMI simulation with an inaccurate inflow velocity distribution over the 2D-O simulation with an exact one. Condition A resulted in a larger error than the other conditions in both the 2D-O and 2D-UMI simulations.

  5. Leasing physician office space.

    PubMed

    Murray, Charles

    2009-01-01

    When leasing office space, physicians should determine the effective lease rate (ELR) for each building they are considering before making a selection. The ELR is based on a number of factors, including building quality, building location, basic form of lease agreement, rent escalators and add-on factors in the lease, tenant improvement allowance, method of square footage measurement, quality of building management, and other variables. The ELR enables prospective physician tenants to accurately compare lease rates being quoted by building owners and to make leasing decisions based on objective criteria.
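
    The abstract does not give a formula for the ELR, but a common way to put competing quotes on the same footing is to discount each year's all-in occupancy cost (escalated base rent grossed up by the add-on factor, less the tenant-improvement allowance) and express the present value as a level annual rate. The sketch below follows that convention; the function, its parameters, and the sample numbers are illustrative assumptions, not the author's method.

    ```python
    def effective_lease_rate(base_rent_psf, escalator, addon_factor, ti_allowance_psf,
                             term_years, discount_rate):
        """Level-equivalent annual rent per square foot over the lease term.

        base_rent_psf    : year-1 quoted rent per square foot
        escalator        : annual rent escalation (e.g. 0.03)
        addon_factor     : rentable/usable load factor (e.g. 1.15)
        ti_allowance_psf : tenant-improvement allowance, credited against year 1
        discount_rate    : tenant's cost of capital
        """
        pv_cost, pv_annuity = 0.0, 0.0
        for year in range(1, term_years + 1):
            disc = (1.0 + discount_rate) ** year
            rent = base_rent_psf * (1.0 + escalator) ** (year - 1) * addon_factor
            pv_cost += rent / disc
            pv_annuity += 1.0 / disc
        pv_cost -= ti_allowance_psf / (1.0 + discount_rate)   # allowance offsets year-1 cost
        return pv_cost / pv_annuity

    # Compare two hypothetical buildings on the same basis.
    print(effective_lease_rate(28.0, 0.03, 1.12, 20.0, 7, 0.08))
    print(effective_lease_rate(26.5, 0.04, 1.18, 10.0, 7, 0.08))
    ```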

  6. Simulation vs. Reality: A Comparison of In Silico Distance Predictions with DEER and FRET Measurements

    PubMed Central

    Klose, Daniel; Klare, Johann P.; Grohmann, Dina; Kay, Christopher W. M.; Werner, Finn; Steinhoff, Heinz-Jürgen

    2012-01-01

    Site specific incorporation of molecular probes such as fluorescent- and nitroxide spin-labels into biomolecules, and subsequent analysis by Förster resonance energy transfer (FRET) and double electron-electron resonance (DEER) can elucidate the distance and distance-changes between the probes. However, the probes have an intrinsic conformational flexibility due to the linker by which they are conjugated to the biomolecule. This property minimizes the influence of the label side chain on the structure of the target molecule, but complicates the direct correlation of the experimental inter-label distances with the macromolecular structure or changes thereof. Simulation methods that account for the conformational flexibility and orientation of the probe(s) can be helpful in overcoming this problem. We performed distance measurements using FRET and DEER and explored different simulation techniques to predict inter-label distances using the Rpo4/7 stalk module of the M. jannaschii RNA polymerase. This is a suitable model system because it is rigid and a high-resolution X-ray structure is available. The conformations of the fluorescent labels and nitroxide spin labels on Rpo4/7 were modeled using in vacuo molecular dynamics simulations (MD) and a stochastic Monte Carlo sampling approach. For the nitroxide probes we also performed MD simulations with explicit water and carried out a rotamer library analysis. Our results show that the Monte Carlo simulations are in better agreement with experiments than the MD simulations and the rotamer library approach results in plausible distance predictions. Because the latter is the least computationally demanding of the methods we have explored, and is readily available to many researchers, it prevails as the method of choice for the interpretation of DEER distance distributions. PMID:22761805

  7. Comparing Measurements, Simulations, and Forecasts of Snow Water Equivalent Across the Great Lakes Basin

    NASA Astrophysics Data System (ADS)

    Bolinger, R. A.; Olheiser, C.; Krumwiede, B.; Gronewold, A.

    2014-12-01

    Basin-scale estimates of the water budget of the North American Great Lakes are based on a geographically broad (and, in some areas, relatively sparse) monitoring network that spans the United States-Canadian international border, and a limited ensemble of models. Of the various components of the Great Lakes water budget, snow water equivalent (and its contribution to runoff) represents one that is estimated by a regional rainfall-runoff simulation model (the NOAA large basin runoff model, or LBRM) and by a data assimilation model (via the NOAA National Operational Hydrological Remote Sensing Center Snow Data Assimilation System). Importantly, both products are employed in regional operational water budget and water level forecasts, including those developed by the US Army Corps of Engineers, the New York Power Authority, and Ontario Power Generation. While these forecasts are periodically evaluated for skill based on a comparison between water level projections and observations, we know of no study that has either compared LBRM simulations of SWE to corresponding NOHRSC estimates, or explored the potential benefits of assimilating NOHRSC estimates into the LBRM and propagating those benefits into water level-based management decisions. To address this gap in research and operational knowledge, we compare simulated and "observed" SWE for select sub-basins in the Great Lakes region. We refer to the NOHRSC-SNODAS product as an "observed" estimate of SWE because it combines airborne and surface measurements with satellite derived snow information and model simulations. Our findings indicate general agreement between LBRM-simulated and observation-based estimates of SWE, particularly with respect to the timing of most individual events and the timing of peak SWE. However, we find discontinuities in the timing and duration of snowmelt, the magnitude of the peak runoff, and the overall cumulative seasonal total runoff. Finally, we propagate these estimates of SWE into

  8. Measuring Hugoniot, reshock and release properties of natural snow and simulants

    SciTech Connect

    Furnish, M.D.; Boslough, M.B.

    1996-02-01

    We describe methods for measuring dynamical properties for underdense materials (e.g. snow) over a stress range of roughly 0.1–4 GPa. Particular material properties measured by the present methods include Hugoniot states, reshock states and release paths. The underdense materials may pose three primary experimental difficulties. Snow in particular is perishable; it can melt or sublime during storage, preparation and testing. Many of these materials are brittle and crushable; they cannot withstand such treatment as traditional machining or launch in a gun system. Finally, with increasing porosity the calculated Hugoniot density becomes rapidly more sensitive to errors in wave time-of-arrival measurements. A family of 36 impact tests was conducted on snow and six proposed snow simulants at Sandia, yielding reliable Hugoniot states, somewhat less reliable reshock states, and limited release property information. Natural snow of density ~0.5 g/cm³, a lightweight concrete of density ~0.7 g/cm³ and a "snow-matching grout" of density ~0.28 g/cm³ were the subjects of the majority of the tests. Hydrocode calculations using CTH were performed to elucidate sensitivities to edge effects as well as to assess the applicability of SESAME 2-state models to these materials. Simulations modeling snow as porous water provided good agreement for Hugoniot stresses to 1 GPa; a porous ice model was preferred for higher Hugoniot stresses. On the other hand, simulations of tests on snow, lightweight concrete and the snow-matching grout based on (respectively) porous ice, tuff and polyethylene showed a too-stiff response. Other methods for characterizing these materials are discussed. Based on the Hugoniot properties, the snow-matching grout appears to be a better snow simulant than does the lightweight concrete.
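
    The sensitivity to timing errors noted above follows directly from mass conservation across a steady shock, rho1 = rho0*Us/(Us - up): at high porosity Us - up is small, so a small error in the measured transit time (hence in Us) produces a large error in rho1. A short worked example with placeholder numbers:

    ```python
    def hugoniot_density(rho0, us, up):
        """Mass conservation across a steady shock: rho1 = rho0 * Us / (Us - up)."""
        return rho0 * us / (us - up)

    def density_vs_timing_error(rho0, thickness_mm, transit_us, up_km_s, dt_ns):
        """Shock velocity from a transit-time measurement, then density, with +/- dt error."""
        out = {}
        for label, t in [("nominal", transit_us),
                         ("early", transit_us - dt_ns * 1e-3),
                         ("late", transit_us + dt_ns * 1e-3)]:
            us = thickness_mm / t                     # mm per microsecond = km/s
            out[label] = hugoniot_density(rho0, us, up_km_s)
        return out

    # Illustrative low-density sample: rho0 = 0.5 g/cm^3, 4 mm thick, ~2 km/s shock,
    # 1.5 km/s particle velocity, and a 20 ns uncertainty in the arrival time.
    for label, rho in density_vs_timing_error(0.5, 4.0, 2.0, 1.5, 20.0).items():
        print(f"{label:8s} rho1 = {rho:.3f} g/cm^3")
    ```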

  9. Wind induced errors on solid precipitation measurements: an evaluation using time-dependent turbulence simulations

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Mireille Thériault, Julie

    2014-05-01

    Among the different environmental sources of error for ground-based solid precipitation measurements, wind is mainly responsible for a large reduction in catching performance. This is due to the aerodynamic response of the gauge, which disturbs the originally undisturbed airflow and deforms the snowflake trajectories. Composite gauge/wind-shield measuring configurations improve the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T200B vibrating-wire gauge equipped with a single Alter shield. This is a common measuring system for solid precipitation, and it constitutes the R3 reference system in the ongoing WMO Solid Precipitation InterComparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model for the calculation of the snowflake trajectories, building on the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results will be illustrated in comparative form between the different methodologies adopted and the existing in-field CE evaluations based on double-shield reference gauges.

  10. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    SciTech Connect

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca; Oliva, Piernicola; Stumbo, Simone; Delogu, Pasquale; Zanette, Irene; Stefanini, Arnaldo

    2008-11-01

    Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray sources based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.

  11. Simulating soil-water movement through loess-veneered landscapes using nonconsilient saturated hydraulic conductivity measurements

    USGS Publications Warehouse

    Williamson, Tanja N.; Lee, Brad D.; Schoeneberger, Philip J.; McCauley, W. M.; Indorante, Samuel J.; Owens, Phillip R.

    2014-01-01

    Soil Survey Geographic Database (SSURGO) data are available for the entire United States, so they are incorporated in many regional and national models of hydrology and environmental management. However, SSURGO does not provide an understanding of spatial variability and only includes saturated hydraulic conductivity (Ksat) values estimated from particle size analysis (PSA). This study showed model sensitivity to the substitution of SSURGO data with locally described soil properties or alternate methods of measuring Ksat. Incorporation of these different soil data sets significantly changed the results of hydrologic modeling as a consequence of the amount of space available to store soil water and how this soil water is moved downslope. Locally described soil profiles indicated a difference in Ksat when measured in the field vs. being estimated from PSA. This, in turn, caused a difference in which soil layers were incorporated in the hydrologic simulations using TOPMODEL, ultimately affecting how soil water storage was simulated. Simulations of free-flowing soil water, the amount of water traveling through pores too large to retain water against gravity, were compared with field observations of water in wells at five slope positions along a catena. Comparison of the simulated data with the observed data showed that the ability to model the range of conditions observed in the field varied as a function of the three soil data sets (SSURGO and local field descriptions using PSA-derived Ksat or field-measured Ksat) and that comparisons of absolute values of soil water storage are not valid if different characterizations of soil properties are used.

  12. Modeling of the radiation measurement device FALCON 5000 by MCNPX: simulated efficiency for the on-site measurement

    NASA Astrophysics Data System (ADS)

    Courageot, Estelle; Duong, Emilie; Gaillard-Lecanu, Emmanuelle; Jahan, Sylvie

    2014-06-01

    In order to evaluate the efficiency of the new spectrometry detector available in the STEP unit of EDF R&D, a numerical model of the FALCON 5000 radiation measurement system, based on Ge technology, has been constructed with the MCNPX computer code. Because the source term and the irradiation geometry in our facilities are not known, several irradiation cases involving a radioactive source were simulated, with the relevant parameters varied and studied. For different energies, distances, and angles of incidence, simulations have been performed in order to assess the average response of the detector modeled in MCNPX. From all these results, a global efficiency curve has been established that takes into account the energy, the distance between the source and the detector, and the angle of incidence of the photon. The modeling of the detector, its validation, and the efficiency curve are presented in this paper.

  13. Advances in the simulation and automated measurement of well-sorted granular material: 2. Direct measures of particle properties

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.

    2012-01-01

    1. In this, the second of a pair of papers on the structure of well-sorted natural granular material (sediment), new methods are described for automated measurements from images of sediment, of: 1) particle-size standard deviation (arithmetic sorting) with and without apparent void fraction; and 2) mean particle size in material with void fraction. A variety of simulations of granular material are used for testing purposes, in addition to images of natural sediment. Simulations are also used to establish that the effects on automated particle sizing of grains visible through the interstices of the grains at the very surface of a granular material continue to a depth of approximately 4 grain diameters and that this is independent of mean particle size. Ensemble root-mean squared error between observed and estimated arithmetic sorting coefficients for 262 images of natural silts, sands and gravels (drawn from 8 populations) is 31%, which reduces to 27% if adjusted for bias (slope correction between observed and estimated values). These methods allow non-intrusive and fully automated measurements of surfaces of unconsolidated granular material. With no tunable parameters or empirically derived coefficients, they should be broadly universal in appropriate applications. However, empirical corrections may need to be applied for the most accurate results. Finally, analytical formulas are derived for the one-step pore-particle transition probability matrix, estimated from the image's autocorrelogram, from which void fraction of a section of granular material can be estimated directly. This model gives excellent predictions of bulk void fraction yet imperfect predictions of pore-particle transitions.

  14. Measurements and simulations of the near-surface composition of evaporating ethanol-water droplets.

    PubMed

    Homer, Christopher J; Jiang, Xingmao; Ward, Timothy L; Brinker, C Jeffrey; Reid, Jonathan P

    2009-09-28

    The evolving composition of evaporating ethanol-water droplets (initially 32.6 or 45.3 microm radius) is probed by stimulated Raman scattering over the period 0.2 to 3 ms following droplet generation and with a surrounding nitrogen gas pressure in the range 10 to 100 kPa. The dependence of the evaporation rate on the relative humidity of the surrounding gas phase is also reported. The measured data are compared with both a quasi-steady state model and with numerical simulations of the evaporation process. Results from the numerical simulations are shown to agree closely with the measurements when the stimulated signal is assumed to arise from an outer shell with a probe depth of 2.9+/-0.4% of the droplet radius, consistent with a previous determination. Further, the time-dependent measurements are shown to be sensitive to the development of concentration gradients within evaporating droplets. This represents the first direct measurement of the spatial gradients in composition that arise during the evaporation of aerosol droplets and allows the influence of liquid phase diffusion within the condensed phase on droplet evaporation to be examined.
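
    For context, the quasi-steady-state picture referred to above reduces, for a single volatile component and a compositionally uniform droplet, to the classical radius-squared law r^2(t) = r0^2 - kappa*t. The sketch below only illustrates that limiting behaviour over the probed time window; the evaporation constant is an arbitrary placeholder, and the paper's point is precisely that full numerical simulation with internal concentration gradients is needed beyond this limit.

    ```python
    import numpy as np

    def radius_squared_law(r0_um, kappa_um2_per_ms, t_ms):
        """Quasi-steady evaporation of a uniform droplet: r^2(t) = r0^2 - kappa * t."""
        r_squared = r0_um ** 2 - kappa_um2_per_ms * np.asarray(t_ms, dtype=float)
        return np.sqrt(np.clip(r_squared, 0.0, None))

    t = np.linspace(0.2, 3.0, 8)                    # the probed window, in ms
    print(radius_squared_law(32.6, 40.0, t))        # kappa = 40 um^2/ms is a placeholder
    ```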

  15. Measurement and simulation of a Compton suppression system for safeguards application

    NASA Astrophysics Data System (ADS)

    Lee, Seung Kyu; Seo, Hee; Won, Byung-Hee; Lee, Chaehun; Shin, Hee-Sung; Na, Sang-Ho; Song, Dae-Yong; Kim, Ho-Dong; Park, Geun-Il; Park, Se-Hwan

    2015-11-01

    Plutonium (Pu) contents in spent nuclear fuels, recovered uranium (U) or uranium/transuranium (U/TRU) products must be measured in order to secure the safeguardability of a pyroprocessing facility. Self-induced X-ray fluorescence (XRF) and gamma-ray spectroscopy are useful techniques for determining Pu-to-U ratios and Pu isotope ratios of spent fuel. Photon measurements of spent nuclear fuel using high-resolution spectrometers such as high-purity germanium (HPGe) detectors show a large continuum background in the low-energy region, which is due in large part to Compton scattering of energetic gamma rays. This paper proposes a Compton suppression system for reducing the Compton continuum background. In the present study, the system was configured with an HPGe main detector and a BGO (bismuth germanate: Bi4Ge3O12) guard detector. The system performance for gamma-ray measurement and XRF was evaluated by means of Monte Carlo simulations and measurements of a radiation source. The Monte Carlo N-Particle eXtended (MCNPX) simulations were performed using the same geometry as the experiments and, for accurate results, considered the production of secondary electrons and photons. As a performance test of the Compton suppression system, the peak-to-Compton ratio, a figure of merit for evaluating gamma-ray detection, was enhanced by a factor of three or more when the Compton suppression system was used.

  16. Badge Office Process Analysis

    SciTech Connect

    Haurykiewicz, John Paul; Dinehart, Timothy Grant; Parker, Robert Young

    2016-05-12

    The purpose of this process analysis was to analyze the Badge Offices’ current processes from a systems perspective and consider ways of pursuing objectives set forth by SEC-PS, namely increased customer flow (throughput) and reduced customer wait times. Information for the analysis was gathered for the project primarily through Badge Office Subject Matter Experts (SMEs), and in-person observation of prevailing processes. Using the information gathered, a process simulation model was constructed to represent current operations and allow assessment of potential process changes relative to factors mentioned previously. The overall purpose of the analysis was to provide SEC-PS management with information and recommendations to serve as a basis for additional focused study and areas for potential process improvements in the future.
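
    The report does not describe its simulation model in detail; as a rough illustration of the kind of single-queue, multi-server model typically built for throughput and wait-time questions, here is a minimal discrete-event sketch with exponential arrivals and service. The arrival rate, service time, and staffing levels are placeholders, not Badge Office data.

    ```python
    import heapq
    import random

    def simulate_badge_office(n_servers, arrival_rate_per_h, mean_service_min,
                              hours=8.0, seed=1):
        """Single queue feeding n_servers clerks; exponential arrivals and service times.
        Returns (customers served, average wait in minutes)."""
        random.seed(seed)
        t, horizon = 0.0, hours * 60.0
        arrivals = []
        while t < horizon:                          # pre-generate Poisson arrival times
            t += random.expovariate(arrival_rate_per_h / 60.0)
            arrivals.append(t)
        server_free_at = [0.0] * n_servers          # min-heap of server availability times
        heapq.heapify(server_free_at)
        waits = []
        for arrive in arrivals:                     # first-come, first-served
            free_at = heapq.heappop(server_free_at)
            start = max(arrive, free_at)
            waits.append(start - arrive)
            heapq.heappush(server_free_at,
                           start + random.expovariate(1.0 / mean_service_min))
        return len(waits), sum(waits) / len(waits)

    for clerks in (2, 3, 4):
        served, wait = simulate_badge_office(clerks, arrival_rate_per_h=20,
                                             mean_service_min=7.5)
        print(f"{clerks} clerks: {served} customers, mean wait {wait:.1f} min")
    ```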

  17. Comprehensive testing to measure the response of butyl rubber to Hanford tank waste simulant

    SciTech Connect

    NIGREY,PAUL J.

    2000-05-01

    This report presents the findings of the Chemical Compatibility Program developed to evaluate plastic packaging components that may be incorporated in packaging mixed-waste forms for transportation. Consistent with the methodology outlined in this report, the authors performed the second phase of this experimental program to determine the effects of simulant Hanford tank mixed wastes on packaging seal materials. That effort involved the comprehensive testing of five plastic liner materials in an aqueous mixed-waste simulant. The testing protocol involved exposing the materials to ~143, 286, 571, and 3,670 krad of gamma radiation and was followed by 7-, 14-, 28-, 180-day exposures to the waste simulant at 18, 50, and 60 C. Butyl rubber samples subjected to the same protocol were then evaluated by measuring seven material properties: specific gravity, dimensional changes, mass changes, hardness, compression set, vapor transport rates, and tensile properties. From the analyses, they determined that butyl rubber has relatively good resistance to radiation, this simulant, and a combination of these factors. These results suggest that butyl rubber is a relatively good seal material to withstand aqueous mixed wastes having similar composition to the one used in this study.

  18. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    SciTech Connect

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation of the paper, the charge distribution generated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.
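
    For readers unfamiliar with the deconvolution step being discussed, the sketch below shows a generic frequency-domain (Wiener-regularized) deconvolution on synthetic data: a narrow charge layer blurred by a Gaussian system response is recovered by regularized division of spectra. It only illustrates the operation; it is not the authors' simulation model or the direct method.

    ```python
    import numpy as np

    def wiener_deconvolve(measured, impulse_response, noise_to_signal=1e-2):
        """Recover an input waveform from measured = input (*) impulse_response,
        regularizing the spectral division to avoid amplifying noise."""
        n = len(measured)
        H = np.fft.rfft(impulse_response, n)
        Y = np.fft.rfft(measured, n)
        X = Y * np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        return np.fft.irfft(X, n)

    # Synthetic example: a narrow charge layer blurred by a Gaussian system response.
    t = np.arange(512)
    true_profile = np.exp(-0.5 * ((t - 200) / 3.0) ** 2)
    response = np.exp(-0.5 * ((t - 20) / 8.0) ** 2)
    measured = np.convolve(true_profile, response)[:512]
    measured += np.random.default_rng(0).normal(0.0, 0.01, 512)
    recovered = wiener_deconvolve(measured, response)
    print("peak location, true vs recovered:", true_profile.argmax(), recovered.argmax())
    ```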

  19. The Effect of Peripheral Compression on Syllable Perception Measured with a Hearing Impairment Simulator.

    PubMed

    Matsui, Toshie; Irino, Toshio; Nagae, Misaki; Kawahara, Hideki; Patterson, Roy D

    2016-01-01

    Hearing impaired (HI) people often have difficulty understanding speech in multi-speaker or noisy environments. With HI listeners, however, it is often difficult to specify which stage, or stages, of auditory processing are responsible for the deficit. There might also be cognitive problems associated with age. In this paper, a HI simulator, based on the dynamic, compressive gammachirp (dcGC) filterbank, was used to measure the effect of a loss of compression on syllable recognition. The HI simulator can counteract the cochlear compression in normal hearing (NH) listeners and, thereby, isolate the deficit associated with a loss of compression in speech perception. Listeners were required to identify the second syllable in a three-syllable "nonsense word", and between trials, the relative level of the second syllable was varied, or the level of the entire sequence was varied. The difference between the Speech Reception Threshold (SRT) in these two conditions reveals the effect of compression on speech perception. The HI simulator adjusted a NH listener's compression to that of the "average 80-year old" with either normal compression or complete loss of compression. A reference condition was included where the HI simulator applied a simple 30-dB reduction in stimulus level. The results show that the loss of compression has its largest effect on recognition when the second syllable is attenuated relative to the first and third syllables. This is probably because the internal level of the second syllable is attenuated proportionately more when there is a loss of compression.

  20. Dose evaluation of selective collimation effect in cephalography by measurement and Monte Carlo simulation.

    PubMed

    Lee, Boram; Shin, Gwisoon; Kang, Sunjung; Shin, Boram; Back, Ilhong; Park, Hyok; Park, Changseo; Lee, Jeongwoo; Lee, Wonho; Choi, Jonghak; Park, Ryeonghwang; Kim, Youhyun

    2012-01-01

    Recently, Monte Carlo simulations have been increasingly applied to physics phenomena, patient dose, and quality assurance of radiation systems. The objective of this study was to use Monte Carlo simulation and measurement to verify dose and dose reduction in cephalography. The collimator was constructed from 3-mm-thick lead plate and attached to the tube head to remove regions of disinterest from the radiation field. A digital phantom patient was constructed to evaluate patient dose. In addition, detectors of pixel size 1×1 cm² and 0.1×0.1 cm² were constructed to check the collimator location. The effective dose according to International Commission on Radiological Protection 103 was calculated with and without collimation. The effective doses for simulation with and without collimation were 5.09 and 11.32 µSv, respectively. The results show that selective collimation reduced the field area by 61.7% and the effective dose by 55%. The Monte Carlo simulation is a good evaluation tool for patient dose.

  1. Comparisons of dense-plasma-focus kinetic simulations with experimental measurements

    SciTech Connect

    Schmidt, A.; Link, A.; Welch, D.; Ellsworth, J.; Falabella, S.; Tang, V.

    2014-06-01

    Dense-plasma-focus (DPF) Z-pinch devices are sources of copious high-energy electrons and ions, x rays, and neutrons. The mechanisms through which these physically simple devices generate such high-energy beams in a relatively short distance are not fully understood and past optimization efforts of these devices have been largely empirical. Previously we reported on fully kinetic simulations of a DPF and compared them with hybrid and fluid simulations of the same device. Here we present detailed comparisons between fully kinetic simulations and experimental data on a 1.2 kJ DPF with two electrode geometries, including neutron yield and ion beam energy distributions. A more intensive third calculation is presented which examines the effects of a fully detailed pulsed power driver model. We also compare simulated electromagnetic fluctuations with direct measurement of radiofrequency electromagnetic fluctuations in a DPF plasma. These comparisons indicate that the fully kinetic model captures the essential physics of these plasmas with high fidelity, and provide further evidence that anomalous resistivity in the plasma arises due to a kinetic instability near the lower hybrid frequency.

  2. Comprehensive testing to measure the response of fluorocarbon rubber (FKM) to Hanford tank waste simulant

    SciTech Connect

    NIGREY,PAUL J.; BOLTON,DENNIS L.

    2000-02-01

    This report presents the findings of the Chemical Compatibility Program developed to evaluate plastic packaging components that may be incorporated in packaging mixed-waste forms for transportation. Consistent with the methodology outlined in this report, the authors performed the second phase of this experimental program to determine the effects of simulant Hanford tank mixed wastes on packaging seal materials. That effort involved the comprehensive testing of five plastic liner materials in an aqueous mixed-waste simulant. The testing protocol involved exposing the materials to ~143, 286, 571, and 3,670 krad of gamma radiation and was followed by 7-, 14-, 28-, 180-day exposures to the waste simulant at 18, 50, and 60 C. Fluorocarbon (FKM) rubber samples subjected to the same protocol were then evaluated by measuring seven material properties: specific gravity, dimensional changes, mass changes, hardness, compression set, vapor transport rates, and tensile properties. From the analyses, they determined that FKM rubber is not a good seal material to withstand aqueous mixed wastes having similar composition to the one used in this study. They have determined that FKM rubber has limited chemical durability after exposure to gamma radiation followed by exposure to the Hanford tank simulant mixed waste at elevated temperatures above 18 C.

  3. Validation of BOUT++ ELM simulation by Comparison with ECEI Measurements in the KSTAR tokamak

    NASA Astrophysics Data System (ADS)

    Kim, Minwoo; Lee, Jaehyun; Choi, Minjun; Yun, Gunsu; Xu, X. Q.; Lee, Woochang; Park, Hyeon; Domier, C. W.; Luhmann, N. C., Jr.; Kstar Team

    2013-10-01

    Details of ELM dynamics have been measured in 2D using an electron cyclotron emission imaging (ECEI) diagnostic in the KSTAR tokamak. The observed ELM dynamics show complex evolution stages including linear growth, saturation, changes in mode number and rotation velocity, and a localized crash. We studied the mode structure of the observed ELMs in the linear growth phase using 3-field BOUT++ simulations. The toroidal mode number (n) of the ELMs, which was experimentally determined by an array of toroidal Mirnov coils, was fixed in the simulation. On the other hand, the pressure profile was adjusted to make the linear growth rate finite at the given n number. For direct comparison with the observed images, the simulation results were converted to synthetic ECEI images by taking into account instrumental broadening, intrinsic ECE broadening in the pedestal region, and system noise. The synthetic images matched the observations qualitatively well. As a next step, a simulation study in the linear phase is planned for a self-consistent equilibrium including the bootstrap current. Work supported by NRF Korea under contract no. 2013035905 and US DoE under contract no. DE-FG-02-99ER54531.

  4. Comparison and analysis of aircraft measurements and mesoscale atmospheric chemistry model simulations of tropospheric ozone

    NASA Technical Reports Server (NTRS)

    Pleim, Jonathan E.; Ching, Jason K. S.

    1994-01-01

    The Regional Acid Deposition Model (RADM) has been applied to several of the field experiments which were part of the Acid Models Operational and Diagnostic Evaluation Study (Acid MODES). The experiment of particular interest with regard to ozone photochemistry involved horizontal zig-zag flight patterns (ZIPPER) over an area from the eastern Ohio River valley to the Adirondacks of New York. Model simulations by both the standard-resolution RADM (delta x = 80 km) and the nested-grid RADM (delta x = 26.7 km) compare well to measurements in the low-emission regions in central Pennsylvania and upstate New York, but underestimate in the high-emission upper Ohio River valley. The nested simulation does considerably better, however, than the coarse-grid simulation in terms of horizontal pattern and concentration magnitudes. Analysis of NO(x) and HO(x) concentrations and photochemical production rates of ozone shows that the model's response to large point-source emissions is very unsystematic both spatially and temporally. This is due to the model's inability to realistically simulate the small-scale (subgrid) gradients in precursor concentrations in and around large point-source plumes.

  5. A simulation of the measurement of electrical conductivity in randomly generated two-phase rocks.

    NASA Astrophysics Data System (ADS)

    Mandolesi, Eric; Moorkamp, Max; Jones, Alan G.

    2014-05-01

    Geological models of the subsurface require detailed data, often unavailable from direct observation or well logs. Hence imaging the subsurface relies on models obtained by interpretation of geophysical data. Several electromagnetic (EM) geophysical methods focus on the EM properties of rocks and sediments to determine a reliable image of the subsurface, while the same electromagnetic properties are directly measured in laboratories. Often these laboratory measurements return equivocal results that are difficult to reconcile with field observations. Recently, different numerical approaches have been investigated in order to understand the effects of the geometry and continuity of interconnected pathways of conductors on EM field measurements, often restricting the studies to direct current (DC) sources. Bearing in mind the time-varying nature of the natural electromagnetic sources that play a role in field measurements, we numerically simulate the effects of such EM sources on the conductivity measured on the surface of a randomly generated three-dimensional body embedded in a uniform host by using electromagnetic induction equations, thus simulating a magnetotelluric (MT) survey. A key point in such a simulation is the scalability of the problem: the deeper the target, the longer the period of the EM source that is needed. On the other hand, a long-period signal ignores small heterogeneous conductors in the bulk of the target material, averaging the different conductivities into a median value. Since most real rocks are poor conductors, we have modeled a two-phase mixture of rock and interconnected conductive elements (representing melts, saline fluids, sulphidic, carbonitic, or metallic sediments, etc.), randomly generated within the background host. We have compared the results from the simulated measurements with the target rock embedded at different depths with the electrical conductivity predicted by both Hashin-Shtrikman (HS) bounds and an updated multi-phase Archie's law.
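
    For reference, the two-phase Hashin-Shtrikman bounds mentioned above have a closed form for an isotropic 3-D mixture; the sketch below evaluates them for an illustrative resistive host with a small fraction of a conductive phase (the conductivities and fraction are placeholders, not values from the study).

    ```python
    def hashin_shtrikman_bounds(sigma_matrix, sigma_inclusion, inclusion_fraction):
        """Lower and upper HS bounds on the effective conductivity of an isotropic
        two-phase mixture in 3-D, assuming sigma_matrix < sigma_inclusion."""
        f2 = inclusion_fraction          # conductive phase
        f1 = 1.0 - f2                    # resistive host
        s1, s2 = sigma_matrix, sigma_inclusion
        lower = s1 + f2 / (1.0 / (s2 - s1) + f1 / (3.0 * s1))
        upper = s2 + f1 / (1.0 / (s1 - s2) + f2 / (3.0 * s2))
        return lower, upper

    # Illustrative: resistive rock (1e-3 S/m) with 5% of a conductive phase (10 S/m).
    lo, hi = hashin_shtrikman_bounds(1e-3, 10.0, 0.05)
    print(f"HS- = {lo:.3e} S/m, HS+ = {hi:.3e} S/m")   # bounds span orders of magnitude
    ```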

  6. Uncertainty analysis of numerical model simulations and HFR measurements during high energy events

    NASA Astrophysics Data System (ADS)

    Donncha, Fearghal O.; Ragnoli, Emanuele; Suits, Frank; Updyke, Teresa; Roarty, Hugh

    2013-04-01

    The identification and decomposition of sensor and model shortcomings is a fundamental component of any coastal monitoring and predictive system. In this research, numerical model simulations are combined with high-frequency radar (HFR) measurements to provide insights into the statistical accuracy of the remote sensing unit. A combination of classical tidal analysis and quantitative measures of correlation evaluates the performance of both across the bay. A network of high frequency radars is deployed within the Chesapeake study site, on the East coast of the United States, as a backbone component of the Integrated Ocean Observing System (IOOS). This system provides real-time synoptic measurements of surface currents in the zonal and meridional direction at hourly intervals in areas where at least two stations overlap, and radial components elsewhere. In conjunction with this, numerical simulations using EFDC (Environmental Fluid Dynamics Code), an advanced three-dimensional model, provide additional details on flows, encompassing both surface dynamics and volumetric transports, while eliminating certain fundamental errors inherent in the HFR system such as geometric dilution of precision (GDOP) and range dependencies. The aim of this research is to estimate the uncertainty of both datasets, allowing for a degree of inaccuracy in each. The analysis focuses on comparisons of both the vector and radial components of the flows returned by the HFR relative to numerical predictions. The analysis provides insight into the reported accuracy of both the raw radial data and the post-processed vector current data computed from combining the radial data. Of interest is any loss of accuracy due to this post-processing. Linear regression techniques decompose the surface currents based on dominant flow processes (tide and wind); statistical analysis and cross-correlation techniques measure agreement between the processed signal and dominant forcing parameters. The tidal signal
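
    A minimal sketch of the harmonic decomposition step described above: sine/cosine pairs at known tidal frequencies (here only M2 and K1) are fitted to an hourly current record by linear least squares, and the residual is treated as the non-tidal (e.g., wind-driven) part. The record below is synthetic; this is not the HFR or EFDC processing chain.

    ```python
    import numpy as np

    TIDAL_PERIODS_H = {"M2": 12.4206, "K1": 23.9345}   # principal constituents (hours)

    def tidal_fit(t_hours, series, periods=TIDAL_PERIODS_H):
        """Least-squares harmonic fit; returns (fitted tidal signal, residual, amplitudes)."""
        cols = [np.ones_like(t_hours)]
        for period in periods.values():
            w = 2.0 * np.pi / period
            cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
        design = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(design, series, rcond=None)
        fitted = design @ coef
        amps = {name: np.hypot(coef[1 + 2 * i], coef[2 + 2 * i])
                for i, name in enumerate(periods)}
        return fitted, series - fitted, amps

    # Synthetic hourly surface-current record: M2 tide plus slowly varying wind-driven flow.
    t = np.arange(0.0, 24.0 * 30, 1.0)
    rng = np.random.default_rng(2)
    u = 0.30 * np.cos(2 * np.pi * t / 12.4206) + 0.10 * np.sin(2 * np.pi * t / 200.0)
    u += rng.normal(0.0, 0.02, t.size)
    _, residual, amplitudes = tidal_fit(t, u)
    print(amplitudes)            # the M2 amplitude should come back close to 0.30
    ```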

  7. Surface force measurements and simulations of mussel-derived peptide adhesives on wet organic surfaces.

    PubMed

    Levine, Zachary A; Rapp, Michael V; Wei, Wei; Mullen, Ryan Gotchy; Wu, Chun; Zerze, Gül H; Mittal, Jeetain; Waite, J Herbert; Israelachvili, Jacob N; Shea, Joan-Emma

    2016-04-19

    Translating sticky biological molecules-such as mussel foot proteins (MFPs)-into synthetic, cost-effective underwater adhesives with adjustable nano- and macroscale characteristics requires an intimate understanding of the glue's molecular interactions. To help facilitate the next generation of aqueous adhesives, we performed a combination of surface forces apparatus (SFA) measurements and replica-exchange molecular dynamics (REMD) simulations on a synthetic, easy to prepare, Dopa-containing peptide (MFP-3s peptide), which adheres to organic surfaces just as effectively as its wild-type protein analog. Experiments and simulations both show significant differences in peptide adsorption on CH3-terminated (hydrophobic) and OH-terminated (hydrophilic) self-assembled monolayers (SAMs), where adsorption is strongest on hydrophobic SAMs because of orientationally specific interactions with Dopa. Additional umbrella-sampling simulations yield free-energy profiles that quantitatively agree with SFA measurements and are used to extract the adhesive properties of individual amino acids within the context of MFP-3s peptide adhesion, revealing a delicate balance between van der Waals, hydrophobic, and electrostatic forces.

  8. Surface force measurements and simulations of mussel-derived peptide adhesives on wet organic surfaces

    PubMed Central

    Levine, Zachary A.; Rapp, Michael V.; Wei, Wei; Mullen, Ryan Gotchy; Wu, Chun; Zerze, Gül H.; Mittal, Jeetain; Waite, J. Herbert; Israelachvili, Jacob N.; Shea, Joan-Emma

    2016-01-01

    Translating sticky biological molecules—such as mussel foot proteins (MFPs)—into synthetic, cost-effective underwater adhesives with adjustable nano- and macroscale characteristics requires an intimate understanding of the glue’s molecular interactions. To help facilitate the next generation of aqueous adhesives, we performed a combination of surface forces apparatus (SFA) measurements and replica-exchange molecular dynamics (REMD) simulations on a synthetic, easy to prepare, Dopa-containing peptide (MFP-3s peptide), which adheres to organic surfaces just as effectively as its wild-type protein analog. Experiments and simulations both show significant differences in peptide adsorption on CH3-terminated (hydrophobic) and OH-terminated (hydrophilic) self-assembled monolayers (SAMs), where adsorption is strongest on hydrophobic SAMs because of orientationally specific interactions with Dopa. Additional umbrella-sampling simulations yield free-energy profiles that quantitatively agree with SFA measurements and are used to extract the adhesive properties of individual amino acids within the context of MFP-3s peptide adhesion, revealing a delicate balance between van der Waals, hydrophobic, and electrostatic forces. PMID:27036002

  9. Noise levels from a model turbofan engine with simulated noise control measures applied

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Woodward, Richard P.

    1993-01-01

    A study of estimated full-scale noise levels based on measured levels from the Advanced Ducted Propeller (ADP) sub-scale model is presented. Testing of this model was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. Effective Perceived Noise Level (EPNL) estimates for the baseline configuration are documented, and also used as the control case in a study of the potential benefits of two categories of noise control. The effect of active noise control is evaluated by artificially removing various rotor-stator interaction tones. Passive noise control is simulated by applying a notch filter to the wind tunnel data. Cases with both techniques are included to evaluate hybrid active-passive noise control. The results for EPNL values are approximate because the original source data was limited in bandwidth and in sideline angular coverage. The main emphasis is on comparisons between the baseline and configurations with simulated noise control measures.
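
    The passive-control case described above amounts to attenuating a band of the measured spectrum and recomputing the overall level by energy summation. The sketch below shows that generic operation on a synthetic narrowband spectrum with a single fan tone; the band edges, attenuation, and spectrum values are placeholders, not ADP data.

    ```python
    import numpy as np

    def overall_level_db(spl_db):
        """Energy-sum a set of narrowband SPLs into one overall level."""
        return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(spl_db) / 10.0)))

    def apply_notch(freqs_hz, spl_db, f_low, f_high, attenuation_db):
        """Reduce levels inside [f_low, f_high] by attenuation_db (simulated notch filter)."""
        spl = np.asarray(spl_db, dtype=float).copy()
        band = (freqs_hz >= f_low) & (freqs_hz <= f_high)
        spl[band] -= attenuation_db
        return spl

    freqs = np.linspace(500.0, 10000.0, 96)
    spectrum = 70.0 + 20.0 * np.exp(-((freqs - 2400.0) / 150.0) ** 2)   # broadband + tone
    notched = apply_notch(freqs, spectrum, 2200.0, 2600.0, 10.0)
    print(f"baseline overall level {overall_level_db(spectrum):.1f} dB, "
          f"with notch {overall_level_db(notched):.1f} dB")
    ```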

  10. Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner

    NASA Astrophysics Data System (ADS)

    Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.

    2007-02-01

    In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easy to scale modular small animal PET camera has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters, such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64 bit, Xeon with 3.0 GHz) and controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.

  11. Black carbon and trace gases over South Asia: Measurements and Regional Climate model simulations

    NASA Astrophysics Data System (ADS)

    Bhuyan, Pradip; Pathak, Binita; Parottil, Ajay

    2016-07-01

    Trace gases and aerosols are simulated at 50 km spatial resolution over the South Asian CORDEX domain, enclosing the Indian sub-continent and North-East India, for the year 2012 using two regional climate models: RegCM4 coupled with CLM4.5, and WRF-Chem 3.5. Both models are found to capture the seasonality of the simulated O3 and its precursors (NOx and CO) and of black carbon concentrations, together with the meteorological variables, over the Indian subcontinent as well as over the sub-Himalayan North-Eastern region of India including Bangladesh. The model simulations are compared with the measurements made at Dibrugarh (27.3°N, 94.6°E, 111 m amsl). Both models are found to capture the observed diurnal and seasonal variations in O3 concentrations, with a maximum in spring and a minimum in the monsoon, the correlation being better for WRF-Chem (R~0.77) than for RegCM (R~0.54). Simulated NOx and CO are underestimated in all seasons by both models, the performance being better in the case of WRF-Chem. The observed difference may be attributed to bias in the estimation of the O3 precursors NOx and CO in the emission inventories or to errors in the simulation of the meteorological variables that influence O3 concentration in both models. For example, in the pre-monsoon and winter seasons the WRF-Chem simulated shortwave flux overestimates the observation by ~500 W m-2, while in the monsoon and post-monsoon seasons the simulated shortwave flux is equivalent to the observation. The model predicts higher wind speeds in all seasons, especially during night-time. In the post-monsoon and winter seasons, the simulated wind pattern is the reverse of the observation, with daytime low and night-time high values. Rainfall is overestimated in all seasons. RegCM-CLM4.5 is found to underestimate rainfall and other meteorological parameters. The WRF-Chem model closely captured the observed values of black carbon mass concentrations during the pre-monsoon and summer monsoon seasons, but

  12. Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Zhang, Pengjie; Jing, Yipeng

    2015-02-01

    Measuring the volume weighted velocity power spectrum suffers from a severe systematic error due to imperfect sampling of the velocity field from the inhomogeneous distribution of dark matter particles/halos in simulations or galaxies with velocity measurement. This "sampling artifact" depends on both the mean particle number density n̄_P and the intrinsic large scale structure (LSS) fluctuation in the particle distribution. (1) We report robust detection of this sampling artifact in N-body simulations. It causes ~12% underestimation of the velocity power spectrum at k = 0.1 h/Mpc for samples with n̄_P = 6 × 10⁻³ (Mpc/h)⁻³. This systematic underestimation increases with decreasing n̄_P and increasing k. Its dependence on the intrinsic LSS fluctuations is also robustly detected. (2) All of these findings are expected based upon our theoretical modeling in paper I [P. Zhang, Y. Zheng, and Y. Jing, Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling, arXiv:1405.7125]. In particular, the leading order theoretical approximation agrees quantitatively well with the simulation result for n̄_P ≳ 6 × 10⁻⁴ (Mpc/h)⁻³. Furthermore, we provide an ansatz to take high order terms into account. It improves the model accuracy to ≲1% at k ≲ 0.1 h/Mpc over 3 orders of magnitude in n̄_P and over typical LSS clustering from z = 0 to z = 2. (3) The sampling artifact is determined by the deflection D field, which is straightforwardly available in both simulations and data of galaxy velocity. Hence the sampling artifact in the velocity power spectrum measurement can be self-calibrated within our framework. By applying such self-calibration in simulations, it is promising to determine the real large scale velocity bias of 10¹³ M⊙ halos with ~1% accuracy, and that of lower mass halos with better accuracy. (4) In contrast to suppressing the velocity power spectrum at large scale, the sampling artifact causes an overestimation of the velocity

  13. Microclimatic effects of planted hydroponic structures in urban environment: measurements and simulations

    NASA Astrophysics Data System (ADS)

    Katsoulas, N.; Antoniadis, D.; Tsirogiannis, I. L.; Labraki, E.; Bartzanas, T.; Kittas, C.

    2016-11-01

    The objective of this effort was to study the effect of vertical (green wall) and horizontal (pergola) green structures on the microclimate conditions of the building surroundings and to estimate the thermal perception and heat stress conditions near the two structures. The experimental data were used to validate the results simulated by the recent version (V4.0 preview III) of the ENVI-met software, which was used to simulate the effect of different design parameters of a pergola and a green façade on microclimate and heat stress conditions. A further aim is to use these results for the better design of green structures. The microclimate measurements were carried out in real-scale structures (hydroponic pergola and hydroponic green wall) at the Kostakii Campus of the Technological Education Institute of Epirus (Arta, Greece). The validation results showed a very good agreement between measured and simulated values of air temperature, with Tair,sim = 0.98 Tair,meas in the Empty atrium and Tair,sim = 0.99 Tair,meas in the Atrium with pergola, with determination coefficients R2 of 0.98 and 0.93, respectively. The model was used to predict the effects of green structures on air temperature (Tair), relative humidity (RH), and mean radiant temperature (Tmrt). The output values of these parameters were used as input data in the RayMan pro (V 2.1) model for estimating the physiologically equivalent temperature (PET) of different case scenarios. The average daytime value of simulated air temperature in the atrium for the cases without and with pergola during three different days was 29.2 and 28.9 °C, while the corresponding measured values were 29.7 and 29.2 °C. The results showed that, compared to the case with no pergola in the atrium, covering 100% of the atrium area with a planted pergola reduced Tmrt and PET values at the hottest part of the day by 29.4 and 17.9 °C, respectively. Although the values of air temperature (measured and simulated) were not greatly affected by the

  14. Aeroelastic measurements and simulations of a small wind turbine operating in the built environment

    NASA Astrophysics Data System (ADS)

    Evans, S. P.; Bradney, D. R.; Clausen, P. D.

    2016-09-01

    Small wind turbines, when compared to large commercial scale wind turbines, often lag behind with respect to research investment, technological development, and experimental verification of design standards. In this study we assess the simplified load equations outlined in IEC 61400.2-2013 for use in determining fatigue loading of small wind turbine blades. We compare these calculated loads to fatigue damage cycles from both measured in-service operation, and aeroelastic modelling of a small 5 kW Aerogenesis wind turbine. Damage cycle ranges and corresponding stress ratios show good agreement when comparing both aeroelastic simulations and operational measurements. Loads calculated from simplified load equations were shown to significantly overpredict load ranges while underpredicting the occurrence of damage cycles per minute of operation by 89%. Due to the difficulty in measuring and acquiring operational loading, we recommend the use of aeroelastic modelling as a method of mitigating the over-conservative simplified load equation for fatigue loading.

  15. Coupling particle simulation with aerodynamic measurement in hypersonic rarefied wind tunnel in JAXA

    NASA Astrophysics Data System (ADS)

    Suzuki, Toshiyuki; Ozawa, Takashi; Fujita, Kazuhisa

    2012-11-01

    Characteristics of test flow produced by the hypersonic rarefied wind tunnel in JAXA are investigated experimentally and numerically. To probe the test flow, a stainless sphere model with a diameter of 5 mm is put into the test flow. Its displacement due to the aerodynamic force is measured under the several operating conditions of the wind tunnel. A spatial variation of total pressure of the test flow is also measured by using a pitot pressure tube. The flowfield in the test section of wind tunnel is also analyzed by using the direct simulation Monte Carlo technique. The flow properties are deduced from the comparison between the measurement and the calculation. It is found from the study that the freestream Mach number of 16 and Knudsen number of 0.2 are achieved for the mass flow rate of 0.08 g/s and the total temperature of 750 K. The core flow diameter is estimated to be approximately 30 mm.

  16. Optical scintillation measurements in a desert environment IV: simulated effects of scintillation on communications links

    NASA Astrophysics Data System (ADS)

    Suite, Michele; Rabinovich, W. S.; Mahon, Rita; Moore, Christopher; Ferraro, Mike; Burris, H. R., Jr.; Thomas, L. M.

    2011-09-01

    Optical scintillation is an effect that limits the performance of many optical systems including imagers and free space optical communication links. The Naval Research Laboratory is undertaking a series of measurement campaigns of optical scintillation in a variety of environments. In December of 2010 measurements were made over a one week period in the desert at China Lake, CA. The NRL TATS system was used to measure time resolved scintillation over a variety of different ranges and terrains. This data has been used to determine fade rate and duration as a function of weather and link margin. Temporal correlation of fades has also been calculated. This data allows simulation of a variety of communication protocols and the effects of those protocols on link throughput. In this paper we present a comparison of different protocols for both direct and retroreflector links.

  17. Wind tunnel measurements in the wake of a simple structure in a simulated atmospheric flow

    NASA Technical Reports Server (NTRS)

    Hansen, A. C.; Peterka, J. A.; Cermak, J. E.

    1975-01-01

    Measurements of longitudinal mean velocity and turbulence intensity were made in the wake of a rectangular model building in a simulated atmospheric boundary-layer wind. The model building was a 1:50 scale model of a structure used in a wake measurement program at the George C. Marshall Space Flight Center 8-tower boundary-layer facility. The approach wind profile and measurement locations were chosen to match the field site conditions. The wakes of the building in winds from azimuths of 0 and 47 degrees referenced to the normal to the building long axis were examined. The effect of two lines of trees upwind of the building on the wake and the importance of the ratio of the building height to boundary-layer thickness on the extent of the wake were determined.

  18. Virtual charge state separator as an advanced tool coupling measurements and simulations

    NASA Astrophysics Data System (ADS)

    Yaramyshev, S.; Vormann, H.; Adonin, A.; Barth, W.; Dahl, L.; Gerhard, P.; Groening, L.; Hollinger, R.; Maier, M.; Mickat, S.; Orzhekhovskaya, A.

    2015-05-01

    A new low energy beam transport for a multicharge uranium beam will be built at the GSI High Current Injector (HSI). All uranium charge states coming from the new ion source will be injected into the GSI heavy-ion high-current HSI Radio Frequency Quadrupole (RFQ), but only the design ions, U4+, will be accelerated to the final RFQ energy. Detailed knowledge of the injected beam current and emittance of the pure design U4+ ions is necessary for proper beam line design, commissioning, and operation, while measurements are possible only for the full beam including all charge states. Detailed measurements of the beam current and emittance are performed behind the first quadrupole triplet of the beam line. A dedicated algorithm, based on a combination of measurements and the results of advanced beam dynamics simulations, provides an extraction of the beam current and emittance values for only the U4+ component of the beam. The proposed methods and the obtained results are presented.

  19. Double-Pulse Two-micron LPDA Lidar Simulation for Airborne Carbon Dioxide Measurements

    NASA Astrophysics Data System (ADS)

    Refaat, Tamer F.; Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta

    2016-06-01

    An advanced double-pulse 2-μm integrated path differential absorption lidar has been developed at NASA Langley Research Center for measuring atmospheric carbon dioxide. The instrument utilizes a state-of-the-art 2-μm laser transmitter with a tunable on-line wavelength and an advanced receiver. Instrument modeling and airborne simulations are presented in this paper. Focusing on random errors, the results demonstrate the instrument's capability of performing precise carbon dioxide differential optical depth measurements with less than 3% random error for single-shot operation up to 11 km altitude. This study is useful for defining the CO2 measurement weighting function for adaptive targeting, instrument settings, validation, and sensitivity trade-offs.
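
    A minimal sketch of the double-pulse differential optical depth and its shot-to-shot random error, following the generic IPDA formulation rather than the instrument's actual processing chain; signal levels and SNRs are illustrative assumptions.

```python
import numpy as np

def daod(p_on, p_off, e_on, e_off):
    """Differential absorption optical depth from on/off-line returns and pulse energies."""
    return 0.5 * np.log((p_off * e_on) / (p_on * e_off))

def daod_random_error(snr_on, snr_off, snr_e_on, snr_e_off):
    """1-sigma random error of the DAOD propagated from the four signal-to-noise ratios."""
    return 0.5 * np.sqrt(snr_on**-2 + snr_off**-2 + snr_e_on**-2 + snr_e_off**-2)

# Illustrative single-shot values (arbitrary units and assumed SNRs)
od = daod(p_on=0.55, p_off=1.00, e_on=1.0, e_off=1.0)
err = daod_random_error(snr_on=60.0, snr_off=80.0, snr_e_on=500.0, snr_e_off=500.0)
print(f"DAOD = {od:.3f}, random error = {err:.4f} ({100*err/od:.1f}% of DAOD)")
```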

  20. A Monte Carlo Simulation for Understanding Energy Measurements of Beta Particles Detected by the UCNb Experiment

    NASA Astrophysics Data System (ADS)

    Feng, Chi; UCNb Collaboration

    2011-10-01

    It is theorized that contributions to the Fierz interference term from scalar interactions beyond the Standard Model could be detectable in the spectrum of neutron beta-decay. The UCNb experiment, run at the Los Alamos Neutron Science Center, aims to accurately measure the neutron beta-decay energy spectrum to detect a nonzero interference term. The instrument consists of a cubic ``integrating sphere'' calorimeter fitted with up to four photomultiplier tubes. The inside of the calorimeter is coated with white paint and a thin UV scintillating layer made of deuterated polystyrene to contain the ultracold neutrons. A Monte Carlo simulation using the Geant4 toolkit is developed in order to provide an accurate method of energy reconstruction. Offline calibration with the Kellogg Radiation Laboratory 140 keV electron gun and conversion electron sources will be used to validate the Monte Carlo simulation, to give confidence in the energy reconstruction methods, and to better understand systematics in the experimental data.

  1. Secondary fusion coupled deuteron/triton transport simulation and thermal-to-fusion neutron convertor measurement

    SciTech Connect

    Wang, G. B.; Wang, K.; Liu, H. G.; Li, R. D.

    2013-07-01

    A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), was developed to simulate coupled deuteron/triton transport and reaction problems. The 'forced particle production' variance reduction technique was used to improve the simulation speed, which made the secondary product play a major role. A mono-energetic 14 MeV fusion neutron source was employed as a validation. The thermal-to-fusion neutron convertor was then studied with our tool. Moreover, an in-core conversion efficiency measurement experiment was performed with ⁶LiD and ⁶LiH converters. Threshold activation foils were used to indicate the fast and fusion neutron flux. In addition, two other pivotal parameters were calculated theoretically. Finally, the conversion efficiency of ⁶LiD is obtained as 1.97×10⁻⁴, which matches well with the theoretical result. (authors)

  2. Simulation of Aldehyde Emissions from an Ethanol Fueled Spark Ignition Engine and Comparison with FTIR Measurements

    NASA Astrophysics Data System (ADS)

    Barros Zaránte, Paola Helena; Sodre, Jose Ricardo

    2016-09-01

    This paper presents a mathematical model that calculates aldehyde emissions in the exhaust of a spark ignition engine fueled with ethanol. The numerical model for aldehyde emissions was developed in FORTRAN, with the input data obtained from dedicated engine cycle simulation software, AVL BOOST. The model calculates formaldehyde and acetaldehyde emissions formed from the partial oxidation of methane, ethane, and unburned ethanol. The calculated values were compared with experimental data obtained by Fourier transform infrared spectroscopy (FTIR). The experiments were performed with a mid-size sedan powered by a 1.4-liter spark ignition engine on a chassis dynamometer. In general, the results demonstrate that the concentrations of aldehydes and of their source species increased with engine speed and exhaust gas temperature. A reasonable agreement between simulated and measured values was achieved.

  3. Computer simulated building energy consumption for verification of energy conservation measures in network facilities

    NASA Technical Reports Server (NTRS)

    Plankey, B.

    1981-01-01

    A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption, which can be compared with actual meter readings for the same time period. Such comparisons can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy savings due to energy conservation measures. The predicted energy savings can then be compared with actual savings to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.

  4. Continuous metabolic and cardiovascular measurements on a monkey subject during a simulated 6-day Spacelab mission

    NASA Technical Reports Server (NTRS)

    Pace, N.; Rahlmann, D. F.; Mains, R. C.; Kodama, A. M.; Mccutcheon, E. P.

    1979-01-01

    A 10-kg male pig-tailed monkey (Macaca nemestrina) was selected as an optimal species for spaceflight studies on weightlessness. Three days before the simulated launch, the animal was placed in a fiberglass pod system to provide continuous measurement of respiratory gas exchange. Attention is given to examining the effects of weightlessness on several basic parameters of metabolic and cardiovascular function in an adult nonhuman primate. The 10.7-day total simulated-experiment period consisted of 2.6 days preflight, 6.3 days inflight, and 1.8 days postflight. Statistically significant diurnal variation was noted in oxygen consumption and CO2 production rates, body temperature, and heart rate, but not in respiratory quotient or blood pressure. The high quality of the continuous data obtained demonstrates the feasibility of performing sound physiological experimentation on nonhuman primates in the Spacelab environment.

  5. Monte Carlo simulation of non-invasive glucose measurement based on FMCW LIDAR

    NASA Astrophysics Data System (ADS)

    Xiong, Bing; Wei, Wenxiong; Liu, Nan; He, Jian-Jun

    2010-11-01

    Continuous non-invasive glucose monitoring is a powerful tool for the treatment and management of diabetes. A glucose measurement method based on frequency-modulated continuous-wave (FMCW) LIDAR technology, with the potential advantage of being miniaturizable with no moving parts, is proposed and investigated. The system mainly consists of an integrated near-infrared tunable semiconductor laser and a detector, using heterodyne technology to convert the signal from the time domain to the frequency domain. To investigate the feasibility of the method, Monte Carlo simulations have been performed on tissue phantoms with optical parameters similar to those of human interstitial fluid. The simulations showed that the sensitivity of the FMCW LIDAR system to glucose concentration can reach 0.2 mM. Our analysis suggests that the FMCW LIDAR technique has good potential for noninvasive blood glucose monitoring.
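
    A minimal sketch of the FMCW principle the simulation relies on: a linear optical frequency sweep maps the round-trip delay of a scattering layer to a heterodyne beat frequency. Sweep parameters and the tissue refractive index are assumed values, not those of the proposed instrument.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def beat_frequency(depth_m, sweep_bw_hz, sweep_period_s, n_medium=1.4):
    """Heterodyne beat frequency for a scatterer at a given depth in tissue.

    depth_m        : one-way geometric depth of the scattering layer
    sweep_bw_hz    : optical frequency sweep bandwidth B
    sweep_period_s : sweep duration T (chirp rate is B/T)
    n_medium       : refractive index of the medium (tissue-phantom value assumed)
    """
    round_trip_delay = 2.0 * n_medium * depth_m / C
    return (sweep_bw_hz / sweep_period_s) * round_trip_delay

# Illustrative: 100 GHz sweep over 1 ms, scatterer 1 mm deep in a skin-like phantom
f_b = beat_frequency(depth_m=1e-3, sweep_bw_hz=100e9, sweep_period_s=1e-3)
print(f"beat frequency ~ {f_b/1e3:.2f} kHz")  # deeper layers map to higher frequencies
```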

  6. Pose Measurement Performance of the Argon Relative Navigation Sensor Suite in Simulated Flight Conditions

    NASA Technical Reports Server (NTRS)

    Galante, Joseph M.; Eepoel, John Van; Strube, Matt; Gill, Nat; Gonzalez, Marcelo; Hyslop, Andrew; Patrick, Bryan

    2012-01-01

    Argon is a flight-ready sensor suite with two visual cameras, a flash LIDAR, an on-board flight computer, and associated electronics. Argon was designed to provide sensing capabilities for relative navigation during proximity, rendezvous, and docking operations between spacecraft. A rigorous ground test campaign assessed the performance capability of the Argon navigation suite to measure the relative pose of high-fidelity satellite mock-ups during a variety of simulated rendezvous and proximity maneuvers, facilitated by robot manipulators, under lighting conditions representative of the orbital environment. A brief description of the Argon suite and the test setup is given, as well as an analysis of the performance of the system in simulated proximity and rendezvous operations.

  7. Numerical Simulation of Wake Vortices Measured During the Idaho Falls and Memphis Field Programs

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.

    1996-01-01

    A numerical large-eddy simulation model is under modification and testing for application to aircraft wake vortices. The model, having a meteorological framework, permits the interaction of wake vortices with environments characterized by crosswind shear, stratification, and humidity. As part of the validation process, model results are compared with measured field data from the 1990 Idaho Falls and the 1994-1995 Memphis field experiments. Cases are selected that represent different aircraft and a cross section of meteorological environments. Also included is one case with wake vortex generation in ground effect. The model simulations are initialized with the appropriate meteorological conditions and a post-roll-up vortex system. No ambient turbulence is assumed in our initial set of experiments, although turbulence can be self-generated by the interaction of the model wakes with the ground and the environment.

  8. Measuring Landau damping in Particle-in-Cell simulations using particles of different charge-weights

    NASA Astrophysics Data System (ADS)

    Ren, C.; Sarkar, A.; Cao, Y.-X.; Huang, M. C.; Li, J.

    2016-10-01

    We study whether putting more particles in a ``region of interest'' (ROI) in phase space can efficiently increase Particle-in-Cell (PIC) simulation accuracy. We use the Landau damping of a plasma wave as a figure of merit and set the ROI near the phase velocity of the wave. Improvement in the Landau damping rate measurement is observed in 1D PIC simulations when employing more particles in the ROI, but the effect is not monotonic. This is partly due to energy transfer from particles of large charge weights to those of smaller weights through the electric fields. Possible strategies to mitigate this energy transfer will also be discussed. This work is supported by the National Science Foundation under Grant No. PHY-1314734 and by the Department of Energy under Grant No. DE-SC0012316.
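
    A minimal sketch of how a Landau damping rate can be measured by fitting the exponential envelope of a decaying wave-field energy trace; the trace below is synthetic, standing in for a PIC field diagnostic.

```python
import numpy as np

def fit_damping_rate(t, field_energy):
    """Fit the exponential envelope of the wave field energy using its local maxima;
    the amplitude damping rate gamma is half the energy decay rate."""
    e = np.asarray(field_energy)
    peaks = np.where((e[1:-1] > e[:-2]) & (e[1:-1] > e[2:]))[0] + 1
    slope, _ = np.polyfit(t[peaks], np.log(e[peaks]), 1)
    return -0.5 * slope

# Synthetic diagnostic: damped plasma oscillation, gamma = 0.05 assumed (omega_p = 1 units)
gamma_true, omega = 0.05, 1.2
t = np.linspace(0.0, 40.0, 4000)
energy = (np.exp(-gamma_true * t) * np.cos(omega * t)) ** 2
print(f"fitted gamma = {fit_damping_rate(t, energy):.3f} (input {gamma_true})")
```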

  9. Temperature elevation by HIFU in ex vivo porcine muscle: MRI measurement and simulation study

    SciTech Connect

    Solovchuk, Maxim A.; Hwang, San Chao; Chang, Hsu; Thiriet, Marc; Sheu, Tony W. H.

    2014-05-15

    Purpose: High-intensity focused ultrasound is a rapidly developing medical technology with a large number of potential clinical applications. A computational model can play a pivotal role in the planning and optimization of the treatment based on the patient's images. Nonlinear propagation effects can significantly affect the temperature elevation and should be taken into account. In order to investigate the importance of nonlinear propagation effects, the nonlinear Westervelt equation was solved and weak nonlinear propagation effects were studied. The purpose of this study was to investigate the correlation between the predicted and measured temperature elevations and lesions in a porcine muscle. Methods: The investigated single-element transducer has a focal length of 12 cm, an aperture of 8 cm, and a frequency of 1.08 MHz. Porcine muscle was heated for 30 s by the focused ultrasound transducer with an acoustic power in the range of 24–56 W. The theoretical model consists of the nonlinear Westervelt equation, with relaxation effects taken into account, and the Pennes bioheat equation. Results: Excellent agreement between the measured and simulated temperature rises was found. For peak temperatures above 85–90 °C, “preboiling” or cavitation activity appears and lesion distortion starts, causing a small discrepancy between the measured and simulated temperature rises. From the measurements and simulations, it was shown that the distortion of the lesion was caused by the “preboiling” activity. Conclusions: The present study demonstrated that for peak temperatures below 85–90 °C the numerical simulation results are in excellent agreement with the experimental data in three dimensions. Both the temperature rise and the lesion size can be well predicted. Due to nonlinear effects the temperature in the focal region can be increased compared with the linear case. The current magnetic resonance imaging (MRI) resolution is not sufficient. Due to the inevitable averaging the measured
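
    A minimal sketch of the thermal half of such a model: one explicit finite-difference step of the 1-D Pennes bioheat equation with a Gaussian focal heat source. The tissue constants and source strength are typical literature values assumed for illustration, not the study's fitted parameters, and the acoustic (Westervelt) part is omitted.

```python
import numpy as np

# Typical soft-tissue properties (assumed, not the study's values)
rho, c = 1050.0, 3600.0              # density kg/m^3, specific heat J/(kg K)
k = 0.5                              # thermal conductivity W/(m K)
w_b, c_b, T_a = 0.5, 3600.0, 37.0    # perfusion kg/(m^3 s), blood heat capacity, arterial T
                                     # (ex vivo muscle would have w_b ~ 0)

def pennes_step(T, q, dx, dt):
    """One explicit time step of the 1-D Pennes bioheat equation."""
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                       # fixed-temperature boundaries
    dTdt = (k * lap - w_b * c_b * (T - T_a) + q) / (rho * c)
    return T + dt * dTdt

x = np.linspace(-0.02, 0.02, 401)                # 4 cm domain through the focus
dx, dt = x[1] - x[0], 0.01                       # m, s (dt below the explicit stability limit)
T = np.full_like(x, 37.0)
q = 5e6 * np.exp(-(x / 1e-3) ** 2)               # ~1 mm wide focal heating, W/m^3 (illustrative)
for _ in range(int(30.0 / dt)):                  # 30 s sonication, as in the experiments
    T = pennes_step(T, q, dx, dt)
print(f"peak temperature after 30 s: {T.max():.1f} C")
```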

  10. Simulations and measurements of beam loss patterns at the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Bruce, R.; Assmann, R. W.; Boccone, V.; Bracco, C.; Brugger, M.; Cauchi, M.; Cerutti, F.; Deboy, D.; Ferrari, A.; Lari, L.; Marsili, A.; Mereghetti, A.; Mirarchi, D.; Quaranta, E.; Redaelli, S.; Robert-Demolaize, G.; Rossi, A.; Salvachua, B.; Skordis, E.; Tambasco, C.; Valentino, G.; Weiler, T.; Vlachoudis, V.; Wollmann, D.

    2014-08-01

    The CERN Large Hadron Collider (LHC) is designed to collide proton beams of unprecedented energy, in order to extend the frontiers of high-energy particle physics. During the first very successful running period in 2010-2013, the LHC was routinely storing protons at 3.5-4 TeV with a total beam energy of up to 146 MJ, and even higher stored energies are foreseen in the future. This puts extraordinary demands on the control of beam losses. An uncontrolled loss of even a tiny fraction of the beam could cause a superconducting magnet to undergo a transition into a normal-conducting state, or in the worst case cause material damage. Hence a multistage collimation system has been installed in order to safely intercept high-amplitude beam protons before they are lost elsewhere. To guarantee adequate protection from the collimators, a detailed theoretical understanding is needed. This article presents results of numerical simulations of the distribution of beam losses around the LHC that have leaked out of the collimation system. The studies include tracking of protons through the fields of more than 5000 magnets in the 27 km LHC ring over hundreds of revolutions, and Monte Carlo simulations of particle-matter interactions both in collimators and machine elements being hit by escaping particles. The simulation results agree typically within a factor 2 with measurements of beam loss distributions from the previous LHC run. Considering the complex simulation, which must account for a very large number of unknown imperfections, and in view of the total losses around the ring spanning over 7 orders of magnitude, we consider this an excellent agreement. Our results give confidence in the simulation tools, which are used also for the design of future accelerators.

  11. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue not to use

  12. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, M.; Buss, R.; Scherrer, S.; Margreth, M.; Zappa, M.

    2015-12-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexity of automatic DRP mapping approaches affects hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison and a deviation map were derived. The automatically derived DRP-maps were used in synthetic runoff simulations with an adapted version of the hydrological model PREVAH, and simulation results compared with those from simulations using the reference maps. The DRP-maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP-maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. We therefore recommend not only using expert

  13. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2014-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-inch chord, 2-D straight wing with NACA 23012 airfoil section. For six ice accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-inch chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For four of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3% with corresponding differences in stall angle of approximately one degree or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several of the ice

  14. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several

  15. Energy retrofit of an office building by substitution of the generation system: performance evaluation via dynamic simulation versus current technical standards

    NASA Astrophysics Data System (ADS)

    Testi, D.; Schito, E.; Menchetti, E.; Grassi, W.

    2014-11-01

    Constructions built in Italy before 1945 (about 30% of the total built stock) feature low energy efficiency. Retrofit actions in this field can lead to valuable energy and economic savings. In this work, we ran a dynamic simulation of a historical building of the University of Pisa during the heating season. We first evaluated the energy requirements of the building and the performance of the existing natural gas boiler, validated against past natural gas bills. We also assessed the energy savings obtainable by substituting the boiler with an air-to-water, electrically driven, modulating heat pump, simulated through a cycle-based model, and evaluated the main economic metrics. The cycle-based model of the heat pump, validated with manufacturers' data available only at specified temperature and load conditions, can provide more accurate results than the simplified models adopted by current technical standards, thus increasing the effectiveness of energy audits.

  16. A fully automated system for ultrasonic power measurement and simulation accordingly to IEC 61161:2006

    NASA Astrophysics Data System (ADS)

    Costa-Felix, Rodrigo P. B.; Alvarenga, André V.; Hekkenberg, Rob

    2011-02-01

    The ultrasonic power measurement standard accepted worldwide is IEC 61161, presently in its 2nd edition (2006) but under review. To fulfil its requirements, considering that a radiation force balance is to be used as the ultrasonic power detector, a large amount of raw data (mass measurements) must be collected as a function of time to perform all necessary calculations and corrections. Uncertainty determination demands further calculation effort on raw and processed data. Although this can be done in the old-fashioned way, using spreadsheets and manual data collection, automation software is often used in metrology to provide a virtually error-free environment for data acquisition and for repetitive calculations and corrections. With this in mind, a fully automated ultrasonic power measurement system was developed and comprehensively tested. A precision balance with 0.1 mg resolution, model CP224S (Sartorius, Germany), was used as the measuring device, and a calibrated continuous-wave ultrasound check source (Precision Acoustics, UK) was the device under test. A 150 ml container filled with degassed water and containing an absorbing target at the bottom was placed on the balance pan. In addition to the automation features, a power measurement simulation routine was implemented, conceived as a teaching tool showing how the ultrasonic power emission behaves when measured with a radiation force balance equipped with an absorbing target. The automation software proved to be an effective tool for speeding up ultrasonic power measurement while allowing accurate calculation and clear graphical partial and final results.
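
    A minimal sketch of the core conversion behind such a system, for the ideal case of a plane wave fully absorbed at normal incidence; IEC 61161 prescribes further corrections (target geometry, buoyancy, thermal drift) that are omitted here, and the numbers are illustrative.

```python
G = 9.80665          # standard gravity, m/s^2
C_WATER = 1482.0     # speed of sound in degassed water near 20 C, m/s (assumed)

def ultrasonic_power(delta_mass_kg, c=C_WATER):
    """Ultrasonic power from the apparent mass change on an absorbing target.

    For a plane wave fully absorbed at normal incidence the radiation force is
    F = P / c, so P = delta_m * g * c.  Reflecting targets and focused beams
    need the geometry factors of the standard, not included in this sketch.
    """
    return delta_mass_kg * G * c

# Illustrative: the balance reads a 7.0 mg apparent mass increase with the source on
print(f"P ~ {ultrasonic_power(7.0e-6) * 1e3:.1f} mW")
```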

  17. Simulations of the electron cloud buildup and its influence on the microwave transmission measurement

    NASA Astrophysics Data System (ADS)

    Haas, Oliver Sebastian; Boine-Frankenheim, Oliver; Petrov, Fedor

    2013-11-01

    The electron cloud density in an accelerator can be measured using the Microwave Transmission (MWT) method. The aim of our study is to evaluate the influence of a realistic, nonuniform electron cloud on the MWT. We conduct electron cloud buildup simulations for beam pipe geometries and bunch parameters roughly resembling the conditions in the CERN SPS. For different microwave waveguide modes, the phase shift induced by a known electron cloud density is obtained from three different approaches: a 3D Particle-In-Cell (PIC) simulation of the electron response, a 2D eigenvalue solver for waveguide modes assuming a dielectric response function for cold electrons, and a perturbative method assuming a sufficiently smooth density profile. While several electron cloud parameters, such as temperature, result in only minor errors in the determined density, a transversely inhomogeneous density can introduce a large error in the measured electron density. We show that the perturbative approach is sufficient to describe the phase shift under realistic electron cloud conditions. Depending on the geometry of the beam pipe, the external magnetic field configuration, and the waveguide mode used, the electron cloud can be concentrated at the beam pipe wall or near the beam pipe center, leading to a severe over- or underestimation of the electron density. Electron cloud distributions are very inhomogeneous, especially in dipoles; these inhomogeneities affect the microwave transmission measurement, so the electron density may be over- or underestimated depending on the setup. This can be quantified with several models, e.g. the perturbative approach.
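
    A minimal sketch of the phase shift expected for a waveguide mode traversing a uniform, cold, unmagnetized electron cloud, which is the quantity the MWT method exploits; the frequencies, length, and density below are assumed for illustration and are not the SPS parameters.

```python
import math

E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 299_792_458.0        # speed of light, m/s

def phase_shift(n_e, freq, f_cutoff, length):
    """Phase shift (rad) of a waveguide mode after traversing a length of
    uniform, cold, unmagnetized electron cloud of density n_e (1/m^3)."""
    omega = 2 * math.pi * freq
    omega_co = 2 * math.pi * f_cutoff
    omega_p2 = n_e * E**2 / (EPS0 * M_E)                  # plasma frequency squared
    beta_vac = math.sqrt(omega**2 - omega_co**2) / C      # empty-pipe propagation constant
    beta_ec = math.sqrt(omega**2 - omega_p2 - omega_co**2) / C
    return (beta_vac - beta_ec) * length

# Illustrative numbers: mode with 1.3 GHz cutoff probed at 2.0 GHz over a 10 m
# pipe section filled with a uniform 1e12 m^-3 cloud (all values assumed)
print(f"phase shift ~ {phase_shift(1e12, 2.0e9, 1.3e9, 10.0)*1e3:.2f} mrad")
```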

  18. Forward and Inverse Modeling of Helioseismic Holography Measurements of MHD Simulations of Convection and Sunspot Flows

    NASA Astrophysics Data System (ADS)

    DeGrave, Kyle; Braun, Douglas; Birch, Aaron; Crouch, Ashley D.; Javornik, Brenda; Rempel, Matthias D.

    2016-05-01

    We test and validate newly-developed, empirically-derived sensitivity kernels for use in helioseismic analysis. These kernels are based on the Born approximation and derived from applying direct measurements to artificial realizations of incoming and scattered wavefields. These kernels are employed in a series of forward and inverse modeling of flows from the near-surface layers of two publicly available magnetohydrodynamic (MURaM-based) solar simulations - a quiet-Sun simulation, and one containing a sunspot. Forward travel times computed using the kernels generally compare favorably in non-magnetic regions. One finding of note is the presence of flow-like artifacts in the sunspot measurements which appear when the spot umbra or penumbra falls within the measurement pupils. Inversions for the horizontal flow components are able to reproduce the large-scale supergranule-sized flows in the upper 3 Mm of both domains, but are compromised by noise at greater depths. In spite of the magnetic artifact, the moat flow surrounding the spot is at least qualitatively recovered. This work is supported by the NASA Heliophysics Division through NNH12CF68C, NNH12CF23C, and NNX16AG88G, and by the NSF Solar-Terrestrial Program through grant AGS-1127327.

  19. Kicker field simulation and measurement for the muon g-2 experiment at FNAL

    NASA Astrophysics Data System (ADS)

    Chang, Seung Pyo; Kim, Young Im; Choi, Jihoon; Semertzidis, Yannis; muon g-2 experiment Collaboration

    2017-01-01

    In the Muon g-2 experiment, the muon beam is injected into the storage ring on a slightly tilted orbit whose center is 77 mm away from the center of the ring. A kicker is needed to move the muon beam onto the central orbit. A magnetic kicker was designed for the experiment, requiring a field integral of about 0.1 T·m; a peak current pulse of 4200 A is needed to produce this field integral. Such a strong kicker pulse can induce unwanted eddy currents, which could spoil the main magnetic field of the storage ring and thus pose a critical threat to the precision of the experiment. A kicker field simulation was performed using OPERA to estimate these effects. The kicker field is also to be measured, based on the Faraday effect, and the measurement was tested in the laboratory before installation in the experimental area. In this presentation, the simulation and measurement results will be discussed. This work was supported by IBS-R017-D1-2016-a00.

  20. Monte Carlo Simulation Study of a Differential Calorimeter Measuring the Nuclear Heating in Material Testing Reactors

    NASA Astrophysics Data System (ADS)

    Amharrak, H.; Reynard-Carette, C.; Lyoussi, A.; Carette, M.; Brun, J.; De Vita, C.; Fourmentel, D.; Villard, J.-F.; Guimbal, P.

    2016-02-01

    Nuclear heating measurements in Material Testing Reactors (MTRs) are crucial for the study of nuclear materials and fuels under irradiation. The reference measurements of this nuclear heating are performed with a differential calorimeter containing a graphite sample material. These measurements are then used for other materials, other geometries, or other experimental conditions in order to predict the nuclear heating and the thermal conditions induced in the irradiation devices. This paper presents new simulations with the MCNP Monte Carlo transport code to determine the gamma heating profile inside the calorimeter. The whole complex geometry of the sensor has been considered. The photon spectra calculated at various positions of the CARMEN-1 irradiation program in the OSIRIS reactor are used as the input source in the model. After a description of the differential calorimeter device, the MCNP modeling used for the calculation of the radial profile of nuclear heating inside the calorimeter elements is introduced. The results of the different simulations are detailed and discussed. The charged-particle equilibrium inside the calorimeter elements is studied, followed by parametric studies of the various components of the calorimeter. The influence of the source type is also taken into account, and the influence of the material used for the sample is described.

  1. Real-time measurements to characterize dynamics of emulsion interface during simulated intestinal digestion.

    PubMed

    Pan, Yuanjie; Nitin, N

    2016-05-01

    Efficient delivery of bioactives remains a critical challenge due to their limited bioavailability and solubility. While many encapsulation systems are designed to modulate the digestion and release of bioactives within the human gastrointestinal tract, there is limited understanding of how engineered structures influence the delivery of bioactives. The objective of this study was to develop a real-time quantitative method to measure structural changes in the emulsion interface during simulated intestinal digestion and to correlate these changes with the release of free fatty acids (FFAs). Fluorescence resonance energy transfer (FRET) was used for rapid in-situ measurement of the structural changes in the emulsion interface during simulated intestinal digestion. Using FRET, changes in the intermolecular spacing between emulsifier molecules labeled with two different fluorescent probes were characterized, and changes in the FRET measurements were compared with the release of FFAs. The results showed that bile salts and pancreatic lipase interacted immediately with the emulsion droplets and disrupted the emulsion interface, as evidenced by a reduction in FRET efficiency compared to the control. Similarly, a significant amount of FFAs was released during digestion. Moreover, the addition of a second layer of polymers at the emulsion interface decreased the extent of interface disruption by bile salts and pancreatic lipase and affected the amount and rate of FFA release during digestion. These results were consistent with the lower donor/acceptor ratio of the labeled probes obtained from the FRET measurements. Overall, this study provides a novel approach for analyzing the dynamics of the emulsion interface during digestion and its relationship with the release of FFAs.
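
    A minimal sketch of why FRET reports on interfacial disruption: the transfer efficiency falls steeply with donor-acceptor separation. The Förster radius used is a generic assumed value, not that of the probe pair in the study.

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Förster energy-transfer efficiency for donor-acceptor separation r.
    r0_nm is the Förster radius (50% efficiency distance); 5 nm is a typical
    assumed value, not that of the probes used in the study."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Intact interface: probes packed closely; digested interface: probes spread apart
for label, r in [("intact interface", 4.0), ("disrupted interface", 8.0)]:
    print(f"{label:20s} r = {r:.1f} nm -> FRET efficiency {fret_efficiency(r):.2f}")
```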

  2. Photovoltaic energy production map of Greece based on simulated and measured data

    NASA Astrophysics Data System (ADS)

    Vokas, Georgios A.; Lagogiannis, Konstantinos V.; Papageorgas, Panagiotis; Salame, Takla

    2017-02-01

    The aim of this research is, on the one hand, to reveal the real energy production of medium-scale photovoltaic (PV) plants located at different sites in Greece and, on the other, to compare the measured data with the predictions of a well-known PV simulation software package. During the last ten years a capacity of more than 2.5 GWp of PV systems has been installed in Greece. Almost 37% of the installations are in the range of 10 to 100 kWp, due to favorable feed-in-tariff pricing under Greek regulation. Previous investigations showed a remarkable difference between measured and predicted energy production in Greece for all PV system technologies. For the purposes of this study, more than 250 medium-scale PV plants have been monitored and more than 850 annual energy production data series for those plants have been collected. These data constitute a large sample that has been compared with more than 225 simulation results produced by well-known web-based software for PV energy yield calculation with an improved solar radiation database. Additionally, in order to give a visual picture of the real PV energy yield footprint in Greece, an updated map has been developed and illustrated, providing a useful tool for both business and academic purposes.

  3. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    NASA Astrophysics Data System (ADS)

    Wisthaler, A.; Apel, E. C.; Bossmeyer, J.; Hansel, A.; Junkermann, W.; Koppmann, R.; Meier, R.; Müller, K.; Solomon, S. J.; Steinbrecher, R.; Tillmann, R.; Brauers, T.

    2008-04-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for online HCHO detection at low absolute humidities. The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber onto which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified, the overall agreement between the methods was fair.

  4. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    NASA Astrophysics Data System (ADS)

    Wisthaler, A.; Apel, E. C.; Bossmeyer, J.; Hansel, A.; Junkermann, W.; Koppmann, R.; Meier, R.; Müller, K.; Solomon, S. J.; Steinbrecher, R.; Tillmann, R.; Brauers, T.

    2007-11-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitro-phenyl-hydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for on-line HCHO detection at low absolute humidities. The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber onto which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified, the overall agreement between the methods was good.

  5. Spectral optical layer properties of cirrus from collocated airborne measurements and simulations

    NASA Astrophysics Data System (ADS)

    Finger, Fanny; Werner, Frank; Klingebiel, Marcus; Ehrlich, André; Jäkel, Evelyn; Voigt, Matthias; Borrmann, Stephan; Spichtinger, Peter; Wendisch, Manfred

    2016-06-01

    Spectral upward and downward solar irradiances from vertically collocated measurements above and below a cirrus layer are used to derive cirrus optical layer properties such as spectral transmissivity, absorptivity, reflectivity, and cloud top albedo. The radiation measurements are complemented by in situ cirrus crystal size distribution measurements and radiative transfer simulations based on the microphysical data. The close collocation of the radiative and microphysical measurements, above, beneath, and inside the cirrus, is accomplished by using a research aircraft (Learjet 35A) in tandem with the towed sensor platform AIRTOSS (AIRcraft TOwed Sensor Shuttle). AIRTOSS can be released from and retracted back to the research aircraft by means of a cable up to a distance of 4 km. Data were collected from two field campaigns over the North Sea and the Baltic Sea in spring and late summer 2013. One measurement flight over the North Sea proved to be exemplary, and as such the results are used to illustrate the benefits of collocated sampling. The radiative transfer simulations were applied to quantify the impact of cloud particle properties such as crystal shape, effective radius reff, and optical thickness τ on cirrus spectral optical layer properties. Furthermore, the radiative effects of low-level, liquid water (warm) clouds as frequently observed beneath the cirrus are evaluated. They may cause changes in the radiative forcing of the cirrus by a factor of 2. When low-level clouds below the cirrus are not taken into account, the radiative cooling effect (caused by reflection of solar radiation) due to the cirrus in the solar (shortwave) spectral range is significantly overestimated.

  6. Transmembrane flux and receptor desensitization measured with membrane vesicles. Homogeneity of vesicles investigated by computer simulation.

    PubMed Central

    Cash, D J; Langer, R M; Subbarao, K; Bradbury, J R

    1988-01-01

    The use of membrane vesicles to make quantitative studies of transmembrane transport and exchange processes involves an assumption of homogeneity of the membrane vesicles. In studies of 86Rb+ exchange mediated by acetylcholine receptor from the electric organ of Electrophorus electricus and of 36Cl- exchange mediated by GABA receptor from rat brain, measurements of ion exchange and receptor desensitization precisely followed first order kinetics in support of this assumption. In other measurements a biphasic decay of receptor activity was seen. To elucidate the molecular properties of receptors from such measurements it is important to appreciate what the requirements of vesicle monodispersity are for meaningful results and what the effect of vesicle heterogeneity would be. The experiments were simulated with single vesicle populations with variable defined size distributions as well as with mixtures of different populations of vesicles. The properties of the receptors and their density in the membrane could be varied. Different receptors could be present on the same or different membrane vesicles. The simulated measurements were not very sensitive to size dispersity. A very broad size distribution of a single vesicle population was necessary to give rise to detectable deviations from first order kinetics or errors in the determined kinetic constants. Errors could become significant with mixtures of different vesicle populations, where the dispersity in initial ion exchange rate constant, proportional to the receptor concentration per internal volume, became large. In this case the apparent rate of receptor desensitization would diverge in opposite directions from the input value when measured by two different methods, suggesting an experimental test for such kinetic heterogeneity. A biphasic decrease of receptor activity could not be attributed to vesicle heterogeneity and must be due to desensitization processes with different rates. Significant errors would not
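
    A minimal sketch of the kind of simulation described: summing first-order uptake curves from vesicle populations with different rate constants and extracting the apparent rate of the mixture from a naive single-exponential fit. Population fractions and rate constants are arbitrary illustrative values.

```python
import numpy as np

def mixture_uptake(t, fractions, rate_constants):
    """Total tracer uptake of a vesicle mixture; each population fills its
    internal volume with first-order kinetics 1 - exp(-k t)."""
    f = np.asarray(fractions, dtype=float)
    k = np.asarray(rate_constants, dtype=float)
    return np.sum(f[:, None] * (1.0 - np.exp(-np.outer(k, t))), axis=0)

t = np.linspace(0.0, 10.0, 200)                             # s
homogeneous = mixture_uptake(t, [1.0], [1.0])               # single population, k = 1 /s
heterogeneous = mixture_uptake(t, [0.5, 0.5], [0.2, 5.0])   # equal volumes, mixed rates

# Apparent rate constant from a naive single-exponential fit to the mixture
mask = heterogeneous < 0.99
k_app = -np.polyfit(t[mask], np.log(1.0 - heterogeneous[mask]), 1)[0]
print(f"apparent k of the mixture: {k_app:.2f} /s (populations: 0.2 and 5.0 /s)")
```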

  7. Accuracy of cutoff probe for measuring electron density: simulation and experiment

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Woong; You, Shin-Jae; Kim, Si-June; Lee, Jang-Jae; Kim, Jung-Hyung; Oh, Wang-Yuhl

    2016-09-01

    The electron density has been used to characterize plasmas for basic research as well as for industrial applications. To measure the electron density accurately, various types of microwave probes have been developed and improved. The cutoff probe is a promising technique that infers the electron density from the plasma resonance peak in the transmission spectrum. In this study, we present the accuracy of the electron density inferred from the cutoff probe. The accuracy was investigated by electromagnetic simulation and experiment. The discrepancies between the electron densities from the cutoff probe and from other sophisticated microwave probes were investigated and discussed. We found that the cutoff probe has good accuracy in the inferred electron density.
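
    A minimal sketch of the basic cutoff-probe relation: the transmission cutoff occurs at the electron plasma frequency, so the density follows directly from the measured cutoff frequency (the 2 GHz value below is illustrative).

```python
import math

E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def density_from_cutoff(f_cutoff_hz):
    """Electron density (m^-3) from the cutoff (plasma) frequency of the
    transmission spectrum: n_e = eps0 * m_e * (2*pi*f_p)^2 / e^2."""
    omega_p = 2.0 * math.pi * f_cutoff_hz
    return EPS0 * M_E * omega_p**2 / E**2

# Illustrative: a cutoff observed at 2.0 GHz corresponds to roughly 5e16 m^-3
print(f"n_e ~ {density_from_cutoff(2.0e9):.2e} m^-3")
```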

  8. Lateral Earth Pressure at Rest and Shear Modulus Measurements on Hanford Sludge Simulants

    SciTech Connect

    Wells, Beric E.; Jenks, Jeromy WJ; Boeringa, Gregory K.; Bauman, Nathan N.; Guzman, Anthony D.; Arduino, P.; Keller, P. J.

    2010-09-30

    This report describes the equipment, techniques, and results of lateral earth pressure at rest and shear modulus measurements on kaolin clay as well as on two chemical sludge simulants. The testing was performed in support of the problem of hydrogen gas retention and release encountered in the double-shell tanks (DSTs) at the Hanford Site near Richland, Washington. Wastes from single-shell tanks (SSTs) are being transferred to DSTs for safety reasons (some SSTs are leaking or are in danger of leaking), but the available DST space is limited.

  9. Measurements and simulations of ultralow emittance and ultrashort electron beams in the linac coherent light source.

    PubMed

    Ding, Y; Brachmann, A; Decker, F-J; Dowell, D; Emma, P; Frisch, J; Gilevich, S; Hays, G; Hering, Ph; Huang, Z; Iverson, R; Loos, H; Miahnahri, A; Nuhn, H-D; Ratner, D; Turner, J; Welch, J; White, W; Wu, J

    2009-06-26

    The Linac Coherent Light Source (LCLS) is an x-ray free-electron laser project presently in a commissioning phase at the SLAC National Accelerator Laboratory. We report here on very low-emittance measurements made at low bunch charge and on the few-femtosecond bunch lengths produced by the LCLS bunch compressors. Start-to-end simulations associated with these beam parameters show the possibility of generating hundreds of gigawatts at a 1.5 Å x-ray wavelength and a nearly single longitudinally coherent spike at 1.5 nm with 2 fs duration.

  10. Impedance simulations and measurements on the LHC collimators with embedded beam position monitors

    NASA Astrophysics Data System (ADS)

    Biancacci, N.; Caspers, F.; Kuczerowski, J.; Métral, E.; Mounet, N.; Salvant, B.; Mostacci, A.; Frasciello, O.; Zobov, M.

    2017-01-01

    The LHC collimation system is a critical element for the safe operation of the LHC machine. The need for fast, accurate positioning of the collimator jaws recently introduced the requirement to have button beam position monitors directly embedded in the jaw extremities of the LHC tertiary collimators and of some secondary collimators. This addition led to a new design of these collimators, including ferrites to damp higher-order modes instead of rf fingers. In this work we present the impedance bench measurements and simulations on a TCT (Transverse Tertiary Collimator) prototype, including estimations of beam stability for the LHC.

  11. Measurement of gaseous emissions from a turbofan engine at simulated altitude conditions

    NASA Technical Reports Server (NTRS)

    Diehl, L. A.; Biaglow, J. A.

    1974-01-01

    Gaseous emissions from a TFE 731-2 turbofan engine were measured over a range of fuel-air ratios from idle to full power at simulated altitudes from near sea level to 13,200 m. Carbon monoxide and unburned hydrocarbon emissions were highest at idle and lowest at high power settings; oxides of nitrogen exhibited the reverse trend. Carbon monoxide and unburned hydrocarbon levels decreased with increasing altitude. Oxides of nitrogen emissions were successfully correlated by a parametric group of combustor operating variables.

  12. Continuous metabolic and cardiovascular measurements on a monkey subject during a simulated 6-day Spacelab mission

    NASA Technical Reports Server (NTRS)

    Pace, N.; Rahlmann, D. F.; Mains, R. C.; Kodama, A. M.; Mccutcheon, E. P.

    1978-01-01

    An adult male pig-tailed monkey (Macaca nemestrina) with a surgically implanted biotelemetry unit was inserted into a fiberglass pod system, which was installed in a Spacelab mock-up to simulate a 6-day mission during which extensive physiological measurements were obtained. The purpose of the pod was to make possible the study of respiratory gas exchange. Body temperature and selected cardiovascular parameters were recorded continuously for 2.6 days prior to 'launch', 6.3 days during 'flight', and 1.8 days after 'landing'. The results are surveyed, and it is concluded that it is feasible to perform sound physiological experiments on nonhuman primates in the Spacelab environment.

  13. Beam dynamics simulations and measurements at the Project X Test Facility

    SciTech Connect

    Gianfelice-Wendt, E.; Scarpine, V.E.; Webber, R.C.; /Fermilab

    2011-03-01

    Project X, under study at Fermilab, is a multitask high-power superconducting RF proton beam facility, aiming to provide high-intensity protons for rare-process experiments and nuclear physics at low energy, and simultaneously for the production of neutrinos, as well as muon beams in the long term. A beam test facility - formerly known as the High Intensity Neutrino Source (HINS) - is under commissioning for testing critical components of the project, e.g. dynamics and diagnostics at low beam energies, broadband beam chopping, and RF power generation and distribution. In this paper we describe the layout of the test facility and present beam dynamics simulations and measurements.

  14. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    SciTech Connect

    Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross; Spielman, Zachary Alexander; Hill, Rachael Ann; LeBlanc, Katya Lee

    2015-11-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movements, which correlate with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  15. Pulse Analysis Spectroradiometer System for Measuring the Spectral Distribution of Flash Solar Simulators: Preprint

    SciTech Connect

    Andreas, A. M.; Myers, D. R.

    2008-07-01

    Flashing artificial light sources are used extensively in photovoltaic module performance testing and on plant production lines. There are several means of attempting to measure the spectral distribution of a flash of light; however, many of these approaches capture only the entire pulse energy. We report here on the design and performance of a system to capture the waveform of the flash at individual wavelengths of light. Any period within the flash duration can be selected over which to integrate the flux intensity at each wavelength. The resulting spectral distribution is compared with the reference spectrum, resulting in a solar simulator classification.
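
    A minimal sketch of the core idea: capture the flux waveform at each wavelength, integrate over a chosen window of the flash, and form band-integrated fractions that can be compared with a reference spectrum. The wavelength bands, waveform shape, and sampling are placeholders, not the classification procedure of the relevant IEC standard.

```python
import numpy as np

def trapz1d(y, x):
    """Trapezoidal integral of y(x) for 1-D arrays."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def windowed_spectrum(time_s, waveforms, t_start, t_stop):
    """Integrate each wavelength channel's flux waveform over a chosen window
    of the flash; waveforms has shape (n_wavelengths, n_samples)."""
    sel = (time_s >= t_start) & (time_s <= t_stop)
    return np.array([trapz1d(w[sel], time_s[sel]) for w in waveforms])

def band_fractions(wavelengths_nm, spectrum, edges_nm):
    """Fraction of the total integrated irradiance in each wavelength band."""
    total = trapz1d(spectrum, wavelengths_nm)
    fracs = []
    for lo, hi in zip(edges_nm[:-1], edges_nm[1:]):
        m = (wavelengths_nm >= lo) & (wavelengths_nm < hi)
        fracs.append(trapz1d(spectrum[m], wavelengths_nm[m]) / total)
    return np.array(fracs)

# Synthetic example: 2 ms flash sampled at 1 MHz, channels spanning 400-1100 nm
wl = np.linspace(400.0, 1100.0, 141)
t = np.linspace(0.0, 2e-3, 2000)
pulse = np.exp(-((t - 1e-3) / 3e-4) ** 2)                   # common temporal shape
waveforms = np.outer(1.0 + 0.3 * np.sin(wl / 80.0), pulse)  # wavelength-dependent amplitude
spec = windowed_spectrum(t, waveforms, 0.8e-3, 1.2e-3)      # integrate near the flash peak
edges = np.array([400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1100.0])  # placeholder bands
print(np.round(band_fractions(wl, spec, edges), 3))
```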

  16. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using the 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary-layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  17. Simulation of a laser radar to improve visibility measurements in dense fog

    NASA Astrophysics Data System (ADS)

    Streicher, Juergen

    1992-12-01

    Lidar is the short form of light detection and ranging. The first application of a lidar system was, as in the radar technique, the determination of the distance to large-sized particles (target recognition). Nowadays, it is of more interest to measure the structure of the atmosphere at long distances (remote sensing), for example to obtain information about the mass concentration of industrial pollution or the visibility conditions in dense fog. In this case the interaction of the laser light with the particles involves very small and diverse scatterers (molecules, atoms, or aerosols) and is therefore extremely complex. A simulation program that helps to determine the visibility with a lidar has been developed to present the effects of the components of the system (laser, transmitter, receiver) as well as the parameters of the atmosphere (inhomogeneities, fog, clouds) in a convenient way. A change in any parameter is taken into account immediately, so the program can be regarded as an almost real-time simulator. A computer with a graphical user interface was chosen to realize this as simply as possible: the Commodore Amiga. The simulation is written in `C' to get the best performance for the calculations.
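    The kind of calculation such a simulator performs can be illustrated with the standard single-scattering lidar equation together with the Koschmieder relation between extinction and visibility; the sketch below uses placeholder numbers, not the parameters of the original Amiga program.

```python
import numpy as np

# Single-scattering lidar equation: P(R) = C * beta(R) / R^2 * exp(-2 * integral(alpha dr))
R = np.linspace(10.0, 2000.0, 400)      # range gates, m
dR = R[1] - R[0]
alpha = np.full_like(R, 20e-3)          # extinction coefficient in fog, 1/m (placeholder)
beta = alpha / 30.0                     # crude backscatter-to-extinction assumption, 1/(m sr)
C = 1.0                                 # lumped system constant, arbitrary units

transmission = np.exp(-2.0 * np.cumsum(alpha) * dR)
P = C * beta / R**2 * transmission      # simulated return power versus range

# Koschmieder relation (5% contrast threshold): visibility from extinction
visibility = 3.912 / alpha[0]
print(f"meteorological visibility ~ {visibility:.0f} m")
```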

  18. Effects of mood induction via music on cardiovascular measures of negative emotion during simulated driving.

    PubMed

    Fairclough, Stephen H; van der Zwaag, Marjolein; Spiridon, Elena; Westerink, Joyce

    2014-04-22

    A study was conducted to investigate the potential of mood induction via music to influence cardiovascular correlates of negative emotion experienced during driving. One hundred participants were randomly assigned to one of five groups, four of which were exposed to different categories of music: high activation/positive valence (HA/PV), high activation/negative valence (HA/NV), low activation/positive valence (LA/PV) and low activation/negative valence (LA/NV). Following exposure to their respective categories of music, participants were required to complete a simulated driving journey with a fixed time schedule. Negative emotion was induced via exposure to stationary traffic during the simulated route. Cardiovascular reactivity was measured via blood pressure, heart rate and cardiovascular impedance. Subjective self-assessment of anger and mood was also recorded. Results indicated that low activation music, regardless of valence, reduced systolic reactivity during the simulated journey relative to the HA/NV music and the control (no music) conditions. Self-reported data indicated that participants were not consciously aware of any influence of music on their subjective mood. It is concluded that cardiovascular reactivity to negative mood may be mediated by the emotional properties of music.

  19. Measurements and simulation of forest leaf area index and net primary productivity in Northern China.

    PubMed

    Wang, P; Sun, R; Hu, J; Zhu, Q; Zhou, Y; Li, L; Chen, J M

    2007-11-01

    Large-scale process-based modeling is a useful approach to estimate distributions of global net primary productivity (NPP). In this paper, in order to validate an existing NPP model with observed data at the site level, field experiments were conducted at three sites in northern China. One site is located in Qilian Mountain in Gansu Province, and the other two sites are in Changbaishan Natural Reserve and Dunhua County in Jilin Province. The detailed field experiments are discussed and the field data are used to validate the simulated NPP. Remotely sensed images including Landsat Enhanced Thematic Mapper plus (ETM+, 30 m spatial resolution in visible and near infrared bands) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER, 15 m spatial resolution in visible and near infrared bands) are used to derive maps of land cover, leaf area index, and biomass. Based on these maps, field measured data, soil texture and daily meteorological data, the NPP of these sites is simulated for the year 2001 with the Boreal Ecosystem Productivity Simulator (BEPS). The NPP at these sites ranges from 80 to 800 g C m(-2) a(-1). The observed NPP agrees well with the modeled NPP. This study suggests that BEPS can be used to estimate NPP in northern China if remotely sensed images of high spatial resolution are available.

  20. Measuring Water Content and Desorption Isotherms in Soil Simulants Under Martian Conditions

    NASA Astrophysics Data System (ADS)

    Hudson, T.; Aharonson, O.; Schorghofer, N.; Hecht, M. H.; Bridges, N.; Green, J. R.

    2003-12-01

    Theoretical predictions as well as recent spacecraft observations indicate that large quantities of ice are present in the upper decimeters to meters of the high-latitude Martian regolith. At shallower depths and warmer locations, small amounts of H2O, either adsorbed or free, may be present transiently. We seek to simulate Mars surface conditions and to observe the effects of temperature cycling (on diurnal and seasonal scales) on the water content profiles of several soil simulants. To model the upper Martian regolith, we begin by using crushed JSC Mars-1 palagonite with particles in the 50 micron to sub-micron size range. Spheres of pure silica in the 10 to 40 mm range may also be used to study the effects of grain surface morphology and composition. Simulants with various water contents are brought to Mars pressures and monitored. A line-source heat-pulse probe is being prepared to monitor water content profiles in real time and to be calibrated against water content samples measured with thermogravimetric (TG) analysis. Initial experiments will allow us to monitor water content; more refined investigations will permit the determination of desorption isotherms.

  1. The wildgeographer avatar shows how to measure soil erosion rates by means of a rainfall simulator

    NASA Astrophysics Data System (ADS)

    Cerdà, Artemi; González Pelayo, Óscar; Pereira, Paulo; Novara, Agata; Iserloh, Thomas; Prosdocimi, Massimo

    2015-04-01

    This contribution to the immersed worlds aims to develop an avatar that will teach students and other scientists how to carry out measurements of soil erosion, surface runoff and wetting fronts by means of simulated rainfall experiments. Rainfall simulation is a well-established and well-known methodology to measure soil erosion rates and soil hydrology under controlled conditions (Cerdà 1998a; Cerdà, 1998b; Cerdà and Jurgensen, 2011; Dunkerley, 2012; Iserloh et al., 2012; Iserloh et al., 2013; Ziadat and Taimeh, 2013; Butzen et al., 2014). However, it is a method that requires long training and expertise to avoid mismanagement and mistakes. Using an avatar can help in teaching the technique and disseminating the findings. This contribution will show other avatars how to develop an experiment with simulated rainfall and will help in taking the right decisions in the design of the experiments. Following the main parts of the experiments and measurements, the Wildgeographer avatar must: 1. Determine the objectives and decide which rainfall intensity and distribution, and which plot size, to use, and choose between a laboratory or a field rainfall simulation. 2. Design the rainfall simulator to achieve the objectives: select the type of rainfall simulator (sprayer or drop former) and calibrate it. 3. Carry out the experiments. 4. Present the results. Acknowledgements: To the "Ministerio de Economía y Competitividad" of the Spanish Government for financing the POSTFIRE project (CGL2013-47862-C2-1-R). The research projects GL2008-02879/BTE, LEDDRA 243857 and PREVENTING AND REMEDIATING DEGRADATION OF SOILS IN EUROPE THROUGH LAND CARE (RECARE) FP7-ENV-2013 supported this research. References: Butzen, V., Seeger, M., Wirtz, S., Huemann, M., Mueller, C., Casper, M., Ries, J. B. 2014. Quantification of Hortonian overland flow generation and soil erosion in a Central European low mountain range using rainfall experiments. Catena, 113, 202-212. Cerdà, A

  2. Predicting image blur in proton radiography: comparisons between measurements and Monte Carlo simulations

    SciTech Connect

    von Wittenau, A; Aufderheide, M B; Henderson, G L

    2010-05-07

    Given the cost and lead-times involved in high-energy proton radiography, it is prudent to model proposed radiographic experiments to see if the images predicted would return useful information. We recently modified our raytracing transmission radiography modeling code HADES to perform simplified Monte Carlo simulations of the transport of protons in a proton radiography beamline. Beamline objects include the initial diffuser, vacuum magnetic fields, windows, angle-selecting collimators, and objects described as distorted 2D (planar or cylindrical) meshes or as distorted 3D hexahedral meshes. We present an overview of the algorithms used for the modeling and code timings for simulations through typical 2D and 3D meshes. We next calculate expected changes in image blur as scattering materials are placed upstream and downstream of a resolution test object (a 3 mm thick sheet of tantalum, into which 0.4 mm wide slits have been cut), and as the current supplied to the focusing magnets is varied. We compare and contrast the resulting simulations with the results of measurements obtained at the 800 MeV Los Alamos LANSCE Line-C proton radiography facility.

  3. Diffusion of Trace Alkanes in Polyethylene: Spin-Echo Measurements and Monte-Carlo Simulations

    NASA Astrophysics Data System (ADS)

    von Meerwall, E.; Lin, H.; Mattice, W. L.

    2006-03-01

    We have performed pulsed-gradient NMR diffusion (D) measurements on five n-alkanes (24, 28, 36, 44, and 60 carbons) in a polyethylene (PE) host (M = 33 kDa) as a function of concentration c (2-10 wt.%) at 180 deg. C. Monte-Carlo simulations on the second-nearest-neighbor diamond lattice (38, 46, 62, and 78 carbons) at c between 2 and 15 wt.% in a host of PE (M = 4.5 kDa) explored static and dynamic properties. The bridging method uses beads combining adjacent moieties and incorporates two-bead moves; it permits detailed reconstruction of the chain molecules at any stage. It uses discretized short-range rotational isomeric state and long-range intra- and interchain Lennard-Jones potentials. For both experiment and simulation, trace D was obtained by extrapolating D(c) to c = 0 using the Fujita-Doolittle equation with known chain-end free-volume parameters. A ratio of 330 Monte-Carlo steps per picosecond brings simulation into congruence with experiment; this factor is identical to that required for PE melts. The applicability of the Rouse model is approached only for the largest alkanes, but the M(alkane)-dependence of trace D is seen to be in transition from Rouse-like 1/M scaling to a steeper value characteristic of reptation with constraint release.

  4. Measurements of the Ultraviolet Fluorescence Cross Sections and Spectra of Bacillus Anthracis Simulants

    SciTech Connect

    Stephens, J.R.

    1998-09-01

    The ultraviolet autofluorescence spectra and absolute cross sections of the Bacillus anthracis (Ba) simulants Bacillus globigii (Bg), Bacillus megaterium (Bm), Bacillus subtilis (Bs), and Bacillus cereus (Bc) were measured. Fluorescence spectra and cross sections of pine pollen (Pinus echinata) were measured for comparison. Both dried vegetative cells and spores separated from the sporulated vegetative material were studied. The spectra were obtained by suspending a small number (<10) of particles in air in our Single Particle Spectroscopy Apparatus (SPSA), illuminating the particles with light from a spectrally filtered arc lamp, and measuring the fluorescence spectra of the particles. The illumination was at 280 nm (20 nm FWHM) and the fluorescence spectra were measured between 300 and 450 nm. The fluorescence cross section of vegetative Bg peaks at 320 nm with a maximum cross section of 5 x 10^-14 cm^2/sr-nm-particle, while the Bg spore fluorescence peaks at 310 nm with a peak fluorescence of 8 x 10^-15 cm^2/sr-nm-particle. Pine pollen particles showed a higher fluorescence, peaking at 355 nm with a cross section of 1.7 x 10^-13 cm^2/sr-nm-particle. Integrated cross sections ranged from 3.0 x 10^-13 cm^2/sr-particle for the Bg spores to 2.25 x 10^-12 cm^2/sr-particle for the vegetative cells.

  5. Measurement and simulation of cosmic rays effects on neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Weinmann-Smith, R.; Swinhoe, M. T.; Hendricks, J.

    2016-04-01

    Neutron coincidence and multiplicity counting is a standard technique used to measure uranium and plutonium masses in unknown samples for nuclear safeguards purposes, but background sources of radiation can obscure the results. In particular, high-energy cosmic rays can produce large coincidence count contributions. Since some of the events occur in the sample itself, it is impossible to measure the background separately. This effect greatly increases the limit of detection of some low-level neutron coincidence counting applications. The cosmic ray capability of MCNP6 was used to calculate the expected coincidence rates from cosmic rays for different sample configurations, and experimental measurements were conducted for comparison. Uranium enriched to 66%, lead bricks, and an empty detector were measured in the mini Epithermal Neutron Multiplicity Counter, and MCNP6 simulations were made of the same measurements. The results show that the capability is adequate for predicting the expected background rates. Additional verification of MCNP6 was given by comparison of particle production rates to other publications, increasing confidence in MCNP6's use as a tool to lower the limit of detection. MCNP6 was then used to find particle and source information that would be difficult to obtain experimentally. The coincidence count contribution was broken down by particle type for the singles, doubles, and triples rates, and by source ((alpha,n) reactions, spontaneous fission, and cosmic rays) for each multiplicity.

  6. Aerodynamic roughness measured in the field and simulated in a wind tunnel

    NASA Technical Reports Server (NTRS)

    Sullivan, Robert; Greeley, Ronald

    1992-01-01

    This study evaluates how well values of aerodynamic surface roughness, z0, measured over scale models in wind tunnels correlate with values of z0 measured at full scale in the field. A field experiment was conducted in which values of z0 and u* (wind friction speed) were measured over three arrays of non-erodible roughness elements on a dry lake bed. Wind profiles were measured by ten anemometers on a 15 m mast under thermally neutral atmospheric conditions. Values of z0 increased from 0.00014 m (dry lake bed only) to 0.026 m with increasing roughness element density. The three roughness element arrays were simulated at 1/10 and 1/20 scale in an open-circuit atmospheric boundary-layer wind tunnel. Velocities were measured with a boundary-layer pitot-tube rake from the same relative position within the scale model arrays as the anemometers occupied relative to the field arrays. Each array at each scale was sampled three times at five freestream velocities. Average values of z0 for each model array at each scale were compared with full-scale values of z0 obtained in the field. The field vs. wind tunnel correspondence is found to be z0_field = 0.2661 (z0_model / scale)^0.8159.
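    The reported field-versus-tunnel correspondence can be applied directly; the sketch below simply evaluates that power law for a hypothetical model measurement (the function and variable names are ours).

```python
def z0_field_from_model(z0_model: float, scale: float) -> float:
    """Field aerodynamic roughness predicted from a wind-tunnel model value.

    Implements the correspondence reported above:
        z0_field = 0.2661 * (z0_model / scale) ** 0.8159
    where `scale` is the model scale (e.g. 0.1 for a 1/10-scale array).
    """
    return 0.2661 * (z0_model / scale) ** 0.8159

# Example: a 1/10-scale model array measured at z0_model = 0.002 m
print(z0_field_from_model(0.002, 0.1))   # ~0.011 m predicted full-scale roughness
```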

  7. Theory and Simulation of A Novel Viscosity Measurement Method for High Temperature Semiconductor

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Li, Chao; Ban, Heng; Scripa, Rose; Zhu, Shen; Su, Ching-Hua; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    The properties of molten semiconductors are good indicators of material structure transformation and hysteresis under temperature variations. Viscosity, one of the most important properties, is difficult to measure because of the high temperature, high pressure, and vapor toxicity of the melts. Recently, a novel method was developed by applying a rotating magnetic field to a melt sealed in a suspended quartz ampoule and measuring the transient torque exerted by the rotating melt flow on the ampoule wall. The method was designed to measure viscosity in a short time period, which is essential for evaluating temperature hysteresis. This paper compares the theoretical prediction of melt flow and ampoule oscillation with the experimental data. A theoretical model was established and the coupled fluid flow and ampoule torsional vibration equations were solved numerically. The simulation results showed good agreement with the experimental data. The results also showed that both electrical conductivity and viscosity could be calculated by fitting the theoretical results to the experimental data. The transient velocity of the melt caused by the rotating magnetic field was found to reach equilibrium in about half a minute, and the viscosity of the melt could be calculated from the amplitude of the oscillation. This would allow the measurement of viscosity in a minute or so, in contrast to the existing oscillation cup method, which requires about an hour for one measurement.

  8. The interior of 67P/C-G nucleus revealed by CONSERT measurements and simulations

    NASA Astrophysics Data System (ADS)

    Levasseur-Regourd, A.; Kofman, Wlodek; Herique, Alain; Ciarletti, Valérie; Heggy, Essam; Lasue, Jérémie

    2015-11-01

    The CONSERT bistatic radar onboard the Rosetta spacecraft and the Philae lander has begun to reveal the internal structure of Comet 67P/Churyumov-Gerasimenko, through radio tomographic mapping between the lander and the main spacecraft. The small lobe was found to be structurally homogeneous at the spatial scale of ten meters, corresponding to a few wavelengths of the CONSERT instrument [1]. The real part of the relative permittivity has been derived from the travel time of the strongest signals obtained on 12-13 November 2014, from Philae's final landing site. Since the final position of the lander was not accurately defined, numerous ray-tracing simulations were performed to constrain the ambiguities in Philae's position, using the known position of Rosetta and the propagation time and paths inside and outside the nucleus. A least-squares statistical analysis between measurements and simulations led to a bulk relative permittivity of about 1.27 ± 0.1, while the uncertainty in the lander location was reduced to an area of about 21 by 34 meters [1]. Ongoing theoretical and experimental simulations are providing more insights into the nucleus properties. Numerical ray-tracing simulations of the propagation at grazing angles have been performed for various subsurface permittivity models. They establish that a permittivity gradient in the shallow sub-surface would have a strong effect on the wave propagation. The permittivity probably decreases with depth, suggesting that a significant increase of the dust/ice ratio with depth is unlikely [2]. Laboratory simulations of the permittivity of subsurface cometary analog materials [3], and of surface porous analog samples [4], have taken place. Results suggest that the dielectric properties of 67P are mainly controlled by porosity, with a dust/ice volumetric ratio ranging from 0.4 to 2.6 and a porosity ranging from 75 to 85% [1]. Further ongoing laboratory measurements will be discussed. Support from CNES and NASA is acknowledged.

  9. Rainfall simulators - innovations seeking rainfall uniformity and automatic flow rate measurements

    NASA Astrophysics Data System (ADS)

    Bauer, Miroslav; Kavka, Petr; Strouhal, Luděk; Dostál, Tomáš; Krása, Josef

    2016-04-01

    Field rainfall simulators are used worldwide for many experimental purposes, such as runoff generation and soil erosion research. At CTU in Prague a laboratory simulator with swinging VeeJet nozzles has been operated since 2001. Since 2012 an additional terrain simulator has been in use with 4 fixed FullJet 40WSQ nozzles at 2.4 m spacing, operating over two simultaneously sprinkled experimental plots of 8 x 2 m and 1 x 1 m. In parallel to other research projects a specific problem was addressed: improving rainfall spatial uniformity and the measurement of overall intensity and surface runoff. These fundamental variables significantly affect the investigated processes as well as the resulting water balance of the plot, and therefore need to be determined as accurately as possible. Although the original nozzle setting produced a (commonly used) Christiansen uniformity index CU of over 80%, detailed measurements proved this index insufficient and revealed many unwanted rainfall extremes within the plot. Moreover, the number of rainfall intensity scenarios was limited and some of them required problematic multi-pressure operation of the water distribution system. Therefore the simulator was subjected to many substantial changes in 2015. Innovations ranged from pump intensification to a control unit upgrade. An essential change was increasing the number of nozzles to 9 in total and reducing their spacing to 1.2 m. However, new uniformity measurements did not bring any significant improvement. Tested scenarios showed equal standard deviations of the interpolated intensity rasters and an equal or slightly lower CU index. Imperfections of the sprinkling nozzles were found to be the limiting factor. Still, many other benefits came with the new setup. The whole experimental plot of 10 x 2 m is better covered by the rainfall while the water consumption is retained. Nozzles are triggered in triplets, which enables more rainfall intensity scenarios. The water distribution system is more stable due to
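    For reference, the Christiansen uniformity index CU quoted above is commonly computed from a grid of catch-can (rain gauge) depths as in the short sketch below; the sample depths are invented.

```python
import numpy as np

def christiansen_cu(depths):
    """Christiansen uniformity coefficient, in percent:
    CU = 100 * (1 - sum(|d_i - mean|) / (n * mean)),
    where d_i are catch depths measured on a regular grid over the plot."""
    d = np.asarray(depths, dtype=float)
    return 100.0 * (1.0 - np.abs(d - d.mean()).sum() / (d.size * d.mean()))

# Hypothetical 4 x 5 grid of catch depths (mm) under the simulator
catch = [[11.2, 10.8,  9.9, 10.5, 11.0],
         [10.1,  9.7, 10.3, 10.9, 10.4],
         [ 9.5, 10.2, 11.1, 10.0,  9.8],
         [10.6, 10.4,  9.6, 10.7, 10.2]]
print(f"CU = {christiansen_cu(np.ravel(catch)):.1f} %")
```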

  10. Measurements and modeling of environmental tobacco smoke leakage from a simulated smoking room

    SciTech Connect

    Wagner, J.; Sullivan, D.P.; Faulkner, D.; Gundel, L.A.; Fisk, W.J.; Alevantis, L.E.; Waldman, J.M.

    2002-03-01

    The purpose of this study is to quantify the effect of various design and operating parameters on smoking room performance. Twenty-eight experiments were conducted in a simulated smoking room with a smoking machine and an automatic door opener. Measurements were made of air flows, pressures, temperatures, two particle-phase ETS tracers, two gas-phase ETS tracers, and sulfur hexafluoride. Quantification of leakage flows, the effect of these leaks on smoking room performance and non-smoker exposure, and the relative importance of each leakage mechanism are presented. The results indicate that the first priority for an effective smoking room is to depressurize it with respect to adjoining non-smoking areas. Another important ETS leakage mechanism is the pumping action of the smoking room door. Substituting a sliding door for a standard swing-type door reduced this source of ETS leakage significantly. Measured results correlated well with model predictions (R2 = 0.82-0.99).

  11. Ultrasonic density measurement cell design and simulation of non-ideal effects.

    PubMed

    Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero

    2006-07-01

    This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell considering its geometry, the materials used in the sensor assembly, the range of liquid sample properties and the signal analysis techniques. The sensor performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of liquid sample residue that can adhere to the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.
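    One common way to implement the kind of layered one-dimensional model described above is to chain 2x2 acoustic transfer (ABCD) matrices, one per layer, and read off the input impedance seen by the transducer; the sketch below is a generic illustration with invented layer properties, not the authors' cell parameters.

```python
import numpy as np

def layer_matrix(rho, c, d, f):
    """Acoustic transfer matrix of a lossless layer (density rho, sound speed c, thickness d)."""
    k = 2.0 * np.pi * f / c          # wavenumber
    Z = rho * c                      # characteristic acoustic impedance
    return np.array([[np.cos(k * d),          1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

f = 5e6   # operating frequency, Hz (placeholder)
# Hypothetical stack between transducer and sample: metallization, adhesive, buffer.
layers = [(19300.0, 3240.0, 1e-6),    # metallization layer
          (1100.0,  2000.0, 5e-6),    # adhesive layer
          (2200.0,  5900.0, 10e-3)]   # buffer rod

M = np.eye(2, dtype=complex)
for rho, c, d in layers:
    M = M @ layer_matrix(rho, c, d, f)

# Input impedance looking into the stack terminated by the liquid sample
Z_liquid = 1000.0 * 1480.0            # water-like sample
Z_in = (M[0, 0] * Z_liquid + M[0, 1]) / (M[1, 0] * Z_liquid + M[1, 1])
print(Z_in)
```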

  12. AMPEL experiments: nitric-oxide concentration measurements in a simulated MHD combustion gas

    SciTech Connect

    Dunn, P. F.; Johnson, T. R.; Reed, C. B.

    1980-12-01

    Results are presented of recent investigations of the effect of secondary combustion on nitric oxide (NO) concentrations in a simulated magnetohydrodynamic (MHD) combustion gas. Forty-one experiments, in which NO concentration measurements were made, were conducted at the Argonne MHD Process Engineering Laboratory (AMPEL). In sixteen of those experiments, secondary air mixed with the primary combustion gas was combusted over two temperature ranges (1500-1800 K and 1700-2000 K). For all clean-fuel experiments conducted, the measured changes in NO concentration that resulted from secondary combustion were predicted to within 10%, using an Argonne modification of the NASA chemical kinetics code. This predictive code was extended to estimate changes in NO concentrations that would occur during secondary combustion in a larger MHD facility. It is concluded that, in addition to mixing and several other factors, the heat loss from the secondary combustion zone strongly influences the amount of NO formed during secondary combustion.

  13. Experimental measurements in a large separation bubble due to a simulated glaze ice shape

    NASA Technical Reports Server (NTRS)

    Bragg, M. B.; Khodadoust, A.

    1988-01-01

    The effect of a simulated glaze ice accretion on the aerodynamic performance of a NACA 0012 airfoil was studied experimentally. Two ice shapes were tested, one from an experimentally measured accretion and one from an accretion predicted using a computer model given the same icing conditions. Lift, drag and moment coefficients were measured for the airfoil with both ice shapes, smooth and rough. The aerodynamic performance of the two shapes compared well at positive, but not negative, angles of attack. Split hot-film probe velocity data were presented in the upper surface boundary layer and in the wake. Boundary layer parameters were presented for the separation bubble and in the reattached turbulent boundary layer.

  14. Simulation of a sensor array for multiparameter measurements at the prosthetic limb interface

    NASA Astrophysics Data System (ADS)

    Rowe, Gabriel I.; Mamishev, Alexander V.

    2004-07-01

    Sensitive skin is a highly desired technology for biomechanical devices, wearable computing, human-computer interfaces, exoskeletons, and, most pertinent to this paper, lower limb prosthetics. The measurement of shear stress is very important because shear effects are key factors in the development of surface abrasions and pressure sores in paraplegics and users of prosthetic/orthotic devices. A single element of a sensitive skin is simulated and characterized in this paper. Conventional tactile sensors are designed for measurement of the normal stress only, which is inadequate for comprehensive assessment of surface contact conditions. The sensitive skin discussed here is a flexible array capable of sensing shear and normal forces, as well as humidity and temperature, at each element.

  15. Measurement and simulation of the polarization-dependent Purcell factor in a microwave fishnet metamaterial

    NASA Astrophysics Data System (ADS)

    Rustomji, Kaizad; Abdeddaim, Redha; de Sterke, C. Martijn; Kuhlmey, Boris; Enoch, Stefan

    2017-01-01

    We determine, experimentally and numerically, the electric and magnetic Purcell factors in a fishnet metamaterial in the frequency range 5-15 GHz by measuring the impedance of a dipole antenna. We compare measurements and numerical simulations of the Purcell factor for transverse electric (TEz) and transverse magnetic (TMz) polarizations. For TMz polarization, the dispersion relation of the structure is hyperbolic and enhances the Purcell factor. For TEz polarization, the dispersion relation does not allow any propagating solutions and decreases the Purcell factor below the effective plasma frequency. Eigenmode calculations of the periodic unit cell of the metamaterial are used to obtain the band structure and confirm the presence of hyperbolic isofrequency surfaces. The isofrequency surfaces are used to calculate the density of states (DOS). We also use the impedance method to obtain the DOS by averaging the Purcell factor obtained at different locations over the periodic unit cell and find good agreement with DOS calculated from eigenmode calculations.

  16. Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.

    2007-01-01

    Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor, of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects, and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.

  17. Experimental measurement and numerical simulation of residual stresses in a carburized layer of a 5120 steel

    SciTech Connect

    Rangaswamy, P.; Bourke, M.A.M.; Shipley, J.C.; Goldstone, J.A.

    1995-09-01

    A combined experimental and numerical study of residual stress and microstructure has been performed for a carburized steel 5120 specimen. Specimens were cut from 5120 steel bar stock, in the shape of hockey pucks and were subsequently carburized and quenched. X-ray diffraction was used to record stress profiles through the case for the martensite and retained austenite on the two flat surfaces oriented up and down during the quench. Layer removal was performed by electropolishing. Rietveld analysis was used to determine the lattice parameters of the phases at each depth varying with both carbon content and stress. The experimental measurements are compared with a numerical simulation of the phase transformation and the metallurgical changes following the carburization and quench. Results am discussed in the context of the microstructure and the role played by the retained austenite in interpretation. In addition the carbon profile obtained from the lattice parameters is compared with profiles measured using burnout.

  18. Accidental beam loss in superconducting accelerators: Simulations, consequences of accidents and protective measures

    SciTech Connect

    Drozhdin, A.; Mokhov, N.; Parker, B.

    1994-02-01

    The consequences of an accidental beam loss in superconducting accelerators and colliders of the next generation range from the mundane to rather dramatic, i.e., from superconducting magnet quench, to overheating of critical components, to a total destruction of some units via explosion. Specific measures are required to minimize and eliminate such events as much as practical. In this paper we study such accidents taking the Superconducting Supercollider complex as an example. Particle tracking, beam loss and energy deposition calculations were done using the realistic machine simulation with the Monte-Carlo codes MARS 12 and STRUCT. Protective measures for minimizing the damaging effects of prefire and misfire of injection and extraction kicker magnets are proposed here.

  19. Simulation of vibration-induced effect on plasma current measurement using a fiber optic current sensor.

    PubMed

    Descamps, Frédéric; Aerssens, Matthieu; Gusarov, Andrei; Mégret, Patrice; Massaut, Vincent; Wuilpart, Marc

    2014-06-16

    An accurate measurement of the plasma current is of paramount importance for controlling the plasma magnetic equilibrium in tokamaks. Fiber optic current sensor (FOCS) technology is expected to be implemented to perform this task in ITER. However, during ITER operation, the vessel and the sensing fiber will be subject to vibrations and thus to time-dependent parasitic birefringence, which may significantly compromise the FOCS performance. In this paper we investigate the effects of vibrations on the plasma current measurement accuracy under ITER-relevant conditions. The simulation results show that in the case of a FOCS reflection scheme including a spun fiber and a Faraday mirror, the error induced by the vibrations is acceptable regarding the ITER current diagnostics requirements.
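    For context, in the ideal vibration-free case the FOCS reading follows from the Faraday effect combined with Ampere's law: the polarization rotation accumulated around a closed fiber loop is proportional to the enclosed current. The sketch below evaluates that idealized relation with an assumed Verdet constant and turn count; it does not model the parasitic birefringence studied in the paper.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def plasma_current(theta_rad, verdet, n_turns):
    """Idealized FOCS relation (no vibration, no parasitic birefringence):
        theta = verdet * mu0 * N * I   =>   I = theta / (verdet * mu0 * N)
    theta_rad : accumulated Faraday rotation, rad
    verdet    : Verdet constant of the sensing fiber, rad/(T*m) (assumed value below)
    n_turns   : number of fiber turns around the plasma column
    Note: a reflection scheme with a Faraday mirror doubles the accumulated
    rotation; that factor of two is omitted here for simplicity."""
    return theta_rad / (verdet * MU0 * n_turns)

# Placeholder numbers only: ~1 rad/(T*m) Verdet constant, 3 turns, 20 deg rotation
print(plasma_current(math.radians(20.0), 1.0, 3))   # ~9e4 A
```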

  20. Histogramming of the Charged Particle Measurements with MSL/RAD - Comparison of Histogram Data with Simulations

    NASA Astrophysics Data System (ADS)

    Ehresmann, B.; Zeitlin, C.; Hassler, D. M.; Wimmer-Schweingruber, R. F.; Boettcher, S.; Koehler, J.; Martin, C.; Brinza, D.; Rafkin, S. C.

    2012-12-01

    The Radiation Assessment Detector (RAD) on-board the Mars Science Laboratory (MSL) is designed to measure a broad range of energetic particle radiation. A significant part of this radiation consists of charged particles, which mainly stem from cosmic background radiation, Solar particle events, and secondaries created by the interaction of these particles with the Martian atmosphere and soil. To measure charged particles RAD is equipped with a set of detectors: a particle telescope consisting of three silicon Solid-State Detectors (SSDs), a CsI scintillator and a plastic scintillator, as well as a further plastic scintillator used as anti-coincidence. RAD uses an elaborate post-processing logic to analyze if a measured event qualifies as a charged particle, as well as to distinguish between particles stopping in any one of the detectors and particles penetrating the whole detector stack. RAD then arranges these qualifying events in an appropriate stopping or penetrating charged particle histogram, reducing the data volume necessary to maintain crucial information about the measured particle. For ground-based data analysis it is of prime importance to derive information, such as particle species or energy, from the data in the downloaded histograms. Here, we will present how the chosen binning of these histograms enables us to derive this information. Pre-flight, we used the Monte-Carlo code GEANT4 to simulate the expected particle radiation and its interactions with a full model of the RAD sensor head. By mirroring the on-board processing logic, we derived statistics of which particle species and energies populate any one bin in the set of charged particle histograms. Finally, we will compare the resulting histogram data from RAD cruise and surface observations with simulations. RAD is supported by NASA (HEOMD) under JPL subcontract #1273039 to SwRI, and by DLR in Germany under contract to Christian-Albrechts-Universitaet zu Kiel (CAU).

  1. Chaos game representation of functional protein sequences, and simulation and multifractal analysis of induced measures

    NASA Astrophysics Data System (ADS)

    Yu, Zu-Guo; Xiao, Qian-Jun; Shi, Long; Yu, Jun-Wu; Vo, Anh

    2010-06-01

    Investigating the biological function of proteins is a key aspect of protein studies, and bioinformatic methods have become important for this purpose. In this paper, we first give the chaos game representation (CGR) of randomly-linked functional protein sequences, then propose the use of recurrent iterated function systems (RIFS) from fractal theory to simulate the measure based on their chaos game representations. This method helps to extract features of functional protein sequences and, furthermore, of the biological functions of these proteins. A multifractal analysis of the measures based on the CGRs of randomly-linked functional protein sequences is then performed. We find that the CGRs have clear fractal patterns. The numerical results show that the RIFS can simulate the measure based on the CGR very well. The relative standard error and the estimated probability matrix in the RIFS do not depend on the order in which the functional protein sequences are linked. The estimated probability matrices in the RIFS with different biological functions are evidently different. Hence the estimated probability matrices in the RIFS can be used to characterise the difference among linked functional protein sequences with different biological functions. From the values of the Dq curves, one sees that these functional protein sequences are not completely random. The Dq curves of all linked functional proteins studied are multifractal-like and sufficiently smooth for the Cq (analogous to specific heat) curves to be meaningful. Furthermore, the Dq curves of the measure μ based on their CGRs for different orders of linking the functional protein sequences are almost identical if q >= 0. Finally, the Cq curves of all linked functional proteins resemble a classical phase transition at a critical point.
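    To make the CGR construction concrete, the sketch below iterates the usual midpoint map for a toy amino-acid sequence, placing the 20 residue types on a circle; this is a generic CGR illustration, and the vertex layout is an assumption rather than necessarily the one used by the authors.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Place the 20 residue types on the unit circle (one common CGR convention).
angles = 2.0 * np.pi * np.arange(20) / 20.0
VERTICES = {aa: np.array([np.cos(a), np.sin(a)]) for aa, a in zip(AMINO_ACIDS, angles)}

def cgr_points(sequence):
    """Chaos game representation: starting from the centre, repeatedly move
    halfway toward the vertex assigned to the current residue."""
    p = np.zeros(2)
    pts = []
    for aa in sequence:
        p = (p + VERTICES[aa]) / 2.0
        pts.append(p.copy())
    return np.array(pts)

toy_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # invented sequence, for illustration only
points = cgr_points(toy_seq)
print(points[:5])
```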

  2. SO2 over Central China: Measurements, Numerical Simulations and the Tropospheric Sulfur Budget

    NASA Technical Reports Server (NTRS)

    He, Hao; Li, Can; Loughner, Christopher P.; Li, Zhangqing; Krotkov, Nickolay A.; Yang, Kai; Wang, Lei; Zheng, Youfei; Bao, Xiangdong; Zhao, Guoqiang; Dickerson, Russell R.

    2012-01-01

    SO2 in central China was measured in situ from an aircraft and remotely using the Ozone Monitoring Instrument (OMI) on the Aura satellite; the results were used to develop a numerical tool for evaluating the tropospheric sulfur budget - sources, sinks, transformation and transport. In April 2008, measured ambient SO2 concentrations decreased from approx. 7 ppbv near the surface to approx. 1 ppbv at 1800 m altitude (an effective scale height of approx. 800 m), but distinct SO2 plumes were observed between 1800 and 4500 m, the aircraft's ceiling. These free tropospheric plumes play a major role in the export of SO2 and in the accuracy of OMI retrievals. The mean SO2 column contents from the aircraft measurements (0.73 DU, Dobson Units) and the operational OMI SO2 products (0.63+/-0.26 DU) were close. The OMI retrievals were well correlated with the in situ measurements (r = 0.84), but showed a low bias (slope = 0.54). A new OMI retrieval algorithm was tested and showed improved agreement and bias (r = 0.87, slope = 0.86). The Community Multiscale Air Quality (CMAQ) model was used to simulate sulfur chemistry, exhibiting reasonable agreement (r = 0.62, slope = 1.33) with the in situ SO2 columns. The mean CMAQ SO2 loading over central and eastern China was 54 kT, approx. 30% more than the estimate from the OMI SO2 products, 42 kT. These numerical simulations, constrained by observations, indicate that approx. 50% (35 to 61%) of the anthropogenic sulfur emissions were transported downwind, and the overall lifetime of tropospheric SO2 was 38+/-7 h.

  3. The determination of beam quality correction factors: Monte Carlo simulations and measurements.

    PubMed

    González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R

    2009-08-07

    Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose in a user's beam contains the well-known beam quality correction factor, which is required whenever the quality of the radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors that refer to 60Co gamma radiation as the calibration quality. These have been calculated for a series of ionization chambers and radiation qualities based on formulae that are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are presented: Monte Carlo simulation using the EGSnrc system and an experimental method based on comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers in several radiation beams. Four of the chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at the PTB Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulations with an uncertainty at least comparable to that given in current dosimetry protocols.
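    As an illustration of the experimental comparison method, the cross-calibration arithmetic reduces to a ratio of ratios of chamber readings, given a reference chamber with a known beam quality correction factor; the sketch below uses invented readings and assumes both chambers measure the same dose at each quality.

```python
def kq_from_comparison(kq_ref, m_ref_q, m_ref_co, m_test_q, m_test_co):
    """Beam quality correction factor of a test chamber from comparison with a
    reference chamber, assuming both measure the same absorbed dose to water at
    beam quality Q and at 60Co:
        kQ_test = kQ_ref * (M_ref_Q / M_ref_Co) / (M_test_Q / M_test_Co)
    where the M values are fully corrected chamber readings."""
    return kq_ref * (m_ref_q / m_ref_co) / (m_test_q / m_test_co)

# Invented readings (nC), already corrected for influence quantities
print(kq_from_comparison(kq_ref=0.992,
                         m_ref_q=10.21, m_ref_co=10.35,
                         m_test_q=9.87, m_test_co=9.95))   # ~0.987
```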

  4. Simulation of soil organic carbon in different soil size fractions using 13Carbon measurement data

    NASA Astrophysics Data System (ADS)

    Gottschalk, P.; Bellarby, J.; Chenu, C.; Foereid, B.; Wattenbach, M.; Zingore, S.; Smith, J.

    2009-04-01

    We simulate the soil organic carbon (SOC) dynamics at a chronosequence site in France using the Rothamsted Carbon (RothC) model. The site exhibits a transition from C3 plants, dominated by pine forest, to a conventional C4 maize rotation. The different 13C signatures of the forest plants and maize are used to distinguish between the woodland-derived carbon (C) and the maize-derived C. The model is evaluated against total SOC and C derived from forest and maize, respectively. The SOC dynamics of the five SOC pools of the model (decomposable plant material (DPM), resistant plant material (RPM), biomass, humus and inert C) are also compared to the SOC dynamics measured in different soil size fractions. These fractions are >50 μm (particulate organic matter), 2-50 μm (silt-associated SOC) and <2 μm (clay-associated SOC). Other authors have shown that the RPM pool of the model corresponds well to SOC measured in the soil size fraction >50 μm and that the sum of the other pools corresponds well to the SOC measured in the soil size fraction <50 μm. Default model applications show that the model underestimates the fast drop in forest C stocks in the first 20 years after land-use change and overestimates the accumulation of maize C. Several hypotheses were tested to evaluate the simulations. Input data and internal model parameter uncertainties had minor effects on the simulation results. Accounting for erosion and implementing a simple tillage routine did not improve the simulation fit to the data. We therefore hypothesize that a generic process that is not yet explicitly accounted for in the RothC model could explain the loss in soil C after land-use change. Such a process could be the loss of the physical protection of soil organic matter, as would be observed following cultivation of a previously uncultivated soil. Under native conditions a fraction of organic matter is protected in stable soil aggregates. These aggregates are physically disrupted by continuous and

  5. Validation of numerical simulation with PIV measurements for two anastomosis models.

    PubMed

    Zhang, Jun-Mei; Chua, Leok Poh; Ghista, Dhanjoo N; Zhou, Tong-Ming; Tan, Yong Seng

    2008-03-01

    Hemodynamics is widely believed to influence coronary artery bypass graft (CABG) stenosis. Although distal anastomosis has been extensively investigated, further studies on proximal anastomosis are still necessary, as the extent and initiation of the stenosis process may be influenced by the flow of the proximal anastomosis per se. Therefore, in this study, two models (i.e. 90 degrees and 135 degrees anastomotic models) were designed and constructed to simulate a proximal anastomosis of CABG for the left and right coronary arteries, respectively. Flow characteristics for these models were studied experimentally in order to validate the simulation results found earlier. PIV measurements were carried out on two Pyrex glass models, so that the disturbed flow (stagnation point, flow separation and vortex) found in both proximal anastomosis models using numerical simulation, could be verified. Consequently, a fair agreement between numerical and experimental data was observed in terms of flow characteristics, velocity profiles and wall shear stress (WSS) distributions under both steady and pulsatile flow conditions. The discrepancy was postulated to be due to the difference in detailed geometry of the physical and computational models, due to manufacturing limitations. It was not possible to reproduce the exact shape of the computational model when making the Pyrex glass model. The analysis of the hemodynamic parameters based on the numerical simulation study also suggested that the 135 degrees proximal anastomosis model would alleviate the potential of intimal thickening and/or atherosclerosis, more than that of a 90 degrees proximal anastomosis model, as it had a lower variation range of time-averaged WSS and the lower segmental average of WSSG.

  6. ALE3D Simulation and Measurement of Violence in a Fast Cookoff Experiment with LX-10

    SciTech Connect

    McClelland, M A; Maienschein, J L; Howard, W M; deHaven, M R

    2006-11-22

    We performed a computational and experimental analysis of fast cookoff of LX-10 (94.7% HMX, 5.3% Viton A) confined in a 2 kbar steel tube with reinforced end caps. A Scaled-Thermal-Explosion-eXperiment (STEX) was completed in which three radiant heaters were used to heat the vessel until ignition, resulting in a moderately violent explosion after 20.4 minutes. Thermocouple measurements showed tube temperatures as high as 340 C at ignition and LX-10 surface temperatures as high as 279 C, which is near the melting point of HMX. Three micro-power radar systems were used to measure mean fragment velocities of 840 m/s. Photonic Doppler Velocimeters (PDVs) showed a rapid acceleration of fragments over 80 μs. A one-dimensional ALE3D cookoff model at the vessel midplane was used to simulate the heating, thermal expansion, LX-10 decomposition, and closing of the gap between the HE (High Explosive) and the vessel wall. Although the ALE3D simulation terminated before ignition, the model provided a good representation of heat transfer through the case and across the dynamic gap to the explosive.

  7. Ultrasound modulated light blood flow measurement using intensity autocorrelation function: a Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Tsalach, A.; Metzger, Y.; Breskin, I.; Zeitak, R.; Shechter, R.

    2014-03-01

    Development of techniques for continuous measurement of regional blood flow, and in particular cerebral blood flow (CBF), is essential for monitoring critical care patients. Recently, a novel technique based on ultrasound modulation of light was developed for non-invasive, continuous CBF monitoring (termed ultrasound-tagged light, UTL or UT-NIRS), and shown to correlate with readings of 133Xe SPECT [1] and laser Doppler [2]. Coherent light is introduced into the tissue concurrently with an ultrasound (US) field. Displacement of scattering centers within the sampled volume, induced by Brownian motion, blood flow and the US field, affects the photons' temporal correlation. Hence, the temporal fluctuations of the obtained speckle pattern provide dynamic information about the blood flow. We developed a comprehensive simulation combining the effects of Brownian motion, US and flow on the obtained speckle pattern. Photon trajectories within the tissue are generated using a Monte-Carlo based model. Then, the temporal changes in the optical path due to displacement of scattering centers are determined, and the corresponding interference pattern over time is derived. Finally, the light intensity autocorrelation function of a single speckle is calculated, from which the tissue decorrelation time is determined. The simulation's results are compared with in-vitro experiments using a digital correlator, demonstrating decorrelation time prediction within the 95% confidence interval. This model may assist in the development of optical methods for blood flow measurements and particularly methods using the acousto-optic effect.
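    A minimal numerical version of the final step, the normalized intensity autocorrelation of a single speckle, is sketched below on a synthetic intensity trace; it is not the authors' Monte-Carlo model, and the decorrelation time and sampling rate are invented.

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
    I = np.asarray(intensity, dtype=float)
    mean_sq = I.mean() ** 2
    return np.array([np.mean(I[:I.size - lag] * I[lag:]) / mean_sq
                     for lag in range(max_lag)])

# Synthetic single-speckle intensity with a field correlation time of ~1 ms
rng = np.random.default_rng(0)
dt, n = 1e-5, 20000                                   # 10 us sampling, 0.2 s record
kernel = np.exp(-np.arange(500) * dt / 1e-3)          # exponential memory kernel
field = np.convolve(rng.standard_normal(n), kernel, mode="same")
I = field ** 2                                        # intensity of one speckle

curve = g2(I, max_lag=300)
# Lag at which g2 - 1 has dropped to 1/e of its zero-lag value
tau_c = np.argmax(curve < 1.0 + (curve[0] - 1.0) / np.e) * dt
print(f"decorrelation time ~ {tau_c * 1e3:.2f} ms")
```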

  8. Measurements in a leading-edge separation bubble due to a simulated airfoil ice accretion

    NASA Technical Reports Server (NTRS)

    Bragg, M. B.; Khodadoust, A.; Spring, S. A.

    1992-01-01

    The separation bubble formed on an airfoil at low Reynolds number behind a simulated leading-edge glaze ice accretion is studied experimentally. Surface pressure and split hot-film measurements as well as flow visualization studies of the bubble reattachment point are reported. The simulated ice generates an adverse pressure gradient that causes a laminar separation bubble of the long bubble type to form. The boundary layer separates at a location on the ice accretion that is independent of angle of attack and reattaches at a downstream location 5-40 percent chord behind the leading edge, depending on the angle of attack. Velocity profiles show a large region of reverse flow that extends up from the airfoil surface as much as 2.5 percent chord. After reattachment, a thick distorted turbulent boundary layer exists. The separation bubble growth and reattachment are clearly seen in the plots of boundary-layer momentum thickness vs surface distance. Local minima and maxima in the boundary-layer momentum thickness development compare well with the shear layer transition point as indicated by the surface pressures and the reattachment point as measured from surface oil flow, respectively.

  9. PSF modeling by spikes simulations and wings measurements for the MOONS multi fiber spectrograph

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Lee, D.; Vitali, F.; Royer, F.; Oliva, E.

    2016-08-01

    The optical design of MOONS, the next generation thousand-fiber NIR spectrograph for the VLT, involves both on-axis reflective collimators and on-axis very fast reflective cameras, which yield both beam obstruction, due to the fiber slit and detector support, and image spread, due to propagation within the detector substrate. The need to model and control i) the effect of the diffraction spikes produced by these obstructions, ii) the detector-induced shape variation of the Point Spread Function (PSF), and iii) the intensity profile of the PSF wings, led us to perform both simulations and lab measurements in order to optimize the spider design and build a reliable PSF model, useful for simulating realistic raw images for testing the data reduction. Starting from the unobstructed PSF variation, as computed with the ZEMAX software, we numerically computed the diffraction spikes for different spider shapes, to which we added the PSF wing profile as measured on a sample of the MOONS VPH diffraction grating. Finally, we implemented the PSF defocusing due to the thick detector (for the visible channel), convolved the PSF with the fiber core image, and added the optical ghosts, thus obtaining a detailed and realistic PSF model that we use for spectral extraction testing, cross-talk estimation, and sensitivity predictions.

  10. Measurement and simulation of partial discharge in oil impregnated pressboard with an electrical aging process

    NASA Astrophysics Data System (ADS)

    Li, Junhao; Si, Wenrong; Yao, Xiu; Li, Yanming

    2009-10-01

    A continuous test on oil-impregnated pressboard insulation with an internal void defect was developed, and the phase-resolved partial discharge (PRPD) pattern of partial discharge (PD) signals during the electrical aging process was measured. Two void structures with different void volumes were used in this experiment. The results show that the PD pattern can be classified into five stages, with great diversity observed in the first four stages. The larger void volume leads to a larger PD magnitude. A numerical simulation model based on the physical discharge process was used, and the causes of the PD pattern changes were interpreted by comparison with the simulation results. The initial values and change tendencies of the gas pressure and surface conductivity were determined through experiment. The model parameters in the different stages were studied, providing insight into the physical changes in the void during electrical aging. The results provide rules for the identification of the electrical aging stage through partial discharge measurements.

  11. Validation of Monte-Carlo simulations with measurements at the ICON beam-line at SINQ

    NASA Astrophysics Data System (ADS)

    Giller, L.; Filges, U.; Kühne, G.; Wohlmuther, M.; Zanini, L.

    2008-02-01

    ICON is the new cold neutron imaging facility at the neutron spallation source SINQ. The ICON facility is placed at beam-line S52 with a direct view of the cold liquid D2 moderator. The beam-line includes a 4.4 m long collimation section followed by an 11 m long flight path to the imaging system. The essential part of the collimation section is composed of six revolving drums and a variable aperture wheel. Depending on the investigated object, different apertures are used. Measurements have shown that each setup has a different spatial neutron flux distribution and specific beam profiles. The measured beam profiles have been used to validate results of simulations coupling the Monte-Carlo program MCNPX with the neutron ray-tracing program McStas. In a first step, MCNPX was used to calculate neutron spectra close to the SINQ target, at the entrance of the collimation section. These results served as input for McStas, in which the beam-line itself was simulated. In the present paper, experimental and theoretical results are compared and discussed.

  12. Model simulations of the first aerosol indirect effect and comparison of cloud susceptibility to satellite measurements

    SciTech Connect

    Chuang, C; Penner, J E; Kawamoto, K

    2002-03-08

    Present-day global anthropogenic emissions contribute more than half of the mass in submicron particles, primarily due to sulfate and carbonaceous aerosol components derived from fossil fuel combustion and biomass burning. These anthropogenic aerosols modify the microphysics of clouds by serving as cloud condensation nuclei (CCN) and enhance the reflectivity of low-level water clouds, leading to a cooling effect on climate (the Twomey effect or first indirect effect). The magnitude of the first aerosol indirect effect is associated with cloud frequency as well as a quantity representing the sensitivity of cloud albedo to changes in cloud drop number concentration. This quantity is referred to as cloud susceptibility [Twomey, 1991]. Analysis of satellite measurements demonstrates that marine stratus clouds are likely to be of higher susceptibility than continental clouds because of their lower number concentrations of cloud drops [Platnick and Twomey, 1994]. Here, we use an improved version of a fully coupled climate/chemistry model [Chuang et al., 1997] to calculate the global concentrations of sulfate, dust, sea salt, and carbonaceous aerosols (biomass smoke and fossil fuel organic matter and black carbon). We investigated the impact of anthropogenic aerosols on cloud susceptibility and calculated the associated changes of shortwave radiative fluxes at the top of the atmosphere. We also examined the correspondence between the model simulation of cloud susceptibility and that inferred from satellite measurements to test whether our simulated aerosol concentrations and aerosol/cloud interactions give a faithful representation of these features.
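    For reference, cloud susceptibility in this kind of analysis is often evaluated analytically from the cloud albedo A and the drop number concentration N as dA/dN ~ A(1 - A)/(3N), valid for fixed cloud liquid water content; the short sketch below applies that textbook expression to invented values and is not taken from the paper itself.

```python
def cloud_susceptibility(albedo, n_drops_cm3):
    """Cloud susceptibility dA/dN ~ A(1 - A) / (3 N), per (drop cm^-3),
    assuming fixed cloud liquid water content."""
    return albedo * (1.0 - albedo) / (3.0 * n_drops_cm3)

# Marine stratus (low N) versus continental cloud (high N), invented values
print(cloud_susceptibility(0.5, 50.0))    # ~1.7e-3 per cm^-3: more susceptible
print(cloud_susceptibility(0.5, 500.0))   # ~1.7e-4 per cm^-3: less susceptible
```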

  13. Kinetic Monte Carlo Simulations and Molecular Conductance Measurements of the Bacterial Decaheme Cytochrome MtrF

    SciTech Connect

    Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro; Shi, Liang; El-Naggar, Mohamed Y.

    2014-09-05

    Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
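    The flavor of such a kinetic Monte Carlo calculation can be conveyed with a toy Gillespie-style model of single-electron hopping along a linear chain of ten hemes, with injection at one end and ejection at the other; the rates below are placeholders, not the MtrF heme-to-heme electron transfer rates.

```python
import numpy as np

rng = np.random.default_rng(1)

N_HEMES = 10
k_hop = 1e7    # nearest-neighbour hopping rate, 1/s (placeholder)
k_in = 1e6     # injection rate onto heme 0, 1/s (placeholder)
k_out = 1e6    # ejection rate from the last heme, 1/s (placeholder)

occ = np.zeros(N_HEMES, dtype=bool)   # single-occupancy hemes
t, ejected = 0.0, 0

for _ in range(50000):
    # Enumerate allowed events (an electron can only hop onto an empty heme).
    events = []
    if not occ[0]:
        events.append(("in", None, k_in))
    if occ[-1]:
        events.append(("out", None, k_out))
    for i in range(N_HEMES - 1):
        if occ[i] and not occ[i + 1]:
            events.append(("fwd", i, k_hop))
        if occ[i + 1] and not occ[i]:
            events.append(("bwd", i, k_hop))

    rates = np.array([e[2] for e in events])
    total = rates.sum()
    t += rng.exponential(1.0 / total)                       # Gillespie time step
    kind, i, _ = events[rng.choice(len(events), p=rates / total)]

    if kind == "in":
        occ[0] = True
    elif kind == "out":
        occ[-1] = False
        ejected += 1
    elif kind == "fwd":
        occ[i], occ[i + 1] = False, True
    else:
        occ[i], occ[i + 1] = True, False

print(f"steady-state electron flux ~ {ejected / t:.3g} electrons/s")
```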

  14. Measurements and simulations of wakefields at the Accelerator Test Facility 2

    NASA Astrophysics Data System (ADS)

    Snuverink, J.; Ainsworth, R.; Boogert, S. T.; Cullinan, F. J.; Lyapin, A.; Kim, Y. I.; Kubo, K.; Kuroda, S.; Okugi, T.; Tauchi, T.; Terunuma, N.; Urakawa, J.; White, G. R.

    2016-09-01

    Wakefields are an important factor in accelerator design, and are a real concern when preserving the low beam emittance in modern machines. Charge dependent beam size growth has been observed at the Accelerator Test Facility (ATF2), a test accelerator for future linear collider beam delivery systems. Part of the explanation of this beam size growth is wakefields. In this paper we present numerical calculations of the wakefields produced by several types of geometrical discontinuities in the beam line as well as tracking simulations to estimate the induced effects. We also discuss precision beam kick measurements performed with the ATF2 cavity beam position monitor system for a test wakefield source in a movable section of the vacuum chamber. Using an improved model independent method we measured a wakefield kick for this movable section of about 0.49 V/pC/mm, which, compared to the calculated value from electromagnetic simulations of 0.41 V/pC/mm, is within the systematic error.

  15. Soldier/Hardware-in-the-loop Simulation-based Combat Vehicle Duty Cycle Measurement: Duty Cycle Experiment 2

    DTIC Science & Technology

    2007-01-24

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Keywords: motion base simulator, hybrid electric power system, man-in-the-loop, hardware-in-the-loop, long haul. ABSTRACT: This paper describes a human-in-the-loop motion-based simulator interfaced to hybrid-electric power system hardware, both of which were used to measure the

  16. Simulation research on improved regularized solution of the inverse problem in spectral extinction measurements.

    PubMed

    Mroczka, Janusz; Szczuczyński, Damian

    2012-04-10

    We present further results of the simulation research on the constrained regularized least squares (CRLS) solution of the ill-conditioned inverse problem in spectral extinction (turbidimetric) measurements, which we originally presented in this journal [Appl. Opt. 49, 4591 (2010)]. The inverse problem consists of determining the particle size distribution (PSD) function of a particulate system on the basis of a measured extinction coefficient as a function of wavelength. In our previous paper, it was shown that under assumed conditions the problem can be formulated in terms of the discretized Fredholm integral equation of the first kind. The CRLS method incorporates into the regularized least squares (RLS) method two constraints which the sought PSD must satisfy: nonnegativity of the PSD values and normalization of the PSD to unity when integrated over the whole range of particle size. This leads to a quadratic programming problem, which is solved within this research by means of an active set algorithm. The simulation research that is the subject of the present paper is a continuation and extension of the research described in our previous paper. In the present research, the performance of the CRLS method variants is compared not only to the corresponding RLS method variants but also to other regularization techniques, the truncated generalized singular value decomposition and the filtered generalized singular value decomposition, as well as to nonlinear iterative algorithms, the Twomey algorithm and the Twomey-Markowski algorithm. Moreover, two methods of selecting the optimum value of the regularization parameter are considered: the L-curve method and the generalized cross validation method. The results of our simulation research provide even stronger evidence that the CRLS method reconstructs the PSD considerably better than the other inversion methods, in terms of better fidelity and smaller uncertainty.
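    For orientation, the sketch below shows one way such a nonnegativity- and normalization-constrained regularized solution can be computed, by stacking the Tikhonov penalty and the normalization condition into an augmented nonnegative least squares problem (scipy's nnls is itself a Lawson-Hanson active-set solver). The kernel, noise level, regularization parameter and weights are synthetic placeholders, not those of the paper.

```python
# Hedged sketch: nonnegative, normalized Tikhonov solution of a discretized
# Fredholm equation K f = g, in the spirit of the CRLS approach.  The kernel,
# data and parameter values below are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

n_size, n_wl = 60, 40
r = np.linspace(0.05, 2.0, n_size)              # particle radius grid (um)
wl = np.linspace(0.3, 1.1, n_wl)                # wavelength grid (um)
K = np.exp(-(wl[:, None] - r[None, :])**2)      # placeholder extinction kernel
dr = r[1] - r[0]

f_true = np.exp(-(r - 0.8)**2 / 0.05)
f_true /= f_true.sum() * dr                     # normalized "true" PSD
g = K @ f_true * dr + 1e-3 * np.random.default_rng(0).normal(size=n_wl)

lam = 1e-2                                      # regularization parameter (e.g. from L-curve or GCV)
w = 1e3                                         # weight enforcing integral(f) = 1
A = np.vstack([K * dr,                          # data-fit rows
               lam * np.eye(n_size),            # zeroth-order Tikhonov penalty
               w * dr * np.ones((1, n_size))])  # normalization row
b = np.concatenate([g, np.zeros(n_size), [w]])

f_est, _ = nnls(A, b)                           # nonnegativity enforced by NNLS
print("recovered integral of the PSD:", f_est.sum() * dr)
```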

  17. Performance and efficiency of geotextile-supported erosion control measures during simulated rainfall events

    NASA Astrophysics Data System (ADS)

    Obriejetan, Michael; Rauch, Hans Peter; Florineth, Florin

    2013-04-01

    Erosion control systems consisting of technical and biological components are widely accepted and proven to work well if installed properly with regard to site-specific parameters. A wide range of implementation measures exists for this specific protection purpose, and new, in particular technical, solutions are constantly introduced to the market. Nevertheless, the vegetation aspects of erosion control measures are frequently disregarded and deserve greater consideration against the backdrop of developing and implementing adaptation strategies in an environment altered by climate-change-associated effects. Technical auxiliaries such as geotextiles typically used for slope protection (nettings, blankets, turf reinforcement mats etc.) address specific features, and owing to their structural and material diversity, differing effects on sediment yield, surface runoff and vegetation development seem evident. Nevertheless, there is a knowledge gap concerning the mutual interaction between technical and biological components, and comparable data on the erosion-reducing effects of technical-biological erosion protection systems are insufficient. In this context, an experimental arrangement was set up to study the correlated influences of geotextiles and vegetation and to determine their combined effects on surface runoff and soil loss during simulated heavy rainfall events. Sowing vessels filled with topsoil serve as test units, covered with various organic and synthetic geotextiles and sown with a reliable drought-resistant seed mixture. Regular vegetation monitoring was carried out, and two rainfall simulation runs with four repetitions of each variant were conducted. A portable rainfall simulator with a standardized rainfall intensity of 240 mm h-1 and a three-minute rainfall duration was used to stress these systems at different stages of plant development at an inclination of 30 degrees. First results show

  18. Open Source Software Openfoam as a New Aerodynamical Simulation Tool for Rocket-Borne Measurements

    NASA Astrophysics Data System (ADS)

    Staszak, T.; Brede, M.; Strelnikov, B.

    2015-09-01

    Sounding rockets are the only way to perform in-situ measurements, which are very important experimental studies for atmospheric science, in the mesosphere/lower thermosphere (MLT). The drawback of using rockets is the shock wave that appears because of the very high speed of the rocket motion (typically about 1000 m/s). This shock wave disturbs the density, temperature and velocity fields in the vicinity of the rocket compared to the undisturbed values of the atmosphere. This effect, however, can be quantified, and the measured data have to be corrected not just for precision but to make them usable at all. The commonly accepted and widely used tool for these calculations is the Direct Simulation Monte Carlo (DSMC) technique developed by G. A. Bird, which is available as a stand-alone program limited to a single processor. Apart from the complications of simulating flows around bodies in the different flow regimes of the MLT altitude range, which arise from the exponential density change of several orders of magnitude, a particular hardware configuration introduces significant difficulty for aerodynamical calculations: the grid sizes must satisfy both the demands of an adequate DSMC and a good resolution of geometries with large scale differences. This makes either the calculation time unreasonably long or even prevents the calculation algorithm from converging. In this paper we apply the free open source software OpenFOAM (licensed under the GNU GPL) for a three-dimensional CFD simulation of the flow around a sounding rocket instrumentation. An advantage of this software package, among other things, is that it can run on high performance clusters, which are easily scalable. We present the first results and discuss the potential of the new tool in applications for sounding rockets.

  19. Turbulence and transport reduction with innovative plasma shapes in TCV -- correlation ECE measurement and gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Pochelon, Antoine

    2010-11-01

    Due to turbulence, core energy transport in tokamaks generally exceeds collisional transport by at least an order of magnitude. It is therefore crucial to understand the instabilities driving the turbulent state and to find ways to control them. Shaping the plasma is one of these fundamental tools. In low collisionality plasmas, such as in a reactor, changing triangularity from positive (delta=+0.4) to negative triangularity (delta=-0.4) is shown on TCV to reduce the energy transport by a factor of two. This opens the possibility of having an H-mode-like confinement time within an L-mode edge, or reduced ELMs. An optimum triangularity can be sought between steep edge barriers (delta>0), plagued by large ELMs, and improved core confinement (delta<0). Recent correlation ECE measurements show that the reduction of transport at negative delta is reflected in a reduction by a factor of two of both the amplitude of temperature fluctuations in the broadband frequency range 30-150 kHz and the fluctuation correlation length, measured at mid-radius. In addition, the fluctuation amplitude is reduced with increasing collisionality, consistent with a reduction of the Trapped Electron Mode (TEM) drive. The effect of negative triangularity on turbulence and transport is compared to gyrokinetic code results: first, global linear simulations predict a shorter radial TEM wavelength, consistent with the shorter radial turbulence correlation length observed; second, at least close to the strongly shaped plasma boundary, local nonlinear simulations predict lower TEM-induced transport with decreased triangularity. Calculations are now being extended to global nonlinear simulations.

  20. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. Considering this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, in order to perform in situ experiments with representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large-volume presses provide sample volumes 3 to 7 orders of magnitude bigger than in diamond anvil cells, far beyond transition-zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small, and even adjustable, temperature gradients over the whole sample, this technique makes anisotropy and grain-boundary effects in complex systems accessible in principle for measurements of elastic and inelastic properties. The measurements of both elastic wave velocities also have no limitations for opaque and encapsulated samples. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ measurement of sample deformation by X-radiography. Time-resolved X-radiography makes in situ falling-sphere viscosimetry and even the

  1. Geographic Information Office

    USGS Publications Warehouse

    ,

    2004-01-01

    The Geographic Information Office (GIO) is the principal information office for the U.S. Geological Survey (USGS), focused on: Information Policy and Services, Information Technology, Science Information, Information Security, and the Federal Geographic Data Committee/Geospatial One Stop.

  2. Office of Child Care

    MedlinePlus

    ... Early Head Start-Child Care Partnerships. Review the profiles. What is the Office of Child Care (OCC)? The Office of Child Care supports low-income working families through child care financial assistance and ...

  3. Interactive Office user's manual

    NASA Technical Reports Server (NTRS)

    Montgomery, Edward E.; Lowers, Benjamin; Nabors, Terri L.

    1990-01-01

    Given here is a user's manual for Interactive Office (IO), an executive office tool for organization and planning, written specifically for Macintosh. IO is a paperless management tool to automate a related group of individuals into one productive system.

  4. LMAL Accounting Office 1936

    NASA Technical Reports Server (NTRS)

    1936-01-01

    Accounting Office: The Langley Memorial Aeronautical Laboratory's accounting office, 1936, with photographs of the Wright brothers on the wall. Although the Lab was named after Samuel P. Langley, most of the NACA staff held the Wrights as their heroes.

  5. Office of Child Care

    MedlinePlus

    ... examples related to the health and safety training provisions of the final regulations published by the Office ... specific information and examples related to the subsidy provisions of the final regulations published by the Office ...

  6. Simulated increases in body fat and errors in bone mineral density measurements by DXA and QCT.

    PubMed

    Yu, Elaine W; Thomas, Bijoy J; Brown, J Keenan; Finkelstein, Joel S

    2012-01-01

    Major alterations in body composition, such as with obesity and weight loss, have complex effects on the measurement of bone mineral density (BMD) by dual-energy X-ray absorptiometry (DXA). The effects of altered body fat on quantitative computed tomography (QCT) measurements are unknown. We scanned a spine phantom by DXA and QCT before and after surrounding with sequential fat layers (up to 12 kg). In addition, we measured lumbar spine and proximal femur BMD by DXA and trabecular spine BMD by QCT in 13 adult volunteers before and after a simulated 7.5 kg increase in body fat. With the spine phantom, DXA BMD increased linearly with sequential fat layering at the normal (p < 0.01) and osteopenic (p < 0.01) levels, but QCT BMD did not change significantly. In humans, fat layering significantly reduced DXA spine BMD values (mean ± SD: -2.2 ± 3.7%, p = 0.05) and increased the variability of measurements. In contrast, fat layering increased QCT spine BMD in humans (mean ± SD: 1.5 ± 2.5%, p = 0.05). Fat layering did not change mean DXA BMD of the femoral neck or total hip in humans significantly, but measurements became less precise. Associations between baseline and fat-simulation scans were stronger for QCT of the spine (r(2) = 0.97) than for DXA of the spine (r(2) = 0.87), total hip (r(2) = 0.80), or femoral neck (r(2) = 0.75). Bland-Altman plots revealed that fat-associated errors were greater for DXA spine and hip BMD than for QCT trabecular spine BMD. Fat layering introduces error and decreases the reproducibility of DXA spine and hip BMD measurements in human volunteers. Although overlying fat also affects QCT BMD measurements, the error is smaller and more uniform than with DXA BMD. Caution must be used when interpreting BMD changes in humans whose body composition is changing.

  7. A rainfall simulation experiment on soil and water conservation measures - Undesirable results

    NASA Astrophysics Data System (ADS)

    Hösl, R.; Strauss, P.

    2012-04-01

    Sediment and nutrient inputs from agriculturally used land into surface waters are one of the main problems concerning surface water quality. On-site soil and water conservation measures have become more and more popular throughout the last decades, and a lot of research has been done on this issue. Numerous studies describe rainfall simulation experiments testing different conservation measures such as no-till, mulching with different types of soil cover, and subsoiling practices. Many studies document considerable success in preventing soil erosion and enhancing water quality by implementing no-till and mulching techniques on farmland, but a few studies also report higher erosion rates after the implementation of conservation tillage practices (Strauss et al., 2003). In May 2011 we conducted a field rainfall simulation experiment in Upper Austria to test five different maize cultivation techniques: no-till with rough seedbed, no-till with fine seedbed, mulching with disc harrow and rotary harrow, mulching with rotary harrow, and conventional tillage using plough and rotary harrow. Rough seedbed refers to the seedbed preparation at planting of the cover crops. On every plot except the conventionally managed one, cover crops (a mix of Trifolium alexandrinum, Phacelia, Raphanus sativus and Herpestes) were sown in August 2010. All plots were rained on three times with deionised water (<50 μS cm-1) for one hour at a rainfall intensity of 50 mm h-1. Surface runoff and soil erosion were measured. Additionally, soil cover by mulch was measured, as well as soil texture, bulk density, penetration resistance, surface roughness and soil water content before and after the simulation. The simulation experiments took place about two weeks after the seeding of maize in spring 2011. As expected, the most effective cultivation techniques for erosion prevention proved to be the no-till variants: mean erosion rate was about 0.1 kg h-1 and mean surface runoff was 29 l h-1

  8. Pediatric office emergencies.

    PubMed

    Fuchs, Susan

    2013-10-01

    Pediatricians regularly see emergencies in the office, as well as children who require transfer to an emergency department or hospitalization. An office self-assessment is the first step in determining how to prepare for an emergency. The use of mock codes and skill drills makes office personnel feel less anxious about medical emergencies. Emergency information forms provide valuable, quick information about complex patients for emergency medical services and other physicians caring for patients. Furthermore, disaster planning should be part of an office preparedness plan.

  9. Simulated measurement of power flow in structures near to simple sources and simple boundaries

    NASA Technical Reports Server (NTRS)

    Mcgary, Michael C.

    1988-01-01

    Advances in electronics technology along with the advent of low cost multichannel Fast Fourier analyzers have made it practical to use higher order central difference formulas to measure power flow in 1- and 2-D structures. The method discussed uses five point differencing for the spatial derivatives in 1-D and a thirteen point difference pattern for the spatial derivatives in 2-D plates and shells. It is assumed that the measuring transducers are accelerometers. An analytical study of the higher order differencing method and the conventional two accelerometer method was performed as a preliminary to the application of these methods to actual aircraft structures. Some classical problems were analyzed in order to simulate and compare the performance of the two methods under near field measurement conditions. These near field conditions include examples of power flows near simple sources and simple boundaries. The estimates produced by the two methods were compared to the exact solution in each example. Presented are the theory and selected results of the study, which indicate that the bias errors of the two accelerometer method under near field measurement conditions may be much larger than previous studies have suggested.
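    As a reference for the stencil mentioned above, the snippet below is a generic numpy implementation of the five-point central-difference first derivative used for the 1-D structures (the thirteen-point 2-D pattern is not reproduced here); the sine-wave test at the end simply confirms the fourth-order accuracy.

```python
# Generic sketch of the five-point central-difference first derivative, the
# kind of higher-order stencil applied to closely spaced accelerometer
# signals to estimate spatial derivatives for power flow in 1-D structures.
import numpy as np

def first_derivative_5pt(f, h):
    """Five-point central difference of a 1-D sampled field f with spacing h.
    Returns df/dx at the interior points (two points lost at each end)."""
    f = np.asarray(f, dtype=float)
    return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
err = first_derivative_5pt(np.sin(2 * np.pi * x), h) - 2 * np.pi * np.cos(2 * np.pi * x[2:-2])
print(np.max(np.abs(err)))   # small fourth-order truncation error (~3e-6 here)
```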

  10. Characterizing Thermal Properties of Melting Te Semiconductor: Thermal Diffusivity Measurements and Simulation

    NASA Technical Reports Server (NTRS)

    Zhu, Shen; Su, Ching-Hua; Li, C.; Lin, B.; Ben, H.; Scripa, R. N.; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Tellurium is an element of many II-VI and I-III-VI(sub 2) compounds that are useful materials for fabricating many devices. In melt growth techniques, the thermal properties of the molten phase are important parameters for controlling the growth process to improve semiconducting crystal quality. In this study, the thermal diffusivity of molten tellurium has been measured by a laser flash method in the temperature range from 500 C to 900 C. A pulsed laser with 1064 nm wavelength is focused on one side of the measured sample. The thermal diffusivity can be estimated from the temperature transient at the other side of the sample. A numerical simulation based on the thermal transport process has also been performed. By numerically fitting the experimental results, both the thermal conductivity and heat capacity can be derived. A relaxation phenomenon, which shows a slow drift of the measured thermal conductivity toward the equilibrium value after cooling of the sample, was observed for the first time. The error analysis and the comparison of the results to published data measured by other techniques will be discussed in the presentation.
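    As a hedged aside, the record describes fitting a full numerical heat-transport model, but a common first estimate in laser-flash analysis is the Parker half-rise-time relation, alpha = 0.1388 L^2 / t_1/2, where L is the sample thickness and t_1/2 the time for the rear face to reach half of its maximum temperature rise. The numbers below are illustrative only.

```python
# Hedged sketch: classical Parker relation used as a first estimate of thermal
# diffusivity from a laser-flash rear-face temperature transient.
def parker_diffusivity(thickness_m, t_half_s):
    """alpha = 0.1388 * L**2 / t_half (m^2/s)."""
    return 0.1388 * thickness_m**2 / t_half_s

print(parker_diffusivity(2.0e-3, 0.12), "m^2/s")   # e.g. 2 mm sample, 0.12 s half-rise time
```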

  11. Characterizing Thermal Properties of Melting Te Semiconductor: Thermal Diffusivity Measurements and Simulation

    NASA Technical Reports Server (NTRS)

    Zhu, Shen; Li, C.; Su, Ching-Hua; Lin, B.; Ben, H.; Scripa, R. N.; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Tellurium is an element of many II-VI and I-III-VI(sub 2) compounds that are useful materials for fabricating many devices. In melt growth techniques, the thermal properties of the molten phase are important parameters for controlling the growth process to improve semiconducting crystal quality. In this study, the thermal diffusivity of molten tellurium has been measured by a laser flash method in the temperature range from 500 C to 900 C. A pulsed laser with 1064 nm wavelength is focused on one side of the measured sample. The thermal diffusivity can be estimated from the temperature transient at the other side of the sample. A numerical simulation based on the thermal transport process has also been performed. By numerically fitting the experimental results, both the thermal conductivity and heat capacity can be derived. A relaxation phenomenon, which shows a slow drift of the measured thermal conductivity toward the equilibrium value after cooling of the sample, was observed for the first time. The error analysis and the comparison of the results to published data measured by other techniques will be discussed.

  12. First Fast-Ion D-alpha (FIDA) Measurements and Simulations on C-2U

    NASA Astrophysics Data System (ADS)

    Bolte, Nathan; Gupta, Deepak; Stagner, Luke; Onofri, Marco; Dettrick, Sean; Granstedt, Erik; TAE Team

    2016-10-01

    In Tri Alpha Energy's C-2U experiment, advanced beam-driven field-reversed configuration (FRC) plasmas were sustained via tangential neutral beam injection [1]. The dominant fast ion population made a dramatic impact on the overall plasma performance. A fast-ion D-alpha (FIDA) [2] diagnostic, which is based on the Doppler-shifted Balmer-alpha light from neutralized fast ions, was recently added to the C-2U fast-ion diagnostics suite. The first ever FIDA measurements on an FRC topology have been carried out. Bandpass-filtered FIDA measurements (>6 keV ions) were made with a photomultiplier tube and are forward modeled by FIDASIM. Line-integrated signals were taken at eight radial locations and eight times during the FRC lifetime. While the measurements share some salient features with the simulation, they are 4.5x larger, suggesting a higher fast-ion content than the Monte Carlo distribution. Highly Doppler-shifted beam radiation is also measured with a high-speed camera and is spatially well-correlated with FIDASIM. Having shown the feasibility of FIDA on C-2U, we will further explore the use of FIDA on the upgraded C-2W machine to estimate fast-ion densities and to infer the local fast-ion distribution function.

  13. Turbidity Sensor for Bacterial Growth Measurements in Spaceflight and Simulated Micro-gravity

    NASA Astrophysics Data System (ADS)

    van Benthem, Roel; de Grave, Wubbo

    2009-11-01

    For the BIOFILTER flight experiment a set of turbidity sensors was developed for the measurement of the growth rate of the bacteria Xanthobacter autrophicus GJ10 in a fluid medium. During the flight experiment on FOTON M2 in 2005, bacterial growth was measured revealing growth rates between 0.046-0.077 h-1 in microgravity, i.e. approximately 1.5-2.5 times slower than routinely measured under optimal laboratory conditions on earth. To increase confidence in the equipment and for comparison of the results, a ground-reference experiment was carried out in 2006, using BIOFILTER hardware mounted on a random positioning machine (RPM). The RPM performed random rotations at 0.5°/min (for settling compensation) and 90°/min (for simulated microgravity) while the environment was controlled, accurately repeating the BIOFILTER flight temperature conditions. Despite the rotations of the RPM, a normal growth rate of 0.115 h-1 was confirmed in both cases. The operation of the turbidity sensor was verified. Biological interpretation of the measurements is however compromised due to poor mixing and other unknown physical and biological phenomena that need to be addressed for further space experiments using these kinds of systems.

  14. Measured Infrared Optical Cross Sections For a Variety Of Chemical and Biological Aerosol Simulants

    NASA Astrophysics Data System (ADS)

    Gurton, Kristan P.; Ligon, David; Dahmani, Rachid

    2004-08-01

    We conducted a series of spectral extinction measurements on a variety of aerosolized chemical and biological simulants over the spectral range 3-13 µm using conventional Fourier-transform IR (FTIR) aerosol spectroscopy. Samples consist of both aerosolized particulates and atomized liquids. Materials considered include Bacillus subtilis endospores, lyophilized ovalbumin, polyethylene glycol, dimethicone (SF-96), and three common background materials: kaolin clay (hydrated aluminum silicate), Arizona road dust (primarily SiO2), and diesel soot. Aerosol size distributions and mass density were measured simultaneously with the FTIR spectra. As a result, all optical parameters presented here are mass normalized, i.e., in square meters per gram. In an effort to establish the utility of using Mie theory to predict such parameters, we conducted a series of calculations. For materials in which the complex indices of refraction are known, e.g., silicone oil (SF-96) and kaolin, measured size distributions were convolved with Mie theory and the resultant spectral extinction calculated. Where there was good agreement between measured and calculated extinction spectra, absorption, total scattering, and backscatter were also calculated.
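    The mass normalization described above amounts to dividing the distribution-averaged extinction cross section by the distribution-averaged particle mass. The sketch below illustrates this, using van de Hulst's anomalous diffraction approximation as a simple stand-in for a full Mie code (valid only for weakly absorbing particles with refractive index near 1); the refractive index, density and lognormal size distribution are placeholders, not the measured ones.

```python
# Hedged sketch: mass-normalized extinction (m^2 g^-1) from a size
# distribution, with van de Hulst's anomalous diffraction approximation
# standing in for a full Mie calculation.  All inputs are placeholders.
import numpy as np

def q_ext_ada(diam_um, wavelength_um, n_real=1.45):
    x = np.pi * diam_um / wavelength_um               # size parameter
    rho = 2.0 * x * (n_real - 1.0)                    # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

d = np.linspace(0.1, 20.0, 400)                        # diameter grid (um)
n_d = np.exp(-0.5 * (np.log(d / 2.0) / 0.5)**2) / d    # lognormal number distribution (a.u.)
rho_p = 1.8e-12                                        # particle density, g per um^3 (1.8 g/cm^3)

def mass_extinction(wavelength_um):
    sigma = np.trapz(q_ext_ada(d, wavelength_um) * np.pi * d**2 / 4.0 * n_d, d)  # um^2
    mass = np.trapz(rho_p * np.pi * d**3 / 6.0 * n_d, d)                         # g
    return sigma * 1e-12 / mass                        # m^2 per g

for wl in (3.0, 8.0, 12.0):
    print(wl, "um:", round(mass_extinction(wl), 3), "m^2/g")
```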

  15. Experiment and numerical simulation for laser ultrasonic measurement of residual stress.

    PubMed

    Zhan, Yu; Liu, Changsheng; Kong, Xiangwei; Lin, Zhongya

    2017-01-01

    Laser ultrasonics is a promising method for the non-destructive evaluation of residual stress. The residual stress of a thin steel plate is measured by the laser ultrasonic technique. A pre-stress loading device was designed that allows the specimen to be tested by laser ultrasound while held in a known stress state. By the method of pre-stress loading, the acoustoelastic constants are obtained, and the effect of different test directions on the results of the surface wave velocity measurement is discussed. On the basis of the known acoustoelastic constants, the longitudinal and transverse welding residual stresses are measured by the laser ultrasonic technique. The finite element method is used to simulate the process of surface wave detection of welding residual stress. The pulsed laser is represented as an equivalent surface load, and the relationship between the physical parameters of the laser and the load is established through a correction coefficient. The welding residual stress of the specimen is introduced using the predefined field module of ABAQUS. The results of the finite element analysis are in good agreement with the experimental results. Simple and effective numerical and experimental methods for the laser ultrasonic measurement of residual stress are demonstrated.
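    A minimal sketch of the linear acoustoelastic relation commonly used in this kind of work, dv/v0 = K*sigma, with K obtained from a pre-stress calibration and then inverted to give stress from a measured surface-wave speed; the calibration numbers are placeholders, not the paper's data.

```python
# Hedged sketch of the linear acoustoelastic relation dv/v0 = K * sigma, so
# sigma = (v - v0) / (K * v0).  Calibration velocities at known pre-stress
# levels are synthetic placeholders.
import numpy as np

stress_cal = np.array([0.0, 50.0, 100.0, 150.0, 200.0])      # MPa, applied pre-stress
v_cal = np.array([2980.0, 2979.1, 2978.2, 2977.4, 2976.5])   # m/s, measured surface wave speed

v0 = v_cal[0]
K = np.polyfit(stress_cal, v_cal / v0 - 1.0, 1)[0]           # acoustoelastic constant (1/MPa)

def residual_stress(v_measured):
    return (v_measured / v0 - 1.0) / K                        # MPa

print("K =", K, "1/MPa")
print("sigma =", residual_stress(2977.8), "MPa")
```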

  16. Comparative study between ion-scale turbulence measurements and gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Lee, W.; Ko, S. H.; Choi, M. J.; Ko, W. H.; Lee, K. D.; Leem, J.; Yun, G. S.; Park, H. K.; Wang, W. X.; Budny, R. V.; Park, Y. S.; Luhmann, N. C., Jr.; Kim, K. W.; Kstar Team

    2016-10-01

    Ion gyroscale density fluctuations were measured with a microwave imaging reflectometer (MIR) in neutral beam injected L-mode plasmas on KSTAR. The spatial and temporal characteristic scales of the measured fluctuations were studied by comparing with the local equilibrium parameters relevant to the ion-scale turbulence. Linear and nonlinear gyrokinetic simulations predicted unstable modes with poloidal wavenumbers of 3 cm-1 (or kθρs ≈ 0.4), and these wavenumbers were also identified from the measured fluctuations. The poloidal wavenumber can be derived from the measured mode frequency and poloidal velocity. The dominant mode frequency and poloidal velocity were obtained from cross correlations among 16 poloidal channels. Both the mode frequency and the poloidal velocity are primarily due to the E x B flow velocity in fast rotating plasmas with neutral beam injection. Work supported by NRF Korea under Grant Number NRF-2014M1A7A1A03029865 and Korean Ministry of Science, ICT, and Future Planning under the KSTAR project contract.
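    The wavenumber relation mentioned above is a one-liner: for a mode advected poloidally at velocity v_theta, the lab-frame frequency f gives k_theta = 2*pi*f/v_theta. The values below are illustrative, not KSTAR data.

```python
# Hedged one-liner for the relation referenced in the abstract; the frequency
# and poloidal (mainly E x B) velocity values are illustrative only.
import math

f = 50e3            # Hz, dominant fluctuation frequency (illustrative)
v_theta = 1.0e3     # m/s, poloidal advection velocity (illustrative)
k_theta = 2.0 * math.pi * f / v_theta
print(k_theta / 100.0, "cm^-1")   # ~3 cm^-1, of order the predicted ion-scale mode
```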

  17. Observing System Simulations for the NASA ASCENDS Lidar CO2 Mission Concept: Substantiating Science Measurement Requirements

    NASA Technical Reports Server (NTRS)

    Kawa, Stephan R.; Baker, David Frank; Schuh, Andrew E.; Abshire, James Brice; Browell, Edward V.; Michalak, Anna M.

    2012-01-01

    The NASA ASCENDS mission (Active Sensing of Carbon Emissions, Nights, Days, and Seasons) is envisioned as the next generation of dedicated, space-based CO2 observing systems, currently planned for launch in about the year 2022. Recommended by the US National Academy of Sciences Decadal Survey, active (lidar) sensing of CO2 from space has several potentially significant advantages, in comparison to current and planned passive CO2 instruments, that promise to advance CO2 measurement capability and carbon cycle understanding into the next decade. Assessment and testing of possible lidar instrument technologies indicates that such sensors are more than feasible, however, the measurement precision and accuracy requirements remain at unprecedented levels of stringency. It is, therefore, important to quantitatively and consistently evaluate the measurement capabilities and requirements for the prospective active system in the context of advancing our knowledge of carbon flux distributions and their dependence on underlying physical processes. This amounts to establishing minimum requirements for precision, relative accuracy, spatial/temporal coverage and resolution, vertical information content, interferences, and possibly the tradeoffs among these parameters, while at the same time framing a mission that can be implemented within a constrained budget. Here, we present results of observing system simulation studies, commissioned by the ASCENDS Science Requirements Definition Team, for a range of possible mission implementation options that are intended to substantiate science measurement requirements for a laser-based CO2 space instrument.

  18. Simulation and Measurement of Medium-Frequency Signals Coupling From a Line to a Loop Antenna

    PubMed Central

    Damiano, Nicholas W.; Li, Jingcheng; Zhou, Chenming; Brocker, Donovan E.; Qin, Yifeng; Werner, Douglas H.; Werner, Pingjuan L.

    2016-01-01

    The underground-mining environment can affect radio-signal propagation in various ways. Understanding these effects is especially critical in evaluating communications systems used during normal mining operations and during mine emergencies. One of these types of communications systems relies on medium-frequency (MF) radio frequencies. This paper presents the simulation and measurement results of recent National Institute for Occupational Safety and Health (NIOSH) research aimed at investigating MF coupling between a transmission line (TL) and a loop antenna in an underground coal mine. Two different types of measurements were completed: 1) line-current distribution and 2) line-to-antenna coupling. Measurements were taken underground in an experimental coal mine and on a specially designed surface test area. The results of these tests are characterized by current along a TL and voltage induced in the loop from a line. This paper concludes with a discussion of issues for MF TLs. These include electromagnetic fields at the ends of the TL, connection of the ends of the TL, the effect of other conductors underground, and the proximity of coal or earth. These results could help operators by providing examples of these challenges that may be experienced underground and a method by which to measure voltage induced by a line. PMID:27784954

  19. Office Computers: Ergonomic Considerations.

    ERIC Educational Resources Information Center

    Ganus, Susannah

    1984-01-01

    Each new report of the office automation market indicates technology is overrunning the office. The impacts of this technology are described and some ways to manage and physically "soften" the change to a computer-based office environment are suggested. (Author/MLW)

  20. Evaluation of the UK Met Office's HadGEM3-RA and HadRM3P regional climate models within South America-CORDEX simulations: ENSO related interannual precipitation variability

    NASA Astrophysics Data System (ADS)

    Bozkurt, D.; Rojas, M.

    2014-12-01

    This study aims to investigate and compare the ability of the UK Met Office's HadGEM3-RA and HadRM3P regional climate models (RCMs) to simulate the mean and interannual variability of precipitation over South America, with a special focus on Chile. HadGEM3-RA is a regional version of the newly developed HadGEM3 global model, and HadRM3P is based on the earlier HadCM3 global model. The RCM simulations were carried out at 0.44° x 0.44° resolution over the South America-CORDEX domain for the period 1989-2008. The initial and boundary conditions were provided by ERA-Interim Reanalysis data available at 6-h intervals with a resolution of 1.5° x 1.5° in the horizontal and 37 pressure levels. We compare the results against a number of observational datasets, including the gridded datasets of CRU, UDEL, TRMM and GPCP. Moreover, available station data are derived from the Direccion General de Aguas (DGA), mainly for Central Chile, which is the heartland of Chile with the highest population and important economic activities. The analysis is mainly focused on evaluating the abilities of the RCMs in simulating the spatial pattern and ENSO-related precipitation variability in different subregions of the South America-CORDEX domain. In general, both RCMs show good skill in reproducing the spatial pattern and annual cycle of observed precipitation in climatically different subregions. However, both RCMs tend to underestimate precipitation in the Amazon Basin, which is more pronounced in the HadRM3P simulations. On the contrary, the RCMs tend to overestimate the precipitation over the Andes and southern Chile. The overestimation could be related to the physical core of the RCMs, but the discrepancies could also arise from an insufficient station network, especially in the mountainous areas, potentially yielding smaller precipitation quantities in the observed data than the true ones. In terms of interannual variability, the models capture ENSO related wet and dry interannual precipitation

  1. Contact stiffness and damping identification for hardware-in-the-loop contact simulator with measurement delay compensation

    NASA Astrophysics Data System (ADS)

    Qi, Chenkun; Zhao, Xianchao; Gao, Feng; Ren, Anye; Sun, Qiao

    2016-06-01

    The hardware-in-the-loop (HIL) contact simulator simulates the contact process of two flying objects in space. The contact stiffness and damping are important parameters used for process monitoring, compliant contact control and force compensation control. In this study, a contact stiffness and damping identification approach is proposed for HIL contact simulation with force measurement delay. The actual relative position of the two flying objects can be accurately measured. However, the force measurement delay needs to be compensated, because it would otherwise lead to incorrect stiffness and damping identification. Here, phase lead compensation is used to reconstruct the actual contact force from the delayed force measurement. From the force and position data, the contact stiffness and damping are identified in real time using the recursive least squares (RLS) method. Simulations and experiments verify that the proposed stiffness and damping identification approach is effective.
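    A minimal sketch of the identification step described above: recursive least squares applied to a Kelvin-Voigt contact model F = k*x + c*xdot, assuming the force signal has already been lead-compensated for the measurement delay. The trajectory, noise level and forgetting factor are synthetic choices, not the paper's.

```python
# Hedged sketch: recursive least squares (RLS) identification of contact
# stiffness k and damping c from F = k*x + c*xdot, using synthetic data and
# assuming the delay has already been compensated upstream.
import numpy as np

dt, n = 1e-3, 2000
t = np.arange(n) * dt
x = 1e-3 * np.abs(np.sin(8.0 * t))                 # penetration depth (m), synthetic
xdot = np.gradient(x, dt)                          # penetration rate (m/s)
k_true, c_true = 5.0e4, 2.0e2
f_meas = k_true * x + c_true * xdot + 0.2 * np.random.default_rng(1).normal(size=n)

theta = np.zeros(2)                                # [k, c] estimates
P = np.eye(2) * 1e9                                # large initial covariance
lam = 0.995                                        # forgetting factor

for i in range(n):
    phi = np.array([x[i], xdot[i]])                # regressor
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (f_meas[i] - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

print("estimated k, c:", theta)                    # should approach (5e4, 2e2)
```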

  2. Skin dose measurements using radiochromic films, TLDS and ionisation chamber and comparison with Monte Carlo simulation.

    PubMed

    Alashrah, Saleh; Kandaiya, Sivamany; Maalej, Nabil; El-Taher, A

    2014-12-01

    Estimation of the surface dose is very important for patients undergoing radiation therapy. The purpose of this study is to investigate the dose at the surface of a water phantom at a depth of 0.007 cm, as recommended by the International Commission on Radiological Protection and the International Commission on Radiation Units and Measurements, with radiochromic films (RFs), thermoluminescent dosemeters and an ionisation chamber in a 6-MV photon beam. The results were compared with the theoretical calculation using Monte Carlo (MC) simulation software (MCNP5, BEAMnrc and DOSXYZnrc). The RF was calibrated by placing the films at a depth of maximum dose (d(max)) in a solid water phantom and exposing them to doses from 0 to 500 cGy. The films were scanned using a transmission high-resolution HP scanner. The optical density of the film was obtained from the red component of the RGB images using ImageJ software. The per cent surface dose (PSD) and percentage depth dose (PDD) curve were obtained by placing film pieces at the surface and at different depths in the solid water phantom. TLDs were placed at a depth of 10 cm in a solid water phantom for calibration. Then the TLDs were placed at different depths in the water phantom and were exposed to obtain the PDD. The obtained PSD and PDD values were compared with those obtained using a cylindrical ionisation chamber. The PSD was also determined using Monte Carlo simulation of a LINAC 6-MV photon beam. The extrapolation method was used to determine the PSD for all measurements. The PSD was 15.0±3.6% for RF. The TLD measurement of the PSD was 16.0±5.0%. The (0.6 cm(3)) cylindrical ionisation chamber measurement of the PSD was 50.0±3.0%. The theoretical calculation using MCNP5 and DOSXYZnrc yielded a PSD of 15.0±2.0% and 15.7±2.2%. In this study, good agreement between PSD measurements was observed using RF and TLDs with the Monte Carlo calculation. However, the cylindrical chamber measurement yielded an overestimate of the PSD

  3. NIST System for Measuring the Directivity Index of Hearing Aids under Simulated Real-Ear Conditions.

    PubMed

    Wagner, Randall P

    2013-01-01

    The directivity index is a parameter that is commonly used to characterize the performance of directional hearing aids, and is determined from the measured directional response. Since this response is different for a hearing aid worn on a person as compared to when it is in a free field, directivity index measurements of hearing aids are usually done under simulated real-ear conditions. Details are provided regarding the NIST system for measuring the hearing aid directivity index under these conditions and how this system is used to implement a standardized procedure for performing such measurements. This procedure involves a sampling method that utilizes sound source locations distributed in a semi-aligned zone array on an imaginary spherical surface surrounding a standardized acoustical test manikin. The capabilities of the system were demonstrated over the frequency range of one-third-octave bands with center frequencies from 200 Hz to 8000 Hz through NIST participation in an interlaboratory comparison. This comparison was conducted between eight different laboratories of members of Working Group S3/WG48, Hearing Aids, established by Accredited Standards Committee S3, Bioacoustics, which is administered by the Acoustical Society of America and accredited by the American National Standards Institute. Directivity measurements were made for a total of six programmed memories in two different hearing aids and for the unaided manikin with the manikin right pinna accompanying the aids. Omnidirectional, cardioid, and bidirectional response patterns were measured. Results are presented comparing the NIST data with the reference values calculated from the data reported by all participating laboratories.
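    For orientation, a directivity index can be computed from directional responses sampled on a sphere by weighting each sample with its solid angle, DI = 10*log10(|p_axis|^2 / <|p|^2>). The sketch below uses a uniform angular grid and an ideal cardioid response as synthetic stand-ins, not the standardized semi-aligned zone array or measured hearing-aid data.

```python
# Hedged sketch: directivity index from directional responses sampled over a
# sphere, each sample weighted by its solid angle.  The angular grid and the
# cardioid response are synthetic stand-ins.
import numpy as np

theta = np.radians(np.arange(5.0, 180.0, 10.0))    # polar angles of sample points
phi = np.radians(np.arange(0.0, 360.0, 10.0))      # azimuth angles
TH, PH = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(TH)                                     # relative solid-angle weights

p = 0.5 * (1.0 + np.cos(TH))                       # synthetic cardioid magnitude response
p_front = 1.0                                      # on-axis response of the same pattern

di = 10.0 * np.log10(p_front**2 / (np.sum(w * p**2) / np.sum(w)))
print("directivity index: %.1f dB" % di)           # ~4.8 dB for an ideal cardioid
```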

  4. Simulation of the Impact of New Ocean Surface Wind Measurements on H*Wind Analyses

    NASA Technical Reports Server (NTRS)

    Miller, Timothy; Atlas, Robert; Black, Peter; Chen, Shuyi; Hood, Robbie; Johnson, James; Jones, Linwood; Ruf, Chris; Uhlhorn, Eric

    2008-01-01

    The H*Wind analysis, a product of the Hurricane Research Division of NOAA's Atlantic Oceanographic and Meteorological Laboratory, brings together wind measurements from a variety of observation platforms into an objective analysis of the distribution of surface wind speeds in a tropical cyclone. This product is designed to improve understanding of the extent and strength of the wind field, and to improve the assessment of hurricane intensity. See http://www.aoml.noaa.gov/hrd/data sub/wind.html. The Hurricane Imaging Radiometer (HIRAD) is a new passive microwave remote sensor for hurricane observations that is currently under development by NASA Marshall Space Flight Center, NOAA Hurricane Research Division, the University of Central Florida and the University of Michigan. HIRAD is being designed to enhance the current real-time airborne ocean surface winds observation capabilities of NOAA and USAF Weather Squadron hurricane hunter aircraft using the operational airborne Stepped Frequency Microwave Radiometer (SFMR). Unlike SFMR, which measures wind speed and rain rate along the ground track directly beneath the aircraft, HIRAD will provide images of the surface wind and rain field over a wide swath (approximately 3 x the aircraft altitude, or approximately 2 km from space). The instrument is described in a separate paper presented at this conference. The present paper describes a set of Observing System Simulation Experiments (OSSEs) in which measurements from the new instrument as well as those from existing instruments (air, surface, and space-based) are simulated from the output of a numerical model from the University of Miami, and those results are used to construct H*Wind analyses. Evaluations will be presented on the relative impact of HIRAD and other instruments on H*Wind analyses, including the use of HIRAD from 2 aircraft altitudes and from a space-based platform.

  5. Impact of sample geometry on the measurement of pressure-saturation curves: Experiments and simulations

    NASA Astrophysics Data System (ADS)

    Moura, M.; Fiorentino, E.-A.; Mâløy, K. J.; Schäfer, G.; Toussaint, R.

    2015-11-01

    In this paper, we study the influence of sample geometry on the measurement of pressure-saturation relationships, by analyzing the drainage of a two-phase flow from a quasi-2-D random porous medium. The medium is transparent, which allows for the direct visualization of the invasion pattern during flow, and is initially saturated with a viscous liquid (a dyed glycerol-water mix). As the pressure in the liquid is gradually reduced, air penetrates from an open inlet, displacing the liquid which leaves the system from an outlet on the opposite side. Pressure measurements and images of the flow are recorded and the pressure-saturation relationship is computed. We show that this relationship depends on the system size and aspect ratio. The effects of the system's boundaries on this relationship are measured experimentally and compared with simulations produced using an invasion percolation algorithm. The pressure build up at the beginning and end of the invasion process are particularly affected by the boundaries of the system whereas at the central part of the model (when the air front progresses far from these boundaries), the invasion happens at a statistically constant capillary pressure. These observations have led us to propose a much simplified pressure-saturation relationship, valid for systems that are large enough such that the invasion is not influenced by boundary effects. The properties of this relationship depend on the capillary pressure thresholds distribution, sample dimensions, and average pore connectivity and its applications may be of particular interest for simulations of two-phase flow in large porous media.
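    A minimal sketch of the kind of invasion percolation algorithm referred to above: the non-wetting phase always invades the accessible pore with the lowest capillary entry threshold, and the capillary pressure is the running maximum of invaded thresholds, recorded against the invaded fraction. Trapping, outlet effects and boundary pressure build-up are deliberately ignored; grid size and thresholds are synthetic.

```python
# Hedged sketch: minimal invasion percolation on a 2-D grid of random capillary
# entry thresholds.  Air invades from the left edge; at each step the accessible
# site with the lowest threshold is invaded.  Trapping is ignored.
import heapq
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 60, 120
pc_threshold = rng.random((ny, nx))        # capillary entry pressure of each pore (a.u.)

invaded = np.zeros((ny, nx), dtype=bool)
frontier = [(pc_threshold[j, 0], j, 0) for j in range(ny)]   # inlet column
heapq.heapify(frontier)

pressures, saturations = [], []
p_current, n_invaded = 0.0, 0
while frontier:
    p, j, i = heapq.heappop(frontier)
    if invaded[j, i]:
        continue
    invaded[j, i] = True
    n_invaded += 1
    p_current = max(p_current, p)                            # pressure only ratchets up
    pressures.append(p_current)
    saturations.append(n_invaded / (ny * nx))                # non-wetting saturation
    for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        jj, ii = j + dj, i + di
        if 0 <= jj < ny and 0 <= ii < nx and not invaded[jj, ii]:
            heapq.heappush(frontier, (pc_threshold[jj, ii], jj, ii))

print("final capillary pressure:", pressures[-1])            # (pressure, saturation) pairs recorded
```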

  6. Distribution of carbon nanotube sizes from adsorption measurements and computer simulation.

    PubMed

    Kowalczyk, Piotr; Hołyst, Robert; Tanaka, Hideki; Kaneko, Katsumi

    2005-08-04

    A method for evaluating the distribution of carbon nanotube sizes from static adsorption measurements and computer simulation of nitrogen at 77 K is developed. We obtain the condensation/evaporation pressure as a function of the pore size of a cylindrical carbon tube using Gauge Cell Monte Carlo Simulation (Gauge Cell MC). To obtain the analytical form of the relationships mentioned above we use Derjaguin-Broekhoff-deBoer theory. Finally, the pore size distribution (PSD) of single-walled carbon nanohorns (SWNHs) is determined from a single nitrogen adsorption isotherm measured at 77 K. We neglect the conical part of an isolated SWNH tube and assume a structureless wall of a carbon nanotube. We find that the distribution of SWNH sizes is broad (internal pore radii vary in the range 1.0-3.6 nm with the maximum at 1.3 nm). Our method can be used for the determination of the pore size distribution of other tubular carbon materials, such as multiwalled or double-walled carbon nanotubes. Besides the applied aspect of the current work, deep insight into the problem of capillary condensation/evaporation in confined cylindrical carbon geometry is presented. As a result, the critical pore radius in structureless single-walled carbon tubes is determined to be equal to three nitrogen collision diameters. Below that size the adsorption-desorption isotherm is reversible (i.e., supercritical in nature). We show that classical static adsorption measurements combined with proper modeling of the capillary condensation/evaporation phenomena constitute a powerful method that can be applied to determine the distribution of nanotube sizes.

  7. Analyzing carbon dioxide and methane emissions in California using airborne measurements and model simulations

    NASA Astrophysics Data System (ADS)

    Johnson, M. S.; Yates, E. L.; Iraci, L. T.; Jeong, S.; Fischer, M. L.

    2013-12-01

    Greenhouse gas (GHG) concentrations have increased over the past decades and are linked to global temperature increases and climate change. These changes in climate have been suggested to have varying effects, and uncertain consequences, on agriculture, water supply, weather, sea-level rise, the economy, and energy. To counteract the trend of increasing atmospheric concentrations of GHGs, the state of California has passed the California Global Warming Solutions Act of 2006 (AB-32). This requires that by the year 2020, GHG (e.g., carbon dioxide (CO2) and methane (CH4)) emissions will be reduced to 1990 levels. To quantify GHG fluxes, emission inventories are routinely compiled for the State of California (e.g., CH4 emissions from the California Greenhouse Gas Emissions Measurement (CALGEM) Project). The major sources of CO2 and CH4 in the state of California are: transportation, electricity production, oil and gas extraction, cement plants, agriculture, landfills/waste, livestock, and wetlands. However, uncertainties remain in these emission inventories because many factors contributing to these processes are poorly quantified. To reduce these uncertainties, this study applies a synergistic approach combining airborne measurements and chemical transport modeling (CTM) to quantify local and regional GHG emissions. Additionally, in order to further understand the temporal and spatial distributions of GHG fluxes in California and the impact these species have on regional climate, CTM simulations of the daily variations and seasonality of total column CO2 and CH4 will be analyzed. To assess the magnitude and spatial variation of GHG emissions and to identify local 'hot spots', airborne measurements of CH4 and CO2 were made by the Alpha Jet Atmospheric eXperiment (AJAX) over the San Francisco Bay Area (SFBA) and San Joaquin Valley (SJV) in January and February 2013 during the Discover-AQ-CA study. High mixing ratios of GHGs were

  8. The structure of liquid water up to 360 MPa from x-ray diffraction measurements using a high Q-range and from molecular simulation

    SciTech Connect

    Skinner, L. B.; Galib, M.; Fulton, J. L.; Mundy, C. J.; Parise, J. B.; Pham, V. -T.; Schenter, G. K.; Benmore, C. J.

    2016-04-07

    X-ray diffraction measurements of liquid water are reported at pressures up to 360 MPa, corresponding to a density of 0.0373 molecules per Å3. The measurements were conducted at a spatial resolution corresponding to Qmax = 16 Å-1. The method of data analysis and measurement in this study follows the earlier benchmark results reported for water under ambient conditions, having a density of 0.0333 molecules per Å3 and Qmax = 20 Å-1 [J Chem Phys 138, 074506 (2013)], and at 70°C, having a density of 0.0327 molecules per Å3 and Qmax = 20 Å-1 [J Chem Phys 141, 214507 (2014)]. The structure of water is very different at these three T and P state points, and thus they provide a basis for evaluating the fidelity of molecular simulation. Measurements show that at 360 MPa, the 4 waters residing in the region between 2.3-3 Å are nearly unchanged: the peak position, shape and coordination number are nearly identical to their values under ambient conditions. However, in the region above 3 Å, large structural changes occur with the collapse of the well-defined 2nd shell and the shifting of higher shells to shorter distances. The measured structure is compared to the simulated structure using intermolecular potentials described by both first-principles methods (revPBE-D3) and classical potentials (TIP4P/2005 and mW). The DFT-based revPBE-D3 provides the best overall representation of the ambient, high-temperature and high-pressure data, while TIP4P/2005 also captures the densification mechanism, whereby the non-bonded 5th nearest neighbor molecule, which encroaches on the 1st shell at ambient pressure, is pushed further into the local tetrahedral arrangement at higher pressures by the more distant molecules filling the void space in the network between the 1st and 2nd shells.

  9. The impact of measurement frequency on the domains of glycemic control in the critically ill--a Monte Carlo simulation.

    PubMed

    Krinsley, James S; Bruns, David E; Boyd, James C

    2015-03-01

    The role of blood glucose (BG) measurement frequency in the domains of glycemic control is not well defined. This Monte Carlo mathematical simulation of glycemic control in a cohort of critically ill patients modeled sets of 100 patients with simulated BG-measuring devices having 5 levels of measurement imprecision, using 2 published insulin infusion protocols, for 200 hours, with 3 different BG-measurement intervals, 15 minutes (Q15'), 1 hour (Q1h), and 2 hours (Q2h), resulting in 1,100,000 BG measurements for 3000 simulated patients. The model varied insulin sensitivity, initial BG value and rate of gluconeogenesis. The primary outcomes included rates of hyperglycemia (BG > 180 mg/dL), hypoglycemia (BG < 70 and 40 mg/dL), proportion of patients with elevated glucose variability (within-patient coefficient of variation [CV] > 20%), and time in range (BG ranges 80-150 mg/dL and 80-180 mg/dL). Percentages of hyperglycemia, hypoglycemia at both thresholds, and patients with elevated glucose variability, as well as time outside glycemic targets, were substantially higher in simulations with measurement interval Q2h compared to those with measurement interval Q1h, and moderately higher in simulations with Q1h than in those with Q15'. Higher measurement frequency mitigated the deleterious effect of high measurement imprecision, defined as CV ≥ 15%. This Monte Carlo simulation suggests that glycemic control in critically ill patients is better with a BG measurement interval no longer than 1 h, with further benefit obtained from a 15-minute measurement interval. These findings have important implications for the development of glycemic control standards.
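    As a heavily hedged toy illustration (it does not reproduce the coupled insulin-protocol simulation of the study), the sketch below shows how measurement interval and analytical imprecision alone change the observed glycemic metrics for a fixed underlying glucose trajectory, here a simple mean-reverting random walk with arbitrary parameters.

```python
# Heavily hedged toy sketch: effect of measurement interval and analytical
# imprecision (CV) on *observed* glycemic metrics for a fixed underlying BG
# trajectory.  The trajectory model and all parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
minutes = 200 * 60
bg = np.empty(minutes)
bg[0] = 160.0
for t in range(1, minutes):                        # mean-reverting walk around 130 mg/dL
    bg[t] = bg[t - 1] + 0.002 * (130.0 - bg[t - 1]) + rng.normal(0.0, 1.5)

def observed_metrics(interval_min, cv):
    samples = bg[::interval_min]
    measured = samples * (1.0 + rng.normal(0.0, cv, size=samples.size))
    return {"%>180": np.mean(measured > 180) * 100,
            "%<70": np.mean(measured < 70) * 100,
            "CV%": measured.std() / measured.mean() * 100,
            "%in 80-150": np.mean((measured >= 80) & (measured <= 150)) * 100}

for interval in (15, 60, 120):                     # Q15', Q1h, Q2h
    for cv in (0.05, 0.15):                        # 5% vs 15% analytical imprecision
        print(interval, "min, CV", cv, observed_metrics(interval, cv))
```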

  10. Comparison of 2-D model simulations of ozone and nitrous oxide at high latitudes with stratospheric measurements

    NASA Technical Reports Server (NTRS)

    Proffitt, M. H.; Solomon, S.; Loewenstein, M.

    1992-01-01

    A linear reference relationship between O3 and N2O has been used to estimate polar winter O3 loss from aircraft data taken in the lower stratosphere. Here, this relationship is evaluated at high latitudes by comparing it with a 2D model simulation and with NIMBUS 7 satellite measurements. Although comparisons with satellite measurements are limited to January through May, the model simulations are compared during other seasons. The model simulations and the satellite data are found to be consistent with the winter O3 loss analysis. It is shown that such analyses are likely to be inappropriate during other seasons.
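    A minimal sketch of the tracer-tracer approach described: fit a linear O3:N2O reference from air assumed unaffected by chemical loss, then estimate loss as the deviation of measured O3 from that reference at the same N2O. All numbers are synthetic placeholders.

```python
# Hedged sketch of the O3:N2O tracer-tracer loss estimate; all values are
# synthetic placeholders, not aircraft or satellite data.
import numpy as np

n2o_ref = np.array([310.0, 280.0, 250.0, 220.0, 190.0])   # ppbv, reference air
o3_ref = np.array([0.9, 1.6, 2.3, 3.0, 3.7])              # ppmv, reference ozone
slope, intercept = np.polyfit(n2o_ref, o3_ref, 1)          # linear reference relationship

n2o_obs, o3_obs = 230.0, 2.1                               # later observation (synthetic)
o3_expected = slope * n2o_obs + intercept
print("estimated O3 loss (ppmv):", o3_expected - o3_obs)   # ~0.7 ppmv here
```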

  11. Optical performance simulation of free-form optics for an eye implant based on a measurement data enhanced model.

    PubMed

    Sieber, Ingo; Li, Likai; Gengenbach, Ulrich; Beckert, Erik; Steinkopf, Ralf; Yi, Allen Y

    2016-08-20

    This paper describes the application of a modeling approach for precise optical performance prediction of free-form-optics-based subsystems on a demonstration model of an eye implant. The simulation model is enhanced by surface data measured on the free-form lens parts. The manufacturing of the free-form lens parts is realized by two different manufacturing processes: ultraprecision diamond machining and microinjection molding. Both processes are evaluated by simulating the optical performance on the basis of comparisons of the measured surfaces with the nominal geometry. The simulation results indicate that improvements in manufacturing accuracy were obtained from the process optimization of the microinjection molding.

  12. Surface structure of imidazolium-based ionic liquids: Quantitative comparison between simulations and high-resolution RBS measurements

    NASA Astrophysics Data System (ADS)

    Nakajima, Kaoru; Nakanishi, Shunto; Lísal, Martin; Kimura, Kenji

    2016-03-01

    Elemental depth profiles of 1-alkyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([CnMIM][TFSI], n = 4, 6, 8) are measured using high-resolution Rutherford backscattering spectroscopy (HRBS). The profiles are compared with the results of molecular dynamics (MD) simulations. Both MD simulations and HRBS measurements show that the depth profiles deviate from the uniform stoichiometric composition in the surface region, showing preferential orientations of ions at the surface. The MD simulations qualitatively reproduce the observed HRBS profiles but the agreement is not satisfactory. The observed discrepancy is ascribed to the capillary waves. By taking account of the surface roughness induced by the capillary waves, the agreement becomes almost perfect.

  13. Simulation as a New Tool to Establish Benchmark Outcome Measures in Obstetrics

    PubMed Central

    2015-01-01

    Background: There are not enough clinical data from rare critical events to calculate statistics that would allow one to decide whether the management of an actual event fell below what could reasonably be expected (i.e., was an outlier). Objectives: In this project we used simulation to describe the distribution of management times as an approach to deciding whether the management of a simulated obstetrical crisis scenario could be considered an outlier. Design: Twelve obstetrical teams managed 4 previously developed scenarios. Relevant outcome variables were defined by expert consensus. The distribution of response times of the teams that performed the respective intervention was displayed graphically, and medians and quartiles were calculated using rank-order statistics. Results: Only 7 of the 12 teams performed chest compressions during the arrest following the 'cannot intubate/cannot ventilate' scenario. All other outcome measures were performed by at least 11 of the 12 teams. Calculation of medians and quartiles with 95% CIs was possible for all outcomes. Given the small sample size, the confidence intervals were large. Conclusion: We demonstrated the use of simulation to calculate quantiles for the management times of critical events. This approach could assist in deciding whether a given performance can be considered normal, and it also points to aspects of care that seem to pose particular challenges, as evidenced by the large number of teams that did not perform the expected maneuver. However, sufficiently large sample sizes (e.g., from a national database) will be required to calculate acceptable confidence intervals and to establish actual tolerance limits. PMID:26107661
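    The rank-order statistics mentioned in the abstract can be reproduced in a few lines. The sketch below computes the median, quartiles, and a distribution-free confidence interval for the median from a small sample of management times, using binomial order statistics; the times themselves are made-up values, not data from the study.

```python
# Sketch: median, quartiles, and a distribution-free 95% CI for the median
# from a small sample, via binomial order statistics. Times are invented.
import math

def quantile(sorted_x, p):
    """Nearest-rank quantile of an already sorted sample."""
    n = len(sorted_x)
    rank = max(1, math.ceil(p * n))
    return sorted_x[rank - 1]

def median_ci(sorted_x, conf=0.95):
    """Distribution-free CI for the median: widest symmetric pair of order
    statistics whose binomial coverage still meets the requested level."""
    n = len(sorted_x)
    best = (sorted_x[0], sorted_x[-1])
    k = 0
    while True:
        l, u = k + 1, n - k            # 1-based order-statistic indices
        if l >= u:
            break
        coverage = sum(math.comb(n, i) for i in range(l, u)) / 2 ** n
        if coverage < conf:
            break
        best = (sorted_x[l - 1], sorted_x[u - 1])
        k += 1
    return best

if __name__ == "__main__":
    # hypothetical times (s) for 12 simulated teams to start chest compressions
    times = sorted([45, 52, 60, 63, 70, 72, 75, 81, 90, 95, 110, 130])
    med = quantile(times, 0.5)
    q1, q3 = quantile(times, 0.25), quantile(times, 0.75)
    lo, hi = median_ci(times)
    print(f"median={med}s  IQR=({q1}s, {q3}s)  ~95% CI for median=({lo}s, {hi}s)")
```

    With only 12 teams, the confidence interval spans most of the sample, which is the point the abstract makes about needing much larger samples before tolerance limits can be set.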

  14. Pulse-echo ultrasound transit time spectroscopy: A comparison of experimental measurement and simulation prediction.

    PubMed

    Wille, Marie-Luise; Almualimi, Majdi A; Langton, Christian M

    2016-01-01

    Considering ultrasound propagation through complex composite media as an array of parallel sonic rays, a comparison of computer-simulated predictions with experimental data has previously been reported for transmission mode (where one transducer serves as transmitter and the other as receiver) in a series of 10 acrylic step-wedge samples, immersed in water, exhibiting varying degrees of transit time inhomogeneity. In this study, the same samples were used in pulse-echo mode, where a single ultrasound transducer served as both transmitter and receiver, detecting both 'primary' (internal sample interface) and 'secondary' (external sample interface) echoes. A transit time spectrum was derived, describing the proportion of sonic rays with a particular transit time. A computer simulation was performed to predict the transit time and amplitude of the various echoes created and was compared with the experimental data. Applying an amplitude-tolerance analysis, 91.7% ± 3.7% of the simulated data were within ±1 standard deviation of the experimentally measured amplitude-time data. Correlation of predicted and experimental transit time spectra provided coefficients of determination (R²) ranging from 96.8% to 100.0% for the various samples tested. The results of this study provide good evidence for the concept of parallel sonic rays. Furthermore, deconvolution of experimental input and output signals has been shown to provide an effective method to identify echoes otherwise lost due to phase cancellation. Potential applications of pulse-echo ultrasound transit time spectroscopy include improvement of ultrasound image fidelity by improving spatial resolution and reducing phase interference artefacts.
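    The parallel-sonic-ray model underlying both the simulation and the experiment can be sketched very simply: each ray crosses a different thickness of the step wedge, and its front- and back-face echo times are accumulated into a transit time spectrum. The geometry and sound speeds below are assumed values chosen for illustration, not the published sample dimensions.

```python
# Minimal parallel-ray sketch: pulse-echo transit time spectrum for a toy
# acrylic step wedge in water. Geometry and sound speeds are assumptions.
from collections import Counter

C_WATER = 1480.0    # m/s, approximate speed of sound in water
C_ACRYLIC = 2730.0  # m/s, approximate speed of sound in acrylic
WATER_PATH = 0.020  # m, assumed standoff between transducer and sample face

def round_trip_times(step_thickness_m):
    """Round-trip times (s) of the front- and back-face echoes for one ray."""
    front = 2.0 * WATER_PATH / C_WATER
    back = front + 2.0 * step_thickness_m / C_ACRYLIC
    return front, back

def transit_time_spectrum(thicknesses, bin_us=0.5):
    """Histogram of echo transit times (microseconds) over all parallel rays."""
    spectrum = Counter()
    for t in thicknesses:
        for echo in round_trip_times(t):
            spectrum[round(echo * 1e6 / bin_us) * bin_us] += 1
    return dict(sorted(spectrum.items()))

if __name__ == "__main__":
    # toy step wedge: equal numbers of rays through 2, 4, ..., 10 mm of acrylic
    steps_mm = [2, 4, 6, 8, 10]
    thicknesses = [s * 1e-3 for s in steps_mm for _ in range(100)]
    for t_us, count in transit_time_spectrum(thicknesses).items():
        print(f"{t_us:6.1f} us : {count} rays")
```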

  15. Measurement and simulation of subsurface tracer migration to tile drains in low permeability, macroporous soil

    NASA Astrophysics Data System (ADS)

    Bishop, Joshua M.; Callaghan, Michael V.; Cey, Edwin E.; Bentley, Larry R.

    2015-06-01

    Multiyear monitoring and simulation of a conservative tracer were used in this study to investigate preferential flow and macropore-matrix interactions in low permeability, macroporous soil. 2,6-Difluorobenzoic acid (DFBA) tracer was applied to a 20 × 20 m drip-irrigated test plot situated over two tile drains. Tracer movement over the 2009 and 2010 field seasons was monitored using tile drain effluent, suction lysimeters, monitoring wells, and soil cores. Despite similar volumes of water applied to the plot in each season, 10 times more water and 14 times more DFBA were captured by the drains in 2010 due to wetter regional hydrologic conditions. The importance of preferential flow along macropores was shown by rapid DFBA breakthrough to the tile (<47 h) and by DFBA detections in sand units below the tile drains. Preferential flow resulted in less than 8% of the DFBA mass being captured by the tiles over both years. With much of the DFBA mass (75%) retained in the upper 0.25 m of the soil at the end of 2009, numerical simulations were used to quantify the migration of this in situ tracer during the subsequent 2010 field season. Dual-permeability and dual-porosity models produced similar matches to the measured tile drain flows and concentrations, but solute leaching was captured more effectively by the dual-permeability formulation. The simulations highlighted limitations in current descriptions of small-scale mass transfer between the matrix and macropore domains, which do not consider time-dependent transfer coefficients or nonuniform distributions of solute mass within soil matrix blocks.
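    The macropore-matrix mass transfer term that the conclusions single out can be illustrated with a minimal two-domain mixing-cell model: solute in the mobile (macropore) domain exchanges with the immobile (matrix) domain at a rate proportional to the concentration difference, here with a constant first-order coefficient, which is precisely the assumption the authors identify as limiting. All parameter values and the single-cell geometry are illustrative, not the calibrated model from the study.

```python
# Minimal two-domain mixing-cell sketch of first-order macropore-matrix
# exchange with flushing of the mobile domain. All parameters are assumed.

def simulate_exchange(c_mobile=1.0, c_matrix=0.0,
                      theta_mobile=0.05, theta_matrix=0.40,
                      alpha=0.02, flush_rate=0.10,
                      dt=0.1, t_end=100.0):
    """Forward-Euler integration of a two-domain mixing-cell model.

    alpha      : first-order mass transfer coefficient (1/h), assumed constant
    flush_rate : fraction of mobile water replaced by tracer-free water per hour
    """
    t = 0.0
    history = []
    while t <= t_end:
        exchange = alpha * (c_mobile - c_matrix)   # mass flux per bulk volume
        c_mobile += (-exchange / theta_mobile - flush_rate * c_mobile) * dt
        c_matrix += (exchange / theta_matrix) * dt
        history.append((t, c_mobile, c_matrix))
        t += dt
    return history

if __name__ == "__main__":
    for t, cm, cim in simulate_exchange()[::100]:
        print(f"t={t:6.1f} h  mobile={cm:.4f}  matrix={cim:.4f}")
```

    A constant alpha forces a single exchange timescale; making alpha depend on time or on the solute distribution inside the matrix blocks is the kind of refinement the abstract's closing sentence points toward.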