Science.gov

Sample records for performance analysis 1994-2002

  1. MyPyramid equivalents database for USDA survey food codes, 1994-2002, version 1.0.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The MyPyramid Equivalents Database for USDA Food Codes, 1994-2002 Version 1.0 (MyPyrEquivDB_v1) is based on USDA’s MyPyramid Food Guidance System (2005) and provides equivalents data on the five major food groups and selected subgroups (32 groups in all) for all USDA survey food codes available for ...

  2. [Characteristics of malaria cases diagnosed in Edirne province between 1994-2002].

    PubMed

    Ay, Gazanfer; Gürcan, Saban; Tatman Otkun, Müşerref; Tuğrul, Murat; Otkun, Metin

    2004-01-01

In this study, the epidemiological characteristics of malaria cases in Edirne province were investigated. Between 1994 and 2002, a total of 317,087 blood samples were collected and examined for the presence of Plasmodium by the medical staff of the Malaria Control Department and Health Centers, from soldiers in the province by selective active surveillance and from the resident population by active or passive surveillance. Plasmodium spp. were detected in 281 of them, and the characteristics of these malaria cases were investigated. Of the cases, 238 (84.7%) were detected in the first three years, mostly in September. While the indigenous cases occurred in districts where rice is planted intensively, the imported cases occurred in districts heavily populated by military staff. Of the imported cases, 62% originated from Diyarbakir, Batman and Sanliurfa provinces (southeastern Turkey). P. vivax was identified as the causative agent in all samples except one case of P. ovale; that case, a student from Afghanistan, has been the only one reported in Turkey so far. Emphasis on mosquito control in intensively rice-planted districts and strict surveillance of military staff, particularly those arriving from Southeastern Anatolia, have led to successful control of malaria in the Edirne region. PMID:15293910

  3. A probable extralimital postbreeding assembly of bufflehead Bucephala albeola in southcentral North Dakota, USA, 1994-2002

    USGS Publications Warehouse

    Igl, L.D.

    2003-01-01

The Bufflehead Bucephala albeola breeds predominantly in Canada and Alaska (USA). Evidence suggests that the species may have recently expanded its breeding range southward into central and south-central North Dakota. This paper presents data on observations of Buffleheads during the breeding season in Kidder County, North Dakota, 1994-2002, and discusses the possibility that the species has not expanded its breeding range but rather has established an extralimital post-breeding staging area south of its typical breeding range.

  4. A probable extralimital post-breeding assembly of Bufflehead Bucephala albeola in southcentral North Dakota, USA, 1994-2002

    USGS Publications Warehouse

    Igl, L.D.

    2003-01-01

    The Bufflehead Bucephala albeola breeds predominantly in Canada and Alaska (USA). Evidence suggests that the species may have recently expanded its breeding range southward into central and south central North Dakota. This paper presents data on observations of Buffleheads during the breeding season in Kidder County, North Dakota, 1994-2002, and discusses the possibility that the species has not expanded its breeding range but rather has established an extralimital post-breeding staging area south of its typical breeding range.

  5. Doppler images of the RS CVn binary II Pegasi during the years 1994-2002

    NASA Astrophysics Data System (ADS)

    Lindborg, M.; Korpi, M. J.; Hackman, T.; Tuominen, I.; Ilyin, I.; Piskunov, N.

    2011-02-01

    Aims: We publish 16 Doppler imaging temperature maps for the years 1994-2002 of the active RS CVn star II Peg. The six maps from 1999-2002 are based on previously unpublished observations. Through Doppler imaging we want to study the spot evolution of the star and in particular compare this with previous results showing a cyclic spot behaviour and persistent, active longitudes. Methods: The observations were collected with the SOFIN spectrograph at the Nordic Optical Telescope. The temperature maps were calculated using a Doppler imaging code based on Tikhonov regularization. Results: During 1994-2001, our results show a consistent trend in the derived longitudes of the principal and secondary temperature minima over time such that the magnetic structure appears to rotate somewhat more rapidly than the orbital period of this close binary. A sudden phase jump in the active region occurred between the observing seasons of 2001 and 2002. No clear trend over time is detected in the derived latitudes of the spots, indicating that the systematic motion could be related to the drift of the spot-generating mechanism rather than to differential rotation. The derived temperature maps are quite similar to the ones obtained earlier with different methods and the main differences occur in the spot latitudes and relative strength of the spot structures. Conclusions: We observe both longitude and latitude shifts in the spot activity of II Peg. However, our results are not consistent with the periodic behaviour presented in previous studies. Full Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/526/A44

  6. Internet Access in U.S. Public Schools and Classrooms: 1994-2002. E.D. Tabs.

    ERIC Educational Resources Information Center

    Kleiner, Anne; Lewis, Laurie

    This report presents data on Internet access in U.S. public schools from 1994 to 2002 by school characteristics. It provides trend analysis on the progress of public schools and classrooms in connecting to the Internet and on the ratio of students to instructional computers with Internet access. For the year 2002, this report also presents data on…

  7. [Molecular epidemiology of rabies epizootics in Colombia, 1994-2002: evidence of human and canine rabies associated with chiroptera].

    PubMed

    Páez, Andrés; Nuñez, Constanza; García, Clemencia; Boshell, Jorge

    2003-03-01

Three urban rabies outbreaks have been reported in Colombia during the last two decades, one of which is ongoing in the Caribbean region (northern Colombia). The earlier outbreaks occurred almost simultaneously in Arauca (eastern Colombia) and in the Central region, ending in 1997. Phylogenetic relationships among rabies viruses isolated from the three areas were based on a comparison of cDNA fragments coding for the endodomain of protein G and a fragment of the L protein obtained by RT-PCR. The sequenced amplicons, which included the G-L intergenic region, contained 902 base pairs. Phylogenetic analysis showed three distinct groups of viruses. Colombian genetic variant I viruses were isolated only from Arauca and the Central region, but are now apparently extinct. Colombian genetic variant II viruses were isolated in the Caribbean region and are still being transmitted in that area. The third group, comprising bat rabies variants, was isolated from two insectivorous bats, three domestic dogs and a human. This associates bat rabies virus with rabies in Colombian dogs and humans, and indicates bats to be a rabies reservoir of public health significance. PMID:12696396

  8. Performance Measurement Analysis System

    Energy Science and Technology Software Center (ESTSC)

    1989-06-01

The PMAS4.0 (Performance Measurement Analysis System) is a user-oriented system designed to track the cost and schedule performance of Department of Energy (DOE) major projects (MPs) and major system acquisitions (MSAs) reporting under DOE Order 5700.4A, Project Management System. PMAS4.0 provides for the analysis of performance measurement data produced from management control systems complying with the Federal Government's Cost and Schedule Control Systems Criteria.

  9. Ariel Performance Analysis System

    NASA Astrophysics Data System (ADS)

    Ariel, Gideon B.; Penny, M. A.; Saar, Dany

    1990-08-01

    The Ariel Performance Analysis System is a computer-based system for the measurement, analysis and presentation of human performance. The system is based on a proprietary technique for processing multiple high-speed film and video recordings of a subject's performance. It is noninvasive, and does not require wires, sensors, markers or reflectors. In addition, it is portable and does not require modification of the performing environment. The scale and accuracy of measurement can be set to whatever levels are required by the activity being performed.

  10. Performance Support for Performance Analysis

    ERIC Educational Resources Information Center

    Schaffer, Scott; Douglas, Ian

    2004-01-01

    Over the past several years, there has been a shift in emphasis in many business, industry, government and military training organizations toward human performance technology or HPT (Rossett, 2002; Dean, 1995). This trend has required organizations to increase the human performance knowledge, skills, and abilities of the training workforce.…

  11. MIR Performance Analysis

    SciTech Connect

    Hazen, Damian; Hick, Jason

    2012-06-12

    We provide analysis of Oracle StorageTek T10000 Generation B (T10KB) Media Information Record (MIR) Performance Data gathered over the course of a year from our production High Performance Storage System (HPSS). The analysis shows information in the MIR may be used to improve tape subsystem operations. Most notably, we found the MIR information to be helpful in determining whether the drive or tape was most suspect given a read or write error, and for helping identify which tapes should not be reused given their history of read or write errors. We also explored using the MIR Assisted Search to order file retrieval requests. We found that MIR Assisted Search may be used to reduce the time needed to retrieve collections of files from a tape volume.
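    As a hypothetical illustration of ordering retrieval requests by tape position (the idea behind using MIR Assisted Search for request ordering), one can sort requests by wrap and block so the drive moves through the volume monotonically instead of seeking back and forth. The record fields below are invented for the example and ignore serpentine direction reversals; this shows only the basic idea, not the actual MIR layout.

```python
# Hypothetical retrieval requests with invented position fields.
requests = [
    {"file": "a", "wrap": 3, "block": 120},
    {"file": "b", "wrap": 1, "block": 4500},
    {"file": "c", "wrap": 1, "block": 200},
    {"file": "d", "wrap": 2, "block": 90},
]

# Sort by (wrap, block) so the drive sweeps the tape once, front to back.
ordered = sorted(requests, key=lambda r: (r["wrap"], r["block"]))
print([r["file"] for r in ordered])  # ['c', 'b', 'd', 'a']
```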

  12. DAS performance analysis

    SciTech Connect

    Bates, G.; Bodine, S.; Carroll, T.; Keller, M.

    1984-02-01

    This report begins with an overview of the Data Acquisition System (DAS), which supports several of PPPL's experimental devices. Performance measurements which were taken on DAS and the tools used to make them are then described.

  13. Dependability and performability analysis

    NASA Technical Reports Server (NTRS)

    Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.

    1993-01-01

    Several practical issues regarding specifications and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's) and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models could lead to more concise model specifications and solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues were identified: largeness, stiffness, and non-exponentiality, and a variety of approaches are discussed to deal with them, including some of the latest research efforts.
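    A minimal Markov reward model of the kind compared above can be sketched as a two-state availability CTMC with a reward rate of 1 in the up state and 0 in the down state; the expected steady-state reward rate is then the availability mu/(lam + mu). The rates below are illustrative.

```python
import numpy as np

lam, mu = 0.01, 1.0                 # failure and repair rates (illustrative)
Q = np.array([[-lam, lam],
              [mu, -mu]])           # CTMC infinitesimal generator
r = np.array([1.0, 0.0])            # reward rate per state (up, down)

# Solve pi Q = 0 with sum(pi) = 1 by stacking the normalization condition.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

reward_rate = float(pi @ r)         # expected steady-state reward rate
print(round(reward_rate, 6))        # equals mu / (lam + mu)
```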

  14. Analysis of EDP performance

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.

  15. MPQC: Performance Analysis and Optimization

    SciTech Connect

    Sarje, Abhinav; Williams, Samuel; Bailey, David

    2012-11-30

    MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.

  16. Lidar performance analysis

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1994-01-01

Section 1 details the theory used to build the lidar model, provides results of using the model to evaluate AEOLUS instrument designs, and provides snapshots of the visual appearance of the coded model. Appendix A contains a Fortran program to calculate various forms of the refractive index structure function. This program was used to determine the refractive index structure function used in the main lidar simulation code. Appendix B contains a memo on the optimization of the lidar telescope geometry for a line-scan geometry. Appendix C contains the code for the main lidar simulation and brief instructions on running the code. Appendix D contains a Fortran code to calculate the maximum permissible exposure for the eye from the ANSI Z136.1-1992 eye safety standards. Appendix E contains a paper on the eye safety analysis of a space-based coherent lidar presented at the 7th Coherent Laser Radar Applications and Technology Conference, Paris, France, 19-23 July 1993.

  17. Techniques for Automated Performance Analysis

    SciTech Connect

    Marcus, Ryan C.

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.
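    One simple instance of such a predictive model, with an invented data set and functional form, is a least-squares fit of wall-clock time to a compute term n/p plus a communication term log2(p):

```python
import numpy as np

# Illustrative observations only: problem size n, core count p, time t (s).
runs = np.array([
    [1e6, 8, 13.1],
    [1e6, 16, 7.2],
    [2e6, 16, 13.4],
    [2e6, 32, 7.6],
    [4e6, 32, 13.9],
])
n, p, t = runs[:, 0], runs[:, 1], runs[:, 2]

# Model: t ~ a + b * (n / p) + c * log2(p), fit by least squares.
X = np.column_stack([np.ones_like(n), n / p, np.log2(p)])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
pred = X @ coef

print(np.allclose(pred, t, atol=1.0))  # the simple model tracks the toy data
```

    Real predictive models of this kind add terms for memory traffic, file-system load, and MPI configuration; the point here is only the reduction to a well-known fitting technique.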

  18. Inlet Performance Analysis Code Developed

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Barnhart, Paul J.

    1998-01-01

The design characteristics of an inlet very much depend on whether the inlet is to be flown at subsonic, supersonic, or hypersonic speed. Whichever the case, the primary function of an inlet is to deliver free-stream air to the engine face at the highest stagnation pressure possible and with the lowest possible variation in both stagnation pressure and temperature. At high speeds, this is achieved by a system of oblique and/or normal shock waves, and possibly some isentropic compression. For both subsonic and supersonic flight, current design practice indicates that the inlet should deliver the air to the engine face at approximately Mach 0.45. As a result, even for flight in the high subsonic regime, the inlet must retard (or diffuse) the air substantially. Second, the design of an inlet is influenced largely by the compromise between high performance and low weight. This compromise involves tradeoffs between the mission requirements, flight trajectory, airframe aerodynamics, engine performance, and weight, all of which, in turn, influence each other. Therefore, to study the effects of some of these influential factors, the Propulsion System Analysis Office of the NASA Lewis Research Center developed the Inlet Performance Analysis Code (IPAC). This code uses oblique shock and Prandtl-Meyer expansion theory to predict inlet performance. It can be used to predict performance for a given inlet geometric design such as pitot, axisymmetric, and two-dimensional. IPAC also can be used to design preliminary inlet systems and to make subsequent performance analyses. It computes the total pressure, the recovery, the airflow, and the drag coefficients. The pressure recovery includes losses associated with normal and oblique shocks, internal and external friction, the sharp lip, and diffuser components. Flow rate includes captured, engine, spillage, bleed, and bypass flows. The aerodynamic drag calculation includes drags associated with spillage, cowl lip suction, wave, bleed
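    The oblique shock and Prandtl-Meyer expansion theory that IPAC is based on can be illustrated with two standard textbook relations (this is not IPAC code, just the underlying gas dynamics for gamma = 1.4):

```python
from math import atan, degrees, sqrt

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in degrees, valid for M >= 1."""
    g = (gamma + 1) / (gamma - 1)
    return degrees(sqrt(g) * atan(sqrt((M * M - 1) / g)) - atan(sqrt(M * M - 1)))

def normal_shock_recovery(M, gamma=1.4):
    """Stagnation-pressure ratio pt2/pt1 across a normal shock at Mach M."""
    a = ((gamma + 1) * M * M / ((gamma - 1) * M * M + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M * M - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

print(round(prandtl_meyer(2.0), 2))          # 26.38 (degrees)
print(round(normal_shock_recovery(2.0), 4))  # 0.7209
```

    A code like IPAC chains such relations across each oblique shock and expansion in the inlet to build up the overall pressure recovery.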

  19. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
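    The wavelet-thresholding idea behind the first data-reduction method can be sketched generically with a Haar transform: most coefficients of a smooth per-task load signal are near zero and can be dropped. This is the textbook technique, not Libra's implementation.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: averages and details."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return s, d

def haar_compress(x, keep=0.1):
    """Full Haar transform, then zero all but the largest fraction `keep`."""
    coeffs, s = [], x.astype(float)
    while len(s) > 1:                      # length must be a power of two
        s, d = haar_step(s)
        coeffs.append(d)
    c = np.concatenate([s] + coeffs[::-1])
    thresh = np.quantile(np.abs(c), 1 - keep)
    c[np.abs(c) < thresh] = 0.0
    return c

# Toy "per-task load" signal: a smooth trend plus small noise compresses well.
rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 3, 1024)) + 0.01 * rng.standard_normal(1024)
c = haar_compress(load, keep=0.05)
print(np.mean(c == 0) > 0.9)               # most coefficients were dropped
```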

  20. MUSE instrument global performance analysis

    NASA Astrophysics Data System (ADS)

    Loupias, M.; Bacon, R.; Caillier, P.; Fleischmann, A.; Jarno, A.; Kelz, A.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J. L.; Manescau, A.; Nicklas, H.; Parès, L.; Pécontal, A.; Reiss, R.; Remillieux, A.; Renault, E.; Roth, M. M.; Rupprecht, G.; Stuik, R.

    2010-07-01

MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument developed for ESO (European Southern Observatory), to be installed on the VLT (Very Large Telescope) in 2012. The MUSE instrument can simultaneously record 90,000 spectra in the visible wavelength range (465-930 nm) across a 1 x 1 arcmin2 field of view, thanks to 24 identical Integral Field Units (IFU). A collaboration of 7 institutes has successfully passed the Final Design Review and is currently working on the first sub-assemblies. The performance budget is allocated among 5 main functional sub-systems. The Fore Optics sub-system derotates and anamorphoses the VLT Nasmyth focal plane image; the Splitting and Relay Optics, associated with the Main Structure, feed each IFU with 1/24th of the field of view. Each IFU combines a 3D function, ensured by an image-slicer system and a spectrograph, with a detection function provided by a 4k x 4k CCD cooled down to 163 K. The 5th function is the calibration and data reduction of the instrument. This article describes the breakdown of performance budgets among these sub-systems (throughput, image quality, etc.) and underlines the constraining parameters of the interfaces, whether internal or with the VLT. The validation of all these requirements is a critical task, started a few months ago, which requires clear traceability and performance analysis.

  1. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: (1) inadequate shape parameterization algorithms, and (2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing CFD designers to freely create their own shape parameters, thereby eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every desired shape change. The program can perform a design change in a markedly reduced amount of time compared with the traditional process, in which the designer returns to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  2. Stage Separation Performance Analysis Project

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Zhang, Sijun; Liu, Jiwen; Wang, Ten-See

    2001-01-01

The stage separation process is an important phenomenon in multi-stage launch vehicle operation. The transient flowfield coupled with the multi-body system is a challenging problem in design analysis. The thermodynamic environment with burning propellants during the upper-stage engine start in the separation process adds to the complexity of the entire system. Understanding the underlying flow physics and vehicle dynamics during stage separation is required in designing a multi-stage launch vehicle with good flight performance. A computational fluid dynamics model with the capability to couple transient multi-body dynamics systems will be a useful tool for simulating the effects of transient flowfield, plume/jet heating and vehicle dynamics. A computational model using a generalized mesh system will be used as the basis of this development. The multi-body dynamics system will be solved by integrating a system of six-degree-of-freedom equations of motion with high accuracy. Multi-body mesh systems and their interactions will be modeled using parallel computing algorithms. An adaptive mesh refinement method will also be employed to enhance solution accuracy in the transient process.

  3. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  4. Analysis of Costs and Performance

    ERIC Educational Resources Information Center

    Duchesne, Roderick M.

    1973-01-01

    This article outlines a library management information system concerned with total library costs and performance. The system is essentially an adaptation of well-proven industrial and commercial management accounting techniques to the library context. (24 references) (Author)

  5. Guided wave tomography performance analysis

    NASA Astrophysics Data System (ADS)

    Huthwaite, Peter; Lowe, Michael; Cawley, Peter

    2016-02-01

Quantifying wall loss caused by corrosion is a significant challenge for the petrochemical industry. Corrosion commonly occurs at pipe supports, where surface access for inspection is limited. Guided wave tomography is pursued as a solution to this: guided waves are transmitted through the region of interest from an array, and tomographic reconstruction techniques are applied to the measured signals in order to produce a map of thickness. There are many parameters in the system which can affect the performance; this paper investigates how the accuracy varies as defect width and depth, operating frequency and guided wave mode are all changed. For the S0 mode, the best performance was seen around 170 kHz on the 10 mm plate, with poor performance seen at almost all other frequencies. A0 showed better performance across a broad range of frequencies, with resolution improving with frequency as the wavelength reduced. However, it was shown that the resolution limit did drop relative to the wavelength, limiting the performance at high frequencies slightly.

  6. Adaptive Optics Communications Performance Analysis

    NASA Technical Reports Server (NTRS)

    Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.

    2004-01-01

    The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.
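    The PPM error-probability computation can be illustrated with a small Monte Carlo sketch under an idealized Poisson photon-counting model: one slot per symbol carries signal-plus-background counts, the remaining M-1 slots carry background only, and the receiver picks the slot with the most counts. The rates are illustrative, not the paper's simulated focal-plane distributions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, trials = 16, 20000
Ks, Kb = 20.0, 1.0   # mean signal and background counts per slot (illustrative)

# Background counts in every slot; slot 0 additionally carries the signal.
counts = rng.poisson(Kb, size=(trials, M))
counts[:, 0] += rng.poisson(Ks, size=trials)

# Maximum-count detection: a symbol error occurs when slot 0 does not win.
errors = np.argmax(counts, axis=1) != 0
ser = float(errors.mean())
print(ser < 0.01)    # symbol error rate is low at this signal level
```

    Raising Kb (more background) or lowering Ks (weaker compensation) drives the error rate up, which is the trade-off the adaptive optics gain quantifies.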

  7. Structural-Thermal-Optical-Performance (STOP) Analysis

    NASA Technical Reports Server (NTRS)

    Bolognese, Jeffrey; Irish, Sandra

    2015-01-01

The presentation will be given at the 26th Annual Thermal Fluids Analysis Workshop (TFAWS 2015) hosted by the Goddard Space Flight Center (GSFC) Thermal Engineering Branch (Code 545). A STOP analysis is a multidiscipline analysis, consisting of structural, thermal, and optical performance analyses, that is performed for all space flight instruments and satellites. This course will explain the different parts of performing this analysis. The student will learn how to effectively interact with each discipline in order to accurately obtain the system analysis results.

  8. Performance of statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Davis, R. F.; Hines, D. E.

    1973-01-01

    Statistical energy analysis (SEA) methods have been developed for high frequency modal analyses on random vibration environments. These SEA methods are evaluated by comparing analytical predictions to test results. Simple test methods are developed for establishing SEA parameter values. Techniques are presented, based on the comparison of the predictions with test values, for estimating SEA accuracy as a function of frequency for a general structure.
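    The core of an SEA prediction is a linear power balance among subsystem modal energies. A textbook two-subsystem sketch, with loss factors invented for illustration rather than taken from the report:

```python
import numpy as np

omega = 2 * np.pi * 1000.0   # band centre frequency, rad/s
eta1, eta2 = 0.01, 0.02      # damping loss factors (illustrative)
eta12, eta21 = 0.005, 0.003  # coupling loss factors (illustrative)
P = np.array([1.0, 0.0])     # external input power, W: only subsystem 1 driven

# Power balance: P_i = omega * (eta_i E_i + eta_ij E_i - eta_ji E_j).
A = omega * np.array([
    [eta1 + eta12, -eta21],
    [-eta12, eta2 + eta21],
])
E = np.linalg.solve(A, P)    # steady-state subsystem energies
print(E[0] > E[1] > 0)       # the driven subsystem holds more energy
```

    Comparing energies (or the velocities derived from them) against test data, as the report does, is how SEA parameter values and accuracy are established.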

  9. Quasi steady MPD performance analysis

    NASA Astrophysics Data System (ADS)

    Guarducci, F.; Paccani, G.; Lehnert, J.

    2011-04-01

Pulsed (quasi-steady) solid propellant magnetoplasmadynamic thruster operation has been investigated in both the self-induced and applied magnetic field cases. Input parameters have been varied in order to analyze the dependence of performance (in particular impulse bit) on these parameters. The stored energy per shot has been set to four values between 2000 and 3000 J, while the magnetic field has been set to six values between 0 and 159 mT. Impulse bit has been evaluated through a thrust stand technique: a brief overview of this method is given together with a description of the data processing procedure. Current measurements allow the use of Maecker's formula as a reference for comparison between theoretical and empirical results, as well as between self-field and applied-field operation. Appreciable improvements of the thruster impulse bit performance have been noticed for defined sets of stored energy and applied field values. An inductive interaction between the magnet coil and the laboratory facilities, resulting in thrust stand displacement, has been observed: this phenomenon and its consequences on measurements have been investigated. A target used as a ballistic pendulum, insensitive to magnetic coupling, has been employed to acquire a new set of measurements: the results obtained with the target technique show a maximum discrepancy of 5% when compared with the measurements derived from the thrust stand technique. Finally, the thrust stand measurements appear to be affected by the inductive interactions only for very high values of the applied field.
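    The Maecker formula referenced above gives the self-field MPD thrust as T = (mu0/(4*pi)) * I^2 * (ln(ra/rc) + 3/4). A small sketch with illustrative numbers (not the experiment's actual geometry or current):

```python
from math import log, pi

MU0 = 4e-7 * pi  # vacuum permeability, H/m

def maecker_thrust(current, r_anode, r_cathode):
    """Self-field MPD thrust from Maecker's formula, in newtons."""
    return MU0 / (4 * pi) * current ** 2 * (log(r_anode / r_cathode) + 0.75)

# Illustrative values: 20 kA discharge, 5 cm anode / 1 cm cathode radius.
T = maecker_thrust(current=20e3, r_anode=0.05, r_cathode=0.01)
impulse_bit = T * 1e-3           # thrust times a 1 ms pulse, N*s
print(round(T, 1))               # ~94.4 N
```

    Since thrust scales with I^2, the formula lets a measured current waveform be integrated into a predicted impulse bit for comparison with the thrust stand and target measurements.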

  10. Performance Analysis of GYRO: A Tool Evaluation

    SciTech Connect

    Worley, P.; Roth, P.; Candy, J.; Shan, Hongzhang; Mahinthakumar,G.; Sreepathi, S.; Carrington, L.; Kaiser, T.; Snavely, A.; Reed, D.; Zhang, Y.; Huck, K.; Malony, A.; Shende, S.; Moore, S.; Wolf, F.

    2005-06-26

    The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.

  11. Stocker/feeder cattle standardized performance analysis.

    PubMed

    McGrann, J M; McAdams, J

    1995-07-01

    The Standardized Performance Analysis (SPA) for stocker/feeder cattle is a recommended set of production and financial performance analysis guidelines developed specifically for the grazing, growing, and finishing phases of beef cattle production. Guidelines were developed by members of the National Cattlemen's Association (NCA), Extension Specialists, and the National Integrated Resource Management Coordination Committee to provide beef cattle producers with a comprehensive, standardized means of measuring, analyzing, and reporting the performance and profitability of an operation. This article describes and illustrates through an example the performance measures chosen. The NCA certifies software and education materials conforming to the Stocker/Feeder Guidelines. PMID:7584818

  12. Building America Performance Analysis Procedures: Revision 1

    SciTech Connect

    2004-06-01

    To measure progress toward multi-year research goals, cost and performance trade-offs are evaluated through a series of controlled field and laboratory experiments supported by energy analysis techniques using test data to calibrate simulation models.

  13. A Perspective on DSN System Performance Analysis

    NASA Technical Reports Server (NTRS)

    Pham, Timothy T.

    2006-01-01

    This paper discusses the performance analysis effort being carried out in the NASA Deep Space Network. The activity involves root cause analysis of failures and assessment of key performance metrics. The root cause analysis helps pinpoint the true cause of observed problems so that proper correction can be effected. The assessment currently focuses on three aspects: (1) data delivery metrics such as Quantity, Quality, Continuity, and Latency; (2) link-performance metrics such as antenna pointing, system noise temperature, Doppler noise, frequency and time synchronization, wide-area-network loading, and link-configuration setup time; and (3) reliability, maintainability, and availability metrics. The analysis establishes whether the current system is meeting its specifications and, if so, how much margin is available. The findings help identify the weak points in the system and direct programmatic investment toward performance improvement.

  14. Paramedir: A Tool for Programmable Performance Analysis

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations, thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.

  15. A Performance Approach to Job Analysis.

    ERIC Educational Resources Information Center

    Folsom, Al

    2001-01-01

    Discussion of performance technology and training evaluation focuses on a job analysis process in the Coast Guard. Topics include problems with low survey response rates; costs; the need for appropriate software; discussions with stakeholders and subject matter experts; and maximizing worthy performance. (LRW)

  16. Teaching performance management using behavior analysis

    PubMed Central

    Ackley, George B. E.; Bailey, Jon S.

    1995-01-01

    A special undergraduate track in performance management, taught using behavior analysis principles, is described. The key elements of the program are presented, including the point systems and other reinforcement contingencies in the classes, the goals of the instructional activities, and many of the requirements used to evaluate student performance. Finally, the article provides examples of the performance management projects students have conducted with local businesses. PMID:22478206

  17. Performance optimisations for distributed analysis in ALICE

    NASA Astrophysics Data System (ADS)

    Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.

    2014-06-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to service and network latencies, remote data access, and a heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last two years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring, and management of large productions have evolved considerably. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved significantly to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time, and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management, and merging of results to allow for a better performing ALICE analysis.

  18. Interfacing Computer Aided Parallelization and Performance Analysis

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)

    2003-01-01

    When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.

  19. Integrating performance data collection, analysis, and visualization

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.; Rudolph, David C.

    1990-01-01

    An integrated data collection, analysis, and data visualization environment is described for a specific parallel system, the Intel iPSC/2 hypercube. The data collection components of the environment encompass software event tracing at the operating system and program levels, and a hardware-based performance monitoring system used to capture software events. A visualization system based on the X Window environment permits dynamic display and reduction of performance data. A performance data collection, analysis, and visualization environment makes it possible to assess the effects of architectural and system software variations.

  20. Comparative performance analysis of mobile displays

    NASA Astrophysics Data System (ADS)

    Safaee-Rad, Reza; Aleksic, Milivoje

    2012-01-01

    Cell-phone display performance (in terms of color quality and optical efficiency) has become a critical factor in creating a positive user experience. As a result, there is a significant amount of effort by cell-phone OEMs to provide a more competitive display solution. This effort is focused on using different display technologies (with significantly different color characteristics) and more sophisticated display processors. In this paper, the results of a mobile-display comparative performance analysis are presented. Three cell-phones from major OEMs are selected and their display performances are measured and quantified. Comparative performance analysis is done using display characteristics such as display color gamut size, RGB-channels crosstalk, RGB tone responses, gray tracking performance, color accuracy, and optical efficiency.
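    One of the metrics above, display color gamut size, is commonly computed as the area of the triangle spanned by the red, green, and blue primaries in CIE 1931 xy chromaticity space. A minimal sketch using the standard sRGB primaries as a reference point (the phones measured in the paper would each have their own primary coordinates):

```python
def gamut_area(red_xy, green_xy, blue_xy):
    """Area of the RGB primary triangle in CIE xy space (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = red_xy, green_xy, blue_xy
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# sRGB primaries in CIE 1931 xy coordinates
srgb_area = gamut_area((0.64, 0.33), (0.30, 0.60), (0.15, 0.06))  # ~0.112
```

    Gamut sizes are often reported as a percentage of a reference gamut (e.g., NTSC), i.e., the ratio of the two triangle areas.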

  1. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance-oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production-quality, simulation-based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  2. Comprehensive analysis of transport aircraft flight performance

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2008-04-01

    This paper reviews the state of the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper critically discusses the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines and an intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), the specific air range, the brake-release gross weight, and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance
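    Among the outputs listed, specific air range is the basic cruise-efficiency metric: distance covered per unit mass of fuel burned, i.e., true airspeed divided by total fuel flow. A minimal sketch with illustrative numbers (not Boeing-published figures):

```python
def specific_air_range(tas_m_s, fuel_flow_kg_s):
    """Specific air range: distance flown per kilogram of fuel burned (m/kg)."""
    return tas_m_s / fuel_flow_kg_s

# Illustrative cruise point: 250 m/s true airspeed, 2.0 kg/s total fuel flow
sar = specific_air_range(250.0, 2.0)  # 125 m of range per kg of fuel
```

    Plotting this quantity against weight and altitude yields the specific air range charts the abstract refers to.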

  3. Performance analysis of LAN bridges and routers

    NASA Technical Reports Server (NTRS)

    Hajare, Ankur R.

    1991-01-01

    Bridges and routers are used to interconnect Local Area Networks (LANs). The performance of these devices is important since they can become bottlenecks in large multi-segment networks. Performance metrics and test methodology for bridges and routers were not standardized. Performance data reported by vendors is not applicable to the actual scenarios encountered in an operational network. However, vendor-provided data can be used to calibrate models of bridges and routers that, along with other models, yield performance data for a network. Several tools are available for modeling bridges and routers - Network II.5 was used. The results of the analysis of some bridges and routers are presented.

  4. Using Covariance Analysis to Assess Pointing Performance

    NASA Technical Reports Server (NTRS)

    Bayard, David; Kang, Bryan

    2009-01-01

    A Pointing Covariance Analysis Tool (PCAT) has been developed for evaluating the expected performance of the pointing control system for NASA's Space Interferometry Mission (SIM). The SIM pointing control system is very complex, consisting of multiple feedback and feedforward loops, and operating with multiple latencies and data rates. The SIM pointing problem is particularly challenging due to the effects of thermomechanical drifts in concert with the long camera exposures needed to image dim stars. Other pointing error sources include sensor noises, mechanical vibrations, and errors in the feedforward signals. PCAT models the effects of finite camera exposures and all other error sources using linear system elements. This allows the pointing analysis to be performed using linear covariance analysis. PCAT propagates the error covariance using a Lyapunov equation associated with time-varying discrete and continuous-time system matrices. Unlike Monte Carlo analysis, which could involve thousands of computational runs for a single assessment, the PCAT analysis performs the same assessment in a single run. This capability facilitates the analysis of parametric studies, design trades, and "what-if" scenarios for quickly evaluating and optimizing the control system architecture and design.
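    The covariance propagation described above reduces, for a discrete-time linear system, to the Lyapunov recursion P[k+1] = A·P[k]·Aᵀ + Q, where A is the state-transition matrix and Q the process-noise covariance. A minimal sketch (the matrices below are illustrative stand-ins, not SIM's actual pointing model):

```python
import numpy as np

def propagate_covariance(A, Q, P0, steps):
    """Propagate an error covariance through P[k+1] = A P[k] A^T + Q."""
    P = P0.copy()
    for _ in range(steps):
        P = A @ P @ A.T + Q
    return P

# Illustrative 2-state model: stable dynamics, unit process noise
A = 0.5 * np.eye(2)
Q = np.eye(2)
P_final = propagate_covariance(A, Q, np.zeros((2, 2)), 100)
```

    For a stable A the recursion converges to a steady-state covariance, which is what lets a single deterministic run replace thousands of Monte Carlo trials.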

  5. Laser Atmospheric Wind Sounder (LAWS) performance analysis

    NASA Technical Reports Server (NTRS)

    Kenyon, D.; Petheram, J.

    1991-01-01

    The science objectives of NASA's Laser Atmospheric Wind Sounder (LAWS) are discussed, and results of the performance analysis of the LAWS system are presented together with the instrument configuration used for these performance analyses. The results of the analyses show that the science requirements for wind-velocity accuracies of m/sec in the lower troposphere and 5 m/sec in the upper troposphere will be met by the present design of the LAWS system. The paper presents the performance estimates of the LAWS in terms of global coverage, spatial resolution, signal-to-noise ratio, line-of-sight velocity error, and horizontal inversion accuracy.

  6. Analysis of driver performance under reduced visibility

    NASA Technical Reports Server (NTRS)

    Kaeppler, W. D.

    1982-01-01

    Mathematical models describing vehicle dynamics as well as human behavior may be useful in evaluating driver performance and in establishing design criteria for vehicles more compatible with man. In 1977, a two level model of driver steering behavior was developed, but its parameters were identified for clear visibility conditions only. Since driver performance degrades under conditions of reduced visibility, e.g., fog, the two level model should be investigated to determine its applicability to such conditions. The data analysis of a recently performed driving simulation experiment showed that the model still performed reasonably well under fog conditions, although there was a degradation in its predictive capacity during fog. Some additional parameters affecting anticipation and lag time may improve the model's performance for reduced visibility conditions.

  7. Automated Cache Performance Analysis And Optimization

    SciTech Connect

    Mohror, Kathryn

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  8. Using Ratio Analysis to Evaluate Financial Performance.

    ERIC Educational Resources Information Center

    Minter, John; And Others

    1982-01-01

    The ways in which ratio analysis can help in long-range planning, budgeting, and asset management to strengthen financial performance and help avoid financial difficulties are explained. Types of ratios considered include balance sheet ratios, net operating ratios, and contribution and demand ratios. (MSE)
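    The ratios the article describes are simple quotients of financial-statement line items; a minimal sketch of two common ones (the category names here are generic, not the article's exact definitions):

```python
def current_ratio(current_assets, current_liabilities):
    """Balance sheet ratio: short-term assets per dollar of short-term obligations."""
    return current_assets / current_liabilities

def net_operating_ratio(total_revenues, total_expenditures):
    """Net operating ratio: surplus (or deficit) as a fraction of revenues."""
    return (total_revenues - total_expenditures) / total_revenues

# Illustrative year-end figures for a small institution
cr = current_ratio(2_000_000, 1_000_000)          # 2.0
nor = net_operating_ratio(10_000_000, 9_500_000)  # 0.05, a 5% operating surplus
```

    Tracking such ratios year over year, rather than in isolation, is what makes them useful for the long-range planning and budgeting the article discusses.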

  9. Probabilistic Analysis of Gas Turbine Field Performance

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2002-01-01

    A gas turbine thermodynamic cycle was computationally simulated and probabilistically evaluated in view of the several uncertainties in the performance parameters, which are indices of gas turbine health. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design, enhance performance, increase system availability and make it cost effective. The analysis leads to the selection of the appropriate measurements to be used in the gas turbine health determination and to the identification of both the most critical measurements and parameters. Probabilistic analysis aims at unifying and improving the control and health monitoring of gas turbine aero-engines by increasing the quality and quantity of information available about the engine's health and performance.
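    The kind of probabilistic evaluation described, propagating uncertain inputs into a cumulative distribution for thermal efficiency, can be sketched as a Monte Carlo over an ideal Brayton cycle. The input distribution below is an illustrative assumption, not the paper's engine model:

```python
import random

def brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal Brayton-cycle thermal efficiency."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

random.seed(0)
# Treat the compressor pressure ratio as an uncertain (normally distributed) input
samples = sorted(brayton_efficiency(random.gauss(15.0, 0.5)) for _ in range(10_000))
median_eff = samples[len(samples) // 2]  # sorted samples vs. rank/N give the empirical CDF
```

    Percentiles of the sorted samples approximate the cumulative distribution function; sensitivity factors follow from repeating the experiment while perturbing one input at a time.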

  10. Performance analysis and prediction in triathlon.

    PubMed

    Ofoghi, Bahadorreza; Zeleznikow, John; Macmahon, Clare; Rehula, Jan; Dwyer, Dan B

    2016-01-01

    Performance in triathlon is dependent upon factors that include somatotype, physiological capacity, technical proficiency and race strategy. Given the multidisciplinary nature of triathlon and the interaction between each of the three race components, the identification of target split times that can be used to inform the design of training plans and race pacing strategies is a complex task. The present study uses machine learning techniques to analyse a large database of performances in Olympic distance triathlons (2008-2012). The analysis reveals patterns of performance in five components of triathlon (three race "legs" and two transitions) and the complex relationships between performance in each component and overall performance in a race. The results provide three perspectives on the relationship between performance in each component of triathlon and the final placing in a race. These perspectives allow the identification of target split times that are required to achieve a certain final place in a race and the opportunity to make evidence-based decisions about race tactics in order to optimise performance. PMID:26177783

  11. NPAC-Nozzle Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, permitting fast and accurate calculation of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under-expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
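    The quantity the control-volume analysis delivers, gross thrust, is the exit momentum flux plus a pressure-area term; a one-dimensional sketch with illustrative numbers:

```python
def gross_thrust(mdot_kg_s, v_exit_m_s, p_exit_pa, p_amb_pa, a_exit_m2):
    """1-D control-volume gross thrust: momentum flux plus pressure thrust (N)."""
    return mdot_kg_s * v_exit_m_s + (p_exit_pa - p_amb_pa) * a_exit_m2

# Perfectly expanded case (exit pressure equals ambient): pure momentum flux
f_matched = gross_thrust(100.0, 600.0, 101_325.0, 101_325.0, 1.5)  # 60,000 N
# Under-expanded case: the positive pressure term adds thrust
f_under = gross_thrust(100.0, 600.0, 150_000.0, 101_325.0, 1.5)
```

    The over/under-expansion effects mentioned in the abstract enter precisely through the sign of the (p_exit − p_ambient) term.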

  12. Transformer cooler performance: Analysis and experiment

    SciTech Connect

    Lang, S.K.; Bergles, A.E.

    1994-12-31

    During the summer of 1988, the coolers operating on the number-one transformer at the Niagara Mohawk New Scotland Substation were unable to maintain the temperature of the transformer oil at an acceptable level during a period of peak power and hot weather conditions. As a result of this incident, the Niagara Mohawk Power Corporation requested that the operation performance characteristics of the failed General Electric FOA oil transformer cooler be investigated by the RPI Heat Transfer Laboratory. A theoretical and experimental analysis has been performed on the performance of a transformer cooler. The theoretical study involved the formulation of a numerical model of the cooler, which predicted that the performance is extremely sensitive and dependent upon the air-side flow rate/heat transfer coefficient, as well as the available heat transfer area. The experimental work consisted of the design and implementation of a cooling loop from which experimental data were obtained to confirm the reliability of the numerical calculations. The experimental results are in good agreement with the numerical predictions; therefore, they confirm the reliability of the analysis.

  13. Transformer cooler performance: Analysis and experiment

    SciTech Connect

    Lang, S.K.; Bergles, A.E.

    1995-10-01

    During the summer of 1988, the coolers operating on the number-one transformer at the Niagara Mohawk New Scotland Substation were unable to maintain the temperature of the transformer oil at an acceptable level during a period of peak power and hot weather conditions. As a result of that incident, the Niagara Mohawk Power Corporation requested that the operation performance characteristics of the failed General Electric FOA oil transformer cooler be investigated by the Heat Transfer Laboratory at Rensselaer Polytechnic Institute. A theoretical and experimental analysis has been performed on the performance of a transformer cooler. The theoretical study involved the formulation of a model of the cooler, which predicted that the performance is extremely sensitive and dependent upon the air-side flow rate/heat transfer coefficient, as well as the available heat transfer area. The experimental work consisted of the design and implementation of a cooling loop, from which experimental data were obtained to confirm the accuracy of the predictions. The experimental results are in good agreement with the numerical predictions; therefore, they confirm the reliability of the analysis.

  14. Multiprocessor smalltalk: Implementation, performance, and analysis

    SciTech Connect

    Pallas, J.I.

    1990-01-01

    Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.

  15. Nominal Performance Biosphere Dose Conversion Factor Analysis

    SciTech Connect

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose

  16. Performance management in healthcare: a critical analysis.

    PubMed

    Hewko, Sarah J; Cummings, Greta G

    2016-01-01

    Purpose - The purpose of this paper is to explore the underlying theoretical assumptions and implications of current micro-level performance management and evaluation (PME) practices, specifically within health-care organizations. PME encompasses all activities that are designed and conducted to align employee outputs with organizational goals. Design/methodology/approach - PME, in the context of healthcare, is analyzed through the lens of critical theory. Specifically, Habermas' theory of communicative action is used to highlight some of the questions that arise in looking critically at PME. To provide a richer definition of key theoretical concepts, the authors conducted a preliminary, exploratory hermeneutic semantic analysis of the key words "performance" and "management" and of the term "performance management". Findings - Analysis reveals that existing micro-level PME systems in health-care organizations have the potential to create a workforce that is compliant, dependent, technically oriented and passive, and to support health-care systems in which inequalities and power imbalances are perpetually reinforced. Practical implications - At a time when the health-care system is under increasing pressure to provide high-quality, affordable services with fewer resources, it may be wise to investigate new sector-specific ways of evaluating and managing performance. Originality/value - In this paper, written for health-care leaders and health human resource specialists, the theoretical assumptions and implications of current PME practices within health-care organizations are explored. It is hoped that readers will be inspired to support innovative PME practices within their organizations that encourage peak performance among health-care professionals. PMID:26764960

  17. Nominal Performance Biosphere Dose Conversion Factor Analysis

    SciTech Connect

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed descriptions of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  18. Nominal Performance Biosphere Dose Conversion Factor Analysis

    SciTech Connect

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the
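Mechanically, a BDCF is a multiplier that converts a groundwater radionuclide concentration into an annual dose, and the TSPA sums these contributions over radionuclides. A minimal sketch of that bookkeeping step; the radionuclides and numeric values below are hypothetical placeholders, not data from this analysis:

```python
# Illustrative sketch: annual dose from groundwater radionuclide
# concentrations using biosphere dose conversion factors (BDCFs).
# All numeric values are hypothetical placeholders, NOT data from
# the ERMYN analysis.

# BDCF in (mrem/yr) per (pCi/L); concentration in pCi/L
bdcf = {"Tc-99": 1.1e-3, "I-129": 2.5e-1, "Np-237": 4.0e0}
conc = {"Tc-99": 5.0, "I-129": 0.02, "Np-237": 0.001}

# Annual dose is the concentration-weighted sum over radionuclides
annual_dose = sum(bdcf[n] * conc[n] for n in bdcf)  # mrem/yr
print(f"annual dose: {annual_dose:.4f} mrem/yr")
```

The same multiply-and-sum structure is what lets the TSPA combine BDCFs with the saturated-zone concentration estimates.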

  19. Diversity Performance Analysis on Multiple HAP Networks

    PubMed Central

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102

  20. Diversity Performance Analysis on Multiple HAP Networks.

    PubMed

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
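The ASER results summarized above can be sanity-checked by simulation. The sketch below runs a Monte Carlo estimate of BPSK symbol error rate over a plain (non-shadowed) Rician fading channel; the shadowed Rician model, the MAV networking, and the paper's closed-form expressions are not reproduced, and the Rician factor and SNR values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_ser_bpsk(snr_db, K, n=200_000):
    """Monte Carlo SER of coherent BPSK over a flat Rician fading channel.

    K is the Rician factor (LOS power / scattered power). This is a
    simplified stand-in for the shadowed Rician model in the paper.
    """
    snr = 10 ** (snr_db / 10)
    # Fixed LOS component plus Rayleigh scatter, normalized so E[|h|^2] = 1
    los = np.sqrt(K / (K + 1))
    scatter = np.sqrt(1 / (2 * (K + 1)))
    h = los + scatter * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    bits = rng.integers(0, 2, n)
    s = 2.0 * bits - 1.0                                  # BPSK symbols +/-1
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    r = h * s + noise / np.sqrt(snr)
    detected = (np.real(np.conj(h) * r) > 0).astype(int)  # coherent detection
    return np.mean(detected != bits)

for snr_db in (0, 5, 10):
    print(snr_db, "dB:", rician_ser_bpsk(snr_db, K=3))
```

As expected, the estimated SER falls monotonically with average SNR; a stronger LOS component (larger K) sharpens that decay.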

  1. Performance Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2005-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. In this paper, an FTC analysis framework is provided to calculate the upper bound of an induced-L(sub 2) norm of an FTC system in the presence of false identification and detection time delay. The upper bound is written as a function of the fault detection time and exponential decay rates, and has been used to determine which FTC law produces less performance degradation (tracking error) due to false identification. The analysis framework is applied to an FTC system of a HiMAT (Highly Maneuverable Aircraft Technology) vehicle. Index Terms: fault tolerant control system, linear parameter varying system, HiMAT vehicle.

  2. Idaho National Laboratory Quarterly Performance Analysis

    SciTech Connect

    Lisbeth Mitchell

    2014-11-01

    This report is published quarterly by the Idaho National Laboratory (INL) Quality and Performance Management Organization. The Department of Energy (DOE) Occurrence Reporting and Processing System (ORPS), as prescribed in DOE Order 232.2, “Occurrence Reporting and Processing of Operations Information,” requires a quarterly analysis of events, both reportable and not reportable, for the previous 12 months. This report is the analysis of 60 reportable events (23 from the 4th Qtr FY14 and 37 from the prior three reporting quarters) as well as 58 other issue reports (including not reportable events and Significant Category A and B conditions) identified at INL from July 2013 through October 2014. Battelle Energy Alliance (BEA) operates the INL under contract DE-AC07-05ID14517.

  3. An analysis of air-turborocket performance

    NASA Astrophysics Data System (ADS)

    Bussi, Giuseppe; Colasurdo, Guido; Pastrone, Dario

    1993-06-01

    In order to assess the capabilities of the air-turborocket, an off-design analysis of a representative LOX-LH2 fed engine is carried out. Working lines on an envisageable compressor map are drawn for different flight conditions along a typical transatmospheric vehicle flight path. Characteristic aspects of the air-turborocket behavior in the spontaneous and controlled modes are highlighted. Specific thrust and propellant consumption at full throttle are computed, both in the dry and augmented modes. Performance achievable by exploiting the permissible mass flow range of the compressor map via variation of the nozzle throat area is shown.

  4. Analysis of imaging system performance capabilities

    NASA Astrophysics Data System (ADS)

    Haim, Harel; Marom, Emanuel

    2013-06-01

    Present performance analyses of optical imaging systems, based on results obtained with classic one-dimensional (1D) resolution targets (such as the USAF resolution chart), differ significantly from those obtained with a newly proposed 2D target [1]. We hereby prove this claim and show how the novel 2D target should be used to correctly characterize optical imaging systems in terms of resolution and contrast. We then apply the consequences of these observations to the optimal design of some two-dimensional barcode structures.

  5. PERFORMANCE ANALYSIS OF MECHANICAL DRAFT COOLING TOWER

    SciTech Connect

    Lee, S.; Garrett, A.; Bollinger, J.; Koffman, L.

    2009-02-10

    Industrial processes use mechanical draft cooling towers (MDCT's) to dissipate waste heat by transferring heat from water to air via evaporative cooling, which causes air humidification. The Savannah River Site (SRS) has cross-flow and counter-current MDCT's consisting of four independent compartments called cells. Each cell has its own fan to help maximize heat transfer between ambient air and circulated water. The primary objective of the work is to simulate the cooling tower performance for the counter-current cooling tower and to conduct a parametric study under different fan speeds and ambient air conditions. The Savannah River National Laboratory (SRNL) developed a computational fluid dynamics (CFD) model and performed the benchmarking analysis against the integral measurement results to accomplish the objective. The model uses three-dimensional steady-state momentum and continuity equations, an air-vapor species balance equation, and a two-equation turbulence model as the basic governing equations. It was assumed that the vapor phase is always transported by the continuous air phase with no slip velocity. The water droplet component was treated as a discrete phase for the interfacial heat and mass transfer via a Lagrangian approach. Thus, the air-vapor mixture model with a discrete water droplet phase is used for the analysis. A series of parametric calculations was performed to investigate the impact of wind speeds and ambient conditions on the thermal performance of the cooling tower when fans were operating and when they were turned off. The model was also benchmarked against the literature data and the SRS integral test results for key parameters such as air temperature and humidity at the tower exit and water temperature for given ambient conditions. Detailed results are presented here.
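For context on the physics the CFD model resolves, the classical Merkel method captures the same evaporative driving potential (saturated-air enthalpy minus local air enthalpy) in one dimension. The sketch below is a generic textbook integration, not the SRNL model; the Magnus saturation-pressure formula and the operating temperatures are illustrative assumptions:

```python
import math

def p_ws(T):
    """Saturation vapor pressure [Pa], Magnus approximation (T in degC)."""
    return 610.94 * math.exp(17.625 * T / (T + 243.04))

def h_sat(T, P=101325.0):
    """Enthalpy of saturated air [kJ/kg dry air] at temperature T [degC]."""
    w = 0.622 * p_ws(T) / (P - p_ws(T))         # saturation humidity ratio
    return 1.006 * T + w * (2501.0 + 1.86 * T)

def merkel_number(T_hot, T_cold, T_wb, LG, n=200):
    """Merkel number for a counterflow tower via midpoint-rule integration.

    T_hot/T_cold: water inlet/outlet [degC]; T_wb: inlet air wet bulb;
    LG: water-to-air mass flow ratio L/G; cpw = 4.186 kJ/(kg K).
    """
    cpw = 4.186
    h_air_in = h_sat(T_wb)       # inlet air enthalpy ~ saturated at wet bulb
    dT = (T_hot - T_cold) / n
    me = 0.0
    for i in range(n):
        T = T_cold + (i + 0.5) * dT
        h_air = h_air_in + LG * cpw * (T - T_cold)   # air-side energy balance
        me += cpw * dT / (h_sat(T) - h_air)          # enthalpy driving force
    return me

# Illustrative operating point: 40 -> 30 degC water, 25 degC wet bulb, L/G = 1
me = merkel_number(T_hot=40.0, T_cold=30.0, T_wb=25.0, LG=1.0)
print(f"Merkel number: {me:.3f}")
```

The CFD model replaces this lumped enthalpy-difference picture with resolved droplet-air heat and mass transfer, but the driving potential is the same.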

  6. SUBSONIC WIND TUNNEL PERFORMANCE ANALYSIS SOFTWARE

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.

    1994-01-01

    This program was developed as an aid in the design and analysis of subsonic wind tunnels. It brings together and refines previously scattered and over-simplified techniques used for the design and loss prediction of the components of subsonic wind tunnels. It implements a system of equations for determining the total pressure losses and provides general guidelines for the design of diffusers, contractions, corners and the inlets and exits of non-return tunnels. The algorithms used in the program are applicable to compressible flow through most closed- or open-throated, single-, double- or non-return wind tunnels or ducts. A comparison between calculated performance and that actually achieved by several existing facilities produced generally good agreement. Any system through which air is flowing which involves turns, fans, contractions etc. (e.g., an HVAC system) may benefit from analysis using this software. This program is an update of ARC-11138 which includes PC compatibility and an improved user interface. The method of loss analysis used by the program is a synthesis of theoretical and empirical techniques. Generally, the algorithms used are those which have been substantiated by experimental test. The basic flow-state parameters used by the program are determined from input information about the reference control section and the test section. These parameters were derived from standard relationships for compressible flow. The local flow conditions, including Mach number, Reynolds number and friction coefficient are determined for each end of each component or section. The loss in total pressure caused by each section is calculated in a form non-dimensionalized by local dynamic pressure. The individual losses are based on the nature of the section, local flow conditions and input geometry and parameter information. 
The loss forms for typical wind tunnel sections considered by the program include: constant area ducts, open throat ducts, contractions, constant
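For a constant-area duct, the kind of loss form the program tabulates reduces to K = f * L / D_h, with the friction factor tied to the local Reynolds number. A hedged sketch using Haaland's explicit approximation to the Colebrook equation; the program's actual correlations are not reproduced, and the example geometry is arbitrary:

```python
import math

def friction_factor(Re, rel_rough=0.0):
    """Darcy friction factor: 64/Re laminar, else Haaland's explicit
    approximation to the Colebrook equation."""
    if Re < 2300:
        return 64.0 / Re
    return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / Re)) ** -2

def duct_loss_coefficient(length, d_h, velocity, nu=1.5e-5, rel_rough=0.0):
    """Total-pressure loss of a constant-area duct, non-dimensionalized by
    local dynamic pressure (incompressible form): K = f * L / D_h."""
    Re = velocity * d_h / nu
    return friction_factor(Re, rel_rough) * length / d_h

# Example: 10 m smooth duct, 1 m hydraulic diameter, 40 m/s air
K = duct_loss_coefficient(10.0, 1.0, 40.0)
print(f"K = {K:.4f}")
```

The program's compressible formulation additionally tracks local Mach number, but the non-dimensionalization by local dynamic pressure is the same.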

  7. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers. 
TEM analysis of the fiber failure mechanisms was also attempted, though without

  8. Nominal Performance Biosphere Dose Conversion Factor Analysis

    SciTech Connect

    Wasiolek, Maryla A.

    2000-12-21

    The purpose of this report was to document the process leading to development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  9. Approaches to Cycle Analysis and Performance Metrics

    NASA Technical Reports Server (NTRS)

    Parson, Daniel E.

    2003-01-01

    The following notes were prepared as part of an American Institute of Aeronautics and Astronautics (AIAA) sponsored short course entitled Air Breathing Pulse Detonation Engine (PDE) Technology. The course was presented in January of 2003, and again in July of 2004 at two different AIAA meetings. It was taught by seven instructors, each of whom provided information on particular areas of PDE research. These notes cover two areas. The first is titled Approaches to Cycle Analysis and Performance Metrics. Here, the various methods of cycle analysis are introduced. These range from algebraic, thermodynamic equations, to single and multi-dimensional Computational Fluid Dynamic (CFD) solutions. Also discussed are the various means by which performance is measured, and how these are applied in a device which is fundamentally unsteady. The second topic covered is titled PDE Hybrid Applications. Here the concept of coupling a PDE to a conventional turbomachinery based engine is explored. Motivation for such a configuration is provided in the form of potential thermodynamic benefits. This is accompanied by a discussion of challenges to the technology.

  10. Axial and centrifugal pump meanline performance analysis

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1994-01-01

    A meanline pump flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump flow code (PUMPA) has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design point rotor efficiency is obtained from empirically derived correlations of loss to rotor specific speed. The rapid input setup and computer run time for the meanline pump flow code makes it an effective analysis and conceptual design tool. The map generation capabilities of the PUMPA code provide the information needed for interfacing with a rocket engine system modeling code.
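The rotor-specific-speed correlation mentioned above starts from the standard dimensional specific speed. A minimal sketch; the pump numbers are hypothetical and the PUMPA loss correlations themselves are not reproduced:

```python
import math

def pump_specific_speed(rpm, flow_m3s, head_m):
    """Dimensional specific speed Ns = N * sqrt(Q) / H**0.75
    (rpm, m^3/s, m), a standard meanline correlating parameter.
    Efficiency-vs-Ns correlations like PUMPA's are empirical and
    are not reproduced here."""
    return rpm * math.sqrt(flow_m3s) / head_m ** 0.75

# Hypothetical rocket-engine pump stage: 30000 rpm, 0.05 m^3/s, 500 m head
ns = pump_specific_speed(30000, 0.05, 500.0)
print(f"Ns = {ns:.1f}")
```

In a meanline code this single number indexes the empirical loss and efficiency data from which off-design maps are built.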

  11. Radio-science performance analysis software

    NASA Technical Reports Server (NTRS)

    Morabito, D. D.; Asmar, S. W.

    1995-01-01

    The Radio Science Systems Group (RSSG) provides various support functions for several flight project radio-science teams. Among these support functions are uplink and sequence planning, real-time operations monitoring and support, data validation, archiving and distribution functions, and data processing and analysis. This article describes the support functions that encompass radio-science data performance analysis. The primary tool used by the RSSG to fulfill this support function is the STBLTY program set. STBLTY is used to reconstruct observable frequencies and calculate model frequencies, frequency residuals, frequency stability in terms of Allan deviation, reconstructed phase, frequency and phase power spectral density, and frequency drift rates. In the case of one-way data, using an ultrastable oscillator (USO) as a frequency reference, the program set computes the spacecraft transmitted frequency and maintains a database containing the in-flight history of the USO measurements. The program set also produces graphical displays. Some examples and discussions on operating the program set on Galileo and Ulysses data will be presented.

  12. Radio-science performance analysis software

    NASA Astrophysics Data System (ADS)

    Morabito, D. D.; Asmar, S. W.

    1995-02-01

    The Radio Science Systems Group (RSSG) provides various support functions for several flight project radio-science teams. Among these support functions are uplink and sequence planning, real-time operations monitoring and support, data validation, archiving and distribution functions, and data processing and analysis. This article describes the support functions that encompass radio-science data performance analysis. The primary tool used by the RSSG to fulfill this support function is the STBLTY program set. STBLTY is used to reconstruct observable frequencies and calculate model frequencies, frequency residuals, frequency stability in terms of Allan deviation, reconstructed phase, frequency and phase power spectral density, and frequency drift rates. In the case of one-way data, using an ultrastable oscillator (USO) as a frequency reference, the program set computes the spacecraft transmitted frequency and maintains a database containing the in-flight history of the USO measurements. The program set also produces graphical displays. Some examples and discussions on operating the program set on Galileo and Ulysses data will be presented.
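The frequency-stability quantity named above, the Allan deviation, is straightforward to compute from sampled fractional-frequency data. A sketch of the overlapping estimator; this is the textbook definition, not the STBLTY implementation:

```python
import numpy as np

def allan_deviation(y, tau0, m):
    """Overlapping Allan deviation of fractional-frequency samples y
    taken at interval tau0, for averaging factor m (tau = m * tau0).

    sigma_y(tau) = sqrt(0.5 * <(ybar_{k+1} - ybar_k)^2>)
    """
    y = np.asarray(y, dtype=float)
    # m-sample moving averages with stride 1 give the overlapping estimator
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

# White frequency noise: sigma_y(tau) should fall off roughly as tau**-0.5
rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
for m in (1, 10, 100):
    print(m, allan_deviation(y, tau0=1.0, m=m))
```

The tau**-0.5 slope of white frequency noise is one of the signatures such a program set uses when characterizing a USO.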

  13. Past Performance analysis of HPOTP bearings

    NASA Technical Reports Server (NTRS)

    Bhat, B. N.; Dolan, F. J.

    1982-01-01

    The past performance analysis conducted on three High Pressure Oxygen Turbopump (HPOTP) bearings from the Space Shuttle Main Engine is presented. Metallurgical analysis of failed bearing balls and races, and wear track and crack configuration analyses were carried out. In addition, one bearing was tested in the laboratory at very high axial loads. The results showed that the cracks were surface initiated and propagated into subsurface locations at relatively small angles. Subsurface cracks were much more extensive than appeared on the surface. The location of major cracks in the races corresponded to high radial loads rather than high axial loads. There was evidence to suggest that the inner races were heated to elevated temperatures. A failure scenario was developed based on the above findings. According to this scenario the HPOTP bearings are heated by a combination of high loads and high coefficient of friction (poor lubrication). Different methods of extending the HPOTP bearing life are also discussed. These include reduction of axial loads, improvements in bearing design, lubrication and cooling, and use of improved bearing materials.

  14. Performance Analysis of ICA in Sensor Array.

    PubMed

    Cai, Xin; Wang, Xiang; Huang, Zhitao; Wang, Fenghua

    2016-01-01

    As the best-known scheme in the field of Blind Source Separation (BSS), Independent Component Analysis (ICA) has been intensively used in various domains, including biomedical and acoustics applications, cooperative or non-cooperative communication, etc. While sensor arrays are involved in most of the applications, the influence on the performance of ICA of practical factors therein has not been sufficiently investigated yet. In this manuscript, the issue is researched by taking the typical antenna array as an illustrative example. Factors taken into consideration include the environment noise level, the properties of the array and that of the radiators. We analyze the analytic relationship between the noise variance, the source variance, the condition number of the mixing matrix and the optimal signal to interference-plus-noise ratio, as well as the relationship between the singularity of the mixing matrix and practical factors concerned. The situations where the mixing process turns (nearly) singular have been paid special attention to, since such circumstances are critical in applications. Results and conclusions obtained should be instructive when applying ICA algorithms on mixtures from sensor arrays. Moreover, an effective countermeasure against the cases of singular mixtures has been proposed, on the basis of previous analysis. Experiments validating the theoretical conclusions as well as the effectiveness of the proposed scheme have been included. PMID:27164100

  15. Performance Analysis of ICA in Sensor Array

    PubMed Central

    Cai, Xin; Wang, Xiang; Huang, Zhitao; Wang, Fenghua

    2016-01-01

    As the best-known scheme in the field of Blind Source Separation (BSS), Independent Component Analysis (ICA) has been intensively used in various domains, including biomedical and acoustics applications, cooperative or non-cooperative communication, etc. While sensor arrays are involved in most of the applications, the influence on the performance of ICA of practical factors therein has not been sufficiently investigated yet. In this manuscript, the issue is researched by taking the typical antenna array as an illustrative example. Factors taken into consideration include the environment noise level, the properties of the array and that of the radiators. We analyze the analytic relationship between the noise variance, the source variance, the condition number of the mixing matrix and the optimal signal to interference-plus-noise ratio, as well as the relationship between the singularity of the mixing matrix and practical factors concerned. The situations where the mixing process turns (nearly) singular have been paid special attention to, since such circumstances are critical in applications. Results and conclusions obtained should be instructive when applying ICA algorithms on mixtures from sensor arrays. Moreover, an effective countermeasure against the cases of singular mixtures has been proposed, on the basis of previous analysis. Experiments validating the theoretical conclusions as well as the effectiveness of the proposed scheme have been included. PMID:27164100
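The role of the mixing matrix's condition number can be illustrated numerically. Below is a compact numpy sketch of FastICA (tanh contrast, symmetric decorrelation) applied to a two-source mixture; the sources, mixing matrix, and thresholds are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two independent non-Gaussian sources: uniform (sub-Gaussian) and
# Laplace (super-Gaussian)
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # well-conditioned mixing matrix
print("cond(A) =", round(np.linalg.cond(A), 2))  # singularity indicator
X = A @ S

# Whiten the observations
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = np.diag(d ** -0.5) @ E.T @ Xc

# FastICA fixed-point iteration with tanh contrast and
# symmetric orthogonalization of the unmixing matrix
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                              # (W W^T)^{-1/2} W
Y = W @ Z

# Each recovered component should match one source up to sign/permutation
C = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(np.round(C, 3))
```

As the condition number of A grows, the whitened mixture flattens along one direction and separation quality degrades, which is the mechanism behind the near-singular cases the paper singles out.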

  16. High Performance Data Analysis via Coordinated Caches

    NASA Astrophysics Data System (ADS)

    Fischer, M.; Metzlaff, C.; Kühn, E.; Giffels, M.; Quast, G.; Jung, C.; Hauth, T.

    2015-12-01

    With the second run period of the LHC, high energy physics collaborations will have to face increasing computing infrastructural needs. Opportunistic resources are expected to absorb many computationally expensive tasks, such as Monte Carlo event simulation. This leaves dedicated HEP infrastructure with an increased load of analysis tasks that in turn will need to process an increased volume of data. In addition to storage capacities, a key factor for future computing infrastructure is therefore input bandwidth available per core. Modern data analysis infrastructure relies on one of two paradigms: data is kept on dedicated storage and accessed via network or distributed over all compute nodes and accessed locally. Dedicated storage allows data volume to grow independently of processing capacities, whereas local access allows processing capacities to scale linearly. However, with the growing data volume and processing requirements, HEP will require both of these features. For enabling adequate user analyses in the future, the KIT CMS group is merging both paradigms: popular data is spread over a local disk layer on compute nodes, while any data is available from an arbitrarily sized background storage. This concept is implemented as a pool of distributed caches, which are loosely coordinated by a central service. A Tier 3 prototype cluster is currently being set up for performant user analyses of both local and remote data.

  17. Space Shuttle Main Engine performance analysis

    NASA Astrophysics Data System (ADS)

    Santi, L. Michael

    1993-11-01

    For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both

  18. Space Shuttle Main Engine performance analysis

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1993-01-01

    For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both
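A "gains model" of the kind described, a least-squares fit of a performance parameter as a full quadratic in several influences, can be sketched generically. The six influences and the response below are synthetic placeholders, not TTB data:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

def quadratic_design(X):
    """Full quadratic design matrix: constant, x_i, and all x_i*x_j terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

# Synthetic stand-in: 6 "influences" (power level, mixture ratio, inlet
# pressures and temperatures) and one response (e.g., a flow rate)
X = rng.uniform(-1, 1, size=(300, 6))
y = (2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2]
     + 0.01 * rng.standard_normal(300))

D = quadratic_design(X)
coef, res, *_ = np.linalg.lstsq(D, y, rcond=None)
rms = np.sqrt(np.mean((D @ coef - y) ** 2))  # residual statistics, as in the report
print(f"rms residual: {rms:.4f}")
```

The regression recovers the planted linear and interaction terms; in the actual gains model, statistics on these residuals quantify how far TTB observations deviate from the fitted relations.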

  19. Data Link Performance Analysis for LVLASO Experiments

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1998-01-01

    The Low-Visibility Landing and Surface Operations System (LVLASO) is currently being prototyped and tested at NASA Langley Research Center. Since the main objective of the system is to maintain aircraft landings and take-offs even during low-visibility conditions, timely exchange of positional and other information between the aircraft and ground control is critical. For safety and reliability reasons, there are several redundant sources on the ground (e.g., ASDE, AMASS) that collect and disseminate information about the environment to the aircraft. The data link subsystem of LVLASO is responsible for supporting the timely transfer of information between the aircraft and the ground controllers. In fact, if not properly designed, the data link subsystem could become a bottleneck in the proper functioning of LVLASO. Currently, the other components of the system are being designed assuming that the data link has adequate capacity and is capable of delivering the information in a timely manner. During August 1-28, 1997, several flight experiments were conducted to test the prototypes of subsystems developed under the LVLASO project. The background and details of the tests are described in the next section. The test results have been collected on two CDs by the FAA and Rockwell-Collins. Under the current grant, we have analyzed the data and evaluated the performance of the Mode S data link. In this report, we summarize the results of our analysis. Most of the results are shown as graphs or histograms. The test date (or experiment number) is typically plotted on the X-axis, and the Y-axis denotes the metric of interest in that chart. In interpreting these charts, one needs to take into account the vehicular traffic during a particular experiment.
In general, the performance of the data link was found to be quite satisfactory in terms of delivering long and short Mode S squitters from the vehicles to the ground receiver. Similarly, its performance in delivering control

  20. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, A.

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.

  1. Performance analysis of robust road sign identification

    NASA Astrophysics Data System (ADS)

    Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.

    2013-12-01

    This study describes a performance analysis of a robust road sign identification system that incorporates two stages of different algorithms. The proposed algorithms consist of HSV color filtering and PCA techniques in the detection and recognition stages, respectively. The proposed algorithms are able to detect the three standard types of colored signs, namely Red, Yellow and Blue. The hypothesis of the study is that road sign images can be used to detect and identify signs even in the presence of occlusions and rotational changes. PCA is known as a feature extraction technique that reduces dimensional size. A sign image can be easily recognized and identified by the PCA method, as it has been used in many application areas. Based on the experimental results, the HSV stage is robust in road sign detection, with minimum success rates of 88% and 77% for non-occluded and partially occluded images, respectively. Successful recognition rates using PCA are in the range of 94-98%. All classes are recognized successfully at occlusion levels between 5% and 10%.
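The HSV filtering stage of such a detector can be illustrated with plain NumPy. The hue bands and the saturation/value thresholds below are assumed values for illustration, not those used in the study.

```python
import numpy as np

def hsv_mask(hsv, h_range, s_min=0.4, v_min=0.2):
    """Binary mask of pixels whose hue (degrees) falls in h_range with
    sufficient saturation and value. Handles hue wrap-around for red."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    lo, hi = h_range
    if lo <= hi:
        in_hue = (h >= lo) & (h <= hi)
    else:  # wrap-around band, e.g. red spans 340..360 and 0..20 degrees
        in_hue = (h >= lo) | (h <= hi)
    return in_hue & (s >= s_min) & (v >= v_min)

# Toy 2x2 HSV image: one red pixel, one blue, one gray, one too-dark pixel
hsv = np.array([[[355., 0.9, 0.8], [220., 0.8, 0.7]],
                [[  0., 0.0, 0.9], [220., 0.9, 0.05]]])

red  = hsv_mask(hsv, (340., 20.))   # assumed red hue band
blue = hsv_mask(hsv, (200., 260.))  # assumed blue hue band
print(int(red.sum()), int(blue.sum()))  # → 1 1
```

The gray pixel fails the saturation test and the dark pixel fails the value test, which is the practical reason for filtering in HSV rather than RGB.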

  2. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  3. Performance analysis of electrical circuits /PANE/

    NASA Technical Reports Server (NTRS)

    Johnson, K. L.; Steinberg, L. L.

    1968-01-01

    An automated statistical and worst-case computer program has been designed to perform dc and ac steady-state circuit analyses. The program determines the worst-case circuit performance by solving circuit equations.

  4. NEXT Performance Curve Analysis and Validation

    NASA Technical Reports Server (NTRS)

    Saripalli, Pratik; Cardiff, Eric; Englander, Jacob

    2016-01-01

    Performance curves of the NEXT thruster are critical in determining the thruster's ability to meet mission-specific goals. New performance curves are proposed and examined here. The Evolutionary Mission Trajectory Generator (EMTG) is used to verify variations in mission solutions based on both the available thruster curves and the newly generated curves. Furthermore, variations in beginning-of-life (BOL) and end-of-life (EOL) curves are also examined. The mission design results shown here validate the use of EMTG and the new performance curves.

  5. Cost and performance analysis of physical security systems

    SciTech Connect

    Hicks, M.J.; Yates, D.; Jago, W.H.; Phillips, A.W.

    1998-04-01

    Analysis of cost and performance of physical security systems can be a complex, multi-dimensional problem. There are a number of point tools that address various aspects of cost and performance analysis. Increased interest in cost tradeoffs of physical security alternatives has motivated development of an architecture called Cost and Performance Analysis (CPA), which takes a top-down approach to aligning cost and performance metrics. CPA incorporates results generated by existing physical security system performance analysis tools, and utilizes an existing cost analysis tool. The objective of this architecture is to offer comprehensive visualization of complex data to security analysts and decision-makers.

  6. IPAC-Inlet Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.

  7. An Ethnostatistical Analysis of Performance Measurement

    ERIC Educational Resources Information Center

    Winiecki, Donald J.

    2008-01-01

    Within the fields of human performance technology (HPT), human resources management (HRM), and management in general, performance measurement is not only foundational but considered necessary at all phases in the process of HPT. In HPT in particular, there is substantial advice literature on what measurement is, why it is essential, and (at a…

  8. Spacecraft Multiple Array Communication System Performance Analysis

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.

    2010-01-01

    The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which they operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in spacecraft integration with a high transmit power communication system. The array combining technique can improve the communication system's data rate and coverage performance without increasing the system transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with a phase coherence implementation.
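The gain from phase-coherent combining can be demonstrated numerically. In this sketch the array count, per-array phase offsets, and tone frequency are arbitrary; aligning the phases before summation recovers the full N² power gain of N arrays, while an uncompensated sum partially cancels.

```python
import numpy as np

n_arrays = 4
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
carrier = np.exp(2j * np.pi * 10.0 * t)  # common 10 Hz complex tone

# Each array sees the tone with a different geometric phase offset (assumed)
offsets = np.array([0.0, 0.8, 1.9, 2.7])  # radians
signals = [carrier * np.exp(1j * phi) for phi in offsets]

def power(x):
    return np.mean(np.abs(x) ** 2)

# Uncompensated sum: the phases partially cancel
raw = sum(signals)

# Phase-coherent combining: remove each offset before summing
aligned = sum(s * np.exp(-1j * phi) for s, phi in zip(signals, offsets))

print(round(power(aligned) / power(signals[0]), 1))  # → 16.0 (= n_arrays**2)
```

This N² scaling versus a single array is why combining raises the link margin without raising per-array transmit power.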

  9. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process level parallelism, thread level parallelization, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  10. Power plant performance monitoring and improvement: Volume 5, Turbine cycle performance analysis: Interim report

    SciTech Connect

    Crim, H.G. Jr.; Westcott, J.C.; de Mello, R.W.; Brandon, R.E.; Kona, C.; Schmehl, T.G.; Reddington, J.R.

    1987-12-01

    This volume describes advanced instrumentation and computer programs for turbine cycle performance analysis. Unit conditions are displayed on-line. Included are techniques for monitoring the performance of feedwater heaters and the main condenser, procedures for planning turbine maintenance based on an analysis of preoutage testing and performance history, and an overview of the project's computerized data handling and display systems. (DWL)

  11. Assessing BMP Performance Using Microtox Toxicity Analysis

    EPA Science Inventory

    Best Management Practices (BMPs) have been shown to be effective in reducing runoff and pollutants from urban areas and thus provide a mechanism to improve downstream water quality. Currently, BMP performance regarding water quality improvement is assessed through measuring each...

  12. Analysis of telescope performance: MTF approach

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Páta, Petr

    2006-03-01

    Small robotic telescopes (like BOOTES in Spain, BART in the Czech Republic, or FRAM in Argentina) are constructed for continuous galactic surveys and fast reactions to GRB (Gamma Ray Burst) alerts. Due to their light construction, the performance of these instruments depends strongly on temperature, atmospheric scintillation, etc. This article discusses possibilities for performance improvement based on knowledge of a transfer characteristic, such as the modulation transfer function (MTF), or of course the point spread function (PSF), of the imaging system comprising a robotic telescope.

  13. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
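As an illustration of the detection-and-scoring workflow (not any of the specific algorithms compared in the paper), the following NumPy sketch simulates a small mosaic of Gaussian "cones", detects local maxima, and counts true positives within a matching radius. The spot positions, noise level, neighbourhood size, and threshold are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
size, sigma = 64, 1.5
truth = np.array([[16, 16], [16, 48], [48, 16], [48, 48], [32, 32]])

# Simulate a mosaic: Gaussian spots at the true positions plus noise
yy, xx = np.mgrid[0:size, 0:size]
img = sum(np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
          for r, c in truth)
img = img + rng.normal(scale=0.02, size=img.shape)

# Detect: a pixel is a candidate if it is the maximum of its 5x5
# neighbourhood and exceeds a threshold (both values assumed)
def local_maxima(img, half=2, thresh=0.5):
    peaks = []
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            if img[r, c] >= thresh and img[r, c] == patch.max():
                peaks.append((r, c))
    return np.array(peaks)

found = local_maxima(img)

# Score: a detection is a true positive if within 2 px of a true cone
def true_positives(found, truth, radius=2.0):
    d = np.linalg.norm(found[:, None, :] - truth[None, :, :], axis=2)
    return int((d.min(axis=0) <= radius).sum())

print(len(found), true_positives(found, truth))
```

Sweeping the threshold and recording true and false positives at each setting is what produces an FROC curve of the kind the paper uses for comparison.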

  14. Rocket-in-a-Duct Performance Analysis

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.; Reed, Brian D.

    1999-01-01

    An axisymmetric, 110 N class rocket configured with a free expansion between the rocket nozzle and a surrounding duct was tested in an altitude simulation facility. The propellants were gaseous hydrogen and gaseous oxygen, and the hardware consisted of a heat sink type copper rocket firing through copper ducts of various diameters and lengths. A secondary flow of nitrogen was introduced at the blind end of the duct to mix with the primary rocket mass flow in the duct. This flow was in the range of 0 to 10% of the primary mass flow, and its effect on nozzle performance was measured. The random measurement errors on thrust and mass flow were within +/-1%. One-dimensional equilibrium calculations were used to establish the possible theoretical performance of these rocket-in-a-duct nozzles. Although the scale of these tests was small, they simulated the relevant flow expansion physics at a modest experimental cost. Test results indicated that lower performance was obtained at higher free expansion area ratios and longer ducts, while higher performance was obtained with the addition of secondary flow. There was a discernible peak in specific impulse efficiency at 4% secondary flow. The small scale of these tests resulted in low performance efficiencies, but prior numerical modeling of larger rocket-in-a-duct engines predicted performance that was comparable to that of optimized rocket nozzles. This remains to be proven in large-scale, rocket-in-a-duct tests.

  15. Managing Performance Analysis with Dynamic Statistical Projection Pursuit

    SciTech Connect

    Vetter, J.S.; Reed, D.A.

    2000-05-22

    Computer systems and applications are growing more complex. Consequently, performance analysis has become more difficult due to the complex, transient interrelationships among runtime components. To diagnose these types of performance issues, developers must use detailed instrumentation to capture a large number of performance metrics. Unfortunately, this instrumentation may actually influence the performance analysis, leading the developer to an ambiguous conclusion. In this paper, we introduce a technique for focusing a performance analysis on interesting performance metrics. This technique, called dynamic statistical projection pursuit, identifies interesting performance metrics that the monitoring system should capture across some number of processors. By reducing the number of performance metrics, projection pursuit can limit the impact of instrumentation on the performance of the target system and can reduce the volume of performance data.

  16. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  17. System performance analysis of stretched membrane heliostats

    SciTech Connect

    Anderson, J V; Murphy, L M; Short, W; Wendelin, T

    1985-12-01

    The optical performance of both focused and unfocused stretched membrane heliostats was examined in the context of the overall cost and performance of central receiver systems. The sensitivity of optical performance to variations in design parameters such as the system size (capacity), delivery temperature, heliostat size, and heliostat surface quality was also examined. The results support the conclusion that focused stretched membrane systems provide an economically attractive alternative to current glass/metal heliostats over essentially the entire range of design parameters studied. In addition, unfocused stretched membrane heliostats may be attractive for a somewhat more limited range of applications, which would include the larger plant sizes (e.g., 450 MW) and lower delivery temperatures (e.g., 450°C), or situations in which the heliostat size could economically be reduced.

  18. Modeling and analysis of web portals performance

    NASA Astrophysics Data System (ADS)

    Abdul Rahim, Rahela; Ibrahim, Haslinda; Syed Yahaya, Sharipah Soaad; Khalid, Khairini

    2011-10-01

    The main objective of this study is to develop a queuing-theory model, at the system level, of web portal performance for a university. A system-level performance model views the system being modeled as a 'black box' characterized by the arrival rate of packets to the portal server and the service rate of the portal server. These two parameters are the key elements for measuring web portal performance metrics such as server utilization, average server throughput, average number of packets in the server, and mean response time. This study assumes an infinite population and a finite queue. The proposed analytical model is simple to define and fast to interpret, yet still represents the real situation.
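A black-box server model with an infinite packet population and a finite queue corresponds to the classical M/M/1/K queue. The following sketch computes the metrics named above; the arrival rate, service rate, and buffer size are illustrative values, not figures from the study.

```python
import numpy as np

def mm1k_metrics(lam, mu, K):
    """Steady-state metrics of an M/M/1/K queue: a server with Poisson
    arrivals (rate lam), exponential service (rate mu), and room for K packets."""
    rho = lam / mu
    n = np.arange(K + 1)
    if rho == 1.0:
        p = np.full(K + 1, 1.0 / (K + 1))  # uniform state distribution
    else:
        p = (1 - rho) * rho ** n / (1 - rho ** (K + 1))
    util = 1 - p[0]                  # server utilization
    thr = mu * (1 - p[0])            # average server throughput
    L = float(np.sum(n * p))         # average number of packets in the system
    lam_eff = lam * (1 - p[K])       # arrivals not lost to a full buffer
    W = L / lam_eff                  # mean response time, by Little's law
    return util, thr, L, W

util, thr, L, W = mm1k_metrics(lam=80.0, mu=100.0, K=10)
print(round(util, 3), round(L, 2))
```

With a finite buffer, blocked arrivals (probability p[K]) are lost, which is why the effective arrival rate, not lam itself, enters Little's law.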

  19. Midlife plasma vitamin D concentrations and performance in different cognitive domains assessed 13 years later.

    PubMed

    Assmann, Karen E; Touvier, Mathilde; Andreeva, Valentina A; Deschasaux, Mélanie; Constans, Thierry; Hercberg, Serge; Galan, Pilar; Kesse-Guyot, Emmanuelle

    2015-05-28

    25-Hydroxyvitamin D (25(OH)D) insufficiency is very common in many countries. Yet, the extent to which 25(OH)D status affects cognitive performance remains unclear. The objective of the present study was to evaluate the cross-time association between midlife plasma 25(OH)D concentrations and subsequent cognitive performance, using a subsample from the French 'SUpplémentation en Vitamines et Minéraux AntioXydants' randomised trial (SU.VI.MAX, 1994-2002) and the SU.VI.MAX 2 observational follow-up study (2007-9). 25(OH)D concentrations were measured in plasma samples drawn in 1994-5, using an electrochemoluminescent immunoassay. Cognitive performance was evaluated in 2007-9 with a neuropsychological battery including phonemic and semantic fluency tasks, the RI-48 (rappel indicé-48 items) cued recall test, the Trail Making Test and the forward and backward digit span. Cognitive factors were extracted via principal component analysis (PCA). Data from 1009 individuals, aged 45-60 years at baseline, with available 25(OH)D and cognitive measurements were analysed by multivariable linear regression models and ANCOVA, stratified by educational level. PCA yielded two factors, designated as 'verbal memory' (strongly correlated with the RI-48 and phonemic/semantic fluency tasks) and 'short-term/working memory' (strongly correlated with the digit span tasks). In the fully adjusted regression model, among individuals with low education, there was a positive association between 25(OH)D concentrations and the 'short-term/working memory' factor (P=0.02), mainly driven by the backward digit span (P=0.004). No association with either cognitive factor was found among better educated participants. In conclusion, higher midlife 25(OH)D concentrations were linked to better outcomes concerning short-term and working memory. However, these results were specific to subjects with low education, suggesting a modifying effect of cognitive reserve. PMID:25864611

  20. Forecast analysis of optical waveguide bus performance

    NASA Technical Reports Server (NTRS)

    Ledesma, R.; Rourke, M. D.

    1979-01-01

    Elements to be considered in the design of a data bus include: architecture; data rate; modulation, encoding, detection; power distribution requirements; protocol, word structure; bus reliability, maintainability; interterminal transmission medium; cost; and others specific to the application. Fiber-optic data bus considerations for a 32-port transmissive star architecture are discussed in a tutorial format. General optical-waveguide bus concepts are reviewed. The electrical and optical performance of a 32-port transmissive star bus, and the effects of temperature on the performance of optical-waveguide buses, are examined. A bibliography of pertinent references and the bus receiver test results are included.

  1. Computer program performs statistical analysis for random processes

    NASA Technical Reports Server (NTRS)

    Newberry, M. H.

    1966-01-01

    Random Vibration Analysis Program /RAVAN/ performs statistical analysis on a number of phenomena associated with flight and captive tests, but can also be used in analyzing data from many other random processes.

  2. Computer program performs stiffness matrix structural analysis

    NASA Technical Reports Server (NTRS)

    Bamford, R.; Batchelder, R.; Schmele, L.; Wada, B. K.

    1968-01-01

    Computer program generates the stiffness matrix for a particular type of structure from geometrical data, and performs static and normal mode analyses. It requires the structure to be modeled as a stable framework of uniform, weightless members, and joints at which loads are applied and weights are lumped.
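The assembly-and-solve pattern such a program automates can be sketched for the simplest case: a chain of uniform, weightless springs with a load applied at a joint. The stiffness values and loading below are invented for illustration.

```python
import numpy as np

# Model: three springs in series between four joints; joint 0 is fixed and
# a unit load is applied at the free end (illustrative stiffnesses).
springs = [(0, 1, 100.0), (1, 2, 100.0), (2, 3, 50.0)]  # (joint_i, joint_j, k)
n_joints = 4

# Assemble the global stiffness matrix from each member's 2x2 contribution
K = np.zeros((n_joints, n_joints))
for i, j, k in springs:
    K[i, i] += k
    K[j, j] += k
    K[i, j] -= k
    K[j, i] -= k

# Apply the load and the fixed-joint boundary condition, then solve statically
F = np.zeros(n_joints)
F[3] = 1.0
free = [1, 2, 3]                     # joint 0 is constrained
u = np.zeros(n_joints)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print(u)  # joint displacements: [0.  0.01 0.02 0.04]
```

The same loop-over-members assembly generalizes to 2D/3D frameworks, where each member contributes a rotated block instead of the scalar ±k entries.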

  3. THERMAL PERFORMANCE ANALYSIS FOR WSB DRUM

    SciTech Connect

    Lee, S

    2008-06-26

    The Nuclear Nonproliferation Programs Design Authority is in the design stage of the Waste Solidification Building (WSB) for the treatment and solidification of the radioactive liquid waste streams generated by the Pit Disassembly and Conversion Facility (PDCF) and Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF). The waste streams will be mixed with a cementitious dry mix in a 55-gallon waste container. Savannah River National Laboratory (SRNL) has been performing the testing and evaluations to support technical decisions for the WSB. The Engineering Modeling & Simulation Group was requested to evaluate the thermal performance of the 55-gallon drum containing the hydration heat source associated with the current baseline cement waste form. A transient axisymmetric heat transfer model for the drum partially filled with waste form cement has been developed, and heat transfer calculations performed for the baseline design configurations. For this case, 65 percent of the drum volume was assumed to be filled with the waste form, which has a transient hydration heat source, as one of the baseline conditions. A series of modeling calculations has been performed using a computational heat transfer approach. The baseline modeling results show that the time to reach the maximum temperature of the 65 percent filled drum is about 32 hours when a 43°C initial cement temperature is assumed to be cooled by natural convection with 27°C external air. In addition, the results computed by the present model were compared with analytical solutions. The modeling results will be benchmarked against the prototypic test results. The verified model will be used for the evaluation of the thermal performance for the WSB drum.
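The transient behaviour of a drum with a decaying hydration heat source can be illustrated with a far simpler lumped-capacitance model than SRNL's axisymmetric one. Every parameter below (thermal mass, convective loss coefficient, source magnitude and decay time) is invented, so the numbers do not reproduce the report's 32-hour result; the sketch only shows the rise-to-peak-then-cool shape.

```python
import numpy as np

# Lumped model: dT/dt = [q(t) - h*A*(T - T_air)] / (m*c), with an
# exponentially decaying hydration heat source q(t) = q0 * exp(-t/tau).
m_c = 4.0e5              # thermal mass m*c  [J/K]   (assumed)
hA = 3.0                 # convective loss h*A  [W/K] (assumed)
q0, tau = 200.0, 1.0e5   # peak hydration power [W], decay time [s] (assumed)
T_air, T0 = 27.0, 43.0   # ambient and initial cement temperature [deg C]

dt = 60.0                        # one-minute explicit Euler step
t = np.arange(0.0, 3.0e5, dt)    # ~83 hours of simulated time
T = np.empty_like(t)
T[0] = T0
for i in range(1, len(t)):
    q = q0 * np.exp(-t[i - 1] / tau)
    T[i] = T[i - 1] + dt * (q - hA * (T[i - 1] - T_air)) / m_c

i_max = int(np.argmax(T))
print(round(T[i_max], 1), round(t[i_max] / 3600.0, 1))  # peak temp, peak hour
```

The peak occurs when the decaying source power falls below the convective loss, after which the drum cools back toward ambient.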

  4. A guide for performing system safety analysis

    NASA Technical Reports Server (NTRS)

    Brush, J. M.; Douglass, R. W., III.; Williamson, F. R.; Dorman, M. C. (Editor)

    1974-01-01

    A general guide is presented for performing system safety analyses of hardware, software, operations and human elements of an aerospace program. The guide describes a progression of activities that can be effectively applied to identify hazards to personnel and equipment during all periods of system development. The general process of performing safety analyses is described; setting forth in a logical order the information and data requirements, the analytical steps, and the results. These analyses are the technical basis of a system safety program. Although the guidance established by this document cannot replace human experience and judgement, it does provide a methodical approach to the identification of hazards and evaluation of risks to the system.

  5. Performance Analysis of IIUM Wireless Campus Network

    NASA Astrophysics Data System (ADS)

    Abd Latif, Suhaimi; Masud, Mosharrof H.; Anwar, Farhat

    2013-12-01

    International Islamic University Malaysia (IIUM) is one of the leading universities in the world in terms of quality of education, which has been achieved by providing numerous facilities, including wireless services, to every enrolled student. The quality of this wireless service is controlled and monitored by the Information Technology Division (ITD), an ISO-standardized organization under the university. This paper aims to investigate the constraints of the IIUM wireless campus network. It evaluates the performance of the IIUM wireless campus network in terms of delay, throughput and jitter. The QualNet 5.2 simulator tool has been employed to measure the performance of the IIUM wireless campus network. The observations from the simulation results could be one of the influencing factors in improving wireless services for ITD and further improvement.

  6. Performance analysis of panoramic infrared systems

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Driggers, Ronald G.; Holst, Gerald; Krapels, Keith

    2014-05-01

    Panoramic imagers are becoming more commonplace in the visible part of the spectrum. These imagers are often used in the real estate market, extreme sports, teleconferencing, and security applications. Infrared panoramic imagers, on the other hand, are not as common and only a few have been demonstrated. A panoramic image can be formed in several ways, using pan and stitch, distributed aperture, or omnidirectional optics. When omnidirectional optics are used, the detected image is a warped view of the world that is mapped on the focal plane array in a donut shape. The final image on the display is the mapping of the omnidirectional donut shape image back to the panoramic world view. In this paper we analyze the performance of uncooled thermal panoramic imagers that use omnidirectional optics, focusing on range performance.
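The donut-to-panorama mapping described above is a polar resampling of the focal plane. A minimal nearest-neighbour sketch follows; the image size, annulus radii, and output dimensions are arbitrary, and the toy "donut" encodes angle as brightness so each unwrapped column comes out nearly constant.

```python
import numpy as np

def unwrap_donut(img, center, r_in, r_out, width, height):
    """Map an omnidirectional 'donut' image to a width x height panorama by
    sampling along (angle, radius) rays (nearest-neighbour, for brevity)."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radius = np.linspace(r_in, r_out, height)
    rr = radius[:, None]                          # rows of the panorama: radius
    yy = np.rint(cy + rr * np.sin(theta)).astype(int)
    xx = np.rint(cx + rr * np.cos(theta)).astype(int)
    return img[yy, xx]

# Toy donut image: pixel brightness encodes azimuth angle around the center
size = 101
yy, xx = np.mgrid[0:size, 0:size]
angle = np.arctan2(yy - 50, xx - 50) % (2.0 * np.pi)
pano = unwrap_donut(angle, (50, 50), r_in=15, r_out=45, width=90, height=30)
print(pano.shape)  # → (30, 90)
```

A production imager would use interpolated (bilinear) sampling and calibrate the center and radii against the optics, but the coordinate transform is the same.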

  7. Performance Analysis of the Unitree Central File

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Flater, David

    1994-01-01

    This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.

  8. Light beam deflector performance: a comparative analysis.

    PubMed

    Zook, J D

    1974-04-01

    The performance of various types of analog light beam deflectors is summarized, and their relative positions in a deflector hierarchy are defined. The three types of deflectors considered are (1) mechanical (galvanometer) mirror deflectors, (2) acoustooptic deflectors, and (3) analog electrooptic deflectors. Material figures of merit are defined and compared, and the theoretical trade-off between speed and resolution is given for each type of deflector. PMID:20126095

  9. Performance analysis of intracavity birefringence sensing

    SciTech Connect

    Yoshino, Toshihiko

    2008-05-10

    The performance of intracavity birefringence sensing by use of a standing-wave laser is theoretically analyzed for the case in which the cavity involves internal reflection. Based on a three-mirror compound cavity model, the condition for converting an optical path length into a laser frequency, or a retardation into an optical beat frequency, with good linearity and little uncertainty is derived as a function of the cavity parameters and is numerically analyzed.

  10. Moisture performance analysis of EPS frost insulation

    SciTech Connect

    Ojanen, T.; Kokko, E.

    1997-11-01

    A horizontal layer of expanded polystyrene foam (EPS) is widely used as frost insulation for building foundations in the Nordic countries. The performance properties of the insulation depend strongly on the moisture level of the material. Experimental methods are needed to produce samples for testing the material properties in realistic moisture conditions. The objective was to analyze the moisture loads and the wetting mechanisms of horizontal EPS frost insulation. Typical wetting tests, water immersion and diffusive water vapor absorption tests, were studied, and the results were compared with data from site investigations. Usually these tests give higher moisture contents in EPS than are detected in drained frost insulation applications. The effects of different parameters, such as immersion depth and temperature gradient, were also studied. Special attention was paid to the effect of diffusion on the wetting process. Numerical simulation showed that under real working conditions the long-period diffusive moisture absorption in EPS frost insulation remained lower than 1% Vol. Moisture performance was determined experimentally as a function of the distance between the insulation and the free water level in the ground. The main moisture loads and the principles for good moisture performance of frost insulation are presented.

  11. Performance analysis of the multichannel astrometric photometer

    NASA Technical Reports Server (NTRS)

    Huang, Chunsheng; Lawrence, George N.; Levy, Eugene H.; Mcmillan, Robert S.

    1987-01-01

    It has been proposed that extrasolar planetary systems may be observed if perturbations in star position due to the orbit of Jupiter-type planets could be detected. To see this motion, high accuracy measurements of 0.01 milliarcsecond are required over a relatively large field of view. Techniques using a moving Ronchi grating have been proposed for this application and have been successful in ground-based lower resolution tests. The method may have application to other precision angular measurement problems. This paper explores the theoretical description of the method, considers certain of the error sources, and presents a preliminary calculation of the performance which may be achieved.

  12. Performance analysis of instrumentation system management policies

    SciTech Connect

    Waheed, A.; Melfi, V.F.; Rover, D.T.

    1995-12-01

    Run-time trace data helps in debugging and analyzing parallel programs. Obtaining and managing this data at run time is the responsibility of an instrumentation system, which incurs overhead. In the worst case, this overhead can result in severe perturbation of the behavior of the actual program. This paper presents a queuing model for an instrumentation system. The purpose is to provide a rigorous mathematical tool for analyzing the perturbation of program behavior due to instrumentation system management policies. We summarize the effects of two management policies: FOF and FAOF policies.

  13. Analysis of tandem mirror reactor performance

    SciTech Connect

    Wu, K.F.; Campbell, R.B.; Peng, Y.K.M.

    1984-11-01

    Parametric studies are performed using a tandem mirror plasma point model to evaluate the wall loading GAMMA and the physics figure of merit, Q (fusion power/injected power). We explore the relationship among several dominant parameters and determine the impact on the plasma performance of electron cyclotron resonance heating in the plug region. These global particle and energy balance studies were carried out under the constraints of magnetohydrodynamic (MHD) equilibrium and stability and constant magnetic flux, assuming a fixed end-cell geometry. We found that the higher the choke coil fields, the higher the Q, wall loading, and fusion power due to the combination of the increased central-cell field B_c and density n_c and the reduced central-cell beta β_c. The MHD stability requirement of constant B_c²β_c causes the reduction in β_c. In addition, a higher value of fusion power can also be obtained, at a fixed central-cell length, by operating at a lower value of B_c and a higher value of β_c.

  14. A New Approach to Aircraft Robust Performance Analysis

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Tierno, Jorge E.

    2004-01-01

    A recently developed algorithm for nonlinear system performance analysis has been applied to an F16 aircraft to begin evaluating the suitability of the method for aerospace problems. The algorithm has the potential to be much more efficient than current methods of performance analysis for aircraft. This paper is the initial step in evaluating this potential.

  15. Optical performance test & analysis of intraocular lenses

    NASA Astrophysics Data System (ADS)

    Choi, Junoh

    Cataract is a condition of the eye that, if left untreated, can lead to blindness. One of the effective ways to treat cataract is removal of the cataractous natural crystalline lens and implantation of an artificial lens called an intraocular lens (IOL). IOL designs have improved over the years to further imitate natural human vision. The need for an objective testing and analysis tool for the latest IOLs grows with their advancement. In this dissertation, I present a system capable of objective testing and analysis of advanced IOLs. The system consists of (1) a model eye into which an IOL can be inserted to mimic conditions of the human eye; (2) a modulation transfer function measurement setup capable of through-focus testing for depth-of-field studies and polychromatic testing for studying the effects of chromatization; (3) use of the defocus transfer function to simulate the depth-of-field characteristic of rotationally symmetric multifocal designs, and extension of the function to polychromatic conditions; (4) several target imaging experiments for comparison of stray-light artifacts, and simulation using a non-sequential ray-trace package.

  16. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  17. Shuttle TPS thermal performance and analysis methodology

    NASA Technical Reports Server (NTRS)

    Neuenschwander, W. E.; Mcbride, D. U.; Armour, G. A.

    1983-01-01

    Thermal performance of the thermal protection system was approximately as predicted. The only extensive anomalies were filler bar scorching and over-predictions in the high Delta p gap heating regions of the orbiter. A technique to predict filler bar scorching has been developed that can aid in defining a solution. Improvement in high Delta p gap heating methodology is still under study. Minor anomalies were also examined for improvements in modeling techniques and prediction capabilities. These include improved definition of low Delta p gap heating, an analytical model for inner mode line convection heat transfer, better modeling of structure, and inclusion of sneak heating. The limited number of problems related to penetration items that presented themselves during orbital flight tests were resolved expeditiously, and designs were changed and proved successful within the time frame of that program.

  18. Performance analysis of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pitts, William C.

    1991-01-01

    Spacecraft entering a planetary atmosphere require a very sophisticated thermal protection system. The materials used must be tailored to each specific vehicle based on its planned mission profiles. Starting with the Space Shuttle, many types of ceramic insulation with various combinations of thermal properties have been developed by others. The development of two new materials is described: A Composite Flexible Blanket Insulation which has a significantly lower effective thermal conductivity than other ceramic blankets; and a Silicon Matrix Composite which has applications at high temperature locations such as wing leading edges. Also, a systematic study is described that considers the application of these materials for a proposed Personnel Launch System. The study shows how most of these available ceramic materials would perform during atmospheric entry of this vehicle. Other specific applications of these thermal protection materials are discussed.

  19. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

  20. Energy performance analysis of prototype electrochromic windows

    SciTech Connect

    Sullivan, R.; Rubin, M.; Selkowitz, S.

    1996-12-01

    This paper presents the results of a study investigating the energy performance of three newly developed prototype electrochromic devices. The DOE-2.1E energy simulation program was used to analyze the annual cooling, lighting, and total electric energy use and peak demand as a function of window type and size. The authors simulated a prototypical commercial office building module located in the cooling-dominated locations of Phoenix, AZ and Miami, FL. Heating energy use was also studied in the heating-dominated location of Madison, WI. Daylight illuminance was used to control electrochromic state-switching. Two types of window systems were analyzed; i.e., the outer pane electrochromic glazing was combined with either a conventional low-E or a spectrally selective inner pane. The properties of the electrochromic glazings are based on measured data of new prototypes developed as part of a cooperative DOE-industry program. The results show the largest difference in annual electric energy performance between the different window types occurs in Phoenix and is about 6.5 kWh/m² floor area (0.60 kWh/ft²), which can represent a cost of about $0.52/m² ($0.05/ft²) using electricity costing $0.08/kWh. In heating-dominated locations, the electrochromic should be maintained in its bleached state during the heating season to take advantage of beneficial solar heat gain which would reduce the amount of required heating. This also means that the electrochromic window with the largest solar heat gain coefficient is best.
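
The cost figure quoted in the abstract follows directly from the stated energy difference and electricity price:

```python
# Sanity check of the abstract's cost figure: a 6.5 kWh/m2 annual
# electric-energy difference priced at $0.08/kWh.
energy_diff_kwh_per_m2 = 6.5
price_per_kwh = 0.08

cost_per_m2 = energy_diff_kwh_per_m2 * price_per_kwh
# 6.5 * 0.08 = 0.52, i.e. about $0.52/m2, matching the abstract
```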

  1. Performance analysis of advanced spacecraft TPS

    NASA Technical Reports Server (NTRS)

    Pitts, William C.

    1987-01-01

    The analysis of the feasibility of using metal hydrides in the thermal protection system of cryogenic tanks in space was based on the heat capacity of ice as the phase-change material (PCM). It was found that with ice the thermal protection system weight could be reduced by, at most, about 20 percent relative to an all LI-900 insulation. For this concept to be viable, a metal hydride with considerably more capacity than water would be required; none were found. Special metal hydrides have been developed for hydrogen fuel storage applications, and it may be possible to do so for the current application. Until this appears promising, further effort on this feasibility study does not seem warranted.

  2. Performance analysis of superconducting generator electromagnetic shielding

    NASA Astrophysics Data System (ADS)

    Xia, D.; Xia, Z.

    2015-12-01

    In this paper, the shielding performance of electromagnetic shielding systems is analyzed using the finite element method. Considering the non-iron-core rotor structure of superconducting generators, it is proposed that the stator alternating magnetic field generated under different operating conditions can be decomposed into oscillating and rotating magnetic fields, so that complex issues are greatly simplified. A 1200 kW superconducting generator was analyzed. The distribution of the oscillating magnetic field and the rotating magnetic field in the rotor area, which are generated by stator winding currents, and the distribution of the eddy currents in the electromagnetic shielding tube, which are induced by these stator winding magnetic fields, are calculated without an electromagnetic shielding system and with three different structures of electromagnetic shielding system, respectively. On the basis of the FEM results, the shielding factor of the electromagnetic shielding systems is calculated and the shielding effect of the three different structures on the oscillating magnetic field and the rotating magnetic field is compared. The method and the results in this paper can provide a reference for optimal design and loss calculation of superconducting generators.
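
The decomposition mentioned above rests on a standard identity: a field pulsating along a fixed axis, A·cos(ωt)·cos(θ), equals two counter-rotating components of half amplitude, (A/2)·cos(ωt − θ) + (A/2)·cos(ωt + θ). A numeric check with arbitrary sample values:

```python
# Numeric check of the identity behind decomposing an alternating
# (pulsating) stator field into two counter-rotating fields of half
# amplitude. A, omega, t, theta are arbitrary sample values.
import math

A, omega, t, theta = 1.5, 314.159, 0.0123, 0.7

pulsating = A * math.cos(omega * t) * math.cos(theta)
counter_rotating = 0.5 * A * (math.cos(omega * t - theta)
                              + math.cos(omega * t + theta))

assert math.isclose(pulsating, counter_rotating, rel_tol=1e-9, abs_tol=1e-12)
```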

  3. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; defines Valid Data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.
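
The abstract does not spell out the mechanics, but the basic step in such an analysis is combining independent relative uncertainties in quadrature (root-sum-square). A hedged sketch with invented numbers for a PV power measurement P = V·I:

```python
# Illustration only (not from the presentation): root-sum-square
# combination of independent relative uncertainties. The 0.5% voltage
# and 0.8% current uncertainties are hypothetical.
import math

def rss(*relative_uncertainties: float) -> float:
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(u * u for u in relative_uncertainties))

# For P = V * I, the relative uncertainties of V and I add in quadrature.
u_power = rss(0.005, 0.008)   # about 0.94% relative uncertainty in P
```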

  4. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  5. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because it is difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate performance of replicated distributed database with both shared and exclusive locks.

  6. Performance analysis of parallel supernodal sparse LU factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2004-02-05

    We investigate performance characteristics for the LU factorization of large matrices with various sparsity patterns. We consider supernodal right-looking parallel factorization on a bi-dimensional grid of processors, making use of static pivoting. We develop a performance model and we validate it using the implementation in SuperLU-DIST, the real matrices and the IBM Power3 machine at NERSC. We use this model to obtain performance bounds on parallel computers, to perform scalability analysis and to identify performance bottlenecks. We also discuss the role of load balance and data distribution in this approach.

  7. Using Importance-Performance Analysis to Evaluate Training

    ERIC Educational Resources Information Center

    Siniscalchi, Jason M.; Beale, Edward K.; Fortuna, Ashley

    2008-01-01

    The importance-performance analysis (IPA) is a tool that can provide timely and usable feedback to improve training. IPA measures the gap between how important a class is and how good (performance) it is perceived to be by a student, and presents the result on a 2x2 matrix. The quadrant in which the data land in this matrix aids in determining potential future action.…
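
The 2x2 matrix described above splits attributes at the mean importance and mean performance scores; the standard quadrant labels are shown in this sketch (the attribute names and ratings are invented):

```python
# Sketch of IPA quadrant classification using the grand means as the
# crosshairs. Ratings are hypothetical (importance, performance) pairs
# on a 1-5 scale.
def ipa_quadrant(importance, performance, imp_mean, perf_mean):
    if importance >= imp_mean and performance < perf_mean:
        return "Concentrate here"       # high importance, low performance
    if importance >= imp_mean and performance >= perf_mean:
        return "Keep up the good work"  # high importance, high performance
    if importance < imp_mean and performance < perf_mean:
        return "Low priority"           # low importance, low performance
    return "Possible overkill"          # low importance, high performance

ratings = {"pace": (4.6, 3.1), "materials": (4.2, 4.4),
           "venue": (2.8, 2.5), "snacks": (2.1, 4.0)}
imp_mean = sum(i for i, _ in ratings.values()) / len(ratings)
perf_mean = sum(p for _, p in ratings.values()) / len(ratings)
quadrants = {name: ipa_quadrant(i, p, imp_mean, perf_mean)
             for name, (i, p) in ratings.items()}
```

Attributes landing in "Concentrate here" (important but underperforming) are the natural targets for revising the training.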

  8. An Exploratory Analysis of Performance on the SAT.

    ERIC Educational Resources Information Center

    Wainer, Howard

    1984-01-01

    Techniques of exploratory data analysis (EDA) were used to decompose data tables portraying performance of ethnic groups on the Scholastic Aptitude Test. These analyses indicate the size and structure of differences in performance among groups studied, nature of changes across time, and interactions between group membership and time. (Author/DWH)

  9. Analysis of a Ubiquitous Performance Support System for Teachers

    ERIC Educational Resources Information Center

    Chen, Chao-Hsiu; Hwang, Gwo-Jen; Yang, Tzu-Chi; Chen, Shih-Hsuan; Huang, Shen-Yu

    2009-01-01

    This paper describes a Ubiquitous Performance Support System for Teachers (UPSST) and its implementation model. Personal Digital Assistants (PDAs) were used as the platform to support high-school teachers. Based on concepts of Electronic Performance Support Systems and design-based research, the authors conducted an iterative process of analysis,…

  10. A Systemic Cause Analysis Model for Human Performance Technicians

    ERIC Educational Resources Information Center

    Sostrin, Jesse

    2011-01-01

    This article presents a systemic, research-based cause analysis model for use in the field of human performance technology (HPT). The model organizes the most prominent barriers to workplace learning and performance into a conceptual framework that explains and illuminates the architecture of these barriers that exist within the fabric of everyday…

  11. An Alternative Method to Predict Performance: Canonical Redundancy Analysis.

    ERIC Educational Resources Information Center

    Dawson-Saunders, Beth; Doolen, Deane R.

    1981-01-01

    The relationships between predictors of performance and subsequent measures of clinical performance in medical school were examined for two classes at Southern Illinois University of Medicine. Canonical redundancy analysis was used to evaluate the association between six academic and three biographical preselection characteristics and four…

  12. Performance and stability analysis of a photovoltaic power system

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Blaha, R. J.; Pickrell, R. L.

    1978-01-01

    The performance and stability characteristics of a 10 kVA photovoltaic power system are studied using linear Bode analysis and a nonlinear analog simulation. Power conversion efficiencies, system stability, and system transient performance results are given for system operation at various levels of solar insolation. Additionally, system operation and the modeling of system components for the purpose of computer simulation are described.

  13. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pasccci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  14. Using Importance-Performance Analysis To Evaluate Teaching Effectiveness.

    ERIC Educational Resources Information Center

    Attarian, Aram

    This paper introduces Importance-Performance (IP) analysis as a method to evaluate teaching effectiveness in a university outdoor program. Originally developed for use in the field of marketing, IP analysis is simple and easy to administer, and provides the instructor with a visual representation of what teaching attributes are important, how…

  15. Preliminary Analysis of Remote Monitoring & Robotic Concepts for Performance Confirmation

    SciTech Connect

    D.A. McAffee

    1997-02-18

    As defined in 10 CFR Part 60.2, Performance Confirmation is the ''program of tests, experiments and analyses which is conducted to evaluate the accuracy and adequacy of the information used to determine with reasonable assurance that the performance objectives for the period after permanent closure will be met''. The overall Performance Confirmation program begins during site characterization and continues up to repository closure. The main purpose of this document is to develop, explore and analyze initial concepts for using remotely operated and robotic systems in gathering repository performance information during Performance Confirmation. This analysis focuses primarily on possible Performance Confirmation related applications within the emplacement drifts after waste packages have been emplaced (post-emplacement) and before permanent closure of the repository (preclosure). This will be a period of time lasting approximately 100 years and basically coincides with the Caretaker phase of the project. This analysis also examines, to a lesser extent, some applications related to Caretaker operations. A previous report examined remote handling and robotic technologies that could be employed during the waste package emplacement phase of the project (Reference 5.1). This analysis is being prepared to provide an early investigation of possible design concepts and technical challenges associated with developing remote systems for monitoring and inspecting activities during Performance Confirmation. The writing of this analysis preceded formal development of Performance Confirmation functional requirements and program plans and therefore examines, in part, the fundamental Performance Confirmation monitoring needs and operating conditions. The scope and primary objectives of this analysis are to: (1) Describe the operating environment and conditions expected in the emplacement drifts during the preclosure period. (Presented in Section 7.2). (2) Identify and discuss the

  16. The development of a reliable amateur boxing performance analysis template.

    PubMed

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri

    2013-01-01

    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing. PMID:23121380
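
The reliability statistic described above, the proportion of paired observations agreeing exactly or within a reference limit of ±1, can be sketched as follows (the notation counts are invented, not the study's data):

```python
# Sketch of proportion-of-agreement reliability: compare two passes of
# notation counts for the same bout. The observation values are invented.
def proportion_agreement(obs1, obs2, tolerance=0):
    """Fraction of paired counts whose absolute difference <= tolerance."""
    pairs = list(zip(obs1, obs2))
    agree = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return agree / len(pairs)

first_pass  = [12, 8, 15, 4, 9, 11]   # e.g. punches notated, pass 1
second_pass = [12, 7, 15, 5, 9, 10]   # same indicators, pass 2

exact = proportion_agreement(first_pass, second_pass)          # 3/6
within_one = proportion_agreement(first_pass, second_pass, 1)  # 6/6
```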

  17. Measurement uncertainty analysis techniques applied to PV performance measurements

    SciTech Connect

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; defines Valid Data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.

  18. Mean streamline aerodynamic performance analysis of centrifugal compressors

    SciTech Connect

    Aungier, R.H.

    1995-07-01

    Aerodynamic performance prediction models for centrifugal compressor impellers are presented. In combination with similar procedures for stationary components, previously published in the open literature, a comprehensive mean streamline performance analysis for centrifugal compressor stages is provided. The accuracy and versatility of the overall analysis is demonstrated for several centrifugal compressor stages of various types, including comparison with intrastage component performance data. Detailed validation of the analysis against experimental data has been accomplished for over a hundred stages, including stage flow coefficients from 0.009 to 0.15 and pressure ratios up to about 3.5. Its application to turbocharger stages includes pressure ratios up to 4.2, but with test uncertainty much greater than for the data used in the detailed validation studies.

  19. Performing modal analysis for multi-metric measurements: a discussion

    NASA Astrophysics Data System (ADS)

    Soman, R.; Majewska, K.; Radzienski, M.; Ostachowicz, W.

    2016-04-01

    This work addresses the severe lack of literature in the area of modal analysis for multi-metric sensing. The paper aims to provide a step-by-step tutorial on performing modal analysis using Fiber Bragg Grating (FBG) strain sensors and a Laser Doppler Vibrometer (LDV) for displacement measurements. The paper discusses in detail the different parameters which affect the accuracy of the experimental results. It highlights the often implied, and unmentioned, problems that researchers face while performing experiments. The paper tries to bridge the gap between the theoretical idea of the experiment and its actual execution by discussing each aspect, including the choice of specimen, boundary conditions, sensors, sensor position, excitation mechanism and its location, as well as the post-processing of the data. The paper may be viewed as a checklist for performing modal analysis that ensures high-quality measurements by preventing systematic errors from creeping in.

  20. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  1. Network DEA: an application to analysis of academic performance

    NASA Astrophysics Data System (ADS)

    Saniee Monfared, Mohammad Ali; Safi, Mahsa

    2013-05-01

    As governmental subsidies to universities have declined in recent years, sustaining excellence in academic performance and making more efficient use of resources have become important issues for university stakeholders. Assessing academic performance and the utilization of resources requires two things, which we consider in this paper: a capable methodology and a set of good performance indicators. We propose a set of performance indicators to enable efficiency analysis of academic activities and apply a novel network DEA structure to account for subfunctional efficiencies, such as teaching quality and research productivity, as well as the overall efficiency. We tested our approach on the efficiency analysis of academic colleges at Alzahra University in Iran.

  2. Performance Analysis of Web Applications Based on User Navigation

    NASA Astrophysics Data System (ADS)

    Zhou, Quanshu; Ye, Hairong; Ding, Zuohua

    This paper proposes a method for performance analysis of web applications. The behavior model is first built from the log file produced by user navigation, an extended state diagram is then extracted from this log file, and finally a multiple Markov model is incorporated into this state diagram, from which the performance analysis can be obtained. Five indexes are used to measure performance: service response time, service path length, service utilization, service implementation rate, and access error rate. Our performance analysis results provide suggestions for improving the design of web applications and optimizing their services. A case study of the Zhejiang Chess web site demonstrates the advantages of our method.
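
The core Markov step, estimating page-to-page transition probabilities from navigation logs and reading off path probabilities, can be sketched as follows (the sessions and page names are invented, not the Zhejiang Chess data):

```python
# Minimal sketch of a first-order Markov model fitted to navigation
# sessions. Sessions and page names are hypothetical.
from collections import defaultdict

sessions = [["home", "search", "board", "exit"],
            ["home", "board", "exit"],
            ["home", "search", "exit"]]

# Count observed page-to-page transitions.
counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# Normalize counts into transition probabilities.
transition = {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
              for a, nexts in counts.items()}

def path_probability(path):
    """Probability of following a given navigation path."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= transition.get(a, {}).get(b, 0.0)
    return p

p = path_probability(["home", "search", "board", "exit"])  # (2/3)*(1/2)*1
```

Indexes such as service path length and utilization can then be computed as expectations over this chain.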

  3. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Caubet, Jordi; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In this paper we describe how to apply powerful performance analysis techniques to understand the behavior of multilevel parallel applications. We use the Paraver/OMPItrace performance analysis system for our study. This system consists of two major components: The OMPItrace dynamic instrumentation mechanism, which allows the tracing of processes and threads and the Paraver graphical user interface for inspection and analyses of the generated traces. We describe how to use the system to conduct a detailed comparative study of a benchmark code implemented in five different programming paradigms applicable for shared memory

  4. Temporal geospatial analysis of secondary school students’ examination performance

    NASA Astrophysics Data System (ADS)

    Nik Abd Kadir, ND; Adnan, NA

    2016-06-01

    Malaysia's Ministry of Education has improved the organization of its data by establishing a geographical information system (GIS) school database. However, no further analysis has been done using geospatial analysis tools. Mapping has emerged as a communication tool and an effective way to publish digital and statistical data such as school performance results. The objective of this study is to analyse secondary school student performance in the science and mathematics scores of the Sijil Pelajaran Malaysia examination results from 2010 to 2014 for Kelantan's state schools, with the aid of GIS software and geospatial analysis. School performance according to school grade point average (GPA), from Grade A to Grade G, was interpolated and mapped, and query analysis using geospatial tools could be carried out. This study will benefit the education sector in analysing student performance not only in Kelantan but across the whole of Malaysia, and mapping will be a good way to publish results for better planning and decision making to prepare young Malaysians for the challenges of the education system.

  5. Quantitative analysis of regional myocardial performance in coronary artery disease

    NASA Technical Reports Server (NTRS)

    Stewart, D. K.; Dodge, H. T.; Frimer, M.

    1975-01-01

Findings from a group of subjects with significant coronary artery stenosis and from a group of controls, obtained with a quantitative method for the study of regional myocardial performance based on frame-by-frame analysis of biplane left ventricular angiograms, are presented. Particular emphasis was placed on the analysis of wall motion in terms of normalized segment dimensions and the timing and velocity of contraction. The results were compared with the method of subjective assessment used clinically.

  6. Integrated design environment for human performance and human reliability analysis

    SciTech Connect

    Nelson, W.R.

    1997-05-01

Work over the last few years at the Idaho National Engineering and Environmental Laboratory (INEEL) has included a major focus on applying human performance and human reliability knowledge and methods as an integral element of system design and development. This work has been pursued in programs in a wide variety of technical domains, beginning with nuclear power plant operations. Since the mid-1980s the laboratory has transferred the methods and tools developed in the nuclear domain to military weapons systems and aircraft, offshore oil and shipping operations, and commercial aviation operations and aircraft design. Through these diverse applications the laboratory has developed an integrated approach and framework for application of human performance analysis, human reliability analysis (HRA), operational data analysis, and simulation studies of human performance to the design and development of complex systems. This approach was recently tested in the NASA Advanced Concepts Program "Structured Human Error Analysis for Aircraft Design." This program resulted in the prototype software tool THEA (Tool for Human Error Analysis) for incorporating human error analysis in the design of commercial aircraft, focusing on airplane maintenance tasks. Current effort is directed toward applying this framework to the development of advanced Air Traffic Management (ATM) systems as part of NASA's Advanced Air Transportation Technologies (AATT) program. This paper summarizes the approach, describes recent and current applications in commercial aviation, and provides perspectives on how the approach could be utilized in the nuclear power industry.

  7. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful for measuring a performance guarantee, assessing the health of the system, verifying a performance model to then be applied to a new system, or a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.
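The core of such an evaluation is comparing the measured energy against the energy the system's nameplate rating would produce under the insolation it actually received. A minimal sketch of that ratio (the report's actual method is more elaborate; the function name and figures below are illustrative assumptions):

```python
def performance_ratio(e_measured_kwh, p_stc_kw, insolation_kwh_m2, g_stc_kw_m2=1.0):
    """Performance ratio: measured energy divided by the energy a loss-free
    system of nameplate (STC) rating would produce under the same insolation."""
    expected_kwh = p_stc_kw * insolation_kwh_m2 / g_stc_kw_m2
    return e_measured_kwh / expected_kwh

# e.g. a 100 kW (STC) array producing 120 MWh over a period with
# 1500 kWh/m^2 of plane-of-array insolation:
pr = performance_ratio(120_000, 100, 1500)  # -> 0.8
```

Weather correction and uncertainty treatment, the subtleties the report addresses, would sit on top of this basic ratio.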

  8. An Empirical Analysis of Human Performance and Nuclear Safety Culture

    SciTech Connect

    Jeffrey Joe; Larry G. Blackwood

    2006-06-01

The purpose of this analysis, which was conducted for the US Nuclear Regulatory Commission (NRC), was to test whether an empirical connection exists between human performance and nuclear power plant safety culture. This was accomplished by analyzing the relationship between a measure of human performance and a plant's Safety Conscious Work Environment (SCWE). SCWE is an important component of the safety culture construct the NRC has developed, but the two are not synonymous. SCWE is an environment in which employees are encouraged to raise safety concerns both to their own management and to the NRC without fear of harassment, intimidation, retaliation, or discrimination. Because the relationship between human performance and allegations is intuitively reciprocal, both directions of the relationship need exploration, and two series of analyses were performed. First, because human performance data could be indicative of safety culture, regression analyses were performed using human performance data to predict SCWE. Second, because safety culture likely contributes to human performance issues at a plant, a second set of regressions was performed using allegations to predict HFIS results.
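The two regression directions described can be sketched with ordinary least squares; the plant-level figures below are invented for illustration, not the study's data:

```python
import numpy as np

def fit_slope(x, y):
    """Ordinary least-squares slope and intercept of y regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# hypothetical plant-level rates: human-performance events vs. allegations
events = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
allegations = np.array([0.5, 1.1, 1.4, 2.1, 2.4])

# direction 1: do human-performance data predict the SCWE measure?
b1, a1 = fit_slope(events, allegations)
# direction 2: do allegations predict human-performance results?
b2, a2 = fit_slope(allegations, events)
```

Running both directions, as the analysis did, distinguishes "performance reflects culture" from "culture drives performance" without assuming either causal order.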

  9. Frontiers of Performance Analysis on Leadership-Class Systems

    SciTech Connect

    Fowler, R J; Adhianto, L; de Supinski, B R; Fagan, M; Gamblin, T; Krentel, M; Mellor-Crummey, J; Schulz, M; Tallent, N

    2009-06-15

The number of cores employed in high-end systems for scientific computing is increasing rapidly. As a result, there is a pressing need for tools that can measure, model, and diagnose performance problems in highly parallel runs. We describe two tools that employ complementary approaches to analysis at scale, and we illustrate their use on DOE leadership-class systems.

  10. Job Analysis, Job Descriptions, and Performance Appraisal Systems.

    ERIC Educational Resources Information Center

    Sims, Johnnie M.; Foxley, Cecelia H.

    1980-01-01

    Job analysis, job descriptions, and performance appraisal can benefit student services administration in many ways. Involving staff members in the development and implementation of these techniques can increase commitment to and understanding of the overall objectives of the office, as well as communication and cooperation among colleagues.…

  11. A Semiotic Reading and Discourse Analysis of Postmodern Street Performance

    ERIC Educational Resources Information Center

    Lee, Mimi Miyoung; Chung, Sheng Kuan

    2009-01-01

    Postmodern street art operates under a set of references that requires art educators and researchers to adopt alternative analytical frameworks in order to understand its meanings. In this article, we describe social semiotics, critical discourse analysis, and postmodern street performance as well as the relevance of the former two in interpreting…

  12. The Analysis of Athletic Performance: Some Practical and Philosophical Considerations

    ERIC Educational Resources Information Center

    Nelson, Lee J.; Groom, Ryan

    2012-01-01

    This article presents a hypothetical dialogue between a notational analyst (NA) recently schooled in the positivistic assessment of athletic performance, an "old-school" traditional coach (TC) who favours subjective analysis, and a pragmatic educator (PE). The conversation opens with NA and TC debating the respective value of quantitative and…

  13. Identification of human operator performance models utilizing time series analysis

    NASA Technical Reports Server (NTRS)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
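Identifying an operator model from input/output records of a tracking task can be sketched as a least-squares fit of a discrete transfer-function (ARX) model. The report's exact technique and model orders are not specified here, so this is a generic illustration on synthetic data:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]
    from recorded input u and output y of the operator-in-the-loop system."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]   # AR coefficients, input coefficients

# synthetic check: data generated by y[k] = 0.5*y[k-1] + 1.0*u[k-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k-1] + 1.0 * u[k-1]
a, b = fit_arx(u, y, na=1, nb=1)   # recovers a ~ [0.5], b ~ [1.0]
```

Refitting such a model on records taken under different stress levels would expose how the operator's transfer function varies, which is the study's aim.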

  14. Performance on the Pharmacy College Admission Test: An Exploratory Analysis.

    ERIC Educational Resources Information Center

    Kawahara, Nancy E.; Ethington, Corinna

    1994-01-01

    Median polishing, an exploratory data statistical analysis technique, was used to study achievement patterns for men and women on the Pharmacy College Admission Test over a six-year period. In general, a declining trend in scores was found, and males performed better than females, with the largest differences found in chemistry and biology.…

  15. Performance Analysis of GAME: A Generic Automated Marking Environment

    ERIC Educational Resources Information Center

    Blumenstein, Michael; Green, Steve; Fogelman, Shoshana; Nguyen, Ann; Muthukkumarasamy, Vallipuram

    2008-01-01

    This paper describes the Generic Automated Marking Environment (GAME) and provides a detailed analysis of its performance in assessing student programming projects and exercises. GAME has been designed to automatically assess programming assignments written in a variety of languages based on the "structure" of the source code and the correctness…

  16. Visuo-Spatial Performance in Autism: A Meta-Analysis

    ERIC Educational Resources Information Center

    Muth, Anne; Hönekopp, Johannes; Falter, Christine M.

    2014-01-01

    Visuo-spatial skills are believed to be enhanced in autism spectrum disorders (ASDs). This meta-analysis tests the current state of evidence for Figure Disembedding, Block Design, Mental Rotation and Navon tasks in ASD and neurotypicals. Block Design (d = 0.32) and Figure Disembedding (d = 0.26) showed superior performance for ASD with large…
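The d values quoted are Cohen's d effect sizes, the standardized mean differences that a meta-analysis aggregates across studies. A minimal sketch of the computation (the scores below are hypothetical, not data from the reviewed studies):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: difference of group means divided by the pooled
    standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# hypothetical Block Design scores: ASD group vs. neurotypical controls
asd = [52, 55, 58, 60, 50]
nt = [48, 51, 54, 49, 53]
d = cohens_d(asd, nt)   # positive d -> ASD group scored higher
```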

  17. Storage element performance optimization for CMS analysis jobs

    NASA Astrophysics Data System (ADS)

    Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.

    2012-12-01

Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Achieving good CPU efficiency for end-user analysis jobs requires that the performance of the storage system scale with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on work to improve SE performance at the Helsinki Institute of Physics (HIP) Tier-2, used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS used the JobRobot, which sent 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the effect of site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50%, to more than 90%, by tuning the SE and through improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly as well. Similar work has been done at other CMS Tier sites, and on average the CPU efficiency of CMSSW jobs increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. 
The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O-performance
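The CPU-efficiency metric tracked above is simply CPU time consumed over wall-clock time for a job; a one-line sketch (the job figures are hypothetical):

```python
def cpu_efficiency(cpu_time_s, wall_time_s):
    """CPU efficiency of a grid job: CPU time over wall-clock time.
    Low values typically mean the job sat waiting on storage I/O."""
    return cpu_time_s / wall_time_s

# hypothetical job record: 5400 s of CPU in a 6000 s job -> 90% efficient
eff = cpu_efficiency(5400, 6000)  # -> 0.9
```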

  18. Eddy-current steam generator data analysis performance. Final report

    SciTech Connect

    Harris, D.H.

    1993-06-01

This study assessed the accuracy of eddy current, bobbin coil data analysis of steam generator tubes conducted under the structure of the PWR Steam Generator Examination Guidelines. Individual and team performance measures were obtained from independent analyses of data from 1619 locations in a sample of 199 steam generator tubes. The 92 reportable indications contained in the tube sample, including 64 repairable indications, were attributable to: wear at anti-vibration bars, intergranular attack/stress-corrosion cracking (IGA/SCC) within tube sheet crevice regions, primary-water stress-corrosion cracking (PWSCC) at tube roll transitions, or thinning at cold-leg tube supports. Analyses were conducted by 20 analysts, four each from five vendors of eddy current steam generator examination services. In accordance with the guidelines, site orientation was provided with plant-specific guidelines; preanalysis practice was completed on plant-specific data; analysts were qualified by performance testing; and independent primary-secondary analyses were conducted with resolution of discrepancies (team analyses). Measures of analysis performance included percentages of indications correctly reported, percentages of false reports, and relative operating characteristic (ROC) curves. ROC curves presented comprehensive pictures of analysis accuracy generalizable beyond the specific conditions of this study. They also provided single-value measures of analysis accuracy. Conclusions and recommendations were provided relative to analysis accuracy, the effect of primary-secondary analyses, analyses of tube sheet crevice regions, establishment of reporting criteria, improvement of examination guidelines, and needed research.
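An ROC curve of the kind used here can be traced by sweeping a confidence threshold over the analysts' calls against ground truth; a minimal sketch with invented scores and labels:

```python
def roc_points(scores, labels):
    """(FPR, TPR) points at every score threshold, for calls scored by
    confidence against ground-truth flaw labels (1 = flawed, 0 = clean)."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points   # traces the ROC curve from (0,0) to (1,1)

# hypothetical: 2 flawed tube locations (label 1), 2 clean (label 0)
pts = roc_points([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0])
# perfect separation here: TPR reaches 1.0 before any false positive
```

The area under such a curve is the kind of single-value accuracy measure the study reports.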

  19. Modeling and performance analysis of GPS vector tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lashley, Matthew

This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in several different scenarios. Rule-of-thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. 
The federated algorithm without
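The Kalman filter at the heart of vector tracking is a recursive predict/update estimator. A deliberately tiny scalar sketch of that mechanism (a toy one-state estimator, not a GPS channel model):

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter with a
    random-walk state model: the recursive-estimation pattern that
    vector tracking uses in place of fixed-gain tracking loops."""
    P = P + Q                 # predict: uncertainty grows by process noise
    K = P / (P + R)           # Kalman gain from process/measurement noise
    x = x + K * (z - x)       # update state toward the measurement
    P = (1 - K) * P           # uncertainty shrinks after the update
    return x, P

# estimate a constant true value of 10.0 from noisy measurements
x, P = 0.0, 100.0
for z in [9.8, 10.2, 10.1, 9.9]:
    x, P = kalman_step(x, P, z, Q=0.01, R=1.0)
# x converges toward 10.0 while P (the uncertainty) shrinks
```

The Q/R covariances here play the role that the process and measurement noise covariance matrices play in the dissertation's comparison against noise-bandwidth-characterized loops.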

  20. Performance Indicators in Math: Implications for Brief Experimental Analysis of Academic Performance

    ERIC Educational Resources Information Center

    VanDerheyden, Amanda M.; Burns, Matthew K.

    2009-01-01

    Brief experimental analysis (BEA) can be used to specify intervention characteristics that produce positive learning gains for individual students. A key challenge to the use of BEA for intervention planning is the identification of performance indicators (including topography of the skill, measurement characteristics, and decision criteria) that…

  1. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized, network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
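The demand-versus-resource-constraint idea can be illustrated in a few lines of simulation: randomly arriving requests queue for a single constrained resource. This is a toy single-server model under invented parameters, not the paper's framework:

```python
import random

def simulate_requests(n_requests, mean_interarrival, mean_service, seed=1):
    """Toy single-server simulation: requests arrive at random times,
    queue for one constrained resource, and are served in order."""
    random.seed(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_requests):
        t += random.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    server_free = 0.0
    waits = []
    for arr in arrivals:
        start = max(arr, server_free)          # wait if the server is busy
        waits.append(start - arr)
        server_free = start + random.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)             # mean wait in queue

avg_wait = simulate_requests(10_000, mean_interarrival=2.0, mean_service=1.0)
# at 50% utilization, M/M/1 queueing theory puts the mean wait near 1.0
```

A full discrete event simulation generalizes this to many request types (each with its own arrival distribution) contending for a pool of virtualized resources.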

  2. Analysis of portable impactor performance for enumeration of viable bioaerosols.

    PubMed

    Yao, Maosheng; Mainelis, Gediminas

    2007-07-01

    Portable impactors are increasingly being used to estimate concentration of bioaerosols in residential and occupational environments; however, little data are available about their performance. This study investigated the overall performances of the SMA MicroPortable, BioCulture, Microflow, Microbiological Air Sampler (MAS-100), Millipore Air Tester, SAS Super 180, and RCS High Flow portable microbial samplers when collecting bacteria and fungi both indoors and outdoors. The performance of these samplers was compared with that of the BioStage impactor. The Button Aerosol Sampler equipped with gelatin filter was also included in the study. Results showed that the sampling environment can have a statistically significant effect on sampler performance, most likely due to the differences in airborne microorganism composition and/or their size distribution. Data analysis using analysis of variance showed that the relative performance of all samplers (except the RCS High Flow and MAS-100) was statistically different (lower) compared with the BioStage. The MAS-100 also had statistically higher performance compared with other portable samplers except the RCS High Flow. The Millipore Air Tester and the SMA had the lowest performances. The relative performance of the impactors was described using a multiple linear regression model (R(2) = 0.83); the effects of the samplers' cutoff sizes and jet-to-plate distances as predictor variables were statistically significant. The data presented in this study will help field professionals in selecting bioaerosol samplers. The developed empirical formula describing the overall performance of bioaerosol impactors can assist in sampler design. PMID:17538812
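The multiple linear regression reported (R² = 0.83, with the samplers' cutoff sizes and jet-to-plate distances as predictors) can be sketched as an ordinary least-squares fit; the numbers below are invented for illustration, not the study's data:

```python
import numpy as np

def fit_linear(X, y):
    """OLS fit of y on predictor columns X (with an intercept);
    returns coefficients and the coefficient of determination R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return coef, r2

# hypothetical predictors: cutoff size (um), jet-to-plate distance (mm)
X = np.array([[0.6, 1.0], [0.8, 2.0], [1.2, 1.5], [2.0, 3.0], [1.5, 2.5]])
y = np.array([0.9, 0.8, 0.7, 0.4, 0.55])   # relative performance vs. reference
coef, r2 = fit_linear(X, y)
```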

  3. Performance demonstration program plan for analysis of simulated headspace gases

    SciTech Connect

    1995-06-01

    The Performance Demonstration Program (PDP) for analysis of headspace gases will consist of regular distribution and analyses of test standards to evaluate the capability for analyzing VOCs, hydrogen, and methane in the headspace of transuranic (TRU) waste throughout the Department of Energy (DOE) complex. Each distribution is termed a PDP cycle. These evaluation cycles will provide an objective measure of the reliability of measurements performed for TRU waste characterization. Laboratory performance will be demonstrated by the successful analysis of blind audit samples of simulated TRU waste drum headspace gases according to the criteria set within the text of this Program Plan. Blind audit samples (hereinafter referred to as PDP samples) will be used as an independent means to assess laboratory performance regarding compliance with the QAPP QAOs. The concentration of analytes in the PDP samples will encompass the range of concentrations anticipated in actual waste characterization gas samples. Analyses which are required by the WIPP to demonstrate compliance with various regulatory requirements and which are included in the PDP must be performed by laboratories which have demonstrated acceptable performance in the PDP.

  4. Performance analysis of mini centrifugal pump with splitter blades

    NASA Astrophysics Data System (ADS)

    Shigemitsu, T.; Fukutomi, J.; Wada, T.; Shinohara, H.

    2013-12-01

A design method for mini centrifugal pumps is not established, because the internal flow conditions of these small-sized fluid machines have not been clarified and conventional theory is not suitable for small-sized pumps. A semi-open impeller with a 55 mm diameter is therefore adopted in this research, taking simplicity and maintenance into consideration. Splitter blades are adopted to improve the performance and the internal flow condition of a mini centrifugal pump having a large blade outlet angle. Performance tests are conducted with these rotors in order to investigate the effect of the splitter blades on the performance and internal flow condition of the mini centrifugal pump. A three-dimensional steady numerical flow analysis is conducted to analyze the rotor and volute efficiencies and the loss caused by a vortex. It is clarified from the experimental results that the performance of the mini centrifugal pump is improved by the splitter blades. The flow condition at the rotor outlet becomes uniform and back-flow regions are suppressed in the case with splitter blades. Further, the volute efficiency increases and the vortex loss decreases. In the present paper, the performance of the mini centrifugal pump is reported and the flow condition is clarified with the results of the experiment and the numerical flow analysis. Furthermore, performance analyses of the mini centrifugal pumps with and without the splitter blades are conducted.

  5. Mission analysis and performance specification studies report, appendix A

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The Near Term Hybrid Passenger Vehicle Development Program tasks included defining missions, developing distributions of daily travel and composite driving cycles for these missions, providing information necessary to estimate the potential replacement of the existing fleet by hybrids, and estimating acceleration/gradeability performance requirements for safe operation. The data was then utilized to develop mission specifications, define reference vehicles, develop hybrid vehicle performance specifications, and make fuel consumption estimates for the vehicles. The major assumptions which underlie the approach taken to the mission analysis and development of performance specifications are the following: the daily operating range of a hybrid vehicle should not be limited by the stored energy capacity and the performance of such a vehicle should not be strongly dependent on the battery state of charge.

  6. Safety and performance analysis of a commercial photovoltaic installation

    NASA Astrophysics Data System (ADS)

    Hamzavy, Babak T.; Bradley, Alexander Z.

    2013-09-01

Continuing to better understand the performance of PV systems, and how that performance changes over the system's life, is vital to the sustainable growth of solar. A systematic understanding of degradation mechanisms induced by variables such as the service environment, installation, module/material design, weather, operation and maintenance, and manufacturing is required for reliable operation throughout a system's lifetime. We report the results from an analysis of a commercial c-Si PV array owned and operated by DuPont. We assessed the electrical performance of the modules by comparing the original manufacturers' performance data with measurements obtained using a solar simulator to determine the degradation rate. This evaluation provides valuable PV system field experience and documents key issues regarding safety and performance. A review of the nondestructive and destructive analytical methods and characterization strategies we have found useful for system, module, and subsequent material component evaluations is presented. We provide an overview of our inspection protocol and the subsequent control process to mitigate risk. The objective is to explore and develop best-practice protocols for PV asset optimization and to provide a rationale for reducing risk based on the analysis of our own commercial installations.

  7. Performance analysis of static locking in distributed database systems

    SciTech Connect

    Shyu, S.C. ); Li, V.O.K. . Dept. of Electrical Engineering)

    1990-06-01

Numerous performance models have been proposed for locking algorithms in centralized database systems, but few have been developed for distributed ones. Existing results on distributed locking usually ignore the deadlock problem so as to simplify the analysis. In this paper, a new performance model for static locking in distributed database systems is developed. A queuing model is used to approximate static locking in distributed database systems without deadlocks. Then a random graph model is proposed to find the deadlock probability of each transaction. The two models are integrated so that, given the transaction arrival rate, the response time and the effective throughput can be calculated.
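The random-graph idea for deadlock probability can be sketched by Monte Carlo: sample a random wait-for graph among transactions and test it for cycles. The edge probability and sizes below are illustrative assumptions, not the paper's model:

```python
import random

def has_cycle(adj):
    """Cycle detection in a directed wait-for graph (dict: node -> successors)
    by depth-first search with three-color marking."""
    color = {u: 0 for u in adj}   # 0 unvisited, 1 on stack, 2 done
    def dfs(u):
        color[u] = 1
        for v in adj[u]:
            if color[v] == 1 or (color[v] == 0 and dfs(v)):
                return True       # back edge -> cycle -> deadlock
        color[u] = 2
        return False
    return any(color[u] == 0 and dfs(u) for u in adj)

def deadlock_probability(n_txn, p_wait, trials=2000, seed=7):
    """Monte Carlo estimate: each ordered pair of transactions waits with
    probability p_wait; a cycle in the wait-for graph means deadlock."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {u: [v for v in range(n_txn)
                   if v != u and rng.random() < p_wait]
               for u in range(n_txn)}
        hits += has_cycle(adj)
    return hits / trials

p_rare = deadlock_probability(5, 0.01)    # sparse waiting: deadlock is rare
p_common = deadlock_probability(5, 0.5)   # dense waiting: deadlock is likely
```

The paper derives such probabilities analytically from the random graph model; the simulation above just makes the mechanism concrete.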

  8. Performance requirements analysis for payload delivery from a space station

    NASA Technical Reports Server (NTRS)

    Friedlander, A. L.; Soldner, J. K.; Bell, J. (Editor); Ricks, G. W.; Kincade, R. E.; Deatkins, D.; Reynolds, R.; Nader, B. A.; Hill, O.; Babb, G. R.

    1983-01-01

Operations conducted from a space station in low Earth orbit, which have different constraints and opportunities than those conducted from direct Earth launch, were examined. While a space station relieves many size and performance constraints on the space shuttle, the space station's inertial orbit imposes launch window constraints different from those associated with customary Earth launches, which reflect upon upper stage capability. A performance requirements analysis was developed to provide a reference source of parametric data, specific case solutions, and upper stage sizing trades to assist potential space station users and space station and upper stage developers in assessing the impacts of a space station on missions of interest.

  9. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.

  10. INL FY2014 1st Quarterly Performance Analysis

    SciTech Connect

    Loran Kinghorn

    2014-07-01

This report is published quarterly by the Idaho National Laboratory (INL) Performance Assurance Organization. The Department of Energy Occurrence Reporting and Processing System (ORPS), as prescribed in DOE Order 232.2, "Occurrence Reporting and Processing of Operations Information," requires a quarterly analysis of events, both reportable and not reportable, for the previous 12 months. This report is the analysis of 76 occurrence reports and over 16 other deficiency reports (including not reportable events) identified at the INL during the period of October 2013 through December 2013. Battelle Energy Alliance (BEA) operates the INL under contract DE-AC07-05ID14517.

  11. Identifying influential factors of business process performance using dependency analysis

    NASA Astrophysics Data System (ADS)

    Wetzstein, Branimir; Leitner, Philipp; Rosenberg, Florian; Dustdar, Schahram; Leymann, Frank

    2011-02-01

    We present a comprehensive framework for identifying influential factors of business process performance. In particular, our approach combines monitoring of process events and Quality of Service (QoS) measurements with dependency analysis to effectively identify influential factors. The framework uses data mining techniques to construct tree structures to represent dependencies of a key performance indicator (KPI) on process and QoS metrics. These dependency trees allow business analysts to determine how process KPIs depend on lower-level process metrics and QoS characteristics of the IT infrastructure. The structure of the dependencies enables a drill-down analysis of single factors of influence to gain a deeper knowledge why certain KPI targets are not met.
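The dependency-tree construction rests on repeatedly finding the metric threshold that best explains the variance in a KPI. A single-split sketch of that core step (metric and KPI values are invented, not from the framework):

```python
def best_split(metric_values, kpi_values):
    """Find the threshold on one lower-level metric that best explains a
    KPI, by maximizing variance reduction: the single-split core of the
    tree construction used for dependency analysis."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    pairs = sorted(zip(metric_values, kpi_values))
    total = var(kpi_values)
    best = (None, -1.0)
    for i in range(1, len(pairs)):
        left = [k for _, k in pairs[:i]]
        right = [k for _, k in pairs[i:]]
        gain = total - var(left) - var(right)
        thresh = (pairs[i - 1][0] + pairs[i][0]) / 2
        if gain > best[1]:
            best = (thresh, gain)
    return best   # (threshold, variance reduction)

# hypothetical: KPI (order fulfillment time, h) vs. a QoS metric (latency, ms)
latency = [10, 12, 11, 95, 90, 100]
kpi = [1.0, 1.1, 0.9, 4.0, 4.2, 3.9]
thresh, gain = best_split(latency, kpi)   # threshold falls between 12 and 90
```

Applying this recursively over all candidate metrics yields the dependency tree a business analyst can drill down through.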

  12. Microfabricated devices for performing chemical and biochemical analysis

    SciTech Connect

    Ramsey, J.M.; Jacobson, S.C.; Foote, R.S.

    1997-05-01

    There is growing interest in microfabricated devices that perform chemical and biochemical analysis. The general goal is to use microfabrication tools to construct miniature devices that can perform a complete analysis starting with an unprocessed sample. Such devices have been referred to as lab-on-a-chip devices. Initial efforts on microfluidic laboratory-on-a-chip devices focused on chemical separations. There are many potential applications of these fluidic microchip devices. Some applications such as chemical process control or environmental monitoring would require that a chip be used over an extended period of time or for many analyses. Other applications such as forensics, clinical diagnostics, and genetic diagnostics would employ the chip devices as single use disposable devices.

  13. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high-performance computational resources to bear on this task. Our research group built a novel high-performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
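One of the basic analyses named, connected components, can be sketched over (subject, predicate, object) triples with union-find. This is tiny toy data; the actual work ran at billion-triple scale on the Cray XMT:

```python
def connected_components(triples):
    """Connected components of the undirected graph induced by RDF-style
    (subject, predicate, object) triples, via union-find with path halving.
    Predicates label edges and are not treated as nodes."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for s, _, o in triples:
        union(s, o)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

triples = [("a", "knows", "b"), ("b", "knows", "c"), ("x", "knows", "y")]
comps = connected_components(triples)   # two components: {a,b,c} and {x,y}
```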

  14. Performance analysis of solar powered absorption refrigeration system

    NASA Astrophysics Data System (ADS)

    Abu-Ein, Suleiman Qaseem; Fayyad, Sayel M.; Momani, Waleed; Al-Bousoul, Mamdouh

    2009-12-01

    The present work provides a detailed thermodynamic analysis of a 10 kW solar absorption refrigeration system using ammonia-water mixtures as the working medium. The analysis includes both the first and second laws of thermodynamics. The coefficient of performance (COP), exergetic coefficient of performance (ECOP), and the exergy losses (ΔE) through each component of the system at different operating conditions are obtained. The minimum and maximum values of COP and ECOP were found to occur at generator temperatures of 110 and 200°C, respectively. About 40% of the system exergy losses were found to be in the generator. The maximum exergy losses in the absorber occur at a generator temperature of 130°C for all evaporator temperatures. A computer simulation model is developed to carry out the calculations and to obtain the results of the present study.
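
    The two figures of merit can be sketched from their textbook definitions. This is a generic form, not the paper's component-level exergy model; `t0` is an assumed ambient reference temperature, and temperatures are in kelvin.

```python
def cop(q_evap, q_gen, w_pump=0.0):
    """First-law coefficient of performance of an absorption chiller:
    cooling delivered per unit heat (plus pump work) supplied."""
    return q_evap / (q_gen + w_pump)

def ecop(q_evap, q_gen, t_evap, t_gen, t0=298.15):
    """Exergetic COP: exergy of the cooling effect divided by the exergy
    of the heat supplied to the generator (Carnot-factor weighting)."""
    exergy_cooling = q_evap * (t0 / t_evap - 1.0)
    exergy_input = q_gen * (1.0 - t0 / t_gen)
    return exergy_cooling / exergy_input
```

For typical absorption-cycle temperatures the ECOP comes out well below the COP, which is consistent with large second-law losses in the generator.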

  15. Performance analysis of a generalized upset detection procedure

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Masson, Gerald M.

    1987-01-01

    A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data), and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm which analyzes the data blocks can be characterized in terms of the amount of time it requires to examine a given length data block to ascertain the existence of features/conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.
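
    The three performance measures can be sketched under a simple assumed model: each captured block independently reveals an upset with probability `p_detect`, and upset-free blocks trigger a false alarm with probability `p_false`. The functions below are illustrative, not the paper's rigorous characterization.

```python
import math

def analysis_time(n, algo, c=1.0):
    """Time to examine a length-n data block for the three algorithm
    classes discussed in the text (c is an assumed unit cost)."""
    return {"linear": c * n,
            "quadratic": c * n * n,
            "logarithmic": c * math.log2(n)}[algo]

def expected_detection_latency(p_detect, block_period):
    """Geometric waiting time: blocks are captured every block_period
    until one reveals the upset."""
    return block_period / p_detect

def expected_false_alarms(n_blocks, p_false):
    """Expected number of false alarms over n_blocks upset-free blocks."""
    return n_blocks * p_false
```

A latency/false-alarm trade-off follows directly: lowering the detection threshold raises `p_detect` (shorter latency) but typically raises `p_false` as well.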

  16. Performance of multifractal detrended fluctuation analysis on short time series

    NASA Astrophysics Data System (ADS)

    López, Juan Luis; Contreras, Jesús Guillermo

    2013-02-01

    The performance of multifractal detrended fluctuation analysis on short time series is evaluated for synthetic samples of several mono- and multifractal models. The reconstruction of the generalized Hurst exponents is used to determine the range of applicability of the method and the precision of its results as a function of the decreasing length of the series. As an application, the series of the daily exchange rate between the U.S. dollar and the euro is studied.
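
    The generalized Hurst exponent h(q) that the study reconstructs can be estimated with a minimal MFDFA pass: build the profile, detrend it segment-wise with a local polynomial, average the residual fluctuations with exponent q, and read h(q) off a log-log slope. This is a compact sketch of the standard algorithm, not the authors' code.

```python
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    """Minimal MFDFA: returns {q: h(q)}, the log-log slope of F_q(s) vs s."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # profile
    F = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        t = np.arange(s)
        rms = np.empty(n_seg)
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            coef = np.polyfit(t, seg, order)         # local polynomial trend
            rms[v] = np.mean((seg - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(q_list):
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms)))
            else:
                F[i, j] = np.mean(rms ** (q / 2.0)) ** (1.0 / q)
    return {q: np.polyfit(np.log(scales), np.log(F[i]), 1)[0]
            for i, q in enumerate(q_list)}
```

For uncorrelated white noise the classical result is h(2) ≈ 0.5; for short series, as the paper studies, the scatter around that value grows.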

  17. Analysis of guidance law performance using personal computers

    NASA Technical Reports Server (NTRS)

    Barrios, J. Rene

    1990-01-01

    A point mass, three-degree of freedom model is presented as a basic development tool for PC based simulation models. The model has been used in the development of guidance algorithms as well as in other applications such as performance management systems to compute optimal speeds. Its limitations and advantages are discussed with regard to the windshear environment. A method for simulating a simple autopilot is explained in detail and applied in the analysis of different guidance laws.
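
    A common flat-earth, longitudinal form of a point-mass model can be sketched as below. This is a generic textbook formulation under simplifying assumptions (no wind, forces supplied directly), not the specific model or guidance laws of the paper.

```python
import math

def step(state, thrust, lift, drag, mass, dt=0.01, g=9.81):
    """One Euler step of point-mass longitudinal equations of motion.
    state = (x, h, V, gamma): range, altitude, airspeed, flight-path angle."""
    x, h, V, gamma = state
    v_dot = (thrust - drag) / mass - g * math.sin(gamma)
    gamma_dot = lift / (mass * V) - g * math.cos(gamma) / V
    return (x + V * math.cos(gamma) * dt,
            h + V * math.sin(gamma) * dt,
            V + v_dot * dt,
            gamma + gamma_dot * dt)
```

In trimmed level flight (lift balancing weight, thrust balancing drag) the state should hold steady except for the advancing range, which makes a convenient sanity check for a PC-based simulation loop.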

  18. Performance Demonstration Program Plan for Analysis of Simulated Headspace Gases

    SciTech Connect

    Carlsbad Field Office

    2007-11-19

    The Performance Demonstration Program (PDP) for headspace gases distributes blind audit samples in a gas matrix for analysis of volatile organic compounds (VOCs). Participating measurement facilities (i.e., fixed laboratories, mobile analysis systems, and on-line analytical systems) are located across the United States. Each sample distribution is termed a PDP cycle. These evaluation cycles provide an objective measure of the reliability of measurements performed for transuranic (TRU) waste characterization. The primary documents governing the conduct of the PDP are the Quality Assurance Program Document (QAPD) (DOE/CBFO-94-1012) and the Waste Isolation Pilot Plant (WIPP) Waste Analysis Plan (WAP) contained in the Hazardous Waste Facility Permit (NM4890139088-TSDF) issued by the New Mexico Environment Department (NMED). The WAP requires participation in the PDP; the PDP must comply with the QAPD and the WAP. This plan implements the general requirements of the QAPD and the applicable requirements of the WAP for the Headspace Gas (HSG) PDP. Participating measurement facilities analyze blind audit samples of simulated TRU waste package headspace gases according to the criteria set by this PDP Plan. Blind audit samples (hereafter referred to as PDP samples) are used as an independent means to assess each measurement facility’s compliance with the WAP quality assurance objectives (QAOs). To the extent possible, the concentrations of VOC analytes in the PDP samples encompass the range of concentrations anticipated in actual TRU waste package headspace gas samples. Analyses of headspace gases are required by the WIPP to demonstrate compliance with regulatory requirements. These analyses must be performed by measurement facilities that have demonstrated acceptable performance in this PDP. These analyses are referred to as WIPP analyses and the TRU waste package headspace gas samples on which they are performed are referred to as WIPP samples in this document.


  19. Performance Demonstration Program Plan for Analysis of Simulated Headspace Gases

    SciTech Connect

    Carlsbad Field Office

    2007-11-13

    The Performance Demonstration Program (PDP) for headspace gases distributes blind audit samples in a gas matrix for analysis of volatile organic compounds (VOCs). Participating measurement facilities (i.e., fixed laboratories, mobile analysis systems, and on-line analytical systems) are located across the United States. Each sample distribution is termed a PDP cycle. These evaluation cycles provide an objective measure of the reliability of measurements performed for transuranic (TRU) waste characterization. The primary documents governing the conduct of the PDP are the Quality Assurance Program Document (QAPD) (DOE/CBFO-94-1012) and the Waste Isolation Pilot Plant (WIPP) Waste Analysis Plan (WAP) contained in the Hazardous Waste Facility Permit (NM4890139088-TSDF) issued by the New Mexico Environment Department (NMED). The WAP requires participation in the PDP; the PDP must comply with the QAPD and the WAP. This plan implements the general requirements of the QAPD and the applicable requirements of the WAP for the Headspace Gas (HSG) PDP. Participating measurement facilities analyze blind audit samples of simulated TRU waste package headspace gases according to the criteria set by this PDP Plan. Blind audit samples (hereafter referred to as PDP samples) are used as an independent means to assess each measurement facility’s compliance with the WAP quality assurance objectives (QAOs). To the extent possible, the concentrations of VOC analytes in the PDP samples encompass the range of concentrations anticipated in actual TRU waste package headspace gas samples. Analyses of headspace gases are required by the WIPP to demonstrate compliance with regulatory requirements. These analyses must be performed by measurement facilities that have demonstrated acceptable performance in this PDP. These analyses are referred to as WIPP analyses and the TRU waste package headspace gas samples on which they are performed are referred to as WIPP samples in this document.

  20. Performance Demonstration Program Plan for Analysis of Simulated Headspace Gases

    SciTech Connect

    Carlsbad Field Office

    2006-04-01

    The Performance Demonstration Program (PDP) for headspace gases distributes sample gases of volatile organic compounds (VOCs) for analysis. Participating measurement facilities (i.e., fixed laboratories, mobile analysis systems, and on-line analytical systems) are located across the United States. Each sample distribution is termed a PDP cycle. These evaluation cycles provide an objective measure of the reliability of measurements performed for transuranic (TRU) waste characterization. The primary documents governing the conduct of the PDP are the Quality Assurance Program Document (QAPD) (DOE/CBFO-94-1012) and the Waste Isolation Pilot Plant (WIPP) Waste Analysis Plan (WAP) contained in the Hazardous Waste Facility Permit (NM4890139088-TSDF) issued by the New Mexico Environment Department (NMED). The WAP requires participation in the PDP; the PDP must comply with the QAPD and the WAP. This plan implements the general requirements of the QAPD and the applicable requirements of the WAP for the Headspace Gas (HSG) PDP. Participating measurement facilities analyze blind audit samples of simulated TRU waste package headspace gases according to the criteria set by this PDP Plan. Blind audit samples (hereafter referred to as PDP samples) are used as an independent means to assess each measurement facility’s compliance with the WAP quality assurance objectives (QAOs). To the extent possible, the concentrations of VOC analytes in the PDP samples encompass the range of concentrations anticipated in actual TRU waste package headspace gas samples. Analyses of headspace gases are required by the WIPP to demonstrate compliance with regulatory requirements. These analyses must be performed by measurement facilities that have demonstrated acceptable performance in this PDP. These analyses are referred to as WIPP analyses and the TRU waste package headspace gas samples on which they are performed are referred to as WIPP samples in this document. Participating measurement

  1. Performance analysis of wireless sensor networks in geophysical sensing applications

    NASA Astrophysics Data System (ADS)

    Uligere Narasimhamurthy, Adithya

    Performance is an important criterion to consider before switching from a wired network to a wireless sensing network. Performance is especially important in geophysical sensing, where the quality of the sensing system is measured by the precision of the acquired signal. Can a wireless sensing network maintain the same reliability and quality metrics that a wired system provides? Our work focuses on evaluating the wireless GeoMote sensor motes that were developed by previous computer science graduate students at Mines. Specifically, we conducted a set of experiments, namely WalkAway and Linear Array experiments, to characterize the performance of the wireless motes. The motes were also equipped with the Sticking Heartbeat Aperture Resynchronization Protocol (SHARP), a time synchronization protocol developed by a previous computer science graduate student at Mines. This protocol should automatically synchronize the motes' internal clocks and reduce time synchronization errors. We also collected passive data to evaluate the response of GeoMotes to various frequency components associated with the seismic waves. With the data collected from these experiments, we evaluated the performance of the SHARP protocol and compared the performance of our GeoMote wireless system against the industry standard wired seismograph system (Geometrics Geode). Using arrival time analysis and seismic velocity calculations, we set out to answer the following question: Can our wireless sensing system (GeoMotes) perform similarly to a traditional wired system in a realistic scenario?
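
    The arrival-time analysis mentioned above reduces, in its simplest form, to fitting a line to first-arrival picks against source-receiver offset: the slope is the apparent seismic velocity, and any constant trigger or synchronization offset falls into the intercept. This is a generic sketch of that calculation, not the thesis' processing chain.

```python
import numpy as np

def apparent_velocity(offsets_m, picks_s):
    """Least-squares apparent velocity (m/s) from first-arrival picks:
    slope of offset vs. arrival time; the intercept absorbs any fixed
    trigger/clock offset common to all channels."""
    slope, _intercept = np.polyfit(picks_s, offsets_m, 1)
    return slope
```

Comparing velocities recovered from the wireless picks and the wired reference picks gives one quantitative measure of whether mote clock errors matter for the survey.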

  2. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique filed environments.

  3. Performance Analysis of Rotary Dehumidifier/Humidifier and Systems

    NASA Astrophysics Data System (ADS)

    Hamamoto, Yoshinori; Okajima, Jiro; Matsuoka, Fumio; Akisawa, Atsushi; Kashiwagi, Takao

    The study aims at clarifying the performance of desiccant rotor, and at obtaining the design aspects of high efficient desiccant rotor and systems. In the paper, theoretical analysis is performed for rotary dehumidifier and humidifier. The validity of the model is confirmed by comparison between experimental data and calculation. The influences of several complex factors such as adsorption/desorption time ratio, air flow path patterns and air conditions on rotor performance are examined. It is clarified that there is an optimum angle of adsorption, desorption and purge zone becoming a maximum amount of humidifying And also, there is an optimum desorption side air flow rate. It is confirmed that air flow rate and air temperature influence significantly the amount of dehumidifing and humidifing. Furthermore, it is suggested that heat transfer enhancement of the rotor is efficient in the mass transfer enhancement at the beginning of desorption process.

  4. SIMS analysis of high-performance accelerator niobium

    SciTech Connect

    Maheshwari, P.; Stevie, F. A.; Myneni, Ganapati Rao; Rigsbee, J. M.; Dhakal, Pashupati; Ciovati, Gianluigi; Griffis, D. P.

    2014-11-01

    Niobium is used to fabricate superconducting radio frequency accelerator modules because of its high critical temperature, high critical magnetic field, and easy formability. Recent experiments have shown a very significant improvement in performance (over 100%) after a high-temperature bake at 1400 °C for 3 h. SIMS analysis of this material showed the oxygen profile was significantly deeper than the native oxide, with a shape that is indicative of diffusion. Positive secondary ion mass spectra showed the presence of Ti with a depth profile similar to that of O. It is suspected that Ti is associated with the performance improvement. The source of Ti contamination in the anneal furnace has been identified, and a new furnace was constructed without Ti. Initial results from the new furnace do not show the yield improvement. Further analyses should determine the relationship of Ti to cavity performance.

  5. Crew Exploration Vehicle Launch Abort Controller Performance Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Raney, David L.

    2007-01-01

    This paper covers the simulation and evaluation of a controller design for the Crew Module (CM) Launch Abort System (LAS), to measure its ability to meet the abort performance requirements. The controller used in this study is a hybrid design, including features developed by the Government and the Contractor. Testing is done using two separate 6-degree-of-freedom (DOF) computer simulation implementations of the LAS/CM throughout the ascent trajectory: 1) executing a series of abort simulations along a nominal trajectory for the nominal LAS/CM system; and 2) using a series of Monte Carlo runs with perturbed initial flight conditions and perturbed system parameters. The performance of the controller is evaluated against a set of criteria, which is based upon the current functional requirements of the LAS. Preliminary analysis indicates that the performance of the present controller meets (with the exception of a few cases) the evaluation criteria mentioned above.

  6. Dynamic performances analysis of a real vehicle driving

    NASA Astrophysics Data System (ADS)

    Abdullah, M. A.; Jamil, J. F.; Salim, M. A.

    2015-12-01

    Vehicle dynamics concerns the motion of a vehicle generated by acceleration, braking, ride, and handling activities. The dynamic behaviour is determined by the tire, gravity, and aerodynamic forces acting on the vehicle. This paper emphasizes the analysis of the dynamic performance of a real vehicle. A real driving experiment is conducted to determine the vehicle's roll, pitch, and yaw motion together with its longitudinal, lateral, and vertical accelerations. The experiment uses an accelerometer to record the vehicle's dynamic performance while it is driven on the road. The experiment starts with weighing the car to locate the center of gravity (COG), where the accelerometer is placed for data acquisition (DAQ). The COG of the vehicle is determined from its measured weights. A rural route is set for the experiment and the road conditions are recorded for the test. The dynamic performance of the vehicle depends on the road conditions and driving maneuvers. The stability of a vehicle can be assessed through this dynamic performance analysis.
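
    One common way to locate the longitudinal COG from weights, consistent with the weighing step described above, is to use the static axle loads and the wheelbase (a moment balance about the front axle). This is a generic sketch of that calculation, not the authors' procedure.

```python
def longitudinal_cog(front_load_kg, rear_load_kg, wheelbase_m):
    """Distance of the centre of gravity behind the front axle, from a
    static moment balance: x = W_rear * L / W_total."""
    total = front_load_kg + rear_load_kg
    return rear_load_kg * wheelbase_m / total
```

With 60% of the weight on the front axle of a 2.5 m wheelbase car, the COG sits 1.0 m behind the front axle, which is where the accelerometer would then be mounted.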

  7. The Vehicle Integrated Performance Analysis Experience: Reconnecting With Technical Integration

    NASA Technical Reports Server (NTRS)

    McGhee, D. S.

    2006-01-01

    Very early in the Space Launch Initiative program, a small team of engineers at MSFC proposed a process for performing system-level assessments of a launch vehicle. Aimed primarily at providing insight and making NASA a smart buyer, the Vehicle Integrated Performance Analysis (VIPA) team was created. The difference between the VIPA effort and previous integration attempts is that VIPA is a process using experienced people from various disciplines, which focuses them on a technically integrated assessment. The foundations of VIPA's process are described. The VIPA team also recognized the need to target early detailed analysis toward identifying significant systems issues. This process is driven by the T-model for technical integration. VIPA's approach to performing system-level technical integration is discussed in detail. The VIPA process significantly enhances the development and monitoring of realizable project requirements. VIPA's assessment validates the concept's stated performance, identifies significant issues either with the concept or the requirements, and then reintegrates these issues to determine impacts. This process is discussed along with a description of how it may be integrated into a program's insight and review process. The VIPA process has gained favor with both engineering and project organizations for being responsive and insightful.

  8. Quantitative analysis of the reconstruction performance of interpolants

    NASA Technical Reports Server (NTRS)

    Lansing, Donald L.; Park, Stephen K.

    1987-01-01

    The analysis presented provides a quantitative measure of the reconstruction or interpolation performance of linear, shift-invariant interpolants. The performance criterion is the mean square error of the difference between the sampled and reconstructed functions. The analysis is applicable to reconstruction algorithms used in image processing and to many types of splines used in numerical analysis and computer graphics. When formulated in the frequency domain, the mean square error clearly separates the contribution of the interpolation method from the contribution of the sampled data. The equations provide a rational basis for selecting an optimal interpolant; that is, one which minimizes the mean square error. The analysis has been applied to a selection of frequently used data splines and reconstruction algorithms: parametric cubic and quintic Hermite splines, exponential and nu splines (including the special case of the cubic spline), parametric cubic convolution, Keys' fourth-order cubic, and a cubic with a discontinuous first derivative. The emphasis in this paper is on the image-dependent case in which no a priori knowledge of the frequency spectrum of the sampled function is assumed.
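
    The paper formulates the mean square error criterion in the frequency domain; as a purely numerical, time-domain illustration of the same quantity (an assumption, not the paper's derivation), one can sample a function, reconstruct it with an interpolant, and measure the residual.

```python
import numpy as np

def reconstruction_mse(f, t_dense, t_samples):
    """Mean square error between f and its sampled-then-reconstructed
    version, here using piecewise-linear interpolation as the interpolant."""
    recon = np.interp(t_dense, t_samples, f(t_samples))
    return float(np.mean((f(t_dense) - recon) ** 2))
```

As the analysis predicts, the error separates into interpolant and data contributions: a linear interpolant reconstructs a linear signal exactly, while for a band-limited signal like a sinusoid the error shrinks as the sampling rate rises.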

  9. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
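
    The "mean-based, second-moment method" that the AMV method improves upon can be sketched concretely: expand the implicit performance function to first order about the means and propagate the input standard deviations. The sketch below is that baseline only (the AMV distribution-building step is not reproduced); the finite-difference gradient stands in for whatever the finite element solver would provide.

```python
def fosm_moments(g, mu, sigma, eps=1e-6):
    """First-order second-moment estimate of the mean and standard deviation
    of g(X), for independent inputs with means mu and std devs sigma.
    g is treated as a black box (e.g., a finite element response)."""
    g0 = g(mu)
    var = 0.0
    for i, s in enumerate(sigma):
        x = list(mu)
        x[i] += eps
        grad_i = (g(x) - g0) / eps   # one-sided finite difference
        var += (grad_i * s) ** 2
    return g0, var ** 0.5
```

For a linear performance function the result is exact, which is precisely why the method breaks down for the non-monotonic functions the paper highlights.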

  10. Reproducible LTE uplink performance analysis using precomputed interference signals

    NASA Astrophysics Data System (ADS)

    Pauli, Volker; Nisar, Muhammad Danish; Seidel, Eiko

    2011-12-01

    The consideration of realistic uplink inter-cell interference is essential for the overall performance testing of future cellular systems, and in particular for the evaluation of the radio resource management (RRM) algorithms. Most beyond-3G communication systems employ orthogonal multiple access in uplink (SC-FDMA in LTE and OFDMA in WiMAX), and additionally rely on frequency-selective RRM (scheduling) algorithms. This makes the task of accurate modeling of uplink interference both crucial and non-trivial. Traditional methods for its modeling (e.g., via additive white Gaussian noise interference sources) are therefore proving to be ineffective to realistically model the uplink interference in the next generation cellular systems. In this article, we propose the use of realistic precomputed interference patterns for LTE uplink performance analysis and testing. The interference patterns are generated via an LTE system-level simulator for a given set of scenario parameters, such as cell configuration, user configurations, and traffic models. The generated interference patterns (some of which are made publicly available) can be employed to benchmark the performance of any LTE uplink system in both lab simulations and field trials for practical deployments. It is worth mentioning that the proposed approach can also be extended to other cellular communication systems employing OFDMA-like multiple access with frequency-selective RRM techniques. The proposed approach offers twofold advantages. First, it allows for repeatability and reproducibility of the performance analysis. This is of crucial significance not only for researchers and developers to analyze the behavior and performance of their systems, but also for the network operators to compare the performance of competing system vendors. 
Second, the proposed testing mechanism evades the need for deployment of multiple cells (with multiple active users in each) to achieve realistic field trials, thereby resulting in

  11. Clinical laboratory as an economic model for business performance analysis

    PubMed Central

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of the clinical laboratory. Results Results of simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, the operating profit could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. Conclusion The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
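
    The profit-and-loss simulation described above boils down to recomputing operating profit under fractional changes to revenue and expenses. The sketch below uses hypothetical round numbers, not the laboratory's actual figures.

```python
def operating_profit(revenue, expenses):
    """Operating profit as defined in the study: total revenue minus total expenses."""
    return revenue - expenses

def scenario(base_revenue, base_expenses, revenue_change, expense_change):
    """Operating profit after applying fractional changes (e.g. -0.10 for a
    10% revenue drop) to the baseline profit and loss account."""
    return operating_profit(base_revenue * (1.0 + revenue_change),
                            base_expenses * (1.0 + expense_change))
```

Sweeping `revenue_change` and `expense_change` over the ranges implied by the SWOT items reproduces the study's sensitivity analysis: threats shrink the profit, opportunities grow it.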

  12. Cross-industry Performance Modeling: Toward Cooperative Analysis

    SciTech Connect

    Reece, Wendy Jane; Blackman, Harold Stabler

    1998-10-01

    One of the current unsolved problems in human factors is the difficulty in acquiring information from lessons learned and data collected among human performance analysts in different domains. There are several common concerns and generally accepted issues of importance for human factors, psychology and industry analysts of performance and safety. Among these are the need to incorporate lessons learned in design, to carefully consider implementation of new designs and automation, and the need to reduce human performance-based contributions to risk. In spite of shared concerns, there are several roadblocks to widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible data bases exist (Gertman & Blackman, 1994, and Kirwan, 1997). There is a need to draw together analysts and analytic methodologies to comprise a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.

  13. Cross-Industry Performance Modeling: Toward Cooperative Analysis

    SciTech Connect

    H. S. Blackman; W. J. Reece

    1998-10-01

    One of the current unsolved problems in human factors is the difficulty in acquiring information from lessons learned and data collected among human performance analysts in different domains. There are several common concerns and generally accepted issues of importance for human factors, psychology and industry analysts of performance and safety. Among these are the need to incorporate lessons learned in design, to carefully consider implementation of new designs and automation, and the need to reduce human performance-based contributions to risk. In spite of shared concerns, there are several roadblocks to widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible data bases exist (Gertman & Blackman, 1994, and Kirwan, 1997). There is a need to draw together analysts and analytic methodologies to comprise a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.

  14. Analysis of latency performance of bluetooth low energy (BLE) networks.

    PubMed

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2015-01-01

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings introduces substantial potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes. PMID:25545266
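
    The continuous vs. discontinuous scanning distinction can be illustrated with a toy simulation. This is a deliberately simplified single-channel model (real BLE advertises on three channels, which is exactly what makes the paper's analysis non-trivial); the 0-10 ms pseudo-random advDelay added to each advertising interval follows the Bluetooth specification.

```python
import random

def discovery_latency(adv_interval_ms, scan_interval_ms, scan_window_ms, rng):
    """Toy single-channel model: discovery occurs when an advertising packet
    lands inside the scanner's listening window (first scan_window_ms of each
    scan_interval_ms). Returns the discovery latency in ms."""
    t = rng.uniform(0.0, adv_interval_ms)   # random initial phase
    while True:
        if t % scan_interval_ms < scan_window_ms:
            return t
        t += adv_interval_ms + rng.uniform(0.0, 10.0)   # advDelay, per spec
```

With continuous scanning (`scan_window == scan_interval`) the very first advertising packet is heard, so latency is bounded by the advertising interval; shrinking the window stretches the latency tail, which is the sensitivity the paper quantifies.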

  15. Performance analysis of image fusion methods in transform domain

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each. There are various image fusion methods and they can be classified into three main categories: i) Spatial domain, ii) Transform domain, and iii) Statistical domain. We focus on the transform domain in this paper as spatial domain methods are primitive and statistical domain methods suffer from a significant increase of computational complexity. In the field of image fusion, performance analysis is important since the evaluation result gives valuable information which can be utilized in various applications, such as military, medical imaging, remote sensing, and so on. In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. Fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, various performance evaluation metrics are adopted to quantitatively analyze the fusion results. The comparison results show that the nonsubsampled contourlet transform method performs better than the other three methods. During the experiments, we also found that a decomposition level of 3 gave the best fusion performance, and decomposition levels beyond 3 did not significantly affect the fusion results.
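
    The transform-domain fusion scheme can be sketched at its simplest: decompose both images, average the approximation band, keep the stronger (max-absolute) detail coefficients, and invert. The sketch below uses a single-level 2-D Haar transform written out by hand; the paper's methods use deeper wavelet, curvelet, and contourlet decompositions, so this is only an illustration of the fusion rule.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform (a must have even dimensions)."""
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    ll = (lo[::2] + lo[1::2]) / 2.0   # approximation
    lh = (lo[::2] - lo[1::2]) / 2.0   # horizontal detail
    hl = (hi[::2] + hi[1::2]) / 2.0   # vertical detail
    hh = (hi[::2] - hi[1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    m, n = ll.shape
    lo = np.empty((2 * m, n)); hi = np.empty((2 * m, n))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * m, 2 * n))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(a, b):
    """Average the approximation band; keep max-absolute detail coefficients."""
    ca, cb = haar2(a), haar2(b)
    ll = (ca[0] + cb[0]) / 2.0
    detail = [np.where(np.abs(x) >= np.abs(y), x, y)
              for x, y in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *detail)
```

A basic sanity check is that fusing an image with itself returns the image; with two different inputs, the max-abs rule favors whichever source has the sharper local edges.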

  16. Analysis of Latency Performance of Bluetooth Low Energy (BLE) Networks

    PubMed Central

    Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun

    2015-01-01

    Bluetooth Low Energy (BLE) is a short-range wireless communication technology aimed at low-cost, low-power communication. The performance of classical Bluetooth device discovery has been studied intensively using analytical modeling and simulation, but these techniques are not applicable to BLE, since BLE fundamentally changes the design of the discovery mechanism, including the use of three advertising channels. Several recent works have analyzed BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings gives BLE devices considerable latitude to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper focuses on building an analytical model of the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent the parameters influence the performance of the discovery process. PMID:25545266
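
The discovery process the abstract models can be approximated with a small Monte Carlo sketch: an advertiser transmits one packet on each of the three advertising channels every advertising interval plus a random delay, while the scanner listens to one channel per scan interval for one scan window, cycling through channels 37, 38, 39. The timing constants and the single-packet-per-channel simplification are assumptions for illustration, not the paper's analytical model.

```python
import random

ADV_CHANNELS = (37, 38, 39)

def discovery_latency(rng, adv_interval=0.1, scan_interval=0.1,
                      scan_window=0.05, packet_time=0.0004, horizon=30.0):
    """Seconds until an advertising packet first lands entirely inside a
    scan window on the matching channel; `horizon` caps undiscovered runs."""
    t = rng.uniform(0, adv_interval)            # advertiser's initial phase
    offset = rng.uniform(0, 3 * scan_interval)  # scanner's initial phase
    while t < horizon:
        for i, ch in enumerate(ADV_CHANNELS):
            pkt = t + i * 0.001                 # ~1 ms between the 3 packets
            slot = int((pkt + offset) // scan_interval)
            scanning = ADV_CHANNELS[slot % 3]   # scanner cycles 37->38->39
            inside = (pkt + offset) % scan_interval + packet_time <= scan_window
            if scanning == ch and inside:
                return pkt
        t += adv_interval + rng.uniform(0, 0.01)  # random advDelay in [0, 10 ms]
    return horizon

def mean_latency(n=500, **kw):
    rng = random.Random(1)
    return sum(discovery_latency(rng, **kw) for _ in range(n)) / n
```

Widening the scan window toward the scan interval (continuous scanning) drives the mean latency down, matching the qualitative sensitivity the paper quantifies analytically.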

  17. Performance analysis and visualization of electric power systems

    NASA Astrophysics Data System (ADS)

    Dong, Xuejiang; Shinozuka, Masanobu

    2003-08-01

    This paper describes a method of system performance evaluation for electric power networks. The basic element that plays a crucial role here is the fragility information for transmission system equipment. The method uses this fragility information to evaluate the system performance degradation of LADWP's (Los Angeles Department of Water and Power's) power network when damaged by a severe earthquake, by comparing its performance before and after the earthquake event. One of the highlights of this paper is the use of the computer code "PowerWorld" to visualize the state of power flow of the network, segment by segment. Similarly, the method can quantitatively evaluate the effect of various rehabilitation or retrofit measures applied to equipment and/or facilities of the network, by comparing system performance with and without the rehabilitation. In this context, the results of experimental and analytical studies carried out by other researchers are used to determine the possible range of fragility enhancement associated with the rehabilitation of transformers by means of base-isolation systems. In this analysis, 47 scenario earthquakes are used to develop risk curves for LADWP's power transmission system. The risk curve can then be correlated to the economic impact of the reduction in power supply due to the earthquake. Recovery aspects of the damaged power system will be studied from this point of view in the future.

  18. Performance-based design and analysis of flexible composite propulsors

    NASA Astrophysics Data System (ADS)

    Motley, Michael R.; Young, Yin L.

    2011-11-01

    Advanced composite propellers, turbines, and jet engines have become increasingly popular in part because of their ability to provide improved performance over traditional metallic rotors through exploitation of the intrinsic bend-twist coupling characteristics of anisotropic composite materials. While these performance improvements can be significant from a conceptual perspective, the load-dependent deformation responses of adaptive blades make the design of these structures highly non-trivial. Hence, it is necessary to understand and predict the dependence of the deformations on the geometry, material constitution, and fluid-structure interaction responses across the entire range of expected loading conditions. The objective of this work is to develop a probabilistic performance-based design and analysis methodology for flexible composite propulsors. To demonstrate the method, it is applied to the design and analysis of two propellers, one (rigid) metallic and one (flexible) composite, for a twin-shafted naval combatant craft. The probabilistic operational space is developed by considering the variation of vessel thrust requirements as a function of vessel speed and wave conditions, along with probabilistic speed profiles. The performance of the metallic and composite propellers is compared and discussed. The implications of the load-dependent deformations of the flexible composite propeller on the operating conditions and the resulting performance with respect to propeller efficiency, power demand, and fluid cavitation are presented for both spatially uniform and varying flows. While the proposed framework is demonstrated for marine propellers, the methodology can be applied generally to any marine, aerospace, or wind energy structure that must operate in a wide range of loading conditions over its expected life.

  19. Results of a 24-inch Hybrid Motor Performance Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Sims, Joseph D.; Coleman, Hugh W.

    1998-01-01

    The subscale (11- and 24-inch) hybrid motors at the Marshall Space Flight Center (MSFC) have been used as versatile and cost-effective testbeds for developing new technology. Comparisons between motor configurations, ignition systems, feed systems, fuel formulations, and nozzle materials have been carried out without detailed consideration of how "good" the motor performance data were. For the 250,000-lb thrust motor developed by the Hybrid Propulsion Demonstration Program consortium, this shortcoming is particularly risky because motor performance will likely be used as part of a set of downselect criteria to choose between competing ignition and feed systems under development. This analysis directly addresses that shortcoming by applying uncertainty analysis techniques to the experimental determination of the characteristic velocity, theoretical characteristic velocity, and characteristic velocity efficiency for a 24-inch motor firing. With the adoption of fuel-lined headends, flow restrictors, and aft mixing chambers, state-of-the-art 24-inch hybrid motors have become very efficient. However, impossibly high combustion efficiencies (some computed as high as 108%) have been measured in some tests with 11-inch motors. This analysis has given new insight into explaining how these efficiencies were measured to be so high, and into which experimental measurements contribute the most to the overall uncertainty.

  20. Using SWE Standards for Ubiquitous Environmental Sensing: A Performance Analysis

    PubMed Central

    Tamayo, Alain; Granell, Carlos; Huerta, Joaquín

    2012-01-01

    Although smartphone applications represent the most typical data consumer tool from the citizen perspective in environmental applications, they can also be used for in-situ data collection and production in varied scenarios, such as the geological sciences and biodiversity. The use of standard protocols, such as SWE, to exchange information between smartphones and sensor infrastructures brings benefits such as interoperability and scalability, but their reliance on XML is a potential problem when large volumes of data are transferred, due to the limited bandwidth and processing capabilities of mobile phones. In this article we present a performance analysis of the use of SWE standards in smartphone applications to consume and produce environmental sensor data, analysing to what extent the performance problems related to XML can be alleviated by using alternative uncompressed and compressed formats.
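
The bandwidth concern raised above is easy to see concretely: XML observation payloads are highly redundant, so even generic DEFLATE compression removes most of their bulk. The record below is a hypothetical O&M-style observation assumed only for illustration; the paper evaluates real SWE payloads and several alternative formats.

```python
import zlib

# Hypothetical O&M-style observation, repeated to mimic a batch upload
record = (
    '<om:OM_Observation xmlns:om="http://www.opengis.net/om/2.0">'
    '<om:phenomenonTime>2012-01-01T00:00:00Z</om:phenomenonTime>'
    '<om:result uom="Cel">21.5</om:result>'
    '</om:OM_Observation>'
)
xml_payload = ('<batch>' + record * 200 + '</batch>').encode()

compressed = zlib.compress(xml_payload, 9)   # DEFLATE, max compression
ratio = len(compressed) / len(xml_payload)
```

On repetitive payloads like this the compressed size is a small fraction of the original, which is why compressed XML can narrow much of the gap to binary formats, at the cost of extra CPU on the handset.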

  1. Fluid and thermal performance analysis of PMSM used for driving

    NASA Astrophysics Data System (ADS)

    Ding, Shuye; Cui, Guanghui; Li, Zhongyu; Guan, Tianyu

    2016-03-01

    The permanent magnet synchronous motor (PMSM) is widely used in ships under frequency conversion control. The fluid flow performance and temperature distribution of the PMSM are difficult to clarify due to its complex structure and variable frequency control condition. Therefore, in order to investigate the fluid and thermal characteristics of the PMSM, a 50 kW PMSM was taken as an example in this study, and a 3-D coupled fluid-thermal analysis model was established. The fluid and temperature fields were calculated using the finite volume method. The cooling medium's properties, such as velocity, streamlines, and temperature, were then analyzed. The correctness of the proposed model and the rationality of the solution method were verified by a temperature test of the PMSM. In this study, the influence of the cooling medium's changing rheology on its performance and on the working temperature of the PMSM was revealed, which could be helpful for designing the PMSM.

  2. Performance analysis of fractional order extremum seeking control.

    PubMed

    Malek, Hadi; Dadras, Sara; Chen, YangQuan

    2016-07-01

    Extremum seeking is a powerful adaptive technique to optimize steady-state system performance. In this paper, a novel extremum-seeking scheme for the optimization of nonlinear plants using fractional order calculus is proposed. The fractional order extremum-seeking algorithm only utilizes output measurements of the plant, yet it performs better in many respects, such as convergence speed and robustness. A detailed stability analysis is given to not only guarantee faster convergence of the system to an adjustable neighborhood of the optimum but also confirm better robustness of the proposed algorithm. Furthermore, simulation and experimental results demonstrate that the fractional order extremum-seeking scheme for nonlinear systems outperforms the traditional integer order one. PMID:27000632
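
For contrast with the paper's fractional-order scheme, the classical integer-order extremum-seeking loop can be sketched in a few lines: perturb the input sinusoidally, high-pass the measured output, demodulate, and integrate. The gains, frequencies, and the quadratic plant below are illustrative assumptions, not the paper's parameters.

```python
import math

def extremum_seek(f, theta0=0.0, a=0.1, omega=5.0, k=1.0,
                  omega_h=1.0, dt=0.001, T=60.0):
    """Classical (integer-order) extremum seeking on a static plant f."""
    theta, eta = theta0, f(theta0)
    for i in range(int(T / dt)):
        t = i * dt
        y = f(theta + a * math.sin(omega * t))   # perturbed measurement
        eta += dt * omega_h * (y - eta)          # low-pass; (y - eta) is the high-passed output
        xi = (y - eta) * math.sin(omega * t)     # demodulated gradient estimate
        theta += dt * k * xi                     # gradient ascent on f
    return theta
```

On f(u) = 4 - (u - 2)^2 the loop settles near the maximizer u* = 2; the fractional-order variant studied in the paper replaces the integer-order integrator with a fractional-order one to trade off convergence speed and robustness.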

  3. Performance analysis of a laser propelled interorbital transfer vehicle

    NASA Technical Reports Server (NTRS)

    Minovitch, M. A.

    1976-01-01

    The performance capabilities of a laser-propelled interorbital transfer vehicle receiving propulsive power from one ground-based transmitter were investigated. The laser transmits propulsive energy to the vehicle during successive station fly-overs. By applying a series of these propulsive maneuvers, large payloads can be economically transferred between low earth orbits and synchronous orbits. Operations involving the injection of large payloads onto escape trajectories are also studied. The duration of each successive engine burn must be carefully timed so that the vehicle reappears over the laser station to receive additional propulsive power within the shortest possible time. The analytical solution for determining these time intervals is presented, as is a solution to the problem of determining maximum injection payloads. A parametric computer analysis based on these optimization studies is presented. The results show that relatively low beam powers, on the order of 50 MW to 60 MW, produce significant performance capabilities.

  4. An analysis of calendar performance in two autistic calendar savants.

    PubMed

    Kennedy, Daniel P; Squire, Larry R

    2007-08-01

    We acquired large data sets of calendar performance from two autistic calendar savants, DG and RN. An analysis of their errors and reaction times revealed that (1) both individuals had knowledge of calendar information from a limited range of years; (2) there was no evidence for the use of memorized anchor dates that could, by virtue of counting away from the anchors, allow correct responses to questions about other dates; and (3) the two individuals differed in their calendar knowledge, as well as in their ability to perform secondary tasks in which calendar knowledge was assessed indirectly. In view of the fact that there are only 14 possible annual calendars, we suggest that both savants worked by memorizing these 14 possible calendar arrangements. PMID:17686947
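
The closing observation, that there are only 14 possible annual calendars, is easy to verify: a Gregorian year's layout is fully determined by the weekday of 1 January (7 choices) and whether the year is a leap year (2 choices). A minimal sketch:

```python
import datetime

def calendar_type(year):
    """The pair that fixes a year's calendar: weekday of 1 Jan, leap flag."""
    jan1_weekday = datetime.date(year, 1, 1).weekday()  # Monday == 0
    is_leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    return (jan1_weekday, is_leap)

# Two full 400-year Gregorian cycles cover every possible combination.
distinct = {calendar_type(y) for y in range(1600, 2400)}
```

A savant who has memorized the 14 layouts needs only a year-to-layout mapping to answer day-of-week questions, which is one way to reconcile the authors' finding of a limited range of known years with the absence of anchor-date counting.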

  5. Computational analysis of Variable Thrust Engine (VTE) performance

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.

    1993-01-01

    The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.

  6. Performance analysis of coherent wireless optical communications with atmospheric turbulence.

    PubMed

    Niu, Mingbo; Song, Xuegui; Cheng, Julian; Holzman, Jonathan F

    2012-03-12

    Coherent wireless optical communication systems with heterodyne detection are analyzed for binary phase-shift keying (BPSK), differential PSK (DPSK), and M-ary PSK over Gamma-Gamma turbulence channels. Closed-form error rate expressions are derived using a series expansion approach. It is shown that, in the special case of K-distributed turbulence channel, the DPSK incurs a 3 dB signal-to-noise ratio (SNR) penalty compared to BPSK in the large SNR regime. The outage probability is also obtained, and a detailed outage truncation error analysis is presented and used to assess the accuracy in system performance estimation. It is shown that our series error rate expressions are simple to use and highly accurate for practical system performance estimation. PMID:22418534
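
For orientation, the textbook AWGN error rates behind the modulation formats compared above can be sketched as follows. These are not the paper's Gamma-Gamma closed forms; the 3 dB DPSK penalty cited in the abstract is specific to the K-distributed turbulence channel, whereas these are the classical no-turbulence baselines.

```python
import math

def ber_bpsk(snr):
    """Coherent BPSK in AWGN: Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr))."""
    return 0.5 * math.erfc(math.sqrt(snr))

def ber_dpsk(snr):
    """Differentially detected binary DPSK in AWGN."""
    return 0.5 * math.exp(-snr)
```

DPSK trades a modest AWGN penalty for a receiver that needs no carrier phase recovery; per the abstract, that penalty widens to 3 dB in the K-distributed channel at large SNR.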

  7. Performance analysis of charge plasma based dual electrode tunnel FET

    NASA Astrophysics Data System (ADS)

    Anand, Sunny; Intekhab Amin, S.; Sarin, R. K.

    2016-05-01

    This paper proposes a charge plasma based dual electrode doping-less tunnel FET (DEDLTFET) and compares its performance with the conventional doping-less TFET (DLTFET) and doped TFET (DGTFET). DEDLTFET gives superior results, with a high ON-state current (ION ∼ 0.56 mA/μm), an ION/IOFF ratio of ∼9.12 × 10^13, and an average subthreshold swing (AV-SS ∼ 48 mV/dec). Variations of different device parameters, such as channel length, gate oxide material, gate oxide thickness, silicon thickness, gate work function, and temperature, are studied and compared against DLTFET and DGTFET. This extensive analysis shows that DEDLTFET performs better than the other two devices, which points to an excellent future in low power applications.

  8. Performance analysis of two high actuator count MEMS deformable mirrors

    NASA Astrophysics Data System (ADS)

    Ryan, Peter J.; Cornelissen, Steven A.; Lam, Charlie V.; Bierden, Paul A.

    2013-03-01

    Two new MEMS deformable mirrors have been designed and fabricated: one having a continuous facesheet with an active aperture of 20 mm and 2040 actuators, and the other a similarly sized segmented tip-tilt-piston DM containing 1021 elements and 3063 actuators. The surface figures, electromechanical performance, and actuator yield of these devices are reported here with statistical information. The statistical distributions of these measurements directly illustrate the surface variance of Boston Micromachines deformable mirrors. Measurements of the surface figure were also performed with the elements at different actuation states. Also presented are deviations of the surface figure under actuation versus at rest, the electromechanical distribution, and a dynamic analysis.

  9. Commissioning and Performance Analysis of WhisperGen Stirling Engine

    NASA Astrophysics Data System (ADS)

    Pradip, Prashant Kaliram

    Stirling engine based cogeneration systems have the potential to reduce energy consumption and greenhouse gas emissions, owing to their high cogeneration efficiency and the emission control afforded by steady external combustion. To date, most studies of this unit have focused on performance based on both experimentation and computer models, and lack experimental data for diversified operating ranges. This thesis starts with the commissioning of a WhisperGen Stirling engine with components and instrumentation to evaluate the power and thermal performance of the system. Next, a parametric study of primary engine variables, including air, diesel, and coolant flow rates and temperatures, was carried out to further understand their effect on engine power and efficiency. This trend was then validated with the thermodynamic model developed for the energy analysis of a Stirling cycle. Finally, the energy balance of the Stirling engine was compared without and with heat recovery from the engine block and the combustion chamber exhaust.

  10. Performance analysis of a SOFC under direct internal reforming conditions

    NASA Astrophysics Data System (ADS)

    Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf

    This paper presents the performance analysis of a planar solid-oxide fuel cell (SOFC) under direct internal reforming conditions. A detailed solid-oxide fuel cell model is used to study the influences of various operating parameters on cell performance. Significant differences in efficiency and power density are observed for isothermal and adiabatic operational regimes. The influence of air number, specific catalyst area, anode thickness, steam to carbon (s/c) ratio of the inlet fuel, and extent of pre-reforming on cell performance is analyzed. In all cases except that of pre-reformed fuel, adiabatic operation results in lower performance compared to isothermal operation. It is further discussed that, though direct internal reforming may lead to cost reduction and increased efficiency by effective utilization of waste heat, the efficiency of the fuel cell itself is higher for pre-reformed fuel than for non-reformed fuel. Furthermore, criteria for the choice of optimal operating conditions for cell stacks operating under direct internal reforming conditions are discussed.

  11. [An analysis of maicaodi by high performance liquid chromatography].

    PubMed

    Yang, H; Chen, R; Jiang, M

    1997-05-01

    Maicaodi has recently been developed and produced by the pesticide plant of Nanjing Agricultural University. Quantitative analysis of its effective components, tribenuron methyl and R-(-)-napropamide, in the wettable powder of Maicaodi was carried out by a high performance liquid chromatographic method with a Lichrosorb Si-60 20 cm × 0.46 cm i.d. column, a mobile phase of petroleum ether/isopropanol/methanol/acetonitrile/chloroform (80:5:5:5:5), and diisooctyl phthalate as internal standard. The sample was detected by ultraviolet absorption at 254 nm. The retention times of tribenuron methyl and R-(-)-napropamide were 10-11 min and 6-7 min, respectively. The coefficient of variation of this analysis was 0.34%, with a recovery of 99.51%-100.32%. The coefficient of linear correlation was 0.9999. PMID:15739379

  12. Space mission scenario development and performance analysis tool

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Baker, John; Gilbert, John; Hanks, David

    2004-01-01

    This paper discusses a new and innovative approach for a rapid spacecraft multi-disciplinary performance analysis using a tool called the Mission Scenario Development Workbench (MSDW). To meet the needs of new classes of space missions, analysis tools with proven models were developed and integrated into a framework to enable rapid trades and analyses between spacecraft designs and operational scenarios during the formulation phase of a mission. Generally speaking, spacecraft resources are highly constrained on deep space missions and this approach makes it possible to maximize the use of existing resources to attain the best possible science return. This approach also has the potential benefit of reducing the risk of costly design changes made later in the design cycle necessary to meet the mission requirements by understanding system design sensitivities early and adding appropriate margins. This paper will describe the approach used by the Mars Science Laboratory Project to accomplish this result.

  13. Removing Grit During Wastewater Treatment: CFD Analysis of HDVS Performance.

    PubMed

    Meroney, Robert N; Sheker, Robert E

    2016-05-01

    Computational Fluid Dynamics (CFD) was used to simulate the grit and sand separation effectiveness of a typical hydrodynamic vortex separator (HDVS) system. The analysis examined the influences on the separator efficiency of: flow rate, fluid viscosities, total suspended solids (TSS), and particle size and distribution. It was found that separator efficiency for a wide range of these independent variables could be consolidated into a few curves based on the particle fall velocity to separator inflow velocity ratio, Ws/Vin. Based on CFD analysis it was also determined that systems of different sizes with length scale ratios ranging from 1 to 10 performed similarly when Ws/Vin and TSS were held constant. The CFD results have also been compared to a limited range of experimental data. PMID:27131307
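
The consolidating parameter Ws/Vin can be illustrated with a Stokes-law estimate of the particle fall velocity. The grain properties and the inflow velocity below are assumed for illustration only, and Stokes' law strictly holds only at small particle Reynolds numbers.

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes-law fall velocity (m/s) for a sphere of diameter d (m):
    Ws = g * d^2 * (rho_p - rho_f) / (18 * mu).
    Defaults: quartz sand grain settling in water at ~20 C."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

ws = stokes_settling_velocity(100e-6)   # 100-micron sand grain, ~9 mm/s
vin = 0.5                               # assumed separator inflow velocity, m/s
ratio = ws / vin                        # the Ws/Vin consolidating parameter
```

Holding Ws/Vin fixed while scaling the separator geometry is what lets systems of different sizes collapse onto the same efficiency curves, as the CFD study found.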

  14. Transient analysis techniques in performing impact and crash dynamic studies

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.

    1989-01-01

    Because of the emphasis being placed on crashworthiness as a design requirement, increasing demands are being made by various organizations to analyze a wide range of complex structures that must perform safely when subjected to severe impact loads, such as those generated in a crash event. The ultimate goal of crashworthiness design and analysis is to produce vehicles with the ability to reduce the dynamic forces experienced by the occupants to specified levels, while maintaining a survivable envelope around them during a specified crash event. DYCAST is a nonlinear structural dynamic finite element computer code that grew out of the PLANS system of finite element programs for static nonlinear structural analysis. The essential features of DYCAST are outlined.

  15. Theoretical analysis of the performance of a foam fractionation column

    PubMed Central

    Tobin, S. T.; Weaire, D.; Hutzler, S.

    2014-01-01

    A model system for theory and experiment which is relevant to foam fractionation consists of a column of foam moving through an inverted U-tube between two pools of surfactant solution. The foam drainage equation is used for a detailed theoretical analysis of this process. In a previous paper, we focused on the case where the lengths of the two legs are large. In this work, we examine the approach to the limiting case (i.e. the effects of finite leg lengths) and how it affects the performance of the fractionation column. We also briefly discuss some alternative set-ups that are of interest in industry and experiment, with numerical and analytical results to support them. Our analysis is shown to be generally applicable to a range of fractionation columns. PMID:24808752

  16. A Divergence Statistics Extension to VTK for Performance Analysis.

    SciTech Connect

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-02-01

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]) in which we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order, and auto-correlative statistics engines that we developed within the Visualization Tool Kit (VTK) as a scalable, parallel, and versatile statistics package. We now report on a new engine developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
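
The quantity such an engine computes, a distance-like discrepancy between an observed empirical distribution and a theoretical one, can be illustrated with the Kullback-Leibler divergence. This is one common choice of divergence statistic, assumed here for illustration; the report itself may use a different family.

```python
import math
from collections import Counter

def kl_divergence(sample, theoretical):
    """D(P_hat || Q): discrepancy between the empirical distribution of
    `sample` and a theoretical pmf `theoretical` (dict: value -> prob).
    Zero iff the empirical frequencies match Q exactly; grows as they diverge."""
    n = len(sample)
    counts = Counter(sample)
    return sum((c / n) * math.log((c / n) / theoretical[v])
               for v, c in counts.items())

# Theoretical "ideal" distribution: a fair six-sided die.
fair = {v: 1 / 6 for v in range(1, 7)}
```

A perfectly uniform sample yields divergence 0, while a sample concentrated on one face yields log 6, the maximum discrepancy from the fair-die model.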

  17. Analysis of Random Segment Errors on Coronagraph Performance

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; N'Diaye, Mamadou; Stahl, Mark T.; Stahl, H. Philip

    2016-01-01

    At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order Sinc^2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt, and apertures with fewer segments (i.e., 1 ring) or very many segments (>16 rings) have less contrast leakage as a function of piston or tip/tilt than apertures with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  18. Performance Analysis of Visible Light Communication Using CMOS Sensors.

    PubMed

    Do, Trong-Hop; Yoo, Myungsik

    2016-01-01

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis. PMID:26938535

  19. Performance Analysis of Visible Light Communication Using CMOS Sensors

    PubMed Central

    Do, Trong-Hop; Yoo, Myungsik

    2016-01-01

    This paper elucidates the fundamentals of visible light communication systems that use the rolling shutter mechanism of CMOS sensors. All related information involving different subjects, such as photometry, camera operation, photography and image processing, are studied in tandem to explain the system. Then, the system performance is analyzed with respect to signal quality and data rate. To this end, a measure of signal quality, the signal to interference plus noise ratio (SINR), is formulated. Finally, a simulation is conducted to verify the analysis. PMID:26938535

  20. Active charge/passive discharge solar heating systems: Thermal analysis and performance comparisons

    NASA Astrophysics Data System (ADS)

    Swisher, J.

    1981-06-01

    This type of system combines liquid-cooled solar collector panels with a massive integral storage component that passively heats the building interior by radiation and free convection. The TRNSYS simulation program is used to evaluate system performance and to provide input for the development of a simplified analysis method. This method, which provides monthly calculations of delivered solar energy, is based on Klein's Phi-bar procedure and data from hourly TRNSYS simulations. The method can be applied to systems using a floor slab, a structural wall, or a water tank as the storage component. Important design parameters include collector area and orientation, building heat loss, collector and heat exchanger efficiencies, storage capacity, and storage to room coupling. Performance simulation results are used for comparisons with active and passive solar designs.

  1. Effects of specified performance criterion and performance feedback on staff behavior: a component analysis.

    PubMed

    Hardesty, Samantha L; Hagopian, Louis P; McIvor, Melissa M; Wagner, Leaora L; Sigurdsson, Sigurdur O; Bowman, Lynn G

    2014-09-01

    The present study isolated the effects of frequently used staff training intervention components to increase communication between direct care staff and clinicians working on an inpatient behavioral unit. Written "protocol review" quizzes developed by clinicians were designed to assess knowledge of a patient's behavioral protocols. Direct care staff completed these at the beginning of each day and evening shift. Clinicians were required to score and discuss these protocol reviews with direct care staff for at least 75% of shifts over a 2-week period. During baseline, only 21% of clinicians met this requirement. Completion and scoring of protocol reviews did not improve following additional in-service training (M = 15%) or following an intervention aimed at decreasing response effort combined with prompting (M = 28%). After implementing an intervention involving a specified performance criterion and performance feedback, 86% of clinicians reached the established goal. Results of a component analysis suggested that the presentation of both the specified performance criterion and supporting contingencies was necessary to maintain acceptable levels of performance. PMID:24928213

  2. Voxel model in BNCT treatment planning: performance analysis and improvements

    NASA Astrophysics Data System (ADS)

    González, Sara J.; Carando, Daniel G.; Santa Cruz, Gustavo A.; Zamenhof, Robert G.

    2005-02-01

    In recent years, many efforts have been made to study the performance of treatment planning systems in deriving an accurate dosimetry of the complex radiation fields involved in boron neutron capture therapy (BNCT). The computational model of the patient's anatomy is one of the main factors involved in this subject. This work presents a detailed analysis of the performance of the 1 cm based voxel reconstruction approach. First, a new and improved material assignment algorithm implemented in NCTPlan treatment planning system for BNCT is described. Based on previous works, the performances of the 1 cm based voxel methods used in the MacNCTPlan and NCTPlan treatment planning systems are compared by standard simulation tests. In addition, the NCTPlan voxel model is benchmarked against in-phantom physical dosimetry of the RA-6 reactor of Argentina. This investigation shows the 1 cm resolution to be accurate enough for all reported tests, even in the extreme cases such as a parallelepiped phantom irradiated through one of its sharp edges. This accuracy can be degraded at very shallow depths in which, to improve the estimates, the anatomy images need to be positioned in a suitable way. Rules for this positioning are presented. The skin is considered one of the organs at risk in all BNCT treatments and, in the particular case of cutaneous melanoma of extremities, limits the delivered dose to the patient. Therefore, the performance of the voxel technique is deeply analysed in these shallow regions. A theoretical analysis is carried out to assess the distortion caused by homogenization and material percentage rounding processes. Then, a new strategy for the treatment of surface voxels is proposed and tested using two different irradiation problems. For a parallelepiped phantom perpendicularly irradiated with a 5 keV neutron source, the large thermal neutron fluence deviation present at shallow depths (from 54% at 0 mm depth to 5% at 4 mm depth) is reduced to 2% on average

  3. Aerocapture Performance Analysis of A Venus Exploration Mission

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Westhelle, Carlos H.

    2005-01-01

A performance analysis of a Discovery Class Venus Exploration Mission in which aerocapture is used to capture a spacecraft into a 300 km polar orbit for a two-year science mission has been conducted to quantify its performance. A preliminary performance assessment determined that a high-heritage 70° sphere-cone rigid aeroshell with a 0.25 lift-to-drag ratio has adequate control authority to provide an entry flight path angle corridor large enough for the mission's aerocapture maneuver. A 114 kilograms per square meter ballistic coefficient reference vehicle was developed from the science requirements and the preliminary assessment's heating indicators and deceleration loads. Performance analyses were conducted for the reference vehicle and for sensitivity studies on vehicle ballistic coefficient and maximum bank rate. The performance analyses used a high fidelity flight simulation within a Monte Carlo executive to define the aerocapture heating environment and deceleration loads and to determine mission success statistics. The simulation utilized the Program to Optimize Simulated Trajectories (POST), modified to include Venus specific atmospheric and planet models, aerodynamic characteristics, and interplanetary trajectory models. In addition to Venus specific models, an autonomous guidance system, HYPAS, and a pseudo flight controller were incorporated in the simulation. The Monte Carlo analyses incorporated a reference set of approach trajectory delivery errors, aerodynamic uncertainties, and atmospheric density variations. The reference performance analysis determined the reference vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 90 meters per second delta V budget for post aerocapture orbital adjustments.
A ballistic coefficient trade study conducted with reference uncertainties determined that the 0.25 L/D vehicle can achieve 100% successful capture with a ballistic coefficient of 228 kilograms

  4. Performance analysis of jump-gliding locomotion for miniature robotics.

    PubMed

Vidyasagar, A; Zufferey, Jean-Christophe; Floreano, Dario; Kovač, M

    2015-04-01

Recent work suggests that jumping locomotion in combination with a gliding phase can be used as an effective mobility principle in robotics. Compared to pure jumping without a gliding phase, the potential benefits of hybrid jump-gliding locomotion include the ability to extend the distance travelled and to reduce the potentially damaging impact forces upon landing. This publication evaluates the performance of jump-gliding locomotion and provides models for the analysis of the relevant flight dynamics. It also defines a jump-gliding envelope that encompasses the range achievable by jump-gliding robots and that can be used to evaluate their performance and improvement potential. We present first a planar dynamic model and then a simplified closed-form model, both of which allow quantification of the distance travelled and the impact energy on landing. To validate the predictions of these models, we performed experiments with a novel jump-gliding robot, named the 'EPFL jump-glider'. It has a mass of 16.5 g and is able to perform jumps from elevated positions, perform steered gliding flight, land safely and traverse on the ground by repetitive jumping. The experiments indicate that the developed jump-gliding model fits the measured flight data of the EPFL jump-glider very well, confirming the benefits of jump-gliding locomotion for mobile robotics. The jump-glide envelope considerations indicate that the EPFL jump-glider, when traversing from a 2 m height, reaches 74.3% of the optimal jump-gliding distance, whereas pure jumping without a gliding phase reaches only 33.4% of the optimal jump-gliding distance. Methods of further improving flight performance, based on the models and on inspiration from biological systems, are presented, providing mechanical design pathways for future jump-gliding robot designs. PMID:25811417
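The kind of simplified closed-form model referred to above can be illustrated with a drag-free sketch: a ballistic jump to apex followed by a constant-glide-ratio descent. The function and its parameters are hypothetical stand-ins for illustration, not the authors' actual model.

```python
import math

def jump_glide_distance(v0, theta_deg, h0, glide_ratio):
    """Simplified closed-form jump-glide range (air drag neglected
    during the jump). v0: takeoff speed (m/s); theta_deg: takeoff
    angle; h0: launch height above ground (m); glide_ratio: L/D held
    during the glide. The robot jumps ballistically to its apex, then
    glides at the given ratio until it reaches the ground."""
    g = 9.81
    vx = v0 * math.cos(math.radians(theta_deg))
    vy = v0 * math.sin(math.radians(theta_deg))
    t_apex = vy / g                       # time to reach apex
    x_apex = vx * t_apex                  # horizontal distance at apex
    h_apex = h0 + vy ** 2 / (2 * g)       # apex height above ground
    return x_apex + glide_ratio * h_apex  # glide covers L/D times height

print(jump_glide_distance(6.0, 45, 2.0, 1.5))  # ≈ 6.21 m from a 2 m ledge
```

Even this crude model reproduces the qualitative claim above: the glide term grows with launch height, so gliding pays off most when traversing from elevated positions.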

  5. WRF model performance analysis for a suite of simulation design

    NASA Astrophysics Data System (ADS)

    Mohan, Manju; Sati, Ankur Prabhat

    2016-03-01

At present, scientists successfully use Numerical Weather Prediction (NWP) models to achieve reliable forecasts. Nested domains with varying grid ratios are preferred by the modelling community and have wide applications. The impact of the nesting grid ratio (NGR) on model performance needs systematic analysis and is explored in the present study. WRF is mostly used as a mesoscale model to simulate extreme events or events of short duration, with statistical model evaluation performed over correspondingly short periods. Thus, the influence of the simulation period on model performance has been examined for key meteorological parameters. Several earlier works on episodes involve model runs of longer duration, for which a single simulation is often performed in one continuous stretch. This study scrutinizes the influence on model performance of one single simulation versus several smaller simulations covering the same duration, essentially splitting the run time. In the present study, the surface wind (i.e., wind at 10 m), temperature and relative humidity at 2 m obtained from model simulations are compared with observations. Sensitivity studies of the nesting grid ratio, of continuous versus smaller split simulations, and of a realistic simulation period are performed. It is found that there is no statistically significant difference in the simulated results on changing the nesting grid ratio, while the smaller time-split schemes (2-day and 4-day schemes compared with 8-day and 16-day continuous runs) improve the results significantly. The impact of an increasing number of observations from different sites on model performance is also scrutinised. Furthermore, a conceptual framework is provided for the optimum time period of simulations needed to have confidence in statistical model evaluation.
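Statistical model evaluation of the kind described above typically compares simulated and observed series with metrics such as mean bias, RMSE, and Willmott's index of agreement. The abstract does not specify which metrics were used, so the sketch below is a generic, commonly used set:

```python
import math

def eval_stats(model, obs):
    """Common model-evaluation statistics for NWP verification:
    mean bias, root-mean-square error, and Willmott's index of
    agreement (IOA, 1 = perfect). Assumes `obs` is not constant."""
    n = len(obs)
    obar = sum(obs) / n
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    denom = sum((abs(m - obar) + abs(o - obar)) ** 2
                for m, o in zip(model, obs))
    ioa = 1 - sum((m - o) ** 2 for m, o in zip(model, obs)) / denom
    return bias, rmse, ioa
```

Computing these per scheme (e.g. 2-day splits vs. a 16-day continuous run) over the same verification window is how the significance of the differences reported above would be assessed.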

  6. A conceptual design tool for RBCC engine performance analysis

    SciTech Connect

    Olds, J.R.; Saks, G.

    1997-01-01

Future reusable launch vehicles will depend on new propulsion technologies to lower system operational costs while maintaining adequate performance. Recently, a number of vehicle systems utilizing rocket-based combined-cycle (RBCC) propulsion have been proposed as possible low-cost space launch solutions. Vehicles using RBCC propulsion have the potential to combine the best aspects of airbreathing propulsion (high average Isp) with the best aspects of rocket propulsion (high propellant bulk density and engine T/W). Proper conceptual assessment of each proposed vehicle will require computer-based tools that allow for quick and cheap, yet sufficiently accurate disciplinary analyses. At Georgia Tech, a spreadsheet-based tool has been developed that uses quasi-1D flow analysis with component efficiencies to parametrically model RBCC engine performance in ejector, fan-ramjet, ramjet and pure rocket modes. The technique is similar to an earlier RBCC modeling technique developed by the Marquardt Corporation in the mid-1960's. For a given sea-level static thrust requirement, the current tool generates engine weight and size data, as well as Isp and thrust data vs. altitude and Mach number. The latter is output in tabular form for use in a trajectory optimization program. This paper reviews the current state of the RBCC analysis tool and the effort to upgrade it from a Microsoft Excel spreadsheet to a design-oriented UNIX program in C suitable for integration into a multidisciplinary design optimization (MDO) framework. © 1997 American Institute of Physics.

  7. A Preliminary Analysis of LANDSAT-4 Thematic Mapper Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Justice, C.; Fusco, L.; Mehl, W.

    1984-01-01

Analysis was performed to characterize the radiometry of three Thematic Mapper (TM) digital products of a scene of Arkansas. The three digital products examined were the NASA raw (BT) product, the radiometrically corrected (AT) product and the radiometrically and geometrically corrected (PT) product. The frequency distribution of the digital data, the statistical correlation between the bands, and the variability between the detectors within a band were examined on a series of image subsets from the full scene. The results are presented from one 1024 x 1024 pixel subset of Reelfoot Lake, Tennessee, which displayed a representative range of ground conditions and cover types occurring within the full frame image. Bands 1, 2 and 5 of the sample area are presented. The subsets were extracted from the three digital data products to cover the same geographic area. This analysis provides the first step towards a full appraisal of the TM radiometry being performed as part of the ESA/CEC contribution to the NASA/LIDQA program.

  8. A conceptual design tool for RBCC engine performance analysis

    NASA Astrophysics Data System (ADS)

    Olds, John R.; Saks, Greg

    1997-01-01

    Future reusable launch vehicles will depend on new propulsion technologies to lower system operational costs while maintaining adequate performance. Recently, a number of vehicle systems utilizing rocket-based combined-cycle (RBCC) propulsion have been proposed as possible low-cost space launch solutions. Vehicles using RBCC propulsion have the potential to combine the best aspects of airbreathing propulsion (high average Isp) with the best aspects of rocket propulsion (high propellant bulk density and engine T/W). Proper conceptual assessment of each proposed vehicle will require computer-based tools that allow for quick and cheap, yet sufficiently accurate disciplinary analyses. At Georgia Tech, a spreadsheet-based tool has been developed that uses quasi-1D flow analysis with component efficiencies to parametrically model RBCC engine performance in ejector, fan-ramjet, ramjet and pure rocket modes. The technique is similar to an earlier RBCC modeling technique developed by the Marquardt Corporation in the mid-1960's. For a given sea-level static thrust requirement, the current tool generates engine weight and size data, as well as Isp and thrust data vs. altitude and Mach number. The latter is output in tabular form for use in a trajectory optimization program. This paper reviews the current state of the RBCC analysis tool and the effort to upgrade it from a Microsoft Excel spreadsheet to a design-oriented UNIX program in C suitable for integration into a multidisciplinary design optimization (MDO) framework.

  9. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis are discussed: network reliability and message delays. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
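The core idea of enumerating candidate designs in increasing order of total cost can be sketched with a heap-based subset enumerator: the first subset that passes the feasibility check is then guaranteed cost-optimal. This is an illustrative reconstruction of the idea, not the paper's exact algorithm.

```python
import heapq

def subsets_by_cost(costs):
    """Yield (total_cost, index_subset) for every subset of `costs`,
    in nondecreasing order of total cost. A caller checks each
    candidate for network feasibility and stops at the first
    acceptable design, which is then cost-optimal."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    c = [costs[i] for i in order]
    yield 0.0, frozenset()                # the empty design comes first
    if not c:
        return
    heap = [(c[0], (0,))]                 # (total cost, chosen positions)
    while heap:
        total, chosen = heapq.heappop(heap)
        yield total, frozenset(order[i] for i in chosen)
        last = chosen[-1]
        if last + 1 < len(c):
            # extend the subset with the next cheapest component...
            heapq.heappush(heap, (total + c[last + 1], chosen + (last + 1,)))
            # ...or swap the last component for the next cheapest one
            heapq.heappush(heap, (total - c[last] + c[last + 1],
                                  chosen[:-1] + (last + 1,)))
```

Because each pop spawns at most two children, the heap stays small relative to the 2^n subsets, which matches the paper's observation that the method works best when an acceptable design is found early (many constraints, few components).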

  10. Autotasked Performance in the NAS Workload: A Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Carter, R. L.; Stockdale, I. E.; Kutler, Paul (Technical Monitor)

    1998-01-01

A statistical analysis of the workload performance of a production quality FORTRAN code for five different Cray Y-MP hardware and system software configurations is performed. The analysis was based on an experimental procedure that was designed to minimize correlations between the number of requested CPUs and the time of day the runs were initiated. Observed autotasking overheads were significantly larger for the set of jobs that requested the maximum number of CPUs. UNICOS 6 releases show consistent wall clock speedups in the workload of around 2, which is quite good. The observed speedups were very similar for the set of jobs that requested 8 CPUs and the set that requested 4 CPUs. The original NAS algorithm for determining charges to the user discourages autotasking in the workload. A new charging algorithm to be applied to jobs run in the NQS multitasking queues also discourages NAS users from using autotasking. The new algorithm favors jobs requesting 8 CPUs over those that request less, although the jobs requesting 8 CPUs experienced significantly higher overhead and presumably degraded system throughput. A charging algorithm is presented that has the following desirable characteristics when applied to the data: higher overhead jobs requesting 8 CPUs are penalized when compared to moderate overhead jobs requesting 4 CPUs, thereby providing a charging incentive to NAS users to use autotasking in a manner that provides them with significantly improved turnaround while also maintaining system throughput.
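A purely illustrative charging formula with the qualitative property described above: for equal delivered CPU work, a high-overhead 8-CPU job is charged more than a moderate-overhead 4-CPU job. The formula, its parameters, and the penalty weight are invented for illustration; the abstract does not give the actual algorithm.

```python
def cpu_charge(wall_time, cpus, overhead_frac, alpha=0.5):
    """Hypothetical charge: base charge is wall time times CPUs
    (the CPU-time held), inflated by a penalty proportional to the
    measured autotasking overhead fraction. alpha sets how strongly
    overhead is penalized."""
    return wall_time * cpus * (1.0 + alpha * overhead_frac)

# Equal useful work (800 CPU-seconds each), different overhead:
high_overhead_8cpu = cpu_charge(100.0, 8, overhead_frac=0.4)
moderate_4cpu = cpu_charge(200.0, 4, overhead_frac=0.1)
```

With these numbers the 8-CPU job pays more despite identical CPU-time held, so users are steered toward the CPU count whose overhead is modest, the incentive structure the abstract describes.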

  11. Ultra performance liquid chromatography tandem mass spectrometry performance evaluation for analysis of antibiotics in natural waters.

    PubMed

    Tamtam, Fatima; Mercier, Fabien; Eurin, Joëlle; Chevreuil, Marc; Le Bot, Barbara

    2009-03-01

An ultra performance liquid chromatography electrospray tandem mass spectrometry (UPLC/MS/MS) method was developed and validated for the determination of 17 antibiotics in natural waters in a single extraction and chromatographic procedure. Gradient separation conditions were optimised for 17 compounds belonging to five different antibiotic groups: quinolones (oxolinic acid, nalidixic acid, pipemidic acid, flumequine), fluoroquinolones (enoxacin, ciprofloxacin, norfloxacin, ofloxacin, enrofloxacin, sarafloxacin, danofloxacin, difloxacin, lomefloxacin), sulphonamides (sulphamethoxazole, sulphamethazine), nitro-imidazole (ornidazole) and diaminopyrimidine (trimethoprim). The separation of all compounds, obtained using a 1.7 µm particle size column (100 mm x 2.1 mm), was achieved within 10 min. Water samples were adjusted to pH 7 and extracted using Oasis hydrophilic-lipophilic balance (HLB) solid phase extraction cartridges. After elution with methanol and concentration, extracts were injected into a C18 column (Acquity UPLC BEH C18) and detected by tandem mass spectrometry. Average recovery from 100 ng L(-1) fortified samples was higher than 70% for most of the compounds, with relative standard deviations below 20%. The performance of the method (recoveries, detection limit, quantification limit and relative standard deviation) and matrix effects were studied, and the results showed that the method was suitable for routine analysis of antibiotics in surface water. Analysis of samples from the Seine River (France) confirmed the relevance of evaluating antibiotic contamination in that area. PMID:19148627

  12. Correlation analysis between ionospheric scintillation levels and receiver tracking performance

    NASA Astrophysics Data System (ADS)

    Sreeja, V.; Aquino, M.; Elmas, Z. G.; Forte, B.

    2012-06-01

Rapid fluctuations in the amplitude and phase of a transionospheric radio signal caused by small scale plasma density irregularities in the ionosphere are known as scintillation. Scintillation can seriously impair a GNSS (Global Navigation Satellite Systems) receiver's tracking performance, thus affecting the required levels of availability, accuracy and integrity, and consequently the reliability of modern day GNSS based applications. This paper presents an analysis of the correlation between scintillation levels and the tracking performance of a GNSS receiver for GPS L1C/A, L2C and GLONASS L1, L2 signals. The analyses make use of data recorded over Presidente Prudente (22.1°S, 51.4°W, dip latitude ~12.3°S) in Brazil, a location close to the Equatorial Ionisation Anomaly (EIA) crest in Latin America. The study presents for the first time this type of correlation analysis for GPS L2C and GLONASS L1, L2 signals. The scintillation levels are defined by the amplitude scintillation index, S4, and the receiver tracking performance is evaluated by the phase tracking jitter. Both S4 and the phase tracking jitter are estimated from the post-correlation In-Phase (I) and Quadrature-Phase (Q) components logged by the receiver at a high rate. Results reveal that the dependence of the phase tracking jitter on the scintillation levels can be represented by a quadratic fit for the signals. The results presented in this paper are of importance to GNSS users, especially in view of the forthcoming high phase of solar cycle 24 (predicted for 2013).
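The S4 index mentioned above is conventionally computed as the normalised standard deviation of signal intensity, with intensity derived from the post-correlation I and Q components. A minimal sketch (detrending and the usual 60 s averaging window are omitted for brevity):

```python
import math

def s4_index(i_samples, q_samples):
    """Amplitude scintillation index S4 from post-correlation I/Q
    samples. SI = I^2 + Q^2 is the signal intensity; S4 is the
    standard deviation of SI normalised by its mean over the interval."""
    si = [i * i + q * q for i, q in zip(i_samples, q_samples)]
    mean = sum(si) / len(si)
    var = sum((s - mean) ** 2 for s in si) / len(si)
    return math.sqrt(var) / mean
```

A steady signal gives S4 = 0, while strong equatorial scintillation drives S4 toward 1, the regime in which the quadratic growth of phase tracking jitter reported above becomes most damaging.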

  13. A convolution integral approach for performance assessments with uncertainty analysis

    SciTech Connect

    Dawoud, E.; Miller, L.F.

    1999-09-01

Performance assessments that include uncertainty analyses and risk assessments are typically not obtained for time-dependent releases of radioactive contaminants to the geosphere when a series of sequentially coupled transport models is required for determining results. This is due, in part, to the geophysical complexity of the site, and to the numerical complexity of the fate and transport models. The lack of a practical tool for linking the transport models in a fashion that facilitates uncertainty analysis is another reason for not performing uncertainty analyses in these studies. The multiconvolution integral (MCI) approach presented herein greatly facilitates the practicality of incorporating uncertainty analyses into performance assessments. In this research an MCI approach is developed, and the decoupling of fate and transport processes into an independent system is described. A conceptual model, extracted from the Inactive Tanks project at the Oak Ridge National Laboratory (ORNL), is used to demonstrate the approach. Numerical models are used for transport of ⁹⁰Sr from a disposal facility, WC-1 at ORNL, through the vadose and saturated zones to a downgradient point at Fifth Creek, and an analytical surface water model is used to transport the contaminants to a downstream potential receptor point at White Oak Creek. The probability density functions of the final concentrations obtained by the MCI approach are in excellent agreement with those obtained by a Monte Carlo approach that propagated uncertainties through all submodels for each random sample.
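The decoupling that makes a multiconvolution approach practical rests on the associativity of convolution for linear submodels: the unit responses of the chained transport models can be convolved together once and the combined response reused for every uncertainty sample of the source term. A sketch with hypothetical unit-response data (the actual ORNL submodels are far more complex):

```python
def convolve(a, b):
    """Discrete convolution: the response of a linear submodel with
    unit-impulse response `b` to an input time series `a`."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Chained transport: source -> vadose zone -> saturated zone.
source = [1.0, 0.5]            # hypothetical release history
vadose = [0.2, 0.6, 0.2]       # hypothetical unit responses (mass-conserving)
saturated = [0.5, 0.5]
system = convolve(vadose, saturated)   # pre-convolved combined response
downstream = convolve(source, system)  # reusable for each Monte Carlo sample
```

Because `system` is computed once, propagating thousands of sampled source terms costs one convolution each instead of one run of every submodel, which is the efficiency gain over the full Monte Carlo chain noted in the abstract.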

  14. Instantaneous BeiDou-GPS attitude determination: A performance analysis

    NASA Astrophysics Data System (ADS)

    Nadarajah, Nandakumaran; Teunissen, Peter J. G.; Raziq, Noor

    2014-09-01

    The advent of modernized and new global navigation satellite systems (GNSS) has enhanced the availability of satellite based positioning, navigation, and timing (PNT) solutions. Specifically, it increases redundancy and yields operational back-up or independence in case of failure or unavailability of one system. Among existing GNSS, the Chinese BeiDou system (BDS) is being developed and will consist of geostationary (GEO) satellites, inclined geosynchronous orbit (IGSO) satellites, and medium-Earth-orbit (MEO) satellites. In this contribution, a BeiDou-GPS robustness analysis is carried out for instantaneous, unaided attitude determination. Precise attitude determination using multiple GNSS antennas mounted on a platform relies on the successful resolution of the integer carrier phase ambiguities. The constrained Least-squares AMBiguity Decorrelation Adjustment (C-LAMBDA) method has been developed for the quadratically constrained GNSS compass model that incorporates the known baseline length. In this contribution the method is used to analyse the attitude determination performance when using the GPS and BeiDou systems. The attitude determination performance is evaluated using GPS/BeiDou data sets from a real data campaign in Australia spanning several days. The study includes the performance analyses of both stand-alone and mixed constellation (GPS/BeiDou) attitude estimation under various satellite deprived environments. We demonstrate and quantify the improved availability and accuracy of attitude determination using the combined constellation.

  15. Design and Performance Analysis of Incremental Networked Predictive Control Systems.

    PubMed

    Pang, Zhong-Hua; Liu, Guo-Ping; Zhou, Donghua

    2016-06-01

    This paper is concerned with the design and performance analysis of networked control systems with network-induced delay, packet disorder, and packet dropout. Based on the incremental form of the plant input-output model and an incremental error feedback control strategy, an incremental networked predictive control (INPC) scheme is proposed to actively compensate for the round-trip time delay resulting from the above communication constraints. The output tracking performance and closed-loop stability of the resulting INPC system are considered for two cases: 1) plant-model match case and 2) plant-model mismatch case. For the former case, the INPC system can achieve the same output tracking performance and closed-loop stability as those of the corresponding local control system. For the latter case, a sufficient condition for the stability of the closed-loop INPC system is derived using the switched system theory. Furthermore, for both cases, the INPC system can achieve a zero steady-state output tracking error for step commands. Finally, both numerical simulations and practical experiments on an Internet-based servo motor system illustrate the effectiveness of the proposed method. PMID:26186798

  16. Aerocapture Performance Analysis for a Neptune-Triton Exploration Mission

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Westhelle, Carlos H.; Masciarelli, James P.

    2004-01-01

A systems analysis has been conducted for a Neptune-Triton Exploration Mission in which aerocapture is used to capture a spacecraft at Neptune. Aerocapture uses aerodynamic drag instead of propulsion to decelerate from the interplanetary approach trajectory to a captured orbit during a single pass through the atmosphere. After capture, propulsion is used to move the spacecraft from the initial captured orbit to the desired science orbit. A preliminary assessment identified that a spacecraft with a lift to drag ratio of 0.8 was required for aerocapture. Performance analyses of the 0.8 L/D vehicle were performed using a high fidelity flight simulation within a Monte Carlo executive to determine mission success statistics. The simulation was the Program to Optimize Simulated Trajectories (POST) modified to include Neptune specific atmospheric and planet models, spacecraft aerodynamic characteristics, and interplanetary trajectory models. To these were added autonomous guidance and pseudo flight controller models. The Monte Carlo analyses incorporated approach trajectory delivery errors, aerodynamic characteristics uncertainties, and atmospheric density variations. Monte Carlo analyses were performed for a reference set of uncertainties and sets of uncertainties modified to produce increased and reduced atmospheric variability. For the reference uncertainties, the 0.8 L/D flat-bottom ellipsled vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 360 m/s ΔV budget for apoapsis and periapsis adjustment. Monte Carlo analyses were also performed for a guidance system that modulates both bank angle and angle of attack with the reference set of uncertainties. An alpha and bank modulation guidance system reduces the 99.87 percentile ΔV by 173 m/s (48%), from 360 m/s to 187 m/s, for the reference set of uncertainties.

  17. Advanced multiphysics coupling for LWR fuel performance analysis

    SciTech Connect

    Hales, J. D.; Tonks, M. R.; Gleicher, F. N.; Spencer, B. W.; Novascone, S. R.; Williamson, R. L.; Pastore, G.; Perez, D. M.

    2015-10-01

    Even the most basic nuclear fuel analysis is a multiphysics undertaking, as a credible simulation must consider at a minimum coupled heat conduction and mechanical deformation. The need for more realistic fuel modeling under a variety of conditions invariably leads to a desire to include coupling between a more complete set of the physical phenomena influencing fuel behavior, including neutronics, thermal hydraulics, and mechanisms occurring at lower length scales. This paper covers current efforts toward coupled multiphysics LWR fuel modeling in three main areas. The first area covered in this paper concerns thermomechanical coupling. The interaction of these two physics, particularly related to the feedback effect associated with heat transfer and mechanical contact at the fuel/clad gap, provides numerous computational challenges. An outline is provided of an effective approach used to manage the nonlinearities associated with an evolving gap in BISON, a nuclear fuel performance application. A second type of multiphysics coupling described here is that of coupling neutronics with thermomechanical LWR fuel performance. DeCART, a high-fidelity core analysis program based on the method of characteristics, has been coupled to BISON. DeCART provides sub-pin level resolution of the multigroup neutron flux, with resonance treatment, during a depletion or a fast transient simulation. Two-way coupling between these codes was achieved by mapping fission rate density and fast neutron flux fields from DeCART to BISON and the temperature field from BISON to DeCART while employing a Picard iterative algorithm. Finally, the need for multiscale coupling is considered. Fission gas production and evolution significantly impact fuel performance by causing swelling, a reduction in the thermal conductivity, and fission gas release. The mechanisms involved occur at the atomistic and grain scale and are therefore not the domain of a fuel performance code. 
However, it is possible to use

  18. Advanced multiphysics coupling for LWR fuel performance analysis

    DOE PAGESBeta

    Hales, J. D.; Tonks, M. R.; Gleicher, F. N.; Spencer, B. W.; Novascone, S. R.; Williamson, R. L.; Pastore, G.; Perez, D. M.

    2015-10-01

Even the most basic nuclear fuel analysis is a multiphysics undertaking, as a credible simulation must consider at a minimum coupled heat conduction and mechanical deformation. The need for more realistic fuel modeling under a variety of conditions invariably leads to a desire to include coupling between a more complete set of the physical phenomena influencing fuel behavior, including neutronics, thermal hydraulics, and mechanisms occurring at lower length scales. This paper covers current efforts toward coupled multiphysics LWR fuel modeling in three main areas. The first area covered in this paper concerns thermomechanical coupling. The interaction of these two physics, particularly related to the feedback effect associated with heat transfer and mechanical contact at the fuel/clad gap, provides numerous computational challenges. An outline is provided of an effective approach used to manage the nonlinearities associated with an evolving gap in BISON, a nuclear fuel performance application. A second type of multiphysics coupling described here is that of coupling neutronics with thermomechanical LWR fuel performance. DeCART, a high-fidelity core analysis program based on the method of characteristics, has been coupled to BISON. DeCART provides sub-pin level resolution of the multigroup neutron flux, with resonance treatment, during a depletion or a fast transient simulation. Two-way coupling between these codes was achieved by mapping fission rate density and fast neutron flux fields from DeCART to BISON and the temperature field from BISON to DeCART while employing a Picard iterative algorithm. Finally, the need for multiscale coupling is considered. Fission gas production and evolution significantly impact fuel performance by causing swelling, a reduction in the thermal conductivity, and fission gas release. The mechanisms involved occur at the atomistic and grain scale and are therefore not the domain of a fuel performance code. However, it is
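The Picard iteration used for the two-way DeCART/BISON field exchange is, at heart, a fixed-point loop between two single-physics solves: neutronics produces a power field from the current temperature, thermomechanics produces a new temperature from that power, and the loop repeats until the exchanged field stops changing. A scalar sketch with hypothetical solver interfaces standing in for the actual field mapping:

```python
def picard_couple(solve_neutronics, solve_thermo, t0, tol=1e-6, max_iter=50):
    """Picard (fixed-point) iteration between two single-physics solvers.
    solve_neutronics(T) -> power p given temperature T
    solve_thermo(p)     -> temperature T given power p
    Iterates until the temperature stops changing. Scalar stand-in for
    the DeCART/BISON field exchange; real codes map whole fields."""
    T = t0
    for _ in range(max_iter):
        p = solve_neutronics(T)      # e.g. fission rate from temperature
        T_new = solve_thermo(p)      # e.g. temperature from power
        if abs(T_new - T) < tol:     # converged: consistent coupled state
            return T_new
        T = T_new
    raise RuntimeError("Picard iteration did not converge")
```

Picard coupling converges when the combined feedback is a contraction (here, mild negative temperature feedback), which is why it suits the weakly coupled neutronics/thermomechanics exchange described above.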

  19. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
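A minimal sketch of the actor pattern described above: each actor owns a thread of control and a mailbox queue, and other actors interact only by sending messages, never by direct method calls. The class name, the poison-pill shutdown, and the doubling "work" are illustrative assumptions, not the paper's program.

```python
import queue
import threading

class Actor:
    """Active object: its own thread plus a mailbox. Communication is
    by message passing only, mirroring the actor-based design above."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        """The only way other actors interact with this one."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()     # block until a message arrives
            if msg is None:              # poison pill shuts the actor down
                return
            self.results.append(msg * 2) # stand-in for simulation work

    def join(self):
        self.send(None)
        self.thread.join()
```

Because each actor serialises its own state changes through the mailbox, the same code distributes naturally across address spaces by replacing the in-process queue with network message delivery, which is the appeal for distributed simulation noted above.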

  20. Performance analysis for geometrical attack on digital image watermarking

    NASA Astrophysics Data System (ADS)

    Jayanthi, VE.; Rajamani, V.; Karthikayen, P.

    2011-11-01

We present an irreversible watermarking technique robust to affine transform attacks in camera, biomedical and satellite images stored in the form of monochrome bitmap images. The watermarking approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code division multiple access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transform, signal processing and JPEG compression. Performance analysis results are measured using image processing metrics.

  1. Performance Analysis: ITS Data through September 30, 2009

    SciTech Connect

    Kerr, C E

    2009-12-07

Data from ITS was analyzed to understand the issues at LLNL and to identify issues that may require additional management attention and those that meet the threshold for reporting to the DOE Noncompliance Tracking System (NTS). In this report we discuss assessments and issues entered in ITS and compare the number and type presently entered in ITS to previous time periods. Issues reported in ITS were evaluated and discussed. The analysis identified two noncompliances that meet the threshold for reporting to the DOE NTS. All of the data in ITS is analyzed; however, the primary focus of this report is to meet requirements for performance analysis of specific functional areas. The DOE Office of Enforcement expects LLNL to 'implement comprehensive management and independent assessments that are effective in identifying deficiencies and broader problems in safety and security programs, as well as opportunities for continuous improvement within the organization' and to 'regularly perform assessments to evaluate implementation of the contractor's processes for screening and internal reporting.' LLNL has a self-assessment program, described in the document applicable during this time period, ES&H Manual Document 4.1, that includes line, management and independent assessments. LLNL also has in place a process to identify and report deficiencies of nuclear, worker safety and health and security requirements. In addition, the DOE Office of Enforcement expects that 'issues management databases are used to identify adverse trends, dominant problem areas, and potential repetitive events or conditions' (page 15, DOE Enforcement Process Overview, June 2009). LLNL requires that all worker safety and health and nuclear safety noncompliances be tracked as 'deficiencies' in the LLNL Issues Tracking System (ITS). Data from the ITS are analyzed for worker safety and health (WSH) and nuclear safety noncompliances that may meet the threshold for reporting to the DOE Noncompliance

  2. Performance analysis of bearings-only tracking algorithm

    NASA Astrophysics Data System (ADS)

    van Huyssteen, David; Farooq, Mohamad

    1998-07-01

    A number of 'bearings-only' target motion analysis algorithms have appeared in the literature over the years, all suited to tracking an object based solely on noisy measurements of its angular position. In their paper 'Utilization of Modified Polar (MP) Coordinates for Bearings-Only Tracking,' Aidala and Hammel advocate a filter in which the observable and unobservable states are naturally decoupled. While the MP filter has certain advantages over Cartesian and pseudolinear extended Kalman filters, it does not escape the requirement that the observer steer an optimum maneuvering course to guarantee acceptable performance. This paper demonstrates by simulation the consequences when the observer deviates from this profile, even when the deviation is still sufficient to produce full state observability.
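
    The observability issue described above can be illustrated with a simple pseudolinear least-squares estimator. The scenario below (a constant-velocity target, a zig-zagging observer, bearing measured from the x-axis, and the specific noise level and trajectory numbers) is an assumed toy example, not the filter or data from the cited paper:

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

random.seed(1)
# Hypothetical target state: initial position (m) and constant velocity (m/s)
tx0, ty0, tvx, tvy = 5000.0, 3000.0, -10.0, 5.0

# Accumulate normal equations for the pseudolinear bearing model:
# cos(b)*(y(t) - oy) - sin(b)*(x(t) - ox) = 0, linear in [x0, y0, vx, vy]
AtA = [[0.0] * 4 for _ in range(4)]
Atb = [0.0] * 4
for k in range(300):
    t = float(k)
    # Maneuvering (zig-zag) observer: this is what makes range observable
    ox = 15.0 * t
    oy = 800.0 * math.sin(0.02 * t)
    beta = math.atan2(ty0 + tvy * t - oy, tx0 + tvx * t - ox)
    beta += random.gauss(0.0, 0.001)  # noisy bearing measurement (rad)
    row = [-math.sin(beta), math.cos(beta),
           -t * math.sin(beta), t * math.cos(beta)]
    rhs = math.cos(beta) * oy - math.sin(beta) * ox
    for i in range(4):
        Atb[i] += row[i] * rhs
        for j in range(4):
            AtA[i][j] += row[i] * row[j]

est = solve(AtA, Atb)
print(est)  # should be close to (5000, 3000, -10, 5)
```

With a non-maneuvering observer the same normal equations become ill-conditioned and the range-dependent states cannot be recovered, which is the practical point of the observer-maneuver requirement.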

  3. A theoretical analysis of vacuum arc thruster performance

    NASA Technical Reports Server (NTRS)

    Polk, James E.; Sekerak, Mike; Ziemer, John K.; Schein, Jochen; Qi, Niansheng; Binder, Robert; Anders, Andre

    2001-01-01

    In vacuum arc discharges the current is conducted through vapor evaporated from the cathode surface. In these devices very dense, highly ionized plasmas can be created from any metallic or conducting solid used as the cathode. This paper describes theoretical models of performance for several thruster configurations which use vacuum arc plasma sources. This analysis suggests that thrusters using vacuum arc sources can be operated efficiently with a range of propellant options that gives great flexibility in specific impulse. In addition, the efficiency of plasma production in these devices appears to be largely independent of scale because the metal vapor is ionized within a few microns of the cathode electron emission sites, so this approach is well-suited for micropropulsion.
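
    The standard performance relations behind such an analysis can be sketched as follows. The mass flow rate, exhaust velocity, and input power below are assumed illustrative values, not measured vacuum arc data or results from the cited models:

```python
# Illustrative electric-thruster performance relations; the numbers in the
# example call are assumed, not from the paper above.
G0 = 9.80665  # standard gravity, m/s^2

def thruster_metrics(mdot, v_exhaust, input_power):
    """Return thrust (N), specific impulse (s), and jet efficiency."""
    thrust = mdot * v_exhaust                # F = mdot * v_e
    isp = v_exhaust / G0                     # Isp = v_e / g0
    jet_power = 0.5 * mdot * v_exhaust ** 2  # kinetic power carried by the beam
    efficiency = jet_power / input_power     # fraction of input power in the jet
    return thrust, isp, efficiency

# Example: 10 ug/s of metal vapor at 20 km/s from a 10 W discharge
thrust, isp, eff = thruster_metrics(1e-8, 2.0e4, 10.0)
```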

  4. Hydrodynamic body shape analysis and their impact on swimming performance.

    PubMed

    Li, Tian-Zeng; Zhan, Jie-Min

    2015-01-01

    This study presents the hydrodynamic characteristics of different adult male swimmers' body shapes using a computational fluid dynamics method. The simulation is carried out with the CFD code Fluent, solving the 3D incompressible Navier-Stokes equations with the RNG k-ε turbulence closure. The water free surface is captured by the volume of fluid (VOF) method. A set of full-body models, based on the anthropometric characteristics of the most common male swimmers, is created with the Computer Aided Industrial Design (CAID) software Rhinoceros. Analysis of the CFD results revealed that a swimmer's body shape has a noticeable effect on hydrodynamic performance. This explains why a male swimmer with an inverted-triangle body shape has good hydrodynamic characteristics for competitive swimming. PMID:26898107
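
    The sensitivity of drag to body shape can be sketched with the quadratic drag law. The drag coefficient and frontal areas below are assumed round numbers for illustration, not values reported by the cited CFD study:

```python
# Quasi-static drag comparison for two assumed frontal areas; Cd and the
# areas are illustrative, not taken from the study above.
RHO_WATER = 998.0  # kg/m^3, fresh water near 20 C

def drag_force(cd, frontal_area, speed):
    # F_d = 0.5 * rho * v^2 * Cd * A
    return 0.5 * RHO_WATER * speed ** 2 * cd * frontal_area

v = 2.0  # m/s, roughly competitive freestyle speed
tapered_torso = drag_force(0.30, 0.09, v)  # inverted-triangle build, smaller A
uniform_torso = drag_force(0.30, 0.11, v)  # larger frontal area
```

At the same speed and drag coefficient, the smaller frontal area of the inverted-triangle build directly reduces the drag force, which is consistent with the qualitative conclusion of the study.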

  5. Portable Life Support Subsystem Thermal Hydraulic Performance Analysis

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce; Pinckney, John; Conger, Bruce

    2010-01-01

    This paper presents the current state of the thermal hydraulic modeling efforts being conducted for the Constellation Space Suit Element (CSSE) Portable Life Support Subsystem (PLSS). The goal of these efforts is to provide realistic simulations of the PLSS under various modes of operation. The PLSS thermal hydraulic model simulates the thermal, pressure, flow characteristics, and human thermal comfort related to the PLSS performance. This paper presents modeling approaches and assumptions as well as component model descriptions. Results from the models are presented that show PLSS operations at steady-state and transient conditions. Finally, conclusions and recommendations are offered that summarize results, identify PLSS design weaknesses uncovered during review of the analysis results, and propose areas for improvement to increase model fidelity and accuracy.

  6. Human performance analysis of industrial radiography radiation exposure events

    SciTech Connect

    Reece, W.J.; Hill, S.G.

    1995-12-01

    A set of radiation overexposure event reports were reviewed as part of a program to examine human performance in industrial radiography for the US Nuclear Regulatory Commission. Incident records for a seven year period were retrieved from an event database. Ninety-five exposure events were initially categorized and sorted for further analysis. Descriptive models were applied to a subset of severe overexposure events. Modeling included: (1) operational sequence tables to outline the key human actions and interactions with equipment, (2) human reliability event trees, (3) an application of an information processing failures model, and (4) an extrapolated use of the error influences and effects diagram. Results of the modeling analyses provided insights into the industrial radiography task and suggested areas for further action and study to decrease overexposures.

  7. Performance analysis of spread spectrum modulation in data hiding

    NASA Astrophysics Data System (ADS)

    Gang, Litao; Akansu, Ali N.; Ramkumar, Mahalingam

    2001-12-01

    Watermarking or steganography technology provides a possible solution for digital multimedia copyright protection and piracy tracking. Most current data hiding schemes are based on spread spectrum modulation: a small watermark signal is embedded into the content signal in some watermark domain, and the information bits can be extracted via correlation. Such schemes are applied in both escrow and oblivious cases. This paper reveals, through analysis and simulation, that in oblivious applications, where the original signal is not available, the commonly used correlation detection is not optimal. Maximum likelihood detection is analyzed and a feasible suboptimal detector is derived; its performance is explored and compared with the correlation detector. Subsequently, a linear embedding scheme is proposed and studied. Experiments with image data hiding demonstrate its effectiveness in applications.
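
    The basic spread spectrum embedding and correlation detection described above can be sketched in a few lines. The signal model, embedding strength, and host statistics are assumed toy values; in the oblivious case the host signal acts as noise at the detector, exactly the situation the paper analyzes:

```python
import random

random.seed(7)
N = 4096      # number of host samples carrying one bit
ALPHA = 0.8   # embedding strength (assumed)

host = [random.gauss(0.0, 5.0) for _ in range(N)]        # host content signal
p = [random.choice((-1.0, 1.0)) for _ in range(N)]       # shared spreading key

def embed(x, bit):
    """Additive spread spectrum embedding: y = x + alpha * b * p."""
    return [xi + ALPHA * bit * pi for xi, pi in zip(x, p)]

def detect(y):
    """Oblivious correlation detector: sign of the normalized correlation."""
    corr = sum(yi * pi for yi, pi in zip(y, p)) / len(y)
    return 1 if corr >= 0.0 else -1

bit_plus = detect(embed(host, 1))
bit_minus = detect(embed(host, -1))
```

The correlation averages the host interference down to roughly sigma/sqrt(N), so detection is reliable here; the paper's point is that this simple correlator is nonetheless suboptimal when the host is unavailable.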

  8. Performance Analysis of Paraboloidal Reflector Antennas in Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Yeap, Kim Ho; Law, Young Hui; Rizman, Zairi Ismael; Cheong, Yuen Kiat; Ong, Chu En; Chong, Kok Hen

    2013-10-01

    In this paper, we present an analysis of the performance of the three most commonly used paraboloidal reflector antennas in radio telescopes - i.e. the prime focus, Cassegrain, and Gregorian antennas. In our study, we have adopted the design parameters for the Cassegrain configuration used in the Atacama Large Millimeter Array (ALMA) project. The parameters are subsequently re-calculated so as to meet the design requirements of the Gregorian and prime focus configurations. The simulation results obtained from GRASP reveal that the prime focus configuration produces the lowest side lobes and the highest main lobe level. Such a configuration, however, has the disadvantage of being highly susceptible to thermal ground noise radiation. The radiation characteristics produced by the Cassegrain and Gregorian configurations are very close to each other. Indeed, the results show that there is no significant advantage between the two designs. Hence, we can conclude that both configurations are comparable in the application of radio telescopes.
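
    A first-order check on such designs is the idealized aperture gain. The 12 m diameter, 100 GHz frequency, and 70% aperture efficiency below are assumed ALMA-like round numbers, not GRASP outputs from the cited study:

```python
import math

# Idealized aperture-gain estimate: G = eta * (pi * D / lambda)^2.
# The parameter values are assumed for illustration only.
def aperture_gain_db(diameter_m, freq_hz, efficiency):
    lam = 299792458.0 / freq_hz  # wavelength in meters
    gain = efficiency * (math.pi * diameter_m / lam) ** 2
    return 10.0 * math.log10(gain)

gain_db = aperture_gain_db(12.0, 100e9, 0.7)  # roughly 80 dBi
```

This estimate ignores blockage, spillover, and surface errors, which is precisely where physical-optics tools like GRASP are needed to separate the prime focus, Cassegrain, and Gregorian options.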

  9. Automotive Gas Turbine Power System-Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    1997-01-01

    An open cycle gas turbine numerical modelling code suitable for thermodynamic performance analysis (i.e. thermal efficiency, specific fuel consumption, cycle state points, working fluid flowrates, etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. This code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
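
    The kind of cycle-level quantity such a code computes can be illustrated with the ideal Brayton cycle. The pressure ratio and specific heat ratio below are textbook example values, not inputs or outputs of the NASA code described above:

```python
# Ideal-Brayton-cycle thermal efficiency sketch; parameter values are
# assumed textbook numbers, not from the NASA Lewis code.
def brayton_efficiency(pressure_ratio, gamma=1.4):
    # eta = 1 - r^(-(gamma - 1)/gamma), the ideal-cycle upper bound
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

eta = brayton_efficiency(10.0)  # around 0.48 for r = 10, gamma = 1.4
```

A real performance code layers component efficiencies, pressure losses, and fuel properties on top of this ideal-cycle skeleton to obtain specific fuel consumption and cycle state points.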

  10. Performance Analysis: Work Control Events Identified January - August 2010

    SciTech Connect

    De Grange, C E; Freeman, J W; Kerr, C E; Holman, G; Marsh, K; Beach, R

    2011-01-14

    This performance analysis evaluated 24 events that occurred at LLNL from January through August 2010. The analysis identified areas of potential work control process and/or implementation weaknesses and several common underlying causes. Human performance improvement and safety culture factors were part of the causal analysis of each event and were analyzed. The collective significance of all events in 2010, as measured by the occurrence reporting significance category and by the proportion of events that have been reported to the DOE ORPS under the "management concerns" reporting criteria, does not appear to have increased in 2010. The frequency of reporting in each of the significance categories has not changed in 2010 compared to the previous four years. There is no change indicating a trend in the significance category and there has been no increase in the proportion of occurrences reported in the higher significance category. Also, the frequency of events, 42 events reported through August 2010, is not greater than in previous years and is below the average of 63 occurrences per year at LLNL since 2006. Over the previous four years, an average of 43% of LLNL's reported occurrences have been reported as either "management concerns" or "near misses." In 2010, 29% of the occurrences have been reported as "management concerns" or "near misses." This rate indicates that LLNL is now reporting fewer "management concern" and "near miss" occurrences compared to the previous four years. From 2008 to the present, LLNL senior management has undertaken a series of initiatives to strengthen the work planning and control system with the primary objective to improve worker safety. In 2008, the LLNL Deputy Director established the Work Control Integrated Project Team to develop the core requirements and graded elements of an institutional work planning and control system. By the end of that year this system was documented and implementation had begun.
In 2009