40 CFR 63.344 - Performance test requirements and test methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... blanket type fume suppressants are used to control chromium emissions from a hard chromium electroplating... National Emission Standards for Chromium Emissions From Hard and Decorative Chromium Electroplating and Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test...
40 CFR 60.185 - Monitoring of operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) The continuous monitoring system performance evaluation required under § 60.13(c) shall be completed... monitoring system performance evaluation required under § 60.13(c), the reference method referred to under... be Method 6. For the performance evaluation, each concentration measurement shall be of one hour...
Rotor design for maneuver performance
NASA Technical Reports Server (NTRS)
Berry, John D.; Schrage, Daniel
1986-01-01
A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. This method identifies parameter values which result in minimum time quickly because of the inherent simplicity of the rotor performance model used. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.
Primer Stepper Motor Nomenclature, Definition, Performance and Recommended Test Methods
NASA Technical Reports Server (NTRS)
Starin, Scott; Shea, Cutter
2014-01-01
There has been an unfortunate lack of standardization of the terms and components of stepper motor performance, requirements definition, application of torque margin and implementation of test methods. This paper will address these inconsistencies and discuss in detail the implications of performance parameters, effects of load inertia, control electronics, operational resonances and recommended test methods. Additionally, this paper will recommend parameters for defining and specifying stepper motor actuators. A useful description of terms as well as consolidated equations and recommended requirements is included.
OPTiM: Optical projection tomography integrated microscope using open-source hardware and software
Andrews, Natalie; Davis, Samuel; Bugeon, Laurence; Dallman, Margaret D.; McGinty, James
2017-01-01
We describe the implementation of an OPT plate to perform optical projection tomography (OPT) on a commercial wide-field inverted microscope, using our open-source hardware and software. The OPT plate includes a tilt adjustment for alignment and a stepper motor for sample rotation as required by standard projection tomography. Depending on magnification requirements, three methods of performing OPT are detailed using this adaptor plate: a conventional direct OPT method requiring only the addition of a limiting aperture behind the objective lens; an external optical-relay method allowing conventional OPT to be performed at magnifications >4x; a remote focal scanning and region-of-interest method for improved spatial resolution OPT (up to ~1.6 μm). All three methods use the microscope’s existing incoherent light source (i.e. arc-lamp) and all of its inherent functionality is maintained for day-to-day use. OPT acquisitions are performed on in vivo zebrafish embryos to demonstrate the implementations’ viability. PMID:28700724
40 CFR 60.154 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Sewage Treatment Plants § 60.154 Test methods and procedures. (a) In conducting the performance tests required in § 60.8...
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Attack Trees for Modeling and Analysis 10 2.8 Misuse and Abuse Cases 10 2.9 Formal Methods 11 2.9.1 Software Cost Reduction 12 2.9.2 Common... modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without... any analysis or modeling) or analysis is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements
40 CFR 63.1188 - What performance test requirements must I meet?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Performance Tests and Methods § 63.1188 What performance test requirements must I meet? You must meet the... numerical emission limit for PM, CO, or formaldehyde, or at the inlet and outlet of the control device if...
Photometric requirements for portable changeable message signs.
DOT National Transportation Integrated Search
2001-09-01
This project reviewed the performance of portable changeable message signs (PCMSs) and developed photometric standards to establish performance requirements. In addition, researchers developed photometric test methods and recommended them for use in evaluati...
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
Application of capability indices and control charts in the analytical method control strategy.
Oliva, Alexis; Llabres Martinez, Matías
2017-08-01
In this study, we assessed the usefulness of control charts in combination with the process capability indices, Cpm and Cpk, in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in-control and stable. Different criteria were used to establish the specification limits (i.e. analyst requirements) for fixed method performance (i.e. method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real example data from a SEC method with light-scattering detection were used as a model, whereas previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
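As a side note on the two indices named in this abstract, here is a minimal sketch of how Cpk and Cpm are computed from run data, assuming a normally distributed characteristic with specification limits and a target value; the limits and data below are illustrative, not values from the study:

```python
import numpy as np

def cpk(x, lsl, usl):
    """Cpk: distance from the mean to the nearer specification limit,
    expressed in 3-sigma units."""
    mu, s = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * s)

def cpm(x, lsl, usl, target):
    """Cpm (Taguchi index): penalizes deviation of the mean from the
    target, not just spread."""
    mu, s = np.mean(x), np.std(x, ddof=1)
    tau = np.sqrt(s**2 + (mu - target)**2)
    return (usl - lsl) / (6 * tau)

# Illustrative data: 50 recovery values (%) from a hypothetical run
rng = np.random.default_rng(1)
x = rng.normal(loc=100.2, scale=1.1, size=50)
print(f"Cpk = {cpk(x, 97, 103):.2f}, Cpm = {cpm(x, 97, 103, 100):.2f}")
```

Because Cpm folds off-target centering into the denominator, the two indices can disagree whenever the process mean drifts from the target even though the spread alone looks acceptable.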
NASA Technical Reports Server (NTRS)
Larson, T. J.; Schweikhard, W. G.
1974-01-01
A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.
A Method for Evaluation of Microcomputers for Tactical Applications.
1980-06-01
application. The computational requirements of a tactical application are specified in terms of performance parameters. The presently marketed microcomputer and multi... also to provide a method to evaluate microcomputer systems for tactical applications, i.e., Command, Control, Communications (C3), weapon systems, etc.
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
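For reference, the sigma metric mentioned above is commonly computed from the allowable total error (TEa), bias, and imprecision (CV), all expressed as percentages; a minimal sketch with hypothetical assay values:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: (allowable total error - |bias|) / imprecision."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa 10%, bias 1.5%, CV 1.6% -> sigma ~ 5.3,
# i.e. a method that would need only minimal QC rules per the scheme above.
print(round(sigma_metric(10.0, 1.5, 1.6), 1))
```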
40 CFR Table 2 to Subpart Cccc of... - Requirements for Performance Tests
Code of Federal Regulations, 2010 CFR
2010-07-01
... port's location and the number of traverse points (Method 1*). 3. Measure volumetric flow rate (Method 2*). 4. Perform gas analysis to determine the dry molecular weight of the stack gas (Method 3*). 5...
Sampling methods for microbiological analysis of red meat and poultry carcasses.
Capita, Rosa; Prieto, Miguel; Alonso-Calleja, Carlos
2004-06-01
Microbiological analysis of carcasses at slaughterhouses is required in the European Union for evaluating the hygienic performance of carcass production processes, as needed for effective hazard analysis critical control point implementation. The European Union microbial performance standards refer exclusively to the excision method, even though swabbing using the wet/dry technique is also permitted when correlation between the destructive and nondestructive methods can be established. For practical and economic reasons, the swab technique is the most extensively used carcass surface-sampling method. The main characteristics, advantages, and limitations of the common excision and swabbing methods are described here.
CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 12, December 2006
2006-12-01
Feature-Oriented Domain Analysis (FODA): FODA is a domain analysis and engineering method that focuses on developing reusable assets [9]. By examining... Eliciting Security Requirements: This article describes an approach for doing trade-off analysis among requirements elicitation methods. by Dr. Nancy R... high-level requirements are addressed and met in the requirements work products. 3. Unclear requirements Mitigation: Perform requirements analysis and
40 CFR 60.93 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.93... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt Facilities § 60.93 Test methods and procedures. (a) In conducting the performance tests required in § 60.8...
40 CFR Table 8 to Subpart Bbbb of... - Model Rule-Requirements for Stack Tests
Code of Federal Regulations, 2010 CFR
2010-07-01
... at full load. 2. Metals Cadmium Method 1 Method 29 a Compliance testing must be performed while the... be performed while the municipal waste combustion unit is operating at full load. Mercury Method 1...
Uncovering the requirements of cognitive work.
Roth, Emilie M
2008-06-01
In this article, the author provides an overview of cognitive analysis methods and how they can be used to inform system analysis and design. Human factors has seen a shift toward modeling and support of cognitively intensive work (e.g., military command and control, medical planning and decision making, supervisory control of automated systems). Cognitive task analysis and cognitive work analysis methods extend traditional task analysis techniques to uncover the knowledge and thought processes that underlie performance in cognitively complex settings. The author reviews the multidisciplinary roots of cognitive analysis and the variety of cognitive task analysis and cognitive work analysis methods that have emerged. Cognitive analysis methods have been used successfully to guide system design, as well as development of function allocation, team structure, and training, so as to enhance performance and reduce the potential for error. A comprehensive characterization of cognitive work requires two mutually informing analyses: (a) examination of domain characteristics and constraints that define cognitive requirements and challenges and (b) examination of practitioner knowledge and strategies that underlie both expert and error-vulnerable performance. A variety of specific methods can be adapted to achieve these aims within the pragmatic constraints of particular projects. Cognitive analysis methods can be used effectively to anticipate cognitive performance problems and specify ways to improve individual and team cognitive performance (be it through new forms of training, user interfaces, or decision aids).
40 CFR 501.15 - Requirements for permitting.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual(s) who performed the analyses; (E) The analytical techniques or methods used; and (F) The results... monitoring device or method required to be maintained under this permit shall, upon conviction, be punished... permittee's use or disposal methods is promulgated under section 405(d) of the CWA before the expiration of...
Quality control for federal clean water act and safe drinking water act regulatory compliance.
Askew, Ed
2013-01-01
QC sample results are required in order to have confidence in the results from analytical tests. Some of the AOAC water methods include specific QC procedures, frequencies, and acceptance criteria. These are considered to be the minimum controls needed to perform the method successfully. Some regulatory programs, such as those in 40 CFR Part 136.7, require additional QC or have alternative acceptance limits. Essential QC measures include method calibration, reagent standardization, assessment of each analyst's capabilities, analysis of blind check samples, determination of the method's sensitivity (method detection level or quantification limit), and daily evaluation of bias, precision, and the presence of laboratory contamination or other analytical interference. The details of these procedures, their performance frequency, and expected ranges of results are set out in this manuscript. The specific regulatory requirements of 40 CFR Part 136.7 for the Clean Water Act, the laboratory certification requirements of 40 CFR Part 141 for the Safe Drinking Water Act, and the ISO 17025 accreditation requirements under The NELAC Institute are listed.
King 2 2519 ATM residual gyros: Reestablishing 5 year life requirements
NASA Technical Reports Server (NTRS)
Kayal, B.; Carbocci, L. J.
1978-01-01
The technical expertise required to assess the condition of the residual ATM 2519 Singer gyros is discussed. Past build history records, past performance characteristics, and recommendations for particular tests (which were performed by NASA personnel) are summarized. Test results are analyzed. A study of motor performance data was performed, with recommendations concerning gyro spin bearing life. A method of reestablishing the potential reliability of the bearing for the 5-year life requirement of the power module is also included.
40 CFR Table 25 to Subpart UUU of Part 63 - Requirements for Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for Performance Tests... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Improved Boundary Layer Module (BLM) for the Solid Performance Program (SPP)
NASA Astrophysics Data System (ADS)
Coats, D. E.; Cebeci, T.
1982-03-01
The requirements for a replacement to the Bartz boundary layer code, the standard method of computing the performance loss due to viscous effects by the solid performance program, were discussed by the propulsion community along with four nationally recognized boundary layer experts. A consensus was reached regarding the preferred features for the analysis of the replacement code. The major points that were agreed upon are: (1) finite difference methods are preferred over integral methods; (2) a single equation eddy viscosity model was considered to be adequate for the purpose of computing performance loss; (3) a variable grid capability in both coordinate directions would be required; (4) a proven finite difference algorithm which is not stability restricted should be used, that is, an implicit numerical scheme would be required; and (5) the replacement code should be able to compute both turbulent and laminar flows. The program should treat mass addition at the wall as well as being able to calculate a stagnation point starting line.
NASA Technical Reports Server (NTRS)
Ferrenberg, A.; Hunt, K.; Duesberg, J.
1985-01-01
The primary objective was to obtain atomization and mixing performance data for a variety of typical liquid oxygen/hydrocarbon injector element designs. Such data are required to establish injector design criteria and to provide critical inputs to liquid rocket engine combustor performance and stability analyses, and to computational codes and methods. Deficiencies and problems with the atomization test equipment were identified, and action was initiated to resolve them. Test results of the gas/liquid mixing tests indicated that an assessment of test methods was required. A series of 71 liquid/liquid tests was performed.
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction, but its performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
Subsonic aircraft: Evolution and the matching of size to performance
NASA Technical Reports Server (NTRS)
Loftin, L. K., Jr.
1980-01-01
Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.
Functional Mobility Testing: A Novel Method to Create Suit Design Requirements
NASA Technical Reports Server (NTRS)
England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.
2008-01-01
This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember, through an innovative methodology based on functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.
40 CFR 63.11517 - What are my monitoring requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) of this section. (1) Daily Method 9 testing for welding, Tier 2 or 3. Perform visual determination of... to the requirements of paragraph (d)(1) of this section. (3) Monthly Method 9 testing for welding... Method 22 testing for welding, Tier 2 or 3. If, after two consecutive months of testing, the average of...
KU-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1980-01-01
The preparation of a real-time computer simulation model of the Ku-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
Root, Patsy; Hunt, Margo; Fjeld, Karla; Kundrat, Laurie
2014-01-01
Quality assurance (QA) and quality control (QC) data are required in order to have confidence in the results from analytical tests and the equipment used to produce those results. Some AOAC water methods include specific QA/QC procedures, frequencies, and acceptance criteria, but these are considered to be the minimum controls needed to perform a microbiological method successfully. Some regulatory programs, such as those at Code of Federal Regulations (CFR), Title 40, Part 136.7 for chemistry methods, require additional QA/QC measures beyond those listed in the method, which can also apply to microbiological methods. Essential QA/QC measures include sterility checks, reagent specificity and sensitivity checks, assessment of each analyst's capabilities, analysis of blind check samples, and evaluation of the presence of laboratory contamination and instrument calibration and checks. The details of these procedures, their performance frequency, and expected results are set out in this report as they apply to microbiological methods. The specific regulatory requirements of CFR Title 40 Part 136.7 for the Clean Water Act, the laboratory certification requirements of CFR Title 40 Part 141 for the Safe Drinking Water Act, and the International Organization for Standardization 17025 accreditation requirements under The NELAC Institute are also discussed.
Aerobic conditioning for team sport athletes.
Stone, Nicholas M; Kilding, Andrew E
2009-01-01
Team sport athletes require a high level of aerobic fitness in order to generate and maintain power output during repeated high-intensity efforts and to recover. Research to date suggests that these components can be increased by regularly performing aerobic conditioning. Traditional aerobic conditioning, with minimal changes of direction and no skill component, has been demonstrated to effectively increase aerobic function within a 4- to 10-week period in team sport players. More importantly, traditional aerobic conditioning methods have been shown to increase team sport performance substantially. Many team sports require the upkeep of both aerobic fitness and sport-specific skills during a lengthy competitive season. Classic team sport training has been shown to evoke marginal increases/decreases in aerobic fitness. In recent years, aerobic conditioning methods have been designed to allow adequate intensities to be achieved to induce improvements in aerobic fitness whilst incorporating movement-specific and skill-specific tasks, e.g. small-sided games and dribbling circuits. Such 'sport-specific' conditioning methods have been demonstrated to promote increases in aerobic fitness, though careful consideration of player skill levels, current fitness, player numbers, field dimensions, game rules and availability of player encouragement is required. Whilst different conditioning methods appear equivalent in their ability to improve fitness, whether sport-specific conditioning is superior to other methods at improving actual game performance statistics requires further research.
Survey of existing performance requirements in codes and standards for light-frame construction
G. E. Sherwood
1980-01-01
Present building codes and standards are a combination of specifications and performance criteria. Where specifications prevail, the introduction of new materials or methods can be a long, cumbersome process. To facilitate the introduction of new technology, performance requirements are becoming more prevalent. In some areas, there is a lack of information on which to...
Earth observing system instrument pointing control modeling for polar orbiting platforms
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.
1987-01-01
An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced order platform models with core and instrument pointing control loops added. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., performance criteria, inspection requirements, marking requirements, testing equipment, test procedures and... purchase, installation, and use of the product being standardized. (b) Requirements for Department of... organization to such an extent that it would contain similar requirements and test methods for identical types...
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical action of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. Unexpected positive FTA ratings in the presence of functional deficits suggest that further revision of the survey method is required.
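For context, Lawshe's method referenced above reduces panel judgments to a content validity ratio (CVR) per item; a minimal sketch with a hypothetical panel:

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR: -1 when no panelist rates the item 'essential',
    0 when exactly half do, +1 when all do."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 9 of 10 experts rate a checklist item "essential"
print(content_validity_ratio(9, 10))  # 0.8
```

Items whose CVR falls below the critical value for the panel size are dropped, which is how a pool of 39 candidate items can shrink to a 13-item checklist.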
Cartridge output testing - Methods to overcome closed-bomb shortcomings
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Schimmel, Morry L.
1991-01-01
Although the closed-bomb test has achieved virtually universal acceptance for measuring the output performance of pyrotechnic cartridges, there are serious shortcomings in its ability to quantify the performance of cartridges used as energy sources for pyrotechnic-activated mechanical devices. This paper presents several examples of cartridges (including the NASA Standard Initiator NSI) that successfully met closed-bomb performance requirements, but resulted in functional failures in mechanisms. To resolve these failures, test methods were developed to demonstrate a functional margin, based on comparing energy required to accomplish the function to energy deliverable by the cartridge.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) comprising infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at greater ranges and with increased targeting accuracy. In order to meet these requirements, sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and line-of-sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than image-based reconstruction. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required in the proposed method is much less than in the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
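A heavily hedged sketch of the central idea as the abstract states it: replacing the sharp Heaviside indicator of the classical level set with a smooth cosine-based one, so the forward model becomes differentiable and a Levenberg-Marquardt solver can be applied. The transition-band parameterization below is an illustrative assumption, not necessarily the paper's exact form:

```python
import numpy as np

def heaviside_indicator(phi):
    """Classical sharp level-set indicator: 1 inside (phi > 0), else 0."""
    return (phi > 0).astype(float)

def cosine_indicator(phi, width=0.2):
    """Smooth cosine replacement for the Heaviside step: rises from 0 to 1
    across a band of +/- width and is differentiable, which is what
    permits Levenberg-Marquardt updates of the shape parameters."""
    t = np.clip(phi / width, -1.0, 1.0)
    return 0.5 * (1.0 - np.cos(np.pi * (t + 1.0) / 2.0))

phi = np.linspace(-0.5, 0.5, 9)   # signed distance to the target boundary
print(cosine_indicator(phi).round(2))
```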
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
NASA Technical Reports Server (NTRS)
Linley, L. J.; Luper, A. B.; Dunn, J. H.
1982-01-01
The Bureau of Mines, U.S. Department of the Interior, is reviewing explosion protection methods for use in gassy coal mines. This performance criteria guideline is an evaluation of three explosion protection methods for machines electrically powered at voltages up to 15,000 volts ac. A sufficient amount of basic research has been accomplished to verify that the explosion-proof and pressurized enclosure methods can provide adequate explosion protection with the present state of the art up to 15,000 volts ac. The routine application of the potted enclosure as a stand-alone protection method requires further investigation or development in order to clarify performance criteria and verification certification requirements. An extensive literature search, a series of high voltage tests, and a design evaluation of the three explosion protection methods indicate that explosion-proof, pressurized, and potted enclosures can all be used to enclose up to 15,000 volts ac.
40 CFR 63.1348 - Compliance requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... emissions standards and operating limits by using the test methods and procedures in §§ 63.1349 and 63.7... Emission Standards and Operating Limits § 63.1348 Compliance requirements. (a) Initial Performance Test... with the PM emissions standards by using the test methods and procedures in § 63.1349(b)(1). (2...
Comparison of two methods to determine fan performance curves using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with data from an experimental setup conforming to the AMCA fan performance testing standard.
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
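One concrete subrandom family is the van der Corput/Halton sequence; as a minimal sketch under stated assumptions (an exponentially weighted one-dimensional Nyquist grid, one possible construction rather than the paper's exact schemes), a seed-free schedule can be built like this:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence;
    fully deterministic, so no random seed is involved."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def subrandom_schedule(grid_size, n_points, decay=2.0):
    """Push the subrandom sequence through the inverse CDF of a truncated
    exponential density, then map onto grid indices."""
    u = van_der_corput(8 * n_points)            # oversample, then dedupe
    scale = 1.0 - np.exp(-decay)
    idx = np.floor(-np.log1p(-u * scale) / decay * grid_size).astype(int)
    keep, seen = [], set()
    for i in idx:                               # first occurrences, in order
        if i not in seen:
            seen.add(i)
            keep.append(i)
        if len(keep) == n_points:
            break
    return np.array(sorted(keep))

print(subrandom_schedule(grid_size=128, n_points=32))
```

Because the sequence is deterministic, rerunning the scheduler always yields the same schedule, which is exactly the seed-independence motivating the work.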
NASA Astrophysics Data System (ADS)
Wu, Linqin; Xu, Sheng; Jiang, Dezhi
2015-12-01
Industrial wireless networked control systems have been widely used, and how to evaluate the performance of the wireless network is of great significance. In this paper, considering the shortcomings of the existing performance evaluation methods, a comprehensive multi-index fuzzy analytic hierarchy process (MFAHP) method for network performance evaluation, combining fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes evaluations that are neither comprehensive nor objective. Experiments show that the method can reflect real-world network performance. It provides direct guidance for protocol selection, network cabling, and node placement, and can meet the requirements of different occasions by modifying the underlying parameters.
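For reference, the AHP side of such a hybrid scheme derives index weights from a pairwise comparison matrix and screens them with a consistency ratio; a minimal sketch in which the three network indexes and the judgments are illustrative assumptions:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(pairwise):
    """Principal-eigenvector AHP weights plus the consistency ratio
    (CR < 0.1 is the usual acceptance threshold)."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    vals, vecs = np.linalg.eig(a)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    cr = ((vals[k].real - n) / (n - 1)) / RI[n]
    return w, cr

# Hypothetical judgments over three indexes: delay, packet loss, throughput
m = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(m)
print(np.round(w, 3), round(cr, 3))   # weights ~ [0.65, 0.23, 0.12], CR << 0.1
```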
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended for use in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
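A minimal sketch of the simulation-based power calculation for Sobel's test, deliberately reduced to a single-level mediation model X -> M -> Y with zero direct effect (the multilevel longitudinal model in the article adds within-subject correlation on top of this); effect sizes and n are illustrative:

```python
import numpy as np

def slope_and_se(x, y):
    """OLS slope of y on x (with intercept) and its standard error."""
    xc, yc = x - x.mean(), y - y.mean()
    sxx = np.sum(xc ** 2)
    b = np.sum(xc * yc) / sxx
    resid = yc - b * xc
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / sxx)
    return b, se

def sobel_power(n, a=0.3, b=0.3, n_sim=2000, seed=7):
    """Monte-Carlo power of the two-sided Sobel z-test at alpha = 0.05."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ah, se_a = slope_and_se(x, m)
        bh, se_b = slope_and_se(m, y)
        z = ah * bh / np.sqrt(ah**2 * se_b**2 + bh**2 * se_a**2)
        hits += abs(z) > 1.96
    return hits / n_sim

# Increase n until the simulated power reaches the 0.80 target
print(sobel_power(n=120))
```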
Performance planning for rural planning organizations - final report.
DOT National Transportation Integrated Search
2017-02-01
Recent federal rules place increased emphasis on performance-based management of the multimodal transportation system and require the use of performance-based methods in state, metropolitan, and non-metropolitan transportation planning and progra...
Standardised Benchmarking in the Quest for Orthologs
Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe
2016-01-01
The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
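To give the flavor of such first-cut estimates, a minimal momentum-theory sketch of hover power from gross weight and disk loading (the figure of merit and the example numbers are illustrative assumptions; the report's empirical correlations from wind-tunnel and engine data are more elaborate):

```python
import numpy as np

def hover_power_kw(thrust_n, disk_loading_n_m2, rho=1.225, fom=0.75):
    """Momentum-theory hover power: induced velocity follows from disk
    loading, and a figure of merit approximates real-rotor losses."""
    v_induced = np.sqrt(disk_loading_n_m2 / (2.0 * rho))   # m/s
    ideal_power_w = thrust_n * v_induced
    return ideal_power_w / fom / 1e3                       # kW

# Illustrative case: 4000 kg helicopter, 350 N/m^2 disk loading, sea level
print(round(hover_power_kw(4000 * 9.81, 350.0)))           # ~ 625 kW
```

Lower disk loading drives the induced velocity, and hence the hover power, down, which is consistent with the low-disk-loading optimum noted in related design studies.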
Solar cell and module performance assessment based on indoor calibration methods
NASA Astrophysics Data System (ADS)
Bogus, K.
A combined space/terrestrial solar cell test calibration method that requires five steps and can be performed indoors is described. The test conditions are designed to qualify the cell or module output data under standard illumination and temperature conditions. Measurements are made of the short-circuit current, the open circuit voltage, the maximum power, the efficiency, and the spectral response. Standard sunlight must be replicated in both earth-surface and AM0 conditions; Xe lamps are normally used for the light source, with spectral measurements taken of the light. Cell and module spectral response are assayed by using monochromators and narrow band pass monochromatic filters. Attention is required to define the performance characteristics of modules under partial shadowing. Error sources that may affect the measurements are discussed, as are previous cell performance testing and calibration methods and their effectiveness in comparison with the behaviors of satellite solar power panels.
Functional Performance of Pyrovalves
NASA Technical Reports Server (NTRS)
Bement, Laurence J.
1996-01-01
Following several flight and ground test failures of spacecraft systems using single-shot, 'normally closed' pyrotechnically actuated valves (pyrovalves), a government/industry cooperative program was initiated to assess the functional performance of five qualified designs. The goal of the program was to improve performance-based requirements for the procurement of pyrovalves. Specific objectives included the demonstration of performance test methods, the measurement of 'blowby' (the passage of gases from the pyrotechnic energy source around the activating piston into the valve's fluid path), and the quantification of functional margins for each design. Experiments were conducted in-house at NASA on several units of each of the five valve designs. The test methods used for this program measured the forces and energies required to actuate the valves, as well as the energies and the pressures (where possible) delivered by the pyrotechnic sources. Functional performance ranged widely among the designs. Blowby could not be prevented by o-ring seals; metal-to-metal seals were effective. Functional margin was determined by dividing the energy delivered by the pyrotechnic sources in excess of that required to accomplish the function by the energy required for that function. All but two designs had adequate functional margins with the pyrotechnic cartridges evaluated.
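The functional-margin definition quoted in this abstract reduces to simple arithmetic; a minimal sketch with hypothetical energy values:

```python
def functional_margin(e_delivered_j, e_required_j):
    """Excess deliverable energy divided by the energy the function
    requires, per the definition in the abstract above."""
    return (e_delivered_j - e_required_j) / e_required_j

# Hypothetical cartridge: 40 J delivered against 16 J required -> margin 1.5
print(functional_margin(40.0, 16.0))
```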
Prediction of pump cavitation performance
NASA Technical Reports Server (NTRS)
Moore, R. D.
1974-01-01
A method for predicting pump cavitation performance with various liquids, liquid temperatures, and rotative speeds is presented. Use of the method requires that two sets of test data be available for the pump of interest. Good agreement between predicted and experimental results of cavitation performance was obtained for several pumps operated in liquids which exhibit a wide range of properties. Two cavitation parameters which qualitatively evaluate pump cavitation performance are also presented.
Assessing Façade Visibility in 3D City Models for City Marketing
NASA Astrophysics Data System (ADS)
Albrecht, F.; Moser, J.; Hijazi, I.
2013-08-01
In city marketing, different applications require evaluating the visual impression that displays in the urban environment make on people who visit the city. This research therefore focuses on how visual displays on façades for movie performances are perceived during a cultural event triggered by city marketing. We describe the different visibility analysis methods that are applicable to the analysis of façades. The methods originate from the domains of Geographic Information Science, architecture and computer graphics. A detailed scenario is described in order to perform a requirements analysis for identifying the requirements for visibility information. This visibility information needs to describe the visual perception of displays on façades adequately. The requirements are compared to the visibility information that the visibility methods can provide. A discussion of the comparison summarizes the advantages and disadvantages of existing visibility analysis methods for describing the visibility of façades. The results show that some of the researched approaches can support the requirements for visibility information. But they also show that unsolved workflow integration issues remain before the entire analysis workflow can be supported.
Project FIRES. Volume 1: Program Overview and Summary, Phase 1B
NASA Technical Reports Server (NTRS)
Abeles, F. J.
1980-01-01
Overall performance requirements and evaluation methods for firefighters' protective equipment were established and published as the Protective Ensemble Performance Standards (PEPS). Current firefighters' protective equipment was tested and evaluated against the PEPS requirements, and the preliminary design of a prototype protective ensemble was performed. In Phase 1B, the design of the prototype ensemble was finalized. Prototype ensembles were fabricated and then subjected to a series of qualification tests which were based upon the PEPS requirements. Engineering drawings and purchase specifications were prepared for the new protective ensemble.
Characterizing Task-Based OpenMP Programs
Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats
2015-01-01
Programmers struggle to understand the performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance information in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than with existing tools by using our method to characterize task-based performance. PMID:25860023
Built-In-Test Equipment Requirements Workshop. Workshop Presentation
1981-08-01
quantitatively evaluated in test. (2) It is necessary to develop the statistical methods that should be used for predicting and confirming diagnostic... of different performance levels of BIT peacetime and wartime applications, and the corresponding manpower and other support requirements should be... reports. The scope of the workshop involves the areas of requirements for built-in-test and diagnostics, and the methods of testing to ensure that the
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings, and their causes, of the conventional homodyne time interval analysis (TIA) method are described with respect to its software algorithm and hardware implementation, based on which a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. This simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration labs because of its simplified algorithm and low hardware requirements.
Bockman, Alexander; Fackler, Cameron; Xiang, Ning
2015-04-01
Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the difference between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here, and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and from a theoretical model of the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.
Recommendations for the performance rating of flat plate terrestrial photovoltaic solar panels
NASA Technical Reports Server (NTRS)
Treble, F. C.
1976-01-01
A review of recommendations for standardizing the performance rating of flat plate terrestrial solar panels is given to develop an international standard code of practice for performance rating. The data required to characterize the performance of a solar panel are listed. Other items discussed are: (1) basic measurement procedures; (2) performance measurement in natural sunlight and simulated sunlight; (3) standard solar cells; (4) the normal incidence method; (5) the global method; and (6) the definition of peak power.
40 CFR 98.54 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in paragraphs (b)(1) through (b)(3) of this section. (1) EPA Method 320, Measurement of Vapor Phase...) Direct measurement (such as using flow meters or weigh scales). (2) Existing plant procedures used for accounting purposes. (d) You must conduct all required performance tests according to the methods in § 98.54...
76 FR 9495 - Airworthiness Directives; Air Tractor, Inc. Models AT-802 and AT-802A Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-18
...-18, which requires you to repetitively inspect (using the eddy current method) the two outboard... through 0101 and AT-802A-0092 through 0101: To perform, using the eddy current method, two inspections at... through 0178 and AT-802A-0102 through 0178 to perform using the eddy current method, two inspections at 5...
Principals' Informal Methods for Appraising Poor-Performing Teachers
ERIC Educational Resources Information Center
Yariv, Eliezer
2009-01-01
Teacher appraisal is never an easy task, especially of teachers experiencing difficulties and failures. Nevertheless it is a requirement for good management, in our schools no less than our corporations. Forty elementary school principals in Israel described the informal methods they use to appraise teachers who are performing poorly. Most…
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mineral projects founded on an improved entropy method is proposed. First, a new weight assignment model is established, founded on the compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis achieves the consistency requirement, any difference between the subjective and objective weights is resolved by moderately adjusting their proportions. On this basis, a fuzzy evaluation matrix is constructed for the performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method achieves higher assessment accuracy.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods for computing fuel-optimal, low-thrust orbit transfers with large numbers of burns. Both methods originate in observations made on extremal solutions of transfers with small numbers of burns: there appears to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are of interest because they require only a low-thrust motor; however, we also find that the longer the time allowed for the optimal transfer, the more burns are required to satisfy optimality, which usually increases the difficulty of computation. Both methods use small-burn-number solutions to determine solutions with large numbers of burns. One is a homotopy method that corrects for the problems that arise when a solution requires a new burn or coast arc for optimality. The other simply patches long transfers together from smaller ones; an orbit correction problem is solved to develop this method. The second method may also lead to a good guidance law for transfer orbits with long transfer times.
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High-performance control design for a flexible space structure is challenging because high-fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low-performance control design, which must be tuned on orbit to achieve the required performance. Closed-loop system identification is often required to obtain a multivariable open-loop plant model from closed-loop response data. In order to provide an accurate initial plant model that guarantees convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state-space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state-space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking a model that minimizes the difference between the predicted and actual closed-loop performance.
Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development
Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei
2014-01-01
Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: compatibility with LC/MS (free of detergents, etc.); high protein integrity (minimal level of protein degradation and non-biological PTMs); compatibility with common sample preparation methods such as proteolysis, PTM enrichment, and mass-tag labeling; and lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, for primary use in sample preparation method development and optimization; and pre-digested extracts (peptides), for primary use in instrument validation and performance monitoring.
Standard Test Procedures for Evaluating Various Leak Detection Methods
Learn about protocols that testers could use to demonstrate that an individual release detection equipment type could meet the performance requirements noted in the federal UST requirements for detecting leaks.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
NASA Technical Reports Server (NTRS)
Kania, Michael
1991-01-01
A discussion on coated particle fuel performance from a modular High Temperature Gas Reactor (HTGR) is presented along with experimental results. The following topics are covered: (1) the coated particle fuel concept; (2) the functional requirements; (3) performance limiting mechanisms; (4) fuel performance; and (5) methods/techniques for characterizing performance.
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Wang, Mingchao; Li, Li; Yin, Dali
2017-03-01
Asymmetric reactions often need to be evaluated during the synthesis of chiral compounds. However, traditional evaluation methods require the isolation of the individual enantiomer, which is tedious and time-consuming. Thus, it is desirable to develop simple, practical online detection methods. We developed a method based on high-performance liquid chromatography-electronic circular dichroism (HPLC-ECD) that simultaneously analyzes the material conversion ratio and absolute optical purity of each enantiomer. In particular, only a reverse-phase C18 column instead of a chiral column is required in our method because the ECD measurement provides a g-factor that describes the ratio of each enantiomer in the mixtures. We used our method to analyze the asymmetric hydrosilylation of β-enamino esters, and we discussed the advantage, feasibility, and effectiveness of this new methodology.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F
2016-08-01
Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized with regard to speed and separation efficiency. Extraction by pressurized liquid extraction is automated, and the extracts are analyzed by rapid reversed-phase ultra-high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single run of 33 min, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min of total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders, and two food supplements based on cocoa extract. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Novel TMS coils designed using an inverse boundary element method
NASA Astrophysics Data System (ADS)
Cobos Sánchez, Clemente; María Guerrero Rodriguez, Jose; Quirós Olozábal, Ángel; Blanco-Navarro, David
2017-01-01
In this work, a new method to design TMS coils is presented. It is based on incorporating the concept of the stream function of a quasi-static electric current into a boundary element method. The proposed TMS coil design approach is a powerful technique for producing stimulators of arbitrary shape, and it is remarkably versatile, permitting prototyping against many different performance requirements and constraints. To illustrate the power of this approach, it has been used to design TMS coils wound on rectangular flat, spherical, and hemispherical surfaces, subject to different constraints such as minimum stored magnetic energy or minimum power dissipation. The performance of these coils is also described, and the torque experienced by each stimulator in the presence of a main static magnetic field is derived theoretically, in order to study the prospect of using them to perform TMS and fMRI concurrently. The results show that the described method is an efficient tool for the design of TMS stimulators and can be applied to a wide range of coil geometries and performance requirements.
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Chimczak, Grzegorz; Lemr, Karel
2017-02-01
We describe a direct method for experimental determination of the negativity of an arbitrary two-qubit state with 11 measurements performed on multiple copies of the two-qubit system. Our method is based on the experimentally accessible sequences of singlet projections performed on up to four qubit pairs. In particular, our method permits the application of the Peres-Horodecki separability criterion to an arbitrary two-qubit state. We explicitly demonstrate that measuring entanglement in terms of negativity requires three measurements more than detecting two-qubit entanglement. The reported minimal set of interferometric measurements provides a complete description of bipartite quantum entanglement in terms of two-photon interference. This set is smaller than the set of 15 measurements needed to perform a complete quantum state tomography of an arbitrary two-qubit system. Finally, we demonstrate that the set of nine Makhlin's invariants needed to express the negativity can be measured by performing 13 multicopy projections. We demonstrate both that these invariants are a useful theoretical concept for designing specialized quantum interferometers and that their direct measurement within the framework of linear optics does not require performing complete quantum state tomography.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform a material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximative, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for the different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first derives the desired consistent rawdata sets from the measured inconsistent ones, and then employs the rawdata-based technique to perform the material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and the simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
Survey and Method for Determination of Trajectory Predictor Requirements
NASA Technical Reports Server (NTRS)
Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung
2009-01-01
A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise, results, based on analysis and simulation that characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of the errors associated with key modeling options and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher fidelity concept simulation to obtain a more precise result.
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition and feature extraction. The band-limited multiple Fourier linear combiner is well suited to such band-limited signals because of its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
Functional performance of pyrovalves
NASA Technical Reports Server (NTRS)
Bement, Laurence J.
1996-01-01
Following several flight and ground test failures of spacecraft systems using single-shot, 'normally closed' pyrotechnically actuated valves (pyrovalves), a Government/Industry cooperative program was initiated to assess the functional performance of five qualified designs. The goal of the program was to provide information on the functional performance of pyrovalves so that users can improve procurement requirements. Specific objectives included the demonstration of performance test methods, the measurement of 'blowby' (the passage of gases from the pyrotechnic energy source around the activating piston into the valve's fluid path), and the quantification of functional margins for each design. Experiments were conducted at NASA's Langley Research Center on several units of each of the five valve designs. The test methods used for this program measured the forces and energies required to actuate the valves, as well as the energies and the pressures (where possible) delivered by the pyrotechnic sources. Functional performance ranged widely among the designs. Blowby could not be prevented by o-ring seals; metal-to-metal seals were effective. Functional margin was determined by dividing the energy delivered by the pyrotechnic sources in excess of that required to accomplish the function by the energy required for that function. Two of the five designs had inadequate functional margins with the pyrotechnic cartridges evaluated.
Beam-modulation methods in quantitative and flow visualization holographic interferometry
NASA Technical Reports Server (NTRS)
Decker, A.
1986-01-01
This report discusses heterodyne holographic interferometry and time-average holography with a frequency shifted reference beam. Both methods will be used for the measurement and visualization of internal transonic flows, where the target facility is a flutter cascade. The background and experimental requirements for both methods are reviewed. Measurements using heterodyne holographic interferometry are presented. The performance of the laser required for time-average holography of time-varying transonic flows is discussed.
Beam-modulation methods in quantitative and flow-visualization holographic interferometry
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
1986-01-01
Heterodyne holographic interferometry and time-average holography with a frequency shifted reference beam are discussed. Both methods will be used for the measurement and visualization of internal transonic flows where the target facility is a flutter cascade. The background and experimental requirements for both methods are reviewed. Measurements using heterodyne holographic interferometry are presented. The performance of the laser required for time-average holography of time-varying transonic flows is discussed.
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires a shorter patient-to-image registration time to make the registration procedure more convenient. A critical step toward this aim is performing a fully automatic patient-to-image registration. This study reports the design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers, on the basis of an optical tracking system, for rigid anatomy. The custom fiducial markers are designed to be localized automatically in both patient and image space. The automatic localization registers a point cloud, sampled from the three-dimensional (3D) pedestal model surface of a fiducial marker, to each marker pedestal found in image space. A head phantom was constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experiments demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error over the four configurations is approximately 0.7 mm. The registration performance is independent of the position relative to the tracking system and of patient movement during the operation.
40 CFR 60.52Da - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility... opacity field data sheets; (2) For each performance test conducted using Method 22 of appendix A-4 of this... performance test; (iii) Copies of all visible emission observer opacity field data sheets; and (iv...
48 CFR 970.1100-1 - Performance-based contracting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... methods of accomplishing the work; use measurable (i.e., terms of quality, timeliness, quantity) performance standards and objectives and quality assurance surveillance plans; provide performance incentives... work and other documents used to establish work requirements. (d) Quality assurance surveillance plans...
48 CFR 970.1100-1 - Performance-based contracting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... methods of accomplishing the work; use measurable (i.e., terms of quality, timeliness, quantity) performance standards and objectives and quality assurance surveillance plans; provide performance incentives... work and other documents used to establish work requirements. (d) Quality assurance surveillance plans...
48 CFR 970.1100-1 - Performance-based contracting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... methods of accomplishing the work; use measurable (i.e., terms of quality, timeliness, quantity) performance standards and objectives and quality assurance surveillance plans; provide performance incentives... work and other documents used to establish work requirements. (d) Quality assurance surveillance plans...
Development of Airport Surface Required Navigation Performance (RNP)
NASA Technical Reports Server (NTRS)
Cassell, Rick; Smith, Alex; Hicok, Dan
1999-01-01
The U.S. and international aviation communities have adopted the Required Navigation Performance (RNP) process for defining aircraft performance in the en-route, approach, and landing phases of flight. RNP consists primarily of the following key parameters: accuracy, integrity, continuity, and availability. The processes and analytical techniques employed to define en-route, approach, and landing RNP have been applied to the development of RNP for the airport surface. Several methods were used to validate the proposed RNP requirements: operational and flight demonstration data were analyzed for conformance with the proposed requirements, as were several aircraft flight simulation studies, and the pilot failure risk component was analyzed through several hypothetical scenarios. Additional simulator studies are recommended to better quantify crew reactions to failures, along with further simulator and field testing to validate achieved accuracy performance. This research was performed in support of the NASA Low Visibility Landing and Surface Operations Programs.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers, because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust, and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as the application of such methods in nanotechnology will clearly require powerful, highly parallel systems, this talk discusses techniques and issues for performing these types of computations on parallel systems. We describe the system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization of structural models and assembly sequences using virtual reality techniques; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification, and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before it was manufactured.
7 CFR 1755.910 - RUS specification for outside plant housings and serving area interface systems.
Code of Federal Regulations, 2012 CFR
2012-01-01
... requirements are interrelated to several tests designed to determine the performance aspects of terminals and... environments. Included are the mechanical, electrical, and environmental requirements, desired design features, and test methods for evaluation of the product. (2) The housing and terminal requirements reflect the...
7 CFR 1755.910 - RUS specification for outside plant housings and serving area interface systems.
Code of Federal Regulations, 2013 CFR
2013-01-01
... requirements are interrelated to several tests designed to determine the performance aspects of terminals and... environments. Included are the mechanical, electrical, and environmental requirements, desired design features, and test methods for evaluation of the product. (2) The housing and terminal requirements reflect the...
7 CFR 1755.910 - RUS specification for outside plant housings and serving area interface systems.
Code of Federal Regulations, 2014 CFR
2014-01-01
... requirements are interrelated to several tests designed to determine the performance aspects of terminals and... environments. Included are the mechanical, electrical, and environmental requirements, desired design features, and test methods for evaluation of the product. (2) The housing and terminal requirements reflect the...
7 CFR 1755.910 - RUS specification for outside plant housings and serving area interface systems.
Code of Federal Regulations, 2011 CFR
2011-01-01
... requirements are interrelated to several tests designed to determine the performance aspects of terminals and... environments. Included are the mechanical, electrical, and environmental requirements, desired design features, and test methods for evaluation of the product. (2) The housing and terminal requirements reflect the...
20 CFR 640.3 - Interpretation of Federal law requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... require that a State law include provision for such methods of administration as will reasonable insure... Security Act to require that, in the administration of a State law, there shall be substantial compliance... benefits. Factors reasonably beyond a State's control may cause its performance to drop below the level of...
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require very accurate pointing stability, well beyond anything attempted to date. This paper suggests a control system capable of meeting these requirements, and an optimal control law for the suggested system is specified. Since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate changes in the index of performance, as well as to handle the inequality constraints on the state variables and the terminal conditions. An algorithm is thus obtained via the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.
Hughes, Douglas A.
2006-04-04
A method and system are provided for determining the torque required to launch a vehicle having a hybrid drive-train that includes at least two independently operable prime movers. The method includes the steps of determining the value of at least one control parameter indicative of a vehicle operating condition, determining the torque required to launch the vehicle from the at least one determined control parameter, comparing the torque available from the prime movers to the torque required to launch the vehicle, and controlling operation of the prime movers to launch the vehicle in response to the comparing step. The system of the present invention includes a control unit configured to perform the steps of the method outlined above.
Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.
2015-01-01
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a remote teleoperation case study that assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
ERIC Educational Resources Information Center
Lin, P. L.; Tan, W. H.
2003-01-01
Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that the performance of database systems can be improved because both the number of objects accessed and the number of objects requiring detailed inspection are much smaller than in the previous approach. (AEF)
Furnace and support equipment for space processing. [space manufacturing - Czochralski method
NASA Technical Reports Server (NTRS)
Mazelsky, R.; Duncan, C. S.; Seidensticker, R. G.; Johnson, R. A.; Hopkins, R. H.; Roland, G. W.
1975-01-01
A core facility capable of performing a majority of materials processing experiments is discussed. Experiment classes are described, the needs peculiar to each experiment type are outlined, and projected facility requirements to perform the experiments are treated. Control equipment (automatic control) and variations of the Czochralski method for use in space are discussed.
Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza
2018-01-01
The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods based on the Jaffe and enzymatic principles. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples, according to the CLSI guideline, were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined to be 0.1 mg/dL for both serum methods, and 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar for both serum methods at 0.05 mg/dL, and the enzymatic urine method had a lower LOQ than the Jaffe urine method, at 0.5 and 2 mg/dL, respectively. Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision of both methods was within desirable levels. High correlations between the two methods were found in both serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use; however, the enzymatic method performed better at low creatinine levels. © 2017 Wiley Periodicals, Inc.
Statistical and Machine Learning forecasting methods: Concerns and ways forward
Makridakis, Spyros; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech
2011-01-01
Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires aiding sensors or prior knowledge of motion characteristics to remove the position drift that results from integrating acceleration or velocity. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors, using prior knowledge of the motion but no aiding sensors. In this paper, a new method is proposed that employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge the proposed method requires is the approximate frequency band of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate the proposed method and compare it with the analytical integration method, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
Johnston, Patrick A; Brown, Robert C
2014-08-13
A rapid method for the quantitation of total sugars in pyrolysis liquids using high-performance liquid chromatography (HPLC) was developed. The method avoids the tedious and time-consuming sample preparation required by current analytical methods. It is possible to directly analyze hydrolyzed pyrolysis liquids, bypassing the neutralization step usually required in determination of total sugars. A comparison with traditional methods was used to determine the validity of the results. The calibration curve coefficient of determination on all standard compounds was >0.999 using a refractive index detector. The relative standard deviation for the new method was 1.13%. The spiked sugar recoveries on the pyrolysis liquid samples were between 104 and 105%. The research demonstrates that it is possible to obtain excellent accuracy and efficiency using HPLC to quantitate glucose after acid hydrolysis of polymeric and oligomeric sugars found in fast pyrolysis bio-oils without neutralization.
Economic method for helical gear flank surface characterisation
NASA Astrophysics Data System (ADS)
Koulin, G.; Reavie, T.; Frazer, R. C.; Shaw, B. A.
2018-03-01
Typically, the quality of a gear pair is assessed using simplified geometric tolerances, which do not always correlate with functional performance. Identifying and quantifying functional, performance-based parameters requires further development of the gear measurement approach. A methodology for interpolating the full active helical gear flank surface from sparse line measurements is presented. The method seeks to identify the minimum number of line measurements required to characterize an active gear flank sufficiently; in the form-ground gear example presented, one helix and three profile line measurements were considered acceptable. The resulting surfaces can be used to simulate the meshing engagement of a gear pair and therefore provide insight into functional, performance-based parameters, so that quality can be assessed on the basis of predicted performance in the context of the application.
Roles and methods of performance evaluation of hospital academic leadership.
Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua
2016-01-01
The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference.
Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces
Wang, Chenglong; Xu, Fei; Hsu, Ming-Chen; Krishnamurthy, Adarsh
2017-01-01
Computational fluid dynamics (CFD) simulations of flow over complex objects have traditionally been performed using fluid-domain meshes that conform to the shape of the object. However, creating shape-conforming meshes for complicated geometries like automobiles requires extensive geometry preprocessing. This process is usually tedious and requires modifying the geometry, including specialized operations such as defeaturing and filling of small gaps. Hsu et al. (2016) developed a novel immersogeometric fluid-flow method that does not require the generation of a boundary-fitted mesh for the fluid domain. However, their method used the NURBS parameterization of the surfaces to generate the surface quadrature points that enforce the boundary conditions, which required the B-rep model to be converted completely to NURBS before analysis could be performed. This conversion usually leads to poorly parameterized NURBS surfaces and can produce poorly trimmed or missing surface features. In addition, converting simple geometries such as cylinders to NURBS imposes a performance penalty, since these geometries must then be treated as rational splines. As a result, the geometry has to be inspected again after conversion to ensure analysis compatibility, which can increase the computational cost. In this work, we have extended the immersogeometric method to generate surface quadrature points directly from analytic surfaces. We have developed quadrature rules for all four kinds of analytic surfaces: planes, cones, spheres, and toroids. We have also developed methods for performing adaptive quadrature on trimmed analytic surfaces. Since analytic surfaces are frequently used in constructing solid models, this approach also generates quadrature points on real-world geometries faster than using only NURBS surfaces. To assess the accuracy of the proposed method, we perform simulations of a benchmark problem of flow over a torpedo shape made of analytic surfaces and compare them to immersogeometric simulations of the same model with NURBS surfaces. We also compare the results of our immersogeometric method with those obtained using boundary-fitted CFD of a tessellated torpedo shape; quantities of interest such as the drag coefficient are in good agreement. Finally, we demonstrate the effectiveness of our immersogeometric method for high-fidelity industrial-scale simulations by performing an aerodynamic analysis of a truck whose surfaces are largely analytic. Using analytic surfaces instead of NURBS avoids unnecessary surface-type conversion and significantly reduces model-preprocessing time, while providing the same accuracy for the aerodynamic quantities of interest. PMID:29051678
Review of fire test methods and incident data for portable electric cables in underground coal mines
NASA Astrophysics Data System (ADS)
Braun, E.
1981-06-01
Electrically powered underground coal mining machinery is connected to a load center or distribution box by electric cables. The connecting cables used on mobile machines are required to meet the fire performance requirements defined in the Code of Federal Regulations. This report reviews the Mine Safety and Health Administration's (MSHA) current test method and compares it with British practice. Incident data for fires caused by trailing cable failures and splice failures were also reviewed. The MSHA test method was found to be more severe than the British method, but neither evaluates grouped-cable fire performance. The incident data indicate that the grouped configuration of cables on a reel has accounted for a majority of such fires since 1970.
NASA Astrophysics Data System (ADS)
Wang, Xi; Chen, Shouhui; Zheng, Tianyong; Ning, Xiangchun; Dai, Yifei
2018-03-01
Filament yarn spreading techniques for electronic fiberglass fabric have been developed in recent years to meet the requirements of the electronics industry. Copper clad laminate (CCL) requires that the warp and weft yarns of the fabric be spread apart and formed flat. Spreading the filament yarns improves the penetration of resin into the fabric, as well as the peeling strength of CCL and the drilling performance of printed circuit boards (PCB). This paper presents filament yarn spreading techniques for electronic fiberglass fabric from several aspects, such as methods and functions, together with methods for assessing their effects.
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology for creating a performance-related remuneration system in the healthcare sector that meets requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking, and a posteriori evaluation is proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of the performance criteria, and data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed an eight-step method for selecting healthcare-specific criteria, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment needs practical approbation; we plan to develop semi-automated tools for monitoring institutional and personal performance indicators, with a final step of approbation of the methodology in a healthcare facility.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used for the efficient optimal design of multi-degree-of-freedom, multi-design-element, nonlinear, transient systems. A limiting performance analysis, which requires linear programming for a kinematically linear system, is presented. The system is then selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The method's efficiency results from avoiding the repetitive system analyses that accompany other numerical optimization methods.
LANDSAT-D conical scanner evaluation plan
NASA Technical Reports Server (NTRS)
Bilanow, S.; Chen, L. C. (Principal Investigator)
1982-01-01
The planned activities involved in the inflight sensor calibration and performance evaluation are discussed and the supporting software requirements are specified. The possible sensor error sources and their effects on sensor measurements are summarized. The methods by which the inflight sensor performance will be analyzed and the sensor modeling parameters will be calibrated are presented. In addition, a brief discussion on the data requirement for the study is provided.
1990-02-01
inspections are performed before each formal review of each software life cycle phase. * Required software audits are performed. * The software is acceptable... Audits: Software audits are performed by SQA consistent with the general audit rules and an audit report is prepared. Software Quality Inspection (SQI)...DSD Software Development Method. Definition of acronyms: MACH - Methode d'Analyse et de Conception Hierarchisee
40 CFR 60.675 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.675... Mineral Processing Plants § 60.675 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in...
40 CFR 60.503 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Test methods and procedures. 60.503... Terminals § 60.503 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of...
40 CFR 60.503 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.503... Terminals § 60.503 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of...
40 CFR 60.503 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Test methods and procedures. 60.503... Terminals § 60.503 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of...
40 CFR 60.503 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Test methods and procedures. 60.503... Terminals § 60.503 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of...
40 CFR 60.503 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.503... Terminals § 60.503 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of...
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All such methods, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed, and will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
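A common way to realize the stabilization described above is to carry the accumulating matrix product in a factored form Q·diag(d)·T, refreshing the factorization with a column-pivoted QR after every multiplication so that widely separated scales stay isolated in the diagonal. The NumPy/SciPy sketch below illustrates the idea only; it is not the authors' AFMC implementation, and the function name is ours.

```python
import numpy as np
from scipy.linalg import qr

def stabilized_chain_product(matrices):
    """Accumulate the product of a chain of square matrices as Q @ diag(d) @ T.

    Keeping the widely varying scales in d avoids the overflow/underflow that
    plagues a naive product at low temperatures or in large model spaces.
    Illustrative sketch only, not the authors' AFMC code.
    """
    n = matrices[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for M in matrices:
        A = (M @ Q) * d                      # fold current scales into the new factor
        Q, R, piv = qr(A, pivoting=True)     # column-pivoted QR re-separates scales
        d = np.abs(np.diag(R))               # dominant scales live on the diagonal
        T = (R / d[:, None]) @ T[piv, :]     # well-conditioned triangular remainder
    return Q, d, T
```

Observables are then evaluated from the factored form rather than from the explicit, possibly overflowing, product.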
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
GPS navigation algorithms for Autonomous Airborne Refueling of Unmanned Air Vehicles
NASA Astrophysics Data System (ADS)
Khanafseh, Samer Mahmoud
Unmanned Air Vehicles (UAVs) have recently generated great interest because of their potential to perform hazardous missions without risking loss of life. If autonomous airborne refueling is possible for UAVs, mission range and endurance will be greatly enhanced. However, concerns about UAV-tanker proximity, dynamic mobility and safety demand that the relative navigation system meet stringent requirements on accuracy, integrity, and continuity. In response, this research focuses on developing high-performance GPS-based navigation architectures for Autonomous Airborne Refueling (AAR) of UAVs. The AAR mission is unique because of the potentially severe sky blockage introduced by the tanker. To address this issue, a high-fidelity dynamic sky blockage model was developed and experimentally validated. In addition, robust carrier phase differential GPS navigation algorithms were derived, including a new method for high-integrity reacquisition of carrier cycle ambiguities for recently-blocked satellites. In order to evaluate navigation performance, worldwide availability and sensitivity covariance analyses were conducted. The new navigation algorithms were shown to be sufficient for turn-free scenarios, but improvement in performance was necessary to meet the difficult requirements for a general refueling mission with banked turns. Therefore, several innovative methods were pursued to enhance navigation performance. First, a new theoretical approach was developed to quantify the position-domain integrity risk in cycle ambiguity resolution problems. A mechanism to implement this method with partially-fixed cycle ambiguity vectors was derived, and it was used to define tight upper bounds on AAR navigation integrity risk. A second method, where a new algorithm for optimal fusion of measurements from multiple antennas was developed, was used to improve satellite coverage in poor visibility environments such as in AAR. Finally, methods for using data-link extracted measurements as an additional inter-vehicle ranging measurement were also introduced. The algorithms and methods developed in this work are generally applicable to realize high-performance GPS-based navigation in partially obstructed environments. Navigation performance for AAR was quantified through covariance analysis, and it was shown that the stringent navigation requirements for this application are achievable. Finally, a real-time implementation of the algorithms was developed and successfully validated in autopiloted flight tests.
Gas Turbine Characteristics for a Large Civil Tilt-Rotor (LCTR)
NASA Technical Reports Server (NTRS)
Snyder, Christopher A.; Thurman, Douglas R.
2010-01-01
In support of the Fundamental Aeronautics Program, Subsonic Rotary Wing Project, an engine system study has been undertaken to help define and understand some of the major gas turbine engine parameters required to meet performance and weight requirements as defined by earlier vehicle system studies. These previous vehicle studies will be reviewed to help define gas turbine performance goals. Assumptions and analysis methods used will be described. Performance and weight estimates for a few conceptual gas turbine engines meeting these requirements will be given and discussed. Estimated performance for these conceptual engines over a wide speed variation (down to 50 percent power turbine rpm at high torque) will be presented. Finally, areas needing further effort will be suggested and discussed.
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume absolute value of the mean difference between the method and the continuous... activities (including, as applicable, calibration checks and required zero and span adjustments). Any such...
Compilation of Pilot Cognitive Ability Norms
2011-12-01
2.1.1 Change in Performance Method. The first method is a pretest, posttest paradigm. It is the most reliable but requires prior, premorbid... elements of the person's own performance to draw conclusions regarding cognitive change. A common approach uses the effects of aging on various types of... Percentile Equivalence for IQ Scores on the MAB-II... Percentile Equivalence for Verbal Subtest
Preliminary sizing and performance of aircraft
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1985-01-01
The basic processes of a program that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. Input requirements are defined and output listings explained. Results obtained by applying the method to several types of aircraft are discussed.
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating that simple modifications can dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
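As an illustration of the kind of kernel involved, the lagged structure functions used in SR analysis vectorize naturally, replacing per-sample loops with one array difference per lag. This NumPy sketch is ours, not the authors' code:

```python
import numpy as np

def structure_functions(x, lags, orders=(2, 3, 5)):
    """S^n(r) = mean((x[t] - x[t-r])**n) for each lag r (in samples) and order n.

    One vectorized difference per lag replaces an explicit loop over samples,
    the kind of algebraic simplification that speeds up SR processing.
    """
    out = {}
    for r in lags:
        d = x[r:] - x[:-r]                     # all lagged differences at once
        out[r] = {n: np.mean(d ** n) for n in orders}
    return out

# Example: one hour of synthetic 10 Hz data, lags of 0.1-1.0 s
x = np.random.randn(36000)
S = structure_functions(x, lags=range(1, 11))
```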
The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)
NASA Technical Reports Server (NTRS)
2005-01-01
Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3–25.8 mg m^-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty requirement for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate a pigment is validated, and the measurements that must be made during sample analysis to confirm a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establish a) the overall performance capability of the method, and b) whether or not the capability is consistent with the required accuracy objectives.
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and Objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms from clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (least performing), with most algorithms demonstrating improved repeatability as tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best-performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the least performer were included. The best-performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition comprising the best-performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
Assessment and Verification of SLS Block 1-B Exploration Upper Stage and Stage Disposal Performance
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, T. Emerson; Anzalone, Evan J.
2018-01-01
Delta-v allocation to correct for insertion errors caused by state uncertainty is one of the key performance requirements imposed on the SLS Navigation System. Additionally, SLS mission requirements include the need for the Exploration Upper Stage (EUS) to be disposed of successfully. To assess these requirements, the SLS navigation team has developed and implemented a series of analysis methods. Here the authors detail the Delta-Delta-V approach to assessing delta-v allocation as well as the EUS disposal optimization approach.
Does unbelted safety requirement affect protection for belted occupants?
Hu, Jingwen; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A C; Narayanaswamy, Prabha; Reed, Matthew P; Andreen, Margaret; Neal, Mark; Lin, Chin-Hsu
2017-05-29
Federal regulations in the United States require vehicles to meet occupant performance requirements with unbelted test dummies. Removing the test requirements with unbelted occupants might encourage the deployment of seat belt interlocks and allow restraint optimization to focus on belted occupants. The objective of this study is to compare the performance of restraint systems optimized for belted-only occupants with those optimized for both belted and unbelted occupants using computer simulations and field crash data analyses. In this study, 2 validated finite element (FE) vehicle/occupant models (a midsize sedan and a midsize SUV) were selected. Restraint design optimizations under standardized crash conditions (U.S.-NCAP and FMVSS 208) with and without unbelted requirements were conducted using Hybrid III (HIII) small female and midsize male anthropomorphic test devices (ATDs) in both vehicles on both driver and right front passenger positions. A total of 10 to 12 design parameters were varied in each optimization using a combination of response surface method (RSM) and genetic algorithm. To evaluate the field performance of restraints optimized with and without unbelted requirements, 55 frontal crash conditions covering a greater variety of crash types than those in the standardized crashes were selected. A total of 1,760 FE simulations were conducted for the field performance evaluation. Frontal crashes in the NASS-CDS database from 2002 to 2012 were used to develop injury risk curves and to provide the baseline performance of current restraint system and estimate the injury risk change by removing the unbelted requirement. Unbelted requirements do not affect the optimal seat belt and airbag design parameters in 3 out of 4 vehicle/occupant position conditions, except for the SUV passenger side. Overall, compared to the optimal designs with unbelted requirements, optimal designs without unbelted requirements generated the same or lower total injury risks for belted occupants depending on statistical methods used for the analysis, but they could also increase the total injury risks for unbelted occupants. This study demonstrated potential for reducing injury risks to belted occupants if the unbelted requirements are eliminated. Further investigations are necessary to confirm these findings.
Novel operation and control of an electric vehicle aluminum/air battery system
NASA Astrophysics Data System (ADS)
Zhang, Xin; Yang, Shao Hua; Knickle, Harold
The objective of this paper is to create a method to size battery subsystems for an electric vehicle so as to optimize battery performance. Optimization of performance includes minimizing corrosion by operating at a constant current density. These subsystems will allow for easy mechanical recharging. A proper choice of battery subsystem will allow for longer battery life, greater range, and better performance. For longer life, the current density and reaction rate should be nearly constant. The control method requires controlling power by controlling electrolyte flow in the battery submodules. As power demand increases, more submodules come online and more electrolyte is needed. Solenoid valves open in a sequence to provide the required power. Corrosion is limited because there is no electrolyte in the modules not being used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Keates, Steven
This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).
Development of an Active Flow Control Technique for an Airplane High-Lift Configuration
NASA Technical Reports Server (NTRS)
Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Hartwich, Peter M.; Khodadoust, Abdi
2017-01-01
This study focuses on Active Flow Control methods used in conjunction with airplane high-lift systems. The project is motivated by the simplified high-lift system, which offers enhanced airplane performance compared to conventional high-lift systems. Computational simulations are used to guide the implementation of preferred flow control methods, which require a fluidic supply. It is first demonstrated that flow control applied to a high-lift configuration that consists of simple hinge flaps is capable of attaining the performance of the conventional high-lift counterpart. A set of flow control techniques has been subsequently considered to identify promising candidates, where the central requirement is that the mass flow for actuation has to be within available resources onboard. The flow control methods are based on constant blowing, fluidic oscillators, and traverse actuation. The simulations indicate that the traverse actuation offers a substantial reduction in required mass flow, and it is especially effective when the frequency of actuation is consistent with the characteristic time scale of the flow.
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. The design is a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier (1-KM) and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-KM provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing-risk models.
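For concreteness, the 1-KM quantity discussed above (net failure, one minus the Kaplan-Meier survival estimate) can be computed directly from right-censored registry data; the NumPy sketch below uses synthetic data and is not the authors' simulation code.

```python
import numpy as np

def one_minus_km(time, event, horizon):
    """Net failure 1 - KM(horizon) from right-censored follow-up data.

    time  : follow-up in years (float array)
    event : 1 = revision observed, 0 = censored
    """
    surv = 1.0
    for t in np.sort(np.unique(time[event == 1])):
        if t > horizon:
            break
        at_risk = np.sum(time >= t)
        failures = np.sum((time == t) & (event == 1))
        surv *= 1.0 - failures / at_risk
    return 1.0 - surv

# Toy register of ~3200 procedures (the scale the abstract mentions),
# compared informally against a hypothetical 10-year benchmark of 5%
rng = np.random.default_rng(1)
time = rng.exponential(80.0, 3200).clip(max=10.0)
event = (time < 10.0).astype(int)
print(one_minus_km(time, event, horizon=10.0))
```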
40 CFR 60.644 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.644... Gas Processing: SO2 Emissions § 60.644 Test methods and procedures. (a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test...
40 CFR 60.335 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Test methods and procedures. 60.335... Turbines § 60.335 Test methods and procedures. (a) The owner or operator shall conduct the performance tests required in § 60.8, using either (1) EPA Method 20, (2) ASTM D6522-00 (incorporated by reference...
40 CFR 60.335 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.335... Turbines § 60.335 Test methods and procedures. (a) The owner or operator shall conduct the performance tests required in § 60.8, using either (1) EPA Method 20, (2) ASTM D6522-00 (incorporated by reference...
40 CFR 60.335 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Test methods and procedures. 60.335... Turbines § 60.335 Test methods and procedures. (a) The owner or operator shall conduct the performance tests required in § 60.8, using either (1) EPA Method 20, (2) ASTM D6522-00 (incorporated by reference...
40 CFR 60.335 - Test methods and procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Test methods and procedures. 60.335... Turbines § 60.335 Test methods and procedures. (a) The owner or operator shall conduct the performance tests required in § 60.8, using either (1) EPA Method 20, (2) ASTM D6522-00 (incorporated by reference...
First-principles simulations of heat transport
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
2017-11-01
Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close-to-equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.
A class of hybrid finite element methods for electromagnetics: A review
NASA Technical Reports Server (NTRS)
Volakis, J. L.; Chatterjee, A.; Gong, J.
1993-01-01
Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near-neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.
NASA Technical Reports Server (NTRS)
Kohlman, D. L.; Albright, A. E.
1983-01-01
An analytical method was developed for predicting the minimum flow rates required to provide anti-ice protection with a porous leading edge fluid ice protection system. The predicted flow rates agree, with an average error of less than 10 percent, with six experimentally determined flow rates from tests in the NASA Icing Research Tunnel on a general aviation wing section.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Benefits and assessment of annual budget requirements for pavement preservation.
DOT National Transportation Integrated Search
2012-01-01
This research identifies methods and best practices that can be used by the Indiana Department of Transportation (INDOT) in performing various strategies for pavement preservation. It also identifies various methods of calculating the benefits of ...
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
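For context, the conventional baselines named above can be assembled in a few lines; the scikit-learn sketch below uses a deliberately tiny, made-up training sample (the regime the paper targets), not the authors' spam corpus or their cognitively inspired model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Tiny, biased training sample (hypothetical): 1 = spam, 0 = ham
train = ["win cash now", "cheap meds online", "meeting at noon", "lunch tomorrow?"]
y = [1, 1, 0, 0]
test = ["win a cheap prize now", "see you at the meeting"]

vec = CountVectorizer().fit(train)           # bag-of-words features
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    clf.fit(vec.transform(train), y)
    print(type(clf).__name__, clf.predict(vec.transform(test)))
```

With so few examples, such baselines are fragile, which is the gap the paper's bias-informed models aim to close.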
Crew interface with a telerobotic control station
NASA Technical Reports Server (NTRS)
Mok, Eva
1987-01-01
A method for apportioning crew-telerobot tasks has been derived to facilitate the design of a crew-friendly telerobot control station. To identify the most appropriate state-of-the-art hardware for the control station, task apportionment must first be conducted to determine whether an astronaut or a telerobot is best suited to execute the task and which displays and controls are required for monitoring and performance. The basic steps that comprise the task analysis process are: (1) identify space station tasks; (2) define tasks; (3) define task performance criteria and perform task apportionment; (4) verify task apportionment; (5) generate control station requirements; (6) develop design concepts to meet requirements; and (7) test and verify design concepts.
Draft Plan to Develop Non-Intrusive Load Monitoring Test Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.
2015-09-29
This document presents a Draft Plan for developing a common test protocol that can be used to evaluate the performance of Non-Intrusive Load Monitoring (NILM). Development of the test protocol will focus on providing a consistent method to quantify and compare the performance characteristics of NILM products. Elements of the protocol include specifications for the appliances to be used, metrics, instrumentation, and a procedure to simulate appliance behavior during tests. In addition, three priority use cases for NILM will be identified and their performance requirements will be specified.
ERIC Educational Resources Information Center
Bassi, Laurie J.; And Others
1996-01-01
Trends shaping the workplace are increased skill requirements; more educated, diverse work force; continued corporate restructuring; change in size and composition of training departments; instructional technology advances; new training delivery methods; focus on performance improvement; integrated high-performance work systems; companies becoming…
Code of Federal Regulations, 2010 CFR
2010-07-01
...—Requirements for Continuous Emission Monitoring Systems (CEMS) For the following pollutants Use the following span values for CEMS Use the following performance specifications in appendix B of this part for your CEMS If needed to meet minimum data requirements, use the following alternate methods in appendix A of...
Code of Federal Regulations, 2011 CFR
2011-07-01
...—Requirements for Continuous Emission Monitoring Systems (CEMS) For the following pollutants Use the following span values for CEMS Use the following performance specifications in appendix B of this part for your CEMS If needed to meet minimum data requirements, use the following alternate methods in appendix A of...
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving camera control to either automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either by keyboard or by head tracking, using two different sets of head gestures called "head motion" and "head flicking" for turning camera motion on/off. Our results show that head motion control provided performance comparable to using a keyboard, while head flicking was significantly worse. In addition, the sequence in which the three control methods were used is highly significant. It appears that using the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected confirms that the worst-performing method was disliked by participants.
Sullivan, Darryl
2016-01-01
Infant formula is one of the most highly regulated products in the world. To comply with global regulations and to ensure the products are manufactured within product specifications, accurate analytical testing is required. Most of the AOAC INTERNATIONAL legacy test methods for infant formula were developed and validated in the 1980s and 1990s. Although these methods performed very well for many years, infant formulas have been updated, and today's products contain many new and novel ingredients. There were a number of cases in which the legacy AOAC methods began to result in problems with the analysis of modern infant formulas, and the use of these methods caused some disputes with regulatory agencies. In 2010, AOAC reached an agreement with the International Formula Council, which has since changed its name to the Infant Nutrition Council of America, regarding a project to modernize these AOAC infant-formula test methods. This agreement led to the development of Standard Method Performance Requirements (SMPRs®) for 28 nutrients. After SMPR approval, methods were collected, evaluated, validated, and approved through the AOAC Official Methods(SM) process. Forty-seven methods have been approved as AOAC First Action Methods, and eight have been approved as Final Action.
USDA-ARS?s Scientific Manuscript database
Data from modern soil water contents probes can be used for data assimilation in soil water flow modeling, i.e. continual correction of the flow model performance based on observations. The ensemble Kalman filter appears to be an appropriate method for that. The method requires estimates of the unce...
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystem modules, module integration, special test requirements, and reference data formats are also described.
Spatial Statistics for Tumor Cell Counting and Classification
NASA Astrophysics Data System (ADS)
Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas
To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time-consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper is shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell counting performance of the present method is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.
Requirement Assurance: A Verification Process
NASA Technical Reports Server (NTRS)
Alexander, Michael G.
2011-01-01
Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "Did the product meet the stated specification, performance, or design documentation?" In order to ensure the system was built correctly, the practicing systems engineer must verify each product requirement using the verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts," which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the systems engineer in NPR 7123.1A, NASA Systems Engineering Processes and Requirements, with regard to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method, with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.
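The mode-pruning step can be sketched generically: keep only the smallest set of intra prediction modes whose estimated probability mass reaches a threshold, and run rate-distortion optimization on that subset. The probabilities below are placeholders, not the paper's belief-propagation estimates.

```python
import numpy as np

def prune_modes(mode_probs, mass=0.9, max_modes=4):
    """Smallest set of modes reaching `mass` total probability, capped at
    `max_modes`; only these go to the costly rate-distortion search."""
    order = np.argsort(mode_probs)[::-1]     # most probable first
    kept, total = [], 0.0
    for m in order[:max_modes]:
        kept.append(int(m))
        total += mode_probs[m]
        if total >= mass:
            break
    return kept

# Nine 4x4 intra modes with hypothetical probability estimates
p = np.array([0.35, 0.22, 0.18, 0.08, 0.05, 0.04, 0.04, 0.02, 0.02])
print(prune_modes(p))                        # -> [0, 1, 2, 3]
```

Tightening `mass` or `max_modes` trades compression performance for speed, which is how a method of this kind gives direct control over complexity.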
DOT National Transportation Integrated Search
2012-03-31
This report evaluates the performance of Continuous Risk Profile (CRP) compared with the Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network screening methods all require the same inputs: traffic collision data and Sa...
Code of Federal Regulations, 2010 CFR
2010-01-01
... principles of the teaching-learning process; (ii) Teaching methods and procedures; and (iii) The instructor... certificate holder's policies and procedures. (3) The applicable methods, procedures, and techniques for... approved methods, procedures, and limitations for performing the required normal, abnormal, and emergency...
Code of Federal Regulations, 2014 CFR
2014-01-01
... principles of the teaching-learning process; (ii) Teaching methods and procedures; and (iii) The instructor... certificate holder's policies and procedures. (3) The applicable methods, procedures, and techniques for... approved methods, procedures, and limitations for performing the required normal, abnormal, and emergency...
Code of Federal Regulations, 2012 CFR
2012-01-01
... principles of the teaching-learning process; (ii) Teaching methods and procedures; and (iii) The instructor... certificate holder's policies and procedures. (3) The applicable methods, procedures, and techniques for... approved methods, procedures, and limitations for performing the required normal, abnormal, and emergency...
Code of Federal Regulations, 2013 CFR
2013-01-01
... principles of the teaching-learning process; (ii) Teaching methods and procedures; and (iii) The instructor... certificate holder's policies and procedures. (3) The applicable methods, procedures, and techniques for... approved methods, procedures, and limitations for performing the required normal, abnormal, and emergency...
Code of Federal Regulations, 2011 CFR
2011-01-01
... principles of the teaching-learning process; (ii) Teaching methods and procedures; and (iii) The instructor... certificate holder's policies and procedures. (3) The applicable methods, procedures, and techniques for... approved methods, procedures, and limitations for performing the required normal, abnormal, and emergency...
Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.
Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi
2018-06-01
A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image into a probabilistic map indicating where virus particles exist in the image. The proposed approach automatically and simultaneously learns both the discriminative features and the classifier for virus particle detection by machine learning, in contrast to existing methods based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting virus particles was demonstrated. Since our method is based on supervised learning, requiring both input images and their corresponding annotations, it is primarily suited to detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.
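A minimal fully convolutional sketch of the image-to-probability-map idea follows (PyTorch); the layer sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ParticleMapNet(nn.Module):
    """Maps a grayscale TEM tile to a same-size per-pixel probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),     # per-pixel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))        # P(particle) at each pixel

model = ParticleMapNet()
tile = torch.randn(1, 1, 256, 256)               # one synthetic TEM tile
prob_map = model(tile)                           # shape (1, 1, 256, 256)
```

Training such a network against annotated particle masks (for example, with binary cross-entropy) yields the probabilistic map described in the abstract; thresholding it localizes candidate particles without handcrafted postprocessing.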
Analysis of high-throughput biological data using their rank values.
Dembélé, Doulaye
2018-01-01
High-throughput biological technologies are routinely used to generate gene expression profiling or cytogenetics data. To achieve high performance, methods available in the literature have become more specialized and often require substantial computational resources. Here, we propose a new versatile method based on the data-ordering rank values. We use linear algebra and the Perron-Frobenius theorem, and we extend a method presented earlier for detecting differentially expressed genes to the detection of recurrent copy number aberrations. A result derived from the proposed method is a one-sample Student's t-test based on rank values. The proposed method is, to our knowledge, the only one that applies to both gene expression profiling and cytogenetics data sets. This new method is fast, deterministic, and requires a low computational load. Probabilities are associated with genes to allow a statistically significant subset selection in the data set. Stability scores are also introduced as quality parameters. The performance and comparative analyses were carried out using real data sets. The proposed method can be accessed through an R package available from the CRAN (Comprehensive R Archive Network) website: https://cran.r-project.org/web/packages/fcros .
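The flavor of the rank-value statistic can be illustrated as follows; this NumPy/SciPy sketch is a simplification under our own assumptions (log-scale data, all control/test sample pairs), not the exact fcros algorithm.

```python
import numpy as np
from scipy.stats import rankdata, ttest_1samp

rng = np.random.default_rng(0)
ctrl = rng.normal(size=(1000, 5))       # 1000 genes x 5 control samples (log scale)
test = rng.normal(size=(1000, 5))
test[:50] += 2.0                        # 50 genes up-regulated (synthetic truth)

# Rank the fold changes of every control/test sample pair, scaled to (0, 1]
R = np.stack([rankdata(test[:, j] - ctrl[:, i]) / ctrl.shape[0]
              for i in range(ctrl.shape[1]) for j in range(test.shape[1])], axis=1)

# Genes whose rank values sit consistently away from the neutral 0.5 are
# candidates; a one-sample t-test per gene against 0.5 scores them
t, p = ttest_1samp(R, popmean=0.5, axis=1)
print(int((p < 1e-6).sum()), "genes flagged")
```

Because only ranks enter the statistic, the computation is light and largely insensitive to the scale of the raw measurements.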
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
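The two analyses can be reproduced in a few lines with modern tools; the sketch below assumes statsmodels (with its ConditionalLogit class, available in recent versions) and synthetic 1:1 matched data, and is of course not the original Pascal program.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_pairs = 200
g = np.repeat(np.arange(n_pairs), 2)          # matched case-control pair ids
y = np.tile([1, 0], n_pairs)                  # one case, one control per pair
x = rng.normal(size=2 * n_pairs) + 0.5 * y    # exposure associated with case status

# Unconditional (unmatched) maximum likelihood
uncond = sm.Logit(y, sm.add_constant(x)).fit(disp=False)

# Conditional (matched) maximum likelihood, conditioning on the pairs
cond = ConditionalLogit(y, x[:, None], groups=g).fit()

print(np.exp(uncond.params[1]), np.exp(cond.params[0]))   # odds ratios
```

Comparing the two odds ratios side by side is exactly the check the abstract describes for deciding whether the unmatched analysis is an acceptable summary.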
Multidimensional NMR inversion without Kronecker products: Multilinear inversion
NASA Astrophysics Data System (ADS)
Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos
2016-08-01
Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
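The memory saving comes from never forming the Kronecker operator: for separable kernels, the identity (K2 ⊗ K1) vec(X) = vec(K1 X K2ᵀ) lets the forward model be applied one dimension at a time with small matrix products. A quick NumPy check of the identity, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
K1 = rng.normal(size=(30, 100))     # kernel along dimension 1 (e.g., relaxation)
K2 = rng.normal(size=(25, 80))      # kernel along dimension 2 (e.g., diffusion)
X = rng.normal(size=(100, 80))      # 2-D distribution to be recovered

# Explicit Kronecker operator: a 750 x 8000 matrix must be built and stored
big = np.kron(K2, K1) @ X.flatten(order="F")

# Multilinear application: two small matrix products, no Kronecker product
small = (K1 @ X @ K2.T).flatten(order="F")

print(np.allclose(big, small))      # True
```

The same contraction generalizes to three or more dimensions, which is why the multilinear formulation extends naturally where Kronecker-based compression does not.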
Three-dimensional compound comparison methods and their application in drug discovery.
Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke
2015-07-16
Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Conduct of Performance Tests Yes. 63.7(f) Alternative Test Method Yes. 63.7(g) Data Analysis Yes. 63.7(h... Method Yes. 63.8(g) Reduction of Monitoring Data Yes. 63.9(a) Notification Requirements Yes. 63.9(b...(e)(4) Reporting COM Data No COM not required. 63.10(f) Waiver of Recordkeeping/Reporting Yes. 63.11...
Mesoscopic modelling and simulation of soft matter.
Schiller, Ulf D; Krüger, Timm; Henrich, Oliver
2017-12-20
The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
NASA Technical Reports Server (NTRS)
Parks, D. M.
1974-01-01
A finite element technique for determination of elastic crack tip stress intensity factors is presented. The method, based on the energy release rate, requires no special crack tip elements. Further, the solution for only a single crack length is required, and the crack is 'advanced' by moving nodal points rather than by removing nodal tractions at the crack tip and performing a second analysis. The promising straightforward extension of the method to general three-dimensional crack configurations is presented and contrasted with the practical impossibility of conventional energy methods.
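In symbols, the relation behind this stiffness-derivative approach can be sketched as follows (our notation, for a linear elastic model with displacement vector u, stiffness matrix K, and crack length a):

```latex
G = -\frac{\partial \Pi}{\partial a}
  = -\tfrac{1}{2}\,\mathbf{u}^{\mathsf T}\,
     \frac{\partial \mathbf{K}}{\partial a}\,\mathbf{u}
  \approx -\tfrac{1}{2}\,\mathbf{u}^{\mathsf T}\,
     \frac{\mathbf{K}(a+\Delta a)-\mathbf{K}(a)}{\Delta a}\,\mathbf{u},
\qquad K_I = \sqrt{G\,E'} \;\;\text{(mode I)},
```

where E' = E in plane stress and E/(1 - ν²) in plane strain. Because only the elements adjacent to the moved crack-tip nodes contribute to the stiffness difference, it is sparse and a single stress analysis suffices, which is the economy the abstract describes.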
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution proceeds in time, but it still remains faster than the tridiagonal scheme.
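To make the Picard/tridiagonal combination concrete, the sketch below (NumPy/SciPy, ours rather than the paper's code) advances a nonlinear diffusion equation u_t = (D(u) u_x)_x one backward-Euler step, freezing D(u) at the previous Picard iterate so that each inner solve is a linear tridiagonal system:

```python
import numpy as np
from scipy.linalg import solve_banded

def picard_step(u_old, D, dx, dt, tol=1e-8, max_iter=50):
    """One implicit step of u_t = (D(u) u_x)_x via Picard iteration.

    Dirichlet boundaries are held at u_old[0] and u_old[-1]. D is a callable
    giving the (nonlinear) diffusivity, frozen at each iterate.
    """
    n = len(u_old)
    r = dt / dx**2
    u = u_old.copy()
    for _ in range(max_iter):
        Dm = D(0.5 * (u[:-1] + u[1:]))       # diffusivity at cell faces, frozen
        ab = np.zeros((3, n))                # banded storage for solve_banded
        ab[1, 0] = ab[1, -1] = 1.0           # identity rows pin the boundaries
        ab[0, 2:] = -r * Dm[1:]              # super-diagonal
        ab[2, :-2] = -r * Dm[:-1]            # sub-diagonal
        ab[1, 1:-1] = 1.0 + r * (Dm[:-1] + Dm[1:])
        u_new = solve_banded((1, 1), ab, u_old)
        if np.max(np.abs(u_new - u)) < tol:  # Picard convergence test
            return u_new
        u = u_new
    return u

# Example: one step with a hypothetical diffusivity D(u) = 1 + u**2
u0 = np.sin(np.linspace(0.0, np.pi, 101))
u1 = picard_step(u0, lambda s: 1.0 + s**2, dx=np.pi / 100, dt=1e-4)
```

Richards' equation adds a nonlinear storage term as well, but the structure, an outer Picard loop around a fast tridiagonal solve, is the same.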
Laser data transfer flight experiment definition
NASA Technical Reports Server (NTRS)
Merritt, J. R.
1975-01-01
A set of laser communication flight experiments to be performed between a relay satellite, ground terminals, and space shuttles were synthesized and evaluated. Results include a definition of the space terminals, NASA ground terminals, test methods, and test schedules required to perform the experiments.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, which is the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results, with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
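The same computation the abstract performs with Excel's NORMINV can be sketched with scipy.stats.norm. The sketch below adopts our own normalization (bias and added imprecision expressed as multiples of the reference-interval SD) and reads the quoted 4.4% as the allowed fraction outside a single reference limit; both are illustrative assumptions.

```python
from scipy.optimize import brentq
from scipy.stats import norm

P_OUT = 0.044                        # fraction outside a limit (figure from the abstract)

def fraction_above(bias, imprecision):
    """Fraction of a standard-normal reference population exceeding the
    original upper 97.5% limit when the method adds `bias` and extra
    `imprecision` (both in reference-interval SD units, our convention)."""
    s = (1.0 + imprecision**2) ** 0.5
    upper = norm.ppf(0.975)          # the NORMINV analogue
    return 1.0 - norm.cdf((upper - bias) / s)

def max_allowable_bias(imprecision):
    """Largest bias keeping the outside fraction at P_OUT, given imprecision."""
    return brentq(lambda b: fraction_above(b, imprecision) - P_OUT, 0.0, 3.0)

print(max_allowable_bias(0.0))       # pure-bias corner of the specification
```

Sweeping `imprecision` over a grid and solving for the bias each time traces out the combined analytical performance specification curve described above.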
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
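For context, the serial table construction the algorithm is named after (Vose's variant of the alias method) is compact enough to sketch; this shows only the O(n) setup and O(1) sampling, not the paper's parallel messaging scheme.

```python
# Alias method: preprocess a discrete distribution into (prob, alias)
# tables so each sample costs one uniform draw plus one comparison.
import random

def build_alias(weights):
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]          # scaled so the mean is 1
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    alias = [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                                  # s tops up from l
        prob[l] -= 1.0 - prob[s]                      # l donates probability mass
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```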
CPR Instruction in U.S. High Schools: What Is the State in the Nation?
Brown, Lorrel E; Lynes, Carlos; Carroll, Travis; Halperin, Henry
2017-11-28
Cardiopulmonary resuscitation (CPR) training in high schools is required by law in the majority of U.S. states. However, laws differ from state to state, and it is unknown how this legislation is being enacted. The authors sent a cross-sectional, closed survey to educational superintendents in 32 states with CPR laws in June 2016. The authors subsequently performed direct examination and categorization of CPR legislation in 39 states (several states passed legislation as of September 2017). Survey results indicated differing practices with regard to CPR instruction in areas such as course content (63% perform automated external defibrillator training), instructor (47% used CPR-certified teachers/coaches, 30% used other CPR-certified instructors, 11% used noncertified teachers/coaches), and method (7% followed American Red Cross methods, 55% followed American Heart Association methods). CPR laws differ, although almost all (97%) require hands-on training. Although hands-on practice during CPR instruction in high school is required by law in the majority of U.S. states, there is currently no standardized method of implementation. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulation. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
1991-06-01
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Rutting performance of cold bituminous emulsion mixtures
NASA Astrophysics Data System (ADS)
Arshad, Ahmad Kamil; Ali, Noor Azilatom; Shaffie, Ekarizan; Hashim, Wardati; Rahman, Zanariah Abd
2017-10-01
Cold Bituminous Emulsion Mixture (CBEM) is an environmentally friendly alternative to hot mix asphalt (HMA) for road surfacing, due to its low energy requirements. However, CBEM has generally been perceived to be inferior in performance to HMA. This paper details a laboratory study on the rutting performance of CBEM. The main objectives of this study are to determine the Marshall properties of CBEM and to evaluate its rutting performance. The effect of cement in CBEM was also evaluated. The specimens were prepared using the Marshall mix design method, and rutting performance was evaluated using the Asphalt Pavement Analyzer (APA). The Marshall properties were analysed to confirm compliance with PWD Malaysia's specification requirements. Specimens with cement were also found to resist rutting better than specimens without cement. It can be concluded that CBEM with cement is a viable alternative to HMA, as the Marshall properties and rutting performance obtained in this study meet the requirements of the specifications. It is recommended that further study be conducted on CBEM for other performance criteria such as moisture susceptibility and fatigue.
On-Site Detection as a Countermeasure to Chemical Warfare/Terrorism.
Seto, Y
2014-01-01
On-site monitoring and detection are necessary in the crisis and consequence management of wars and terrorism involving chemical warfare agents (CWAs) such as sarin. The analytical performance required for on-site detection is mainly determined by the fatal vapor concentration and volatility of the CWAs involved. The analytical performance of presently available on-site technologies and commercially available on-site equipment for detecting CWAs, interpreted and compared in this review, covers: classical manual methods, photometric methods, ion mobility spectrometry, vibrational spectrometry, gas chromatography, mass spectrometry, sensors, and other methods. Some of the data evaluated were obtained from our experiments using authentic CWAs. We concluded that (a) no technologies perfectly fulfill all of the on-site detection requirements and (b) adequate on-site detection requires (i) a combination of the monitoring-tape method and ion-mobility spectrometry for point detection and (ii) a combination of the monitoring-tape method, atmospheric pressure chemical ionization mass spectrometry with counterflow introduction, and gas chromatography with a trap and special detectors for continuous monitoring. The basic properties of CWAs, the concept of on-site detection, and the sarin gas attacks in Japan, as well as the forensic investigations thereof, are also explicated in this article. Copyright © 2014 Central Police University.
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize the performance, data requirements, and inherent biases of different inference approaches, offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally tested 53 novel interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
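A minimal sketch of the community-integration idea, assuming each method reports a score per candidate edge: average the per-method ranks and re-sort (rank averaging is one of the aggregation schemes used in this line of work; the input format here is an assumption).

```python
# Combine edge scores from several inference methods by average rank.
import numpy as np
from scipy.stats import rankdata

def community_ranking(score_matrix):
    """score_matrix: (n_methods, n_edges); higher score = more confident.
    Returns edge indices ordered from most to least supported."""
    ranks = np.vstack([rankdata(-row) for row in score_matrix])  # rank 1 = best per method
    avg_rank = ranks.mean(axis=0)                                # the "wisdom of crowds" step
    return np.argsort(avg_rank)
```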
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
Method of electric powertrain matching for battery-powered electric cars
NASA Astrophysics Data System (ADS)
Ning, Guobao; Xiong, Lu; Zhang, Lijun; Yu, Zhuoping
2013-05-01
The current matching method for electric powertrains still relies on longitudinal dynamics alone, which can neither realize the maximum capacity of the on-board energy storage unit nor reach the lowest equivalent fuel consumption. Another matching method focuses on improving the available space through a reasonable vehicle layout to enlarge the rated energy capacity of the on-board energy storage unit; it keeps longitudinal performance almost unchanged but still cannot reach the lowest fuel consumption. Considering the characteristics of the driving motor, a matching method that uses conventional longitudinal dynamics for the driving system and a cut-and-try approach for the energy storage system is proposed for passenger cars converted from traditional ones. By combining the utilization of vehicle space (which contributes to the on-board energy amount), vehicle longitudinal performance requirements, vehicle equivalent fuel consumption level, passive safety requirements, and the maximum-driving-range requirement, a comprehensive optimal matching method of the electric powertrain for battery-powered electric vehicles is developed. In simulation, the vehicle model and matching method are built in Matlab/Simulink, and the Environmental Protection Agency (EPA) Urban Dynamometer Driving Schedule (UDDS) is chosen as the test condition. The simulation results show that regenerative energy increases by 2.62% and energy storage efficiency by 2% relative to the traditional method. The research conclusions provide theoretical and practical solutions for electric powertrain matching for modern battery-powered electric vehicles, especially those converted from traditional ones, and further enhance the dynamics of electric vehicles.
DOT National Transportation Integrated Search
1998-01-01
The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...
EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPANE
Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...
A simple method to locate changes in vegetation cover, which can be used to identify areas under stress. The method only requires inexpensive NDVI data. The use of remotely sensed data is far more cost-effective than field studies and can be performed more quickly. Local knowledg...
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is an independent and separate branch of enzymology and analytical chemistry. It has become one of the most important methodologies used in food analysis. Enzymatic analysis allows the quick, reliable determination of many food ingredients. Often these contents cannot be determined by conventional methods, or if methods are available, they are determined only with limited accuracy. Today, methods of enzymatic analysis are being increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industry, scientific and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, safety of reagents.
Acoustic attenuation design requirements established through EPNL parametric trades
NASA Technical Reports Server (NTRS)
Veldman, H. F.
1972-01-01
An optimization procedure was established for providing an acoustic lining configuration balanced between engine performance losses and lining attenuation characteristics. The method determined acoustic attenuation design requirements through parametric trade studies based on the subjective noise unit of effective perceived noise level (EPNL).
2013-01-01
Background Molecular imaging using magnetic nanoparticles (MNPs), known as magnetic particle imaging (MPI), has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is needed to obtain a defined image, sophisticated hardware is required. It is therefore desirable to realize excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in the time domain and by applying MNP samples made from biocompatible ferucarbotran with adjusted particle diameters. Methods The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method, which calculates the image intensity from correlation information between the magnetization signal generated from the MNPs and the system function, was attempted, and the obtained image quality was compared with that of the prototype in terms of image resolution and image artifacts. Results The MNP samples obtained by adjusting ferucarbotran showed properties superior to conventional ferucarbotran samples, and the numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because image blurring is theoretically inherent to the proposed method, an algorithm will be required to improve performance. Conclusions MNP samples obtained by adjusting ferucarbotran showed magnetizing properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
Automatic performance budget: towards a risk reduction
NASA Astrophysics Data System (ADS)
Laporte, Philippe; Blake, Simon; Schmoll, Jürgen; Rulten, Cameron; Savoie, Denis
2014-08-01
In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems as well as to describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools such as an automatic performance budget must be developed. Its main advantages consist in alleviating the work of the system engineer when changes occur in the design, in avoiding errors during any re-allocation process, and in recalculating the scientific performance of the instrument automatically. We explain in this paper the method for converting the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases into the "as designed" instrument. To ensure successful design, integration, and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage the instrument's critical performance characteristics from the very early design steps, to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested via the regulatory Agency and include spiked audit materials for EPA Method 29-Metals Emissions from Stationary Sources, as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders containing less than 1.3 microg/in.2 of each of the metals to be measured is required. Risk Assessment testing imposes even stricter requirements for clean filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see if a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors and within lots of filters from the same vendor showed a wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels. An acid cleaning step produced improvement in contamination level, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
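A minimal sketch of the underlying fit, assuming a given kernel matrix K and data y: the Tikhonov solution minimizing ||Kp - y||^2 + alpha^2 ||Lp||^2 with a second-derivative operator L, computed via one stacked least-squares solve. Real DEER analysis also enforces p >= 0, which this sketch omits.

```python
# Tikhonov-regularized inversion: stack [K; alpha*L] and solve in the
# least-squares sense; alpha trades data fit against smoothness of p.
import numpy as np

def tikhonov(K, y, alpha):
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)        # (n-2, n) second-derivative operator
    A = np.vstack([K, alpha * L])
    b = np.concatenate([y, np.zeros(L.shape[0])])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

Selection criteria such as GCV or the Akaike information criterion then score candidate alpha values by re-running this solve over a grid and comparing the resulting residuals and effective model complexity.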
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method that utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is presented. The installation effects include: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user-oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The fulfillment of the new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goal of utilizing space for commercial activities and of returning humans to the moon and extraterrestrial environments. Traditionally, flight structures are manufactured as complete systems and require humans to complete the integration and assembly in orbit. These structures are bulky and require the use of heavy launch vehicles to send the units to the desired location, e.g., the International Space Station (ISS). This method requires a high degree of safety, numerous space walks, and significant cost for the humans to perform the assembly in orbit. For example, for assembly and maintenance of the ISS, 52 Extravehicular Activities (EVAs) have been performed so far, with a total EVA time of approximately 322 hours. Sixteen (16) shuttle flights have flown to the ISS to perform these activities, at an approximate cost of $450M per mission. For future space missions, costs have to be reduced to reasonably achieve the exploration goals. One concept that has been proposed is the autonomous assembly of space structures. This concept is an affordable, reliable solution for in-space and extraterrestrial assembly operations. Assembly is performed autonomously when two components containing onboard electronics join after recognizing that the joint is appropriate and in the precise position and orientation required for assembly. The mechanism only activates when the specifications are correct and in a nominal range. After assembly, local sensors and electronics monitor the integrity of the joint for feedback to a master controller. Achieving this concept will require a shift in the methods for designing space structures. In addition, innovative techniques will be required to perform the assembly autonomously. Monitoring of the assembled joint will be necessary for safety and structural integrity. If a very large structure is to be assembled in orbit, then the number of integrity sensors will be significant; thus simple, low-cost sensors are integral to the success of this concept. This paper will address these issues and propose a novel concept for assembling space structures autonomously. The paper will present several autonomous assembly methods. Core technologies required to achieve in-space assembly will be discussed, and novel techniques for communicating, sensing, docking, and assembly will be detailed. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Finally, these technologies can also be applied to other systems, both on earth and in extraterrestrial environments.
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures; these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures, and proportions per sample without need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type-specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
The Identification of Software Failure Regions
1990-06-01
be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs ...failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect...is the termination of the ability of a functional unit to perform its required function. (Glossary, 1983) The presence of faults in program code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
Satellite voice broadcast. Volume 2: System study
NASA Technical Reports Server (NTRS)
Bachtell, E. E.; Bettadapur, S. S.; Coyner, J. V.; Farrell, C. E.
1985-01-01
The Technical Volume of the Satellite Broadcast System Study is presented. Designs are synthesized for direct sound broadcast satellite systems for HF-, VHF-, L-, and Ku-bands. Methods are developed and used to predict satellite weight, volume, and RF performance for the various concepts considered. Cost and schedule risk assessments are performed to predict time and cost required to implement selected concepts. Technology assessments and tradeoffs are made to identify critical enabling technologies that require development to bring technical risk to acceptable levels for full scale development.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for the optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable when compared with a conventional design using monthly average daily load and insolation.
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng
2016-12-13
In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data.
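As one concrete example, a minimal sketch of probabilistic quotient normalization (PQN), one of the best performers named above, applied to a samples-by-features intensity matrix; strictly positive intensities are assumed, and the median-spectrum reference is the standard choice.

```python
# PQN: estimate one dilution factor per sample as the median quotient
# against a reference spectrum, then divide it out.
import numpy as np

def pqn(X):
    ref = np.median(X, axis=0)                 # reference spectrum across samples
    quotients = X / ref                        # per-feature quotients
    factors = np.median(quotients, axis=1)     # one dilution factor per sample
    return X / factors[:, None]
```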
Primary lithium-thionyl chloride cell evaluation
NASA Astrophysics Data System (ADS)
Zolla, A. E.; Waterhouse, R.; Debiccari, D.; Griffin, G. L.
1980-08-01
A test program was conducted to evaluate the Altus 1350AH cell performance against the Minuteman Survival Ground Power requirements. Twelve cells of 17 inch diameter and 1-3/8 inch height were fabricated and tested during this study. Under discharge rates varying from C/100 to C/400 at ambient temperature, the volumetric and gravimetric energy density performance requirements of 15 watt-hours per cubic inch and 150 watt-hours per pound were exceeded in all cases. All other performance requirements, including voltage, current, configuration, capacity, volume, weight, electrolyte leakage (none), and maintainability (none required), were met or exceeded. The abuse testing demonstrated the Altus cell's ability to safely withstand short circuit by external shorting, short circuit by penetration with a conductive object, forced discharge, and forced charging. Disposal of discharged cells by incineration is an environmentally safe and efficient method of disposal.
Premanath, M.; Raghunath, M.
2010-01-01
Background: Peripheral Arterial Disease (PAD) remains the least recognized form of atherosclerosis. The Ankle-Brachial Index (ABI) has emerged as one of the potent markers of diffuse atherosclerosis, cardiovascular (CV) risk, and overall survival in the general public, especially in diabetics. An important reason for the lack of early diagnosis is the non-availability of a test that is easy to perform and inexpensive, with no training required. Objectives: To evaluate the oscillometric method of performing ABI with regard to its usefulness in detecting PAD cases and to correlate the signs and symptoms with ABI. Materials and Methods: Two hundred diabetics of varying duration attending the clinic over a period of eight months, from August 2006 to April 2007, were evaluated for signs, symptoms, and risk factors. ABI was performed using the oscillometric method. The positives were confirmed by Doppler evaluation. An equal number of age- and sex-matched controls, which were ABI negative, were also assessed by Doppler. Sensitivity and specificity were determined. Results: There were 120 males and 80 females. Twelve males (10%) and six females (7.5%) were ABI positive. On Doppler, eleven males (91.5%) and three females (50%) were true positives. There were six false negatives from the controls (three each). The sensitivity was 70% and specificity was 75%. Symptoms and signs correlated well with ABI positives. Hypertension was the most important risk factor. Conclusions: In spite of the limitations, the oscillometric method of performing ABI is a simple procedure that is easy to perform, does not require training, and can be performed on an outpatient basis not only by doctors but also by paramedical staff to detect more PAD cases. PMID:20535314
NASA Astrophysics Data System (ADS)
Patil, Venkat P.; Gohatre, Umakant B.
2018-04-01
The technique of obtaining a wider field of view with a high-resolution integrated image is normally required for developing a panorama of photographic images or a scene from a sequence of multiple partial views. Various image stitching methods have been developed recently. Image stitching adopts five basic steps: feature detection and extraction, image registration, homography computation, image warping, and blending. This paper reviews some of the existing image feature detection and extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to the fundamental concepts by different researchers are then elaborated. The paper also highlights some of the fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in various fields such as medical imaging, astrophotography, and computer vision. To compare the performance of the feature detection techniques, three methods are considered (ORB, SURF, and Hessian), and the time required for feature detection in the input images is measured. The results conclude that for daylight conditions the ORB algorithm is better, since it requires less time while extracting more features, whereas for images under night-light conditions the SURF detector performs better than the ORB and Hessian detectors.
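A minimal sketch of the kind of timing measurement described above, using OpenCV's ORB detector (SURF lives in opencv-contrib and may be patent-restricted, so only ORB is shown); the image path is a placeholder.

```python
# Time ORB keypoint detection and descriptor extraction on one image.
import time
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create(nfeatures=2000)

t0 = time.perf_counter()
keypoints, descriptors = orb.detectAndCompute(img, None)
elapsed = time.perf_counter() - t0
print(f"ORB: {len(keypoints)} keypoints in {elapsed * 1e3:.1f} ms")
```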
NASA Astrophysics Data System (ADS)
Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku
2018-02-01
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopy technique that enables both conventional endoscopic observation and ultramagnified observation at the cellular level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis requires considerable experience from physicians, and an automated pathological diagnosis system is needed to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated classification method that separates neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine and forward the image to the second step if the confidence of this first classification is low. In the second step, we classify the forwarded image with a convolutional neural network, and we reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method, using about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
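A minimal sketch of the two-stage, confidence-gated cascade described above; `cnn_predict_proba` is a hypothetical stand-in for the paper's network, and the thresholds are illustrative assumptions rather than the paper's settings.

```python
# Confidence-gated cascade: SVM first, CNN fallback, reject when unsure.
import numpy as np
from sklearn.svm import SVC

def cascade_predict(svm: SVC, cnn_predict_proba, x, t1=0.9, t2=0.9):
    # svm must have been fit with probability=True
    p_svm = svm.predict_proba(x.reshape(1, -1))[0]
    if p_svm.max() >= t1:                 # stage 1 is confident
        return int(np.argmax(p_svm))
    p_cnn = cnn_predict_proba(x)          # stage 2: hypothetical CNN scores
    if p_cnn.max() >= t2:
        return int(np.argmax(p_cnn))
    return None                           # reject: defer to the physician
```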
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions, so current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required by single-sample automated cell counters. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
PID Tuning Using Extremum Seeking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killingsworth, N; Krstic, M
2005-11-15
Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT). IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system; see [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application, the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
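A minimal sketch of discrete extremum seeking applied to PID gains, assuming a `run_experiment` routine that performs the closed-loop step test and returns the cost J; this bare-bones version omits the high-pass/low-pass filtering of the full ES scheme, and all gains and dither settings are illustrative.

```python
# Discrete extremum seeking: dither each gain sinusoidally, demodulate the
# measured cost to estimate the gradient, and descend toward a local minimum.
import numpy as np

def es_tune(run_experiment, theta0, n_iter=200, a=0.05, gamma=0.5):
    theta = np.asarray(theta0, dtype=float)    # [Kp, Ki, Kd]
    omega = np.array([1.0, 1.3, 1.7])          # distinct dither frequencies
    for k in range(n_iter):
        dither = a * np.sin(omega * k)
        J = run_experiment(theta + dither)     # perturbed closed-loop cost
        grad_est = J * np.sin(omega * k)       # demodulation step
        theta -= gamma * grad_est              # gradient-descent update
    return theta
```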
Casing window milling with abrasive fluid jet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vestavik, O.M.; Fidtje, T.H.; Faure, A.M.
1995-12-31
Methods for through-tubing re-entry drilling of multilateral wells have large potential for increasing hydrocarbon production and total recovery. One of the bottlenecks of this technology is initiation of the side-track by milling a window in the casing downhole. A new approach to this problem has been investigated in a joint industry project. An experimental set-up has been built for milling a 4 inch window in a 7 inch steel casing at surface in the laboratory. A specially designed bit developed at RIF using abrasive jet cutting technology has been used for the window milling. The bit has an abrasive jet beam which is always directed in the desired side-track direction, even if the bit is rotating uniformly. The bit performs the milling with a combined mechanical and hydraulic jet action. The method has been successfully demonstrated. The experiments have shown that the window milling can be performed with very low WOB and torque, and that only small side forces are required to perform the operation. Casing milling has been performed without a whipstock; a cement plug has been the only support for the tool. The tests indicate that milling operations can be performed more efficiently, with less time and cost than is required with conventional techniques. However, the method still needs some development of the downhole motor for coiled tubing applications. The method can be used both for milling and drilling, giving the advantages of improved rate of penetration, improved bit life, and increased horizontal reach. The method is planned to be demonstrated downhole in the near future.
Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-08-28
Background The International Standard ISO 15189 is recognized as a valuable guide in ensuring high-quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing procedures for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement in Section 5.5.1.2. Methods In order to identify the operative procedure to be used, approved documents were identified, together with the definition of the performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Operative flow charts were then defined to compare the laboratory performance characteristics with those declared by manufacturers. Results The choice of performance characteristics for verification was based on the approved documents used as guidance and on the specific purpose of the tests undertaken, considering imprecision and trueness for quantitative methods, diagnostic accuracy for qualitative methods, and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions The described approach, balancing technological possibilities, risks, and costs while assuring the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.
Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.
1957-10-01
The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
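The "Von Seidel" iteration described reads as the Gauss-Seidel method of successive approximation; a minimal sketch of that iteration follows (an interpretation of the record, not the patent's circuitry).

```python
# Gauss-Seidel successive approximation: sweep the equations, re-using
# each unknown as soon as it is updated within the sweep.
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_sweeps=500):
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:   # converged sweep-to-sweep
            break
    return x
```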
Reliable use of determinants to solve nonlinear structural eigenvalue problems efficiently
NASA Technical Reports Server (NTRS)
Williams, F. W.; Kennedy, D.
1988-01-01
The analytical derivation, numerical implementation, and performance of a multiple-determinant parabolic interpolation method (MDPIM) for use in solving transcendental eigenvalue (critical buckling or undamped free vibration) problems in structural mechanics are presented. The overall bounding, eigenvalue-separation, qualified parabolic interpolation, accuracy-confirmation, and convergence-recovery stages of the MDPIM are described in detail, and the numbers of iterations required to solve sample plane-frame problems using the MDPIM are compared with those for a conventional bisection method and for the Newtonian method of Simpson (1984) in extensive tables. The MDPIM is shown to use 31 percent less computation time than bisection when accuracy of 10^-4 is required, but 62 percent less when accuracy of 10^-8 is required; the time savings over the Newtonian method are about 10 percent.
NASA Astrophysics Data System (ADS)
Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.
2017-05-01
Block-level detection is required to decode what may be classified as selective control information (SCI) such as the control format indicator in 4G long-term evolution (LTE) systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report the experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique in comparison with the conventional maximum likelihood (ML) approach. When compared with the ML method, it is shown that the TDC method improves detection performance over both 20 and 40 km of standard single mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Unlike the ML method, another key benefit of the TDC is that it requires no channel estimation.
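A minimal sketch of the block-level correlation decision at the heart of a TDC-style detector, assuming a known set of candidate SCI codewords: correlate the received block against each candidate and pick the strongest match. The signal model is an illustrative assumption, not the paper's experimental setup.

```python
# Pick the candidate codeword with the largest correlation magnitude.
import numpy as np

def tdc_detect(received, codewords):
    """received: complex block of length L; codewords: (n_candidates, L)."""
    scores = np.abs(codewords.conj() @ received)   # time-domain correlations
    return int(np.argmax(scores))                  # index of best-matching SCI
```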
Analysis of automatic repeat request methods for deep-space downlinks
NASA Technical Reports Server (NTRS)
Pollara, F.; Ekroot, L.
1995-01-01
Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
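The basic ARQ accounting behind such comparisons is simple; a minimal sketch follows, assuming independent word errors with probability p per attempt, so the expected number of transmissions is 1/(1-p) and the residual error after m allowed retransmissions is p^(m+1). The example numbers are illustrative.

```python
# Expected transmission count and residual word-error probability for
# a simple ARQ scheme with independent attempts.
def expected_transmissions(p_word_error):
    return 1.0 / (1.0 - p_word_error)

def residual_error(p_word_error, max_retransmissions):
    # probability the word is still wrong after all allowed attempts
    return p_word_error ** (max_retransmissions + 1)

print(expected_transmissions(0.1))   # ~1.11 attempts on average
print(residual_error(0.1, 2))        # 1e-3 after two retries
```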
NASA Astrophysics Data System (ADS)
Dodd, Michael; Ferrante, Antonino
2017-11-01
Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and the surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.
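The D2-law check mentioned above is a one-line relation, d(t)^2 = d0^2 - K t; a minimal sketch follows, with an illustrative evaporation constant K rather than a value from the study.

```python
# D2 law: droplet diameter squared decays linearly until the droplet is gone.
import numpy as np

def d2_law_diameter(d0, K, t):
    d2 = d0**2 - K * t
    return np.sqrt(np.clip(d2, 0.0, None))      # zero once d^2 reaches 0

t = np.linspace(0.0, 2.0, 5)
print(d2_law_diameter(1.0e-3, 4.0e-7, t))       # slow, steady shrinkage
```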
Microgravity isolation system design: A modern control analysis framework
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.
1994-01-01
Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.
Fast, high-fidelity readout of multiple qubits
NASA Astrophysics Data System (ADS)
Bronn, N. T.; Abdo, B.; Inoue, K.; Lekuch, S.; Córcoles, A. D.; Hertzberg, J. B.; Takita, M.; Bishop, L. S.; Gambetta, J. M.; Chow, J. M.
2017-05-01
Quantum computing requires a delicate balance between coupling quantum systems to external instruments for control and readout and providing enough isolation from sources of decoherence. Circuit quantum electrodynamics has been a successful method for protecting superconducting qubits while maintaining the ability to perform readout [1, 2]. Here, we discuss improvements to this method that allow for fast, high-fidelity readout: specifically, the integration of a Purcell filter, which allows us to increase the resonator bandwidth for fast readout; the incorporation of a Josephson parametric converter, which enables high-fidelity readout by amplifying the readout signal while adding only the minimum amount of noise required by quantum mechanics; and custom control electronics, which provide the capability of fast decision and control.
OpenMM 7: Rapid development of high performance algorithms for molecular dynamics
Swails, Jason; Zhao, Yutong; Beauchamp, Kyle A.; Wang, Lee-Ping; Stern, Chaya D.; Brooks, Bernard R.; Pande, Vijay S.
2017-01-01
OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community. PMID:28746339
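As a flavor of the extensibility described, a custom force in OpenMM is defined by nothing more than its mathematical expression. The snippet below is a minimal sketch with an illustrative harmonic bond and made-up parameter values; note that in the OpenMM 7 era the package was imported as simtk.openmm, while recent releases use openmm.

```python
import openmm

system = openmm.System()
system.addParticle(16.0)   # two particles, masses in amu (illustrative)
system.addParticle(16.0)

# A force with a user-supplied functional form; OpenMM compiles the
# expression for whichever platform (CPU or GPU) runs the simulation.
force = openmm.CustomBondForce("0.5*k*(r-r0)^2")
force.addPerBondParameter("k")     # kJ/mol/nm^2
force.addPerBondParameter("r0")    # nm
force.addBond(0, 1, [1000.0, 0.15])
system.addForce(force)
```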
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF, as well as present preliminary performance results provided by the improved scalability.
Requirements Flowdown for Prognostics and Health Management
NASA Technical Reports Server (NTRS)
Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita
2012-01-01
Prognostics and Health Management (PHM) principles have considerable promise to change the game of lifecycle cost of engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is a key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in into the systems engineering process is lagging behind, which delays fielding of PHM-enabled systems. The derivation of specifications from high level requirements for algorithm performance to ensure quality predictions is not well developed. From an engineering perspective some key parameters driving the requirements for prognostics performance include: (1) maximum allowable Probability of Failure (PoF) of the prognostic system to bound the risk of losing an asset, (2) tolerable limits on proactive maintenance to minimize missed opportunity of asset usage, (3) lead time to specify the amount of advanced warning needed for actionable decisions, and (4) required confidence to specify when prognosis is sufficiently good to be used. This paper takes a systems engineering view towards the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.
AOAC SMPR 2015.009: Estimation of total phenolic content using Folin-C Assay
USDA-ARS's Scientific Manuscript database
This AOAC Standard Method Performance Requirements (SMPR) is for estimation of total soluble phenolic content in dietary supplement raw materials and finished products using the Folin-C assay for comparison within same matrices. SMPRs describe the minimum recommended performance characteristics to b...
USDA-ARS's Scientific Manuscript database
This AOAC Standard Method Performance Requirements (SMPR) is for authentication of selected Vaccinium species in dietary ingredients and dietary supplements containing a single Vaccinium species using anthocyanin profiles. SMPRs describe the minimum recommended performance characteristics to be used...
Brunt, Kommer; Sanders, Peter; Spichtig, Véronique; Ernste-Nota, Veronica; Sawicka, Paulina; Iwanoff, Kimberley; Van Soest, Jeroen; Lin, Paul Kong Thoo; Austin, Sean
2017-05-01
Until recently, only two AOAC Official Methods℠ have been available for the analysis of fructans: Method 997.08 and Method 999.03. Both are based on the analysis of the fructan component monosaccharides (glucose and fructose) after hydrolysis. The two methods have some limitations due to the strategies used for removing background interferences (such as from sucrose, α-glucooligosaccharides, and free sugars). The method described in this paper has been developed to overcome those limitations. The method is largely based on Method 999.03 and uses combined enzymatic and SPE steps to remove the interfering components without impacting the final analytical result. The method has been validated in two laboratories on infant formula and adult nutritionals. Recoveries were in the range of 86-119%, with most being in the range of 91-104%. RSDr values were in the range of 0.7-2.6%, with one exception when the fructan concentration was close to the LOQ, resulting in an RSDr of 8.9%. The performance is generally within the requirements outlined in the AOAC Standard Method Performance Requirements (SMPR® 2014.002), which specifies recoveries in the range of 90-110% and RSDr values below 6%.
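The two performance statistics quoted above, recovery and repeatability RSDr, are straightforward to compute; a small sketch with made-up replicate values:

```python
import statistics

def recovery_pct(measured: float, expected: float) -> float:
    """Spike recovery as a percentage of the expected value."""
    return 100.0 * measured / expected

def rsdr_pct(replicates: list) -> float:
    """Repeatability relative standard deviation: 100 * s / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

print(recovery_pct(4.7, 5.0))                        # 94.0 -> inside 90-110%
print(round(rsdr_pct([4.68, 4.75, 4.71, 4.66]), 2))  # ~0.83, below the 6% limit
```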
NASA Astrophysics Data System (ADS)
Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao
2018-02-01
Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed implementation rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices relies on particular data samples from AGC responses and can yield incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
40 CFR 63.1348 - Compliance requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standards and operating limits by using the test methods and procedures in §§ 63.1349 and 63.7. (1) PM... initial compliance with the PM emissions standards by using the test methods and procedures in § 63.1349(b... standards by using the performance test methods and procedures in § 63.1349(b)(2). The maximum 6-minute...
40 CFR 63.1348 - Compliance requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... standards and operating limits by using the test methods and procedures in §§ 63.1349 and 63.7. (1) PM... initial compliance with the PM emissions standards by using the test methods and procedures in § 63.1349(b... standards by using the performance test methods and procedures in § 63.1349(b)(2). The maximum 6-minute...
Performance considerations in long-term spaceflight
NASA Technical Reports Server (NTRS)
Akins, F. R.
1979-01-01
Maintenance of skilled performance during extended space flight is of critical importance both to the health and safety of crew members and to the overall success of mission goals. An examination of long-term effects and performance requirements is therefore of immense importance to the planning of future missions. Factors that were investigated include: definition of the performance categories to be investigated; methods for assessing and predicting performance levels; in-flight factors that can affect performance; and factors pertinent to the maintenance of skilled performance.
Yzquierdo, Sergio Luis; Lemus, Dihadenys; Echemendia, Miguel; Montoro, Ernesto; McNerney, Ruth; Martin, Anandi; Palomino, Juan Carlos
2006-01-01
Background Conventional methods for susceptibility testing require several months before results can be reported. However, rapid methods to determine drug susceptibility have recently been developed; phage assays have been reported as rapid, useful tools for antimicrobial susceptibility testing. The aim of this study was to apply the phage assay for rapid detection of resistance in Mycobacterium tuberculosis strains in Cuba. Methods The phage D29 assay was performed on 102 M. tuberculosis strains to detect rifampicin resistance. The results were compared with the proportion method (gold standard) to evaluate the sensitivity and specificity of the phage assay. Results Phage assay results were available in 2 days, whereas proportion method results were obtained in 42 days. A total of 44 strains were detected as rifampicin resistant by both methods. However, one strain deemed resistant by the proportion method was susceptible by the phage assay. The sensitivity and specificity of the phage assay were 97.8% and 100%, respectively. Conclusion The phage assay provides rapid and reliable results for susceptibility testing; it is easy to perform, requires no specialized equipment, and is applicable to drug susceptibility testing in low-income countries where tuberculosis is a major public health problem. PMID:16630356
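The reported figures can be reproduced from the study's counts: 45 strains were resistant by the gold standard, of which the phage assay detected 44, with no false positives among the remaining 57 susceptible strains.

```python
tp, fn = 44, 1    # resistant strains detected / missed by the phage assay
tn, fp = 57, 0    # susceptible strains: 102 total - 45 resistant

sensitivity = tp / (tp + fn)    # 44/45
specificity = tn / (tn + fp)    # 57/57
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# -> sensitivity 97.8%, specificity 100.0%
```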
Reinforcement learning for resource allocation in LEO satellite networks.
Usaha, Wipawee; Barria, Javier A
2007-06-01
In this paper, we develop and assess online decision-making algorithms for call admission and routing in low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance, in terms of an average revenue function, than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of storage requirements, computational complexity, and computation time, as well as in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results show that the RL framework can achieve up to 56% higher average revenue than existing routing methods used in LEO satellite networks, with reasonable storage and computational requirements.
Development of a qualification standard for adhesives used in hybrid microcircuits
NASA Technical Reports Server (NTRS)
Licari, J. J.; Weigand, B. L.; Soykin, C. A.
1981-01-01
Improved qualification standards and test procedures for adhesives used in microelectronic packaging are developed. The test methods in the specification for the Selection and Use of Organic Adhesives in Hybrid Microcircuits are reevaluated against industry and government requirements. Four electrically insulative and four electrically conductive adhesives used in the assembly of hybrid microcircuits were selected to evaluate the proposed revised test methods. An estimate of the cost to perform qualification testing of an adhesive to the requirements of the revised specification is also prepared.
ERIC Educational Resources Information Center
O'Donnell, Beatrice
Descriptions of 200 occupations from the "Dictionary of Occupational Titles" Volume I designate the area of work and worker trait group and the reference page in Volume II of the Dictionary. Each occupational description briefly outlines highlights of work performed, worker requirements, and training and methods of entry. Occupations are…
Soltis, Robert; Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-02-17
To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting.
Ventura, Valérie; Todorova, Sonia
2015-05-01
Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use. But this approach ignores the kinematic information provided by individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on expertly manually and automatically sorted spikes.
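The decoding model is linear, so its essence fits in a few lines: regress kinematics on per-electrode averages of waveform features. The sketch below uses synthetic data and a ridge fit; the dimensions and penalty are illustrative, not taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(1)
T, F = 500, 12                    # time bins, waveform-feature averages
X = rng.standard_normal((T, F))   # feature averages (unsorted electrodes)
W_true = rng.standard_normal((F, 2))
Y = X @ W_true + 0.1 * rng.standard_normal((T, 2))   # 2-D arm kinematics

lam = 1.0   # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(F), X.T @ Y)
print(np.corrcoef(Y[:, 0], (X @ W)[:, 0])[0, 1])     # decoding accuracy
```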
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.
This study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time-dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for the global GMRES smoother) than the DD ILU smoother.
NASA Technical Reports Server (NTRS)
1974-01-01
Major resource management missions to be performed by the TERSSE are examined in order to develop an understanding of the form and function of a system designed to perform an operational mission. Factors discussed include: resource manager (user) functions, methods of performing their function, the information flows and information requirements embodied in their function, and the characteristics of the observation system which assists in the management of the resource involved. The missions selected for study are: world crop survey and land resources management. These missions are found to represent opposite ends of the TERSSE spectrum and to support the conclusion that different missions require different systems and must be analyzed in detail to permit proper system development decisions.
NASA Technical Reports Server (NTRS)
Jack, Devin P.; Hoffler, Keith D.; Johnson, Sally C.
2014-01-01
A need exists to safely integrate Unmanned Aircraft Systems (UAS) into the United States' National Airspace System. Replacing manned aircraft's see-and-avoid capability in the absence of an onboard pilot is one of the key challenges associated with safe integration. Sense-and-avoid (SAA) systems will have to achieve yet-to-be-determined required separation distances for a wide range of encounters. They will also need to account for the maneuver performance of the UAS they are paired with. The work described in this paper is aimed at developing an understanding of the trade space between UAS maneuver performance and SAA system performance requirements, focusing on a descent avoidance maneuver. An assessment of current manned and unmanned aircraft performance was used to establish potential UAS performance test matrix bounds. Then, near-term UAS integration work was used to narrow down the scope. A simulator was developed with sufficient fidelity to assess SAA system performance requirements. The simulator generates closest-point-of-approach (CPA) data from the wide range of UAS performance models maneuvering against a single intruder with various encounter geometries. Initial attempts to model the results made it clear that developing maneuver performance groups is required. Discussion of the performance groups developed and how to know in which group an aircraft belongs for a given flight condition and encounter is included. The groups are airplane, flight condition, and encounter specific, rather than airplane-only specific. Results and methodology for developing UAS maneuver performance requirements are presented for a descent avoidance maneuver. Results for the descent maneuver indicate that a minimum specific excess power magnitude can assure a minimum CPA for a given time-to-go prediction. However, smaller amounts of specific excess power may achieve or exceed the same CPA if the UAS has sufficient speed to trade for altitude. The results of this study will support UAS maneuver performance requirements development for integrating UAS in the NAS. The methods described are being used to help RTCA Special Committee 228 develop requirements.
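The closest-point-of-approach metric underlying the simulator results has a closed form for straight-line (constant-velocity) segments; a minimal sketch with illustrative aircraft states:

```python
import numpy as np

def cpa(p_own, v_own, p_int, v_int):
    """Time and separation at closest point of approach, assuming both
    aircraft hold constant velocity (positions in m, velocities in m/s)."""
    dp = np.asarray(p_int, float) - np.asarray(p_own, float)
    dv = np.asarray(v_int, float) - np.asarray(v_own, float)
    denom = float(dv @ dv)
    t = 0.0 if denom < 1e-12 else max(0.0, -float(dp @ dv) / denom)
    return t, float(np.linalg.norm(dp + t * dv))

# Head-on encounter: 10 km apart closing at 200 m/s, 152 m vertical offset.
t_cpa, d_cpa = cpa([0, 0, 0], [100, 0, 0], [10_000, 0, 152], [-100, 0, 0])
print(f"CPA in {t_cpa:.0f} s at {d_cpa:.0f} m")   # 50 s, 152 m
```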
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces because human-centered computing issues are not systematically considered; such interfaces can be improved to give users EHR systems that are easy to use, easy to learn, and error resistant. Our objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a mental (internal) or physical (external) operator. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users and show that, on average, a user needs to go through 106 steps to complete a task. To perform all 14 tasks, users would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically identifies the following findings related to the performance of AHLTA: (1) a large average number of total steps to complete common tasks, (2) high average execution time, and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks.
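For readers unfamiliar with KLM, execution time is estimated by summing standard operator times over the step sequence. The sketch below uses the classic Card, Moran, and Newell operator times and a made-up operator string, not an actual AHLTA task.

```python
KLM_TIMES = {"K": 0.28,   # keystroke (average typist)
             "P": 1.10,   # point with mouse
             "H": 0.40,   # home hands between devices
             "M": 1.35}   # mental preparation

def klm_time(sequence: str) -> float:
    return sum(KLM_TIMES[op] for op in sequence)

# Think, point to a field, click, home to keyboard, type 8 characters:
print(f"{klm_time('MPKH' + 'K' * 8):.2f} s")   # 5.37 s
```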
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper presents an error tracking control method for overhead crane systems in which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle be zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness to different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which makes the method easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are presented to validate the superior performance of the proposed error tracking control method.
Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L
2014-02-01
Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method using high-fidelity simulation. After a didactic lecture, simulator briefing, and demonstration of each technique, 12 experienced flight nurses and paramedics performed both methods on a high-fidelity simulator. Six participants performed the three-step method first, and the remaining six performed the conventional method first. Each participant was filmed and timed. We analyzed the videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster than conventional surgical cricothyroidotomy (52.1 s vs. 87.3 s; p = 0.007). The two methods did not differ statistically in the number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% for the conventional method (p = 0.11). Technical complications occurred more often with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, required fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications than traditional cricothyroidotomy.
Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination
NASA Astrophysics Data System (ADS)
Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel
2012-06-01
The capability to track individuals across CCTV cameras is important for surveillance and forensics alike. However, doing so over multiple cameras is laborious, so an automated system is desirable. Several methods have been proposed in the literature, but their robustness to varying viewpoints and illumination is limited; hence their performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains enough variety of viewpoints and illumination to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
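A training-free appearance comparison of the general kind described can be as simple as matching normalized color histograms. This toy sketch (hue histograms plus histogram intersection) only illustrates the idea and is not the paper's descriptor.

```python
import numpy as np

def hue_histogram(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized hue histogram of an HSV patch (OpenCV hue range 0-179)."""
    hist, _ = np.histogram(patch[..., 0], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    return float(np.minimum(h1, h2).sum())   # histogram intersection, [0, 1]

rng = np.random.default_rng(0)
a = (rng.random((128, 48, 3)) * 180).astype(np.uint8)   # fake detection
print(similarity(hue_histogram(a), hue_histogram(a)))   # same person -> 1.0
```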
Optoelectronic Inner-Product Neural Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1993-01-01
Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
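A digital analogue of the optical inner-product recall step, for bipolar (+1/-1) binary vectors; a single non-iterated pass is shown for brevity:

```python
import numpy as np

memory = np.array([[1, -1, 1, -1],
                   [1, 1, -1, -1]])   # stored reference vectors
probe = np.array([1, -1, 1, 1])       # noisy binary input

scores = memory @ probe               # the optically computed inner products
print(memory[np.argmax(scores)])      # -> [ 1 -1  1 -1], closest pattern
```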
Background and Aims. Waterborne diseases originating from bovine fecal material are a significant public health issue. Ensuring water quality requires the use of methods that can consistently identify pollution across a broad range of management practices. One practi...
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
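For orientation, a plain reading of the Hsieh-style closed form for simple LR with one standardized normal covariate is easy to implement. This sketch is not the authors' proposed modification (which swaps in Schouten's unequal-variance t-test formula), and the formula should be checked against the original papers before use.

```python
from scipy.stats import norm

def hsieh_n(p1: float, beta: float, alpha: float = 0.05,
            power: float = 0.80) -> int:
    """n = (z_{1-a/2} + z_{power})^2 / (p1*(1-p1)*beta^2), where p1 is the
    event probability at the covariate mean and beta the log-OR per SD."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return round(z**2 / (p1 * (1 - p1) * beta**2))

print(hsieh_n(p1=0.5, beta=0.405))   # ~191 for an OR of about 1.5 per SD
```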
Diagnostic Methods for Predicting Performance Impairment Associated with Combat Stress
2007-08-01
vision. Participants who wore glasses were excluded, as the frame of eyeglasses interfered with the ability to acquire a signal with the apparatus...TCD in monitoring fitness to perform concurrently with performance, and to explore strategies for using TCD as a predictor of future performance...most effective technique for evaluating whether soldiers are fit for missions requiring sustained attention. The aim of this study was to test
Army Sociocultural Performance Requirements
2014-06-01
L., Crafts, J. L., & Brooks, J. E. (July 1995). Intercultural communication requirements for Special Forces teams. (Study Report 1683). Arlington... Communication Uses alternative, sometimes novel, methods to communicate when verbal language is not shared; conveys information about mood, intent...status, and demeanor via gestures, tone of voice, and facial expressions; improvises communication techniques as necessary. WI Works with Interpreters
NASA Technical Reports Server (NTRS)
Eller, H. H.; Sugg, F. E.
1970-01-01
The methods and procedures used to perform nondestructive testing inspections of the Saturn S-2 liquid hydrogen and liquid oxygen tank weldments during fabrication and after proof testing are described to document special skills developed during the program. All post-test inspection requirements are outlined including radiographic inspections procedures.
USDA-ARS's Scientific Manuscript database
Accurate and rapid assays for glucose are desirable for analysis of glucose and starch in food and feedstuffs. An established colorimetric glucose oxidase-peroxidase method for glucose was modified to reduce analysis time, and evaluated for factors that affected accuracy. Time required to perform t...
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...
Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers
Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray
2014-01-01
We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Particularly worth noting in our analysis is the inexact Newton method. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for a faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without sacrificing too much of its convergence rate. Specifically, we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied by a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. The inexact Newton method in particular exhibits impressive performance when combined with highly efficient linear solvers tailored for its special requirement. PMID:24723843
3-D rigid body tracking using vision and depth sensors.
Gedik, O. Serdar; Alatan, A. Aydın
2013-10-01
In robotics and augmented reality (AR) applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
40 CFR Table 5 of Subpart Aaaaaaa... - Applicability of General Provisions to Subpart AAAAAAA
Code of Federal Regulations, 2010 CFR
2010-07-01
... must be conducted. § 63.7(e)(2)-(4) Conduct of Performance Tests and Data Reduction Yes. § 63.7(f)-(h) Use of Alternative Test Method; Data Analysis, Recordkeeping, and Reporting; and Waiver of Performance... CMS requirements. § 63.8(e)-(f) CMS Performance Evaluation Yes. § 63.8(g)(1)-(4) Data Reduction...
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
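The OpenMDAO mechanism PyCycle builds on looks roughly like the toy component below: each component supplies analytic partial derivatives, so a gradient-based optimizer never needs a finite-difference stencil. The "nozzle" here is purely illustrative, not PyCycle's API.

```python
import openmdao.api as om

class ToyNozzle(om.ExplicitComponent):
    """thrust = mdot * v_exit, with hand-coded analytic derivatives."""

    def setup(self):
        self.add_input('mdot', val=10.0)      # mass flow
        self.add_input('v_exit', val=300.0)   # exit velocity
        self.add_output('thrust', val=0.0)

    def setup_partials(self):
        self.declare_partials('thrust', ['mdot', 'v_exit'])

    def compute(self, inputs, outputs):
        outputs['thrust'] = inputs['mdot'] * inputs['v_exit']

    def compute_partials(self, inputs, J):
        J['thrust', 'mdot'] = inputs['v_exit']    # analytic, exact
        J['thrust', 'v_exit'] = inputs['mdot']
```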
Business Models for Training and Performance Improvement Departments
ERIC Educational Resources Information Center
Carliner, Saul
2004-01-01
Although typically applied to entire enterprises, the concept of business models applies to training and performance improvement groups. Business models are "the method by which firm[s] build and use [their] resources to offer.. value." Business models affect the types of projects, services offered, skills required, business processes, and type of…
Direct digital RF synthesis and modulation for MSAT mobile applications
NASA Technical Reports Server (NTRS)
Crozier, Stewart; Datta, Ravi; Sydor, John
1993-01-01
A practical method of performing direct digital RF synthesis using the Hilbert transform single sideband (SSB) technique is described. It is also shown that amplitude and phase modulation can be achieved directly at L-band with frequency stability and spurii performance exceeding stringent MSAT system requirements.
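The Hilbert-transform SSB construction referred to is the textbook one: s(t) = m(t)cos(2πf_c t) − m̂(t)sin(2πf_c t), where m̂ is the Hilbert transform of the message, which cancels one sideband. A sketch with illustrative rates (not MSAT parameters):

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 48_000, 8_000                 # sample rate and carrier, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
m = np.cos(2 * np.pi * 1_000 * t)      # 1 kHz test message

m_hat = hilbert(m).imag                # Hilbert transform of m(t)
ssb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)
# The spectrum of `ssb` has energy only near fc + 1 kHz (upper sideband).
```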
ERIC Educational Resources Information Center
Nip, Ignatius S. B.; Blumenfeld, Henrike K.
2015-01-01
Purpose: Second-language (L2) production requires greater cognitive resources to inhibit the native language and to retrieve less robust lexical representations. The current investigation identifies how proficiency and linguistic complexity, specifically syntactic and lexical factors, influence speech motor control and performance. Method: Speech…
Great Performances: Creating Classroom-Based Assessment Tasks. Second Edition
ERIC Educational Resources Information Center
Shoemaker, Betty; Lewin, Larry
2011-01-01
Get an in-depth understanding of how to create fun, engaging, and challenging performance assessments that require students to elaborate on content and demonstrate mastery of skills. This update of an ASCD (Association for Supervision and Curriculum Development) classic includes new scoring methods, reading assessments, and insights on navigating…
Spiking neural network simulation: memory-optimal synaptic event scheduling.
Stewart, Robert D; Gurney, Kevin N
2011-06-01
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
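The neuron-scaled idea can be pictured as one circular buffer per neuron, indexed modulo the maximum delay, so memory is O(neurons × max_delay) rather than O(synapses). A toy discrete-delivery sketch (not the authors' algorithms):

```python
import numpy as np

n_neurons, max_delay = 1000, 16
buffers = np.zeros((n_neurons, max_delay))   # one ring buffer per neuron

def schedule(post: int, weight: float, delay: int, t: int) -> None:
    """Queue a synaptic event for neuron `post`, arriving at step t+delay."""
    buffers[post, (t + delay) % max_delay] += weight

def deliver(t: int) -> np.ndarray:
    """Return and clear all input arriving at step t."""
    slot = t % max_delay
    arrived = buffers[:, slot].copy()
    buffers[:, slot] = 0.0
    return arrived

schedule(post=42, weight=0.5, delay=3, t=0)
assert deliver(3)[42] == 0.5
```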
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
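The control-variates estimator itself is standard: Y_cv = Y − c(X − E[X]) with c = Cov(X,Y)/Var(X), where X is a correlated quantity with known mean. A toy demonstration (not the repository-transport setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(10_000)
y = np.exp(u)            # estimate E[e^U] = e - 1 ~ 1.71828
x, ex = u, 0.5           # control variate U with known mean 1/2

c = np.cov(x, y)[0, 1] / np.var(x)
y_cv = y - c * (x - ex)                  # control-variates estimator
print(y.mean(), y_cv.mean())             # both near 1.718
print(y.std() / y_cv.std())              # std reduction factor (~8x here)
```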
Media processors using a new microsystem architecture designed for the Internet era
NASA Astrophysics Data System (ADS)
Wyland, David C.
1999-12-01
The demands of digital image processing, communications, and multimedia applications are growing more rapidly than traditional design methods can fulfill them. Previously, only custom hardware designs could provide the performance required to meet the demands of these applications. However, hardware design has reached a crisis point: it can no longer deliver a product with the required performance and cost in a reasonable time for a reasonable risk. Software-based designs running on conventional processors can deliver working designs in a reasonable time and with low risk, but cannot meet the performance requirements. What is needed is a media processing approach that combines very high performance, a simple programming model, complete programmability, short time to market, and scalability. The Universal Micro System (UMS) is a solution to these problems. The UMS is a completely programmable (including I/O) system on a chip that combines hardware performance with the fast time to market, low cost, and low risk of software designs.
The European space debris safety and mitigation standard
NASA Astrophysics Data System (ADS)
Alby, F.; Alwes, D.; Anselmo, L.; Baccini, H.; Bonnal, C.; Crowther, R.; Flury, W.; Jehn, R.; Klinkrad, H.; Portelli, C.; Tremayne-Smith, R.
2001-10-01
A standard has been proposed as one of the series of ECSS Standards intended to be applied together for the management, engineering and product assurance in space projects and applications. The requirements in the Standard are defined in terms of what must be accomplished, rather than in terms of how to organise and perform the necessary work. This allows existing organisational structures and methods within agencies and industry to be applied where they are effective, and for such structures and methods to evolve as necessary, without the need for rewriting the standards. The Standard comprises management requirements, design requirements and operational requirements. The standard was prepared by the European Debris Mitigation Standard Working Group (EDMSWG) involving members from ASI, BNSC, CNES, DLR and ESA.
Zone plate method for electronic holographic display using resolution redistribution technique.
Takaki, Yasuhiro; Nakamura, Junya
2011-07-18
The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.
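The zone-plate view of hologram synthesis is that each object point contributes a Fresnel zone plate and the SLM image is their sum, so only additions are needed. A schematic sketch with illustrative wavelength, pixel pitch, and object points (not the paper's parameters):

```python
import numpy as np

wavelength, pitch, N = 532e-9, 8e-6, 512      # laser, SLM pixel pitch, pixels
k = 2 * np.pi / wavelength
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

def zone_plate(x0, y0, z0, amp=1.0):
    """Fresnel zone plate of a point source at (x0, y0, z0), paraxial form."""
    return amp * np.cos(k * ((X - x0)**2 + (Y - y0)**2) / (2 * z0))

points = [(0.0, 0.0, 0.20), (0.5e-3, -0.3e-3, 0.25)]   # object points (m)
hologram = sum(zone_plate(*p) for p in points)          # additions only
```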
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method is proposed based on an active learning kriging model that needs only to predict the sign of the performance function correctly. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of the failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
NASA Astrophysics Data System (ADS)
Rahmanita, E.; Widyaningrum, V. T.; Kustiyahningsih, Y.; Purnama, J.
2018-04-01
SMEs have a very important role in the development of the economy in Indonesia. SMEs assist the government in creating new jobs and can support household income. The large number of SMEs in Madura, together with the many measurement indicators involved in SME mapping, requires a systematic method. This research uses the Fuzzy Analytic Network Process (FANP) method for SME performance measurement. The FANP method can handle data that contain uncertainty, and its consistency index supports decision making. Performance measurement in this study is based on the Balanced Scorecard perspectives. The research approach integrates the internal business perspective, the learning and growth perspective, and the fuzzy Analytic Network Process (FANP). The result of this research is a framework of priority weights for SME assessment indicators.
Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method
NASA Astrophysics Data System (ADS)
Yuan, Zhe; Zhang, Yiming; Zheng, Qijia
2018-02-01
An electromagnetic method with a transmitted waveform coded by an m-sequence achieved better anti-noise performance compared to the conventional manner with a square-wave. The anti-noise performance of the m-sequence varied with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance for m-sequences with different coding parameters was required to optimize them. This paper proposes the concept of an identification system, with the identified Earth impulse response obtained by measuring the system output with the input of the voltage response. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is further verified by field experiment. The quantitative analysis method proposed in this paper provides a new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
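For context, an m-sequence is the output of a maximal-length linear feedback shift register. The generator below uses the primitive polynomial x^4 + x + 1 (period 2^4 - 1 = 15) as a textbook example, not necessarily the code length used in the experiments.

```python
def m_sequence(taps=(4, 1), nbits=4, seed=0b1000):
    """Fibonacci LFSR output for one full period of 2**nbits - 1 chips."""
    state, out = seed, []
    for _ in range(2**nbits - 1):
        out.append(state & 1)                     # emit least significant bit
        fb = 0
        for tap in taps:                          # XOR the tapped stages
            fb ^= (state >> (tap - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

print(m_sequence())   # period-15 PRBS with 8 ones and 7 zeros
```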
Watson, A K; Klopfenstein, T J; Erickson, G E; MacDonald, J C; Wilkerson, V A
2017-07-01
Data from 16 trials were compiled to calculate microbial CP (MCP) production and MP requirements of growing cattle on high-forage diets. All cattle were individually fed diets with 28% to 72% corn cobs in addition to either alfalfa, corn silage, or sorghum silage at 18% to 60% of the diet (DM basis). The remainder of the diet consisted of protein supplement. Source of protein within the supplement varied and included urea, blood meal, corn gluten meal, dry distillers grains, feather meal, meat and bone meal, poultry by-product meal, soybean meal, and wet distillers grains. All trials included a urea-only treatment. Intake of all cattle within an experiment was held constant, as a percentage of BW, established by the urea-supplemented group. In each trial the base diet (forage and urea supplement) was MP deficient. Treatments consisted of increasing amounts of test protein replacing the urea supplement. As protein in the diet increased, ADG plateaued. Among experiments, ADG ranged from 0.11 to 0.73 kg. Three methods of calculating microbial efficiency were used to determine MP supply. Gain was then regressed against calculated MP supply to determine the MP requirement for maintenance and gain. Method 1 (based on a constant 13% microbial efficiency, as used by the beef NRC model) predicted an MP maintenance requirement of 3.8 g/kg BW and 385 g MP/kg gain. Method 2 calculated microbial efficiency using low-quality forage diets and predicted MP requirements of 3.2 g/kg BW for maintenance and 448 g/kg for gain. Method 3 (based on an equation predicting MCP yield from TDN intake, proposed by the Beef Cattle Nutrient Requirements Model [BCNRM]) predicted MP requirements of 3.1 g/kg BW for maintenance and 342 g/kg for gain. The factorial method of calculating MP maintenance requirements accounts for scurf, endogenous urinary, and metabolic fecal protein losses and averaged 4.2 g/kg BW. Cattle performance data demonstrate that formulating diets to meet the beef NRC model's recommended MP maintenance requirement (3.8 g/kg BW) works well when using 13% microbial efficiency. Therefore, a change in how microbial efficiency is calculated necessitates a change in the proposed MP maintenance requirement so as not to oversupply or undersupply RUP. Using the 2016 BCNRM to predict MCP production and formulate diets to meet MP requirements also requires changing the MP maintenance requirement to 3.1 g/kg BW.
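The Method 1 coefficients translate into a simple daily MP budget; a worked example for a hypothetical animal:

```python
def mp_requirement(bw_kg: float, adg_kg: float,
                   maint_g_per_kg=3.8, gain_g_per_kg=385.0) -> float:
    """Daily MP requirement (g/d) using Method 1's coefficients."""
    return maint_g_per_kg * bw_kg + gain_g_per_kg * adg_kg

# 300-kg growing animal gaining 0.5 kg/d: 1,140 + 192.5 = 1,332.5 g MP/d
print(mp_requirement(bw_kg=300.0, adg_kg=0.5))
```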
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. The method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm splits the blob and reestablishes the instances' identities. If the algorithm cannot determine the identity of a blob, manual intervention is required. The procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method was then compared against two well-known zebrafish tracking methods from the literature: one that treats occlusion scenarios and one that only tracks fish not in occlusion. On the data set used, the proposed method outperforms the first method, correctly separating occluded fish in at least 8.15% more of the cases. The proposed method also outperformed the second method on some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. The proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases without user intervention.
40 CFR 63.93 - Approval of State requirements that substitute for a section 112 rule.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., board and administrative orders, permits issued pursuant to permit templates, or State operating permits... respective Federal rule; (2) Levels of control (including associated performance test methods) and compliance... must include monitoring or another method for determining compliance. (ii) If a standard in the...
2015-03-26
...common-method bias requires careful assessment of potential sources of bias and implementing procedural and statistical control methods. Podsakoff...
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Barbiero, Alessandro
2012-01-01
The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…
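As a rough illustration of what such simulation procedures involve, one common construction draws correlated latent normal variables and discretizes them at thresholds matched to the target marginal probabilities. The sketch below follows that generic recipe and is not the article's specific algorithm; note that discretization alters the correlation, which dedicated methods correct for.

```python
import numpy as np
from scipy.stats import norm

def simulate_ordinal(n, marginals, latent_corr, seed=None):
    """marginals: one category-probability vector per variable."""
    rng = np.random.default_rng(seed)
    k = len(marginals)
    z = rng.multivariate_normal(np.zeros(k), latent_corr, size=n)
    out = np.empty((n, k), dtype=int)
    for j, p in enumerate(marginals):
        # thresholds are normal quantiles of the cumulative marginals
        cuts = norm.ppf(np.cumsum(p)[:-1])
        out[:, j] = np.searchsorted(cuts, z[:, j])
    return out

# Example: two 3-category variables with latent correlation 0.5
x = simulate_ordinal(1000,
                     [[0.2, 0.5, 0.3], [0.4, 0.4, 0.2]],
                     [[1.0, 0.5], [0.5, 1.0]], seed=1)
```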
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative Method and the Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement: the standard deviations of the axial force residuals differ by only about 0.001% of the axial force capacity.
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
Analysis of the performance of the drive system and diffuser of the Langley unitary plan wind tunnel
NASA Technical Reports Server (NTRS)
Hasel, L. E.; Stallings, R. L.
1981-01-01
A broad program was initiated at the Langley Research Center in 1973 to reduce the energy consumption of the laboratory. As a part of this program, the performance characteristics of the Unitary Plan Wind Tunnel were reexamined to determine whether potential methods for increasing the operating efficiencies of the tunnel could be formulated. The results of that study are summarized. The performance characteristics of the drive system components and the variable-geometry diffuser system of the tunnel are documented and analyzed. Several potential methods for reducing the energy requirements of the facility are discussed.
Rubber Balloons, Buoyancy and the Weight of Air: A Look Inside
ERIC Educational Resources Information Center
Calza, G.; Gratton, L. M.; Lopez-Arias, T.; Oss, S.
2009-01-01
We discuss three methods of measuring the density of air most commonly used in a teaching context. Emphasis is put on the advantages and/or difficulties of each method. In particular, we show that the 'rubber balloon' method can still be performed with meaningful physical insight, but it requires a very careful approach. (Contains 4 figures and 3…
NASA Astrophysics Data System (ADS)
Park, Jonghee; Yoon, Kuk-Jin
2015-02-01
We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
NASA Astrophysics Data System (ADS)
Mai, W.; Zhang, J.-F.; Zhao, X.-M.; Li, Z.; Xu, Z.-W.
2017-11-01
Wastewater from the dye industry is typically analyzed using a standard method for measurement of chemical oxygen demand (COD) or by a single-wavelength spectroscopic method. To overcome the disadvantages of these methods, ultraviolet-visible (UV-Vis) spectroscopy was combined with principal component regression (PCR) and partial least squares regression (PLSR) in this study. Unlike the standard method, this method does not require digestion of the samples for preparation. Experiments showed that the PLSR model offered high prediction performance for COD, with a mean relative error of about 5% for two dyes. This error is similar to that obtained with the standard method. In this study, the precision of the PLSR model decreased with the number of dye compounds present. It is likely that multiple models will be required in reality, and the complexity of a COD monitoring system would be greatly reduced if the PLSR model is used because it can include several dyes. UV-Vis spectroscopy with PLSR successfully enhanced the performance of COD prediction for dye wastewater and showed good potential for application in on-line water quality monitoring.
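A compact sketch of the chemometric step follows: absorbance spectra are the predictors and lab-measured COD is the response, fitted with partial least squares regression. The file names and the component count are placeholders; the study's preprocessing and model-selection details are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: (n_samples, n_wavelengths) absorbance matrix; y: reference COD (mg/L)
# Hypothetical file names used purely for illustration.
X = np.loadtxt("uvvis_spectra.csv", delimiter=",")
y = np.loadtxt("cod_reference.csv", delimiter=",")

pls = PLSRegression(n_components=5)          # tune the component count via CV
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rel_err = np.abs(y_cv - y) / y               # per-sample relative error
print(f"mean relative error: {100 * rel_err.mean():.1f}%")
```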
Salvati, Louis M; McClure, Sean C; Reddy, Todime M; Cellar, Nicholas A
2016-05-01
This method provides simultaneous determination of total vitamins B1, B2, B3, and B6 in infant formula and related nutritionals (adult and infant). The method was given First Action for vitamins B1, B2, and B6, but not B3, during the AOAC Annual Meeting in September 2015. The method uses acid phosphatase to dephosphorylate the phosphorylated vitamin forms. It then measures thiamine (vitamin B1); riboflavin (vitamin B2); nicotinamide and nicotinic acid (vitamin B3); and pyridoxine, pyridoxal, and pyridoxamine (vitamin B6) from digested sample extract by liquid chromatography-tandem mass spectrometry. A single-laboratory validation was performed on 14 matrixes provided by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals (SPIFAN) to demonstrate method effectiveness. The method met requirements of the AOAC SPIFAN Standard Method Performance Requirement for each of the three vitamins, including average over-spike recovery of 99.6 ± 3.5%, average repeatability of 1.5 ± 0.8% relative standard deviation, and average intermediate precision of 3.9 ± 1.3% relative standard deviation.
Fitting methods to paradigms: are ergonomics methods fit for systems thinking?
Salmon, Paul M; Walker, Guy H; M Read, Gemma J; Goode, Natassia; Stanton, Neville A
2017-02-01
The issues being tackled within ergonomics problem spaces are shifting. Although existing paradigms appear relevant for modern day systems, it is worth questioning whether our methods are. This paper asks whether the complexities of systems thinking, a currently ubiquitous ergonomics paradigm, are outpacing the capabilities of our methodological toolkit. This is achieved through examining the contemporary ergonomics problem space and the extent to which ergonomics methods can meet the challenges posed. Specifically, five key areas within the ergonomics paradigm of systems thinking are focused on: normal performance as a cause of accidents, accident prediction, system migration, systems concepts and ergonomics in design. The methods available for pursuing each line of inquiry are discussed, along with their ability to respond to key requirements. In doing so, a series of new methodological requirements and capabilities are identified. It is argued that further methodological development is required to provide researchers and practitioners with appropriate tools to explore both contemporary and future problems. Practitioner Summary: Ergonomics methods are the cornerstone of our discipline. This paper examines whether our current methodological toolkit is fit for purpose given the changing nature of ergonomics problems. The findings provide key research and practice requirements for methodological development.
An Evaluation Method for PV Systems by using Limited Data Item
NASA Astrophysics Data System (ADS)
Oozeki, Takashi; Izawa, Toshiyasu; Otani, Kenji; Tsuzuku, Ken; Koike, Hisafumi; Kurokawa, Kosuke
Although photovoltaic (PV) systems are increasingly being introduced throughout Japan, almost none of them receive attention after installation because PV systems are regarded as maintenance free. In fact, operational troubles often remain hidden from system owners because characteristics such as the ideal output energy cannot be identified completely. It is therefore very important to evaluate these characteristics. Doing so normally requires measurement equipment, and such equipment, especially a pyrheliometer, is usually too expensive for PV system owners. Consequently, an evaluation method that can reveal operational performance, such as the performance ratio, from very few kinds of data is necessary. The method proposed in this paper can evaluate the performance ratio, shading losses, and inverter efficiency losses using only system output data. The adequacy of the method is demonstrated by comparison with measured data and field survey results. The method is intended as a checking tool for PV system performance.
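For orientation, the performance ratio itself is a simple quotient: measured output divided by the output an ideally performing array of the rated capacity would produce from the same irradiation. The numbers below are illustrative; the paper's contribution is estimating such figures without irradiance instruments, which this plain formula does not reproduce.

```python
rated_power_kw = 4.0        # array rating at STC (kW), illustrative
irradiation_kwh_m2 = 120.0  # monthly in-plane irradiation (kWh/m^2)
g_stc = 1.0                 # STC irradiance (kW/m^2)
energy_out_kwh = 380.0      # measured AC output for the month (kWh)

reference_yield = irradiation_kwh_m2 / g_stc               # equivalent hours at STC
performance_ratio = energy_out_kwh / (rated_power_kw * reference_yield)
print(f"PR = {performance_ratio:.2f}")                     # ~0.79 here
```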
Validation studies and proficiency testing.
Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert
2002-01-01
Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain more than 1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods, which could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.
The Flash ADC system and PMT waveform reconstruction for the Daya Bay experiment
NASA Astrophysics Data System (ADS)
Huang, Yongbo; Chang, Jinfan; Cheng, Yaping; Chen, Zhang; Hu, Jun; Ji, Xiaolu; Li, Fei; Li, Jin; Li, Qiuju; Qian, Xin; Jetter, Soeren; Wang, Wei; Wang, Zheng; Xu, Yu; Yu, Zeyuan
2018-07-01
To better understand the energy response of the Antineutrino Detector (AD), the Daya Bay Reactor Neutrino Experiment installed a full Flash ADC readout system on one AD that allowed for simultaneous data taking with the current readout system. This paper presents the design, data acquisition, and simulation of the Flash ADC system, and focuses on the PMT waveform reconstruction algorithms. For liquid scintillator calorimetry, the most critical requirement to waveform reconstruction is linearity. Several common reconstruction methods were tested but the linearity performance was not satisfactory. A new method based on the deconvolution technique was developed with 1% residual non-linearity, which fulfills the requirement. The performance was validated with both data and Monte Carlo (MC) simulations, and 1% consistency between them has been achieved.
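The deconvolution idea can be sketched briefly: divide the waveform's spectrum by the single-photoelectron response spectrum, with regularization to keep noise from blowing up where the response is small. This is a generic Wiener-style inverse filter under an assumed flat noise spectrum; the experiment's algorithm includes further filtering and calibration steps not shown here.

```python
import numpy as np

def deconvolve(waveform, spe_response, eps=1e-3):
    """Recover the photoelectron arrival signal from a PMT waveform.

    waveform: digitized PMT trace; spe_response: single-photoelectron
    template; eps: regularization strength (illustrative default).
    """
    n = len(waveform)
    W = np.fft.rfft(waveform, n)
    H = np.fft.rfft(spe_response, n)
    # regularized inverse filter: behaves like 1/H where |H| is large,
    # and rolls off smoothly where |H| is small (noise-dominated bins)
    G = np.conj(H) / (np.abs(H) ** 2 + eps * np.max(np.abs(H)) ** 2)
    return np.fft.irfft(W * G, n)
```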
Study on verifying the angle measurement performance of the rotary-laser system
NASA Astrophysics Data System (ADS)
Zhao, Jin; Ren, Yongjie; Lin, Jiarui; Yin, Shibin; Zhu, Jigui
2018-04-01
An angle verification method for assessing the angle measurement performance of the rotary-laser system was developed. Angle measurement performance has a great impact on measuring accuracy. Although there is some previous research on verifying the angle-measuring uncertainty of the rotary-laser system, there are still some limitations. High-precision reference angles are used in the study of the method, and an integrated verification platform is set up to evaluate the performance of the system. This paper also probes the error that has the biggest influence on the verification system. Some errors of the verification system are avoided via the experimental method, and some are compensated through the computational formula and curve fitting. Experimental results show that the angle measurement performance meets the requirement for coordinate measurement. The verification platform can efficiently evaluate the uncertainty of angle measurement for the rotary-laser system.
Reduced kernel recursive least squares algorithm for aero-engine degradation prediction
NASA Astrophysics Data System (ADS)
Zhou, Haowen; Huang, Jinquan; Lu, Feng
2017-10-01
Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, thereby lacking sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, the information conveyed by the redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independence. Unlike conventional methods, our methodology employs the redundant data to update the coefficients of the existing network. Due to the effective utilization of the redundant data, the novel algorithm achieves better accuracy, although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computational consumption while maintaining satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
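The flavor of "reuse redundant samples instead of discarding them" can be illustrated with a simpler relative from the quantized kernel adaptive filtering family: a sample close to an existing center updates that center's coefficient rather than growing the network. This is a hedged stand-in, not the paper's RKRLS derivation.

```python
import numpy as np

class QuantizedKLMS:
    """Kernel LMS whose network grows only for genuinely novel samples."""
    def __init__(self, step=0.5, sigma=1.0, quant_dist=0.5):
        self.step, self.sigma, self.quant = step, sigma, quant_dist
        self.centers, self.coefs = [], []

    def _kernel(self, x, c):
        return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.coefs, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(d))
            if d[j] <= self.quant:                 # redundant sample:
                self.coefs[j] += self.step * err   # update, don't grow
                return err
        self.centers.append(np.asarray(x, dtype=float))  # novel: grow
        self.coefs.append(self.step * err)
        return err
```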
Using the Detectability Index to Predict P300 Speller Performance
Mainsah, B.O.; Collins, L.M.; Throckmorton, C.S.
2017-01-01
Objective The P300 speller is a popular brain-computer interface (BCI) system that has been investigated as a potential communication alternative for individuals with severe neuromuscular limitations. To achieve acceptable accuracy levels for communication, the system requires repeated data measurements in a given signal condition to enhance the signal-to-noise ratio of elicited brain responses. These elicited brain responses, which are used as control signals, are embedded in noisy electroencephalography (EEG) data. The discriminability between target and non-target EEG responses defines a user's performance with the system. A previous P300 speller model has been proposed to estimate system accuracy given a certain amount of data collection. However, the approach was limited to a static stopping algorithm, i.e. averaging over a fixed number of measurements, and the row-column paradigm. A generalized method that is also applicable to dynamic stopping algorithms and other stimulus paradigms is desirable. Approach We developed a new probabilistic model-based approach to predicting BCI performance, where performance functions can be derived analytically or via Monte Carlo methods. Within this framework, we introduce a new model for the P300 speller with the Bayesian dynamic stopping (DS) algorithm, by simplifying a multi-hypothesis to a binary hypothesis problem using the likelihood ratio test. Under a normality assumption, the performance functions for the Bayesian algorithm can be parameterized with the detectability index, a measure which quantifies the discriminability between target and non-target EEG responses. Main results Simulations with synthetic and empirical data provided initial verification of the proposed method of estimating performance with Bayesian DS using the detectability index. Analysis of results from previous online studies validated the proposed method. Significance The proposed method could serve as a useful tool to initially assess BCI performance without extensive online testing, in order to estimate the amount of data required to achieve a desired accuracy level. PMID:27705956
Using the detectability index to predict P300 speller performance
NASA Astrophysics Data System (ADS)
Mainsah, B. O.; Collins, L. M.; Throckmorton, C. S.
2016-12-01
Objective. The P300 speller is a popular brain-computer interface (BCI) system that has been investigated as a potential communication alternative for individuals with severe neuromuscular limitations. To achieve acceptable accuracy levels for communication, the system requires repeated data measurements in a given signal condition to enhance the signal-to-noise ratio of elicited brain responses. These elicited brain responses, which are used as control signals, are embedded in noisy electroencephalography (EEG) data. The discriminability between target and non-target EEG responses defines a user’s performance with the system. A previous P300 speller model has been proposed to estimate system accuracy given a certain amount of data collection. However, the approach was limited to a static stopping algorithm, i.e. averaging over a fixed number of measurements, and the row-column paradigm. A generalized method that is also applicable to dynamic stopping (DS) algorithms and other stimulus paradigms is desirable. Approach. We developed a new probabilistic model-based approach to predicting BCI performance, where performance functions can be derived analytically or via Monte Carlo methods. Within this framework, we introduce a new model for the P300 speller with the Bayesian DS algorithm, by simplifying a multi-hypothesis to a binary hypothesis problem using the likelihood ratio test. Under a normality assumption, the performance functions for the Bayesian algorithm can be parameterized with the detectability index, a measure which quantifies the discriminability between target and non-target EEG responses. Main results. Simulations with synthetic and empirical data provided initial verification of the proposed method of estimating performance with Bayesian DS using the detectability index. Analysis of results from previous online studies validated the proposed method. Significance. The proposed method could serve as a useful tool to initially assess BCI performance without extensive online testing, in order to estimate the amount of data required to achieve a desired accuracy level.
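The detectability index itself is easy to state: the standardized distance between the target and non-target score distributions. A minimal sketch under the same normality assumption:

```python
import numpy as np

def detectability_index(target_scores, nontarget_scores):
    """d' between target and non-target classifier-score distributions."""
    mu_t, mu_n = np.mean(target_scores), np.mean(nontarget_scores)
    var_t, var_n = np.var(target_scores), np.var(nontarget_scores)
    return (mu_t - mu_n) / np.sqrt(0.5 * (var_t + var_n))
```

A larger d' means the elicited responses are easier to separate, so fewer repeated measurements should be needed for a given accuracy target.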
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuations from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method demonstrates accurate on-line evaluation of the metering performance of an EVT without a standard voltage transformer.
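A hedged sketch of the separation idea: in a three-phase symmetric system, a primary-side fluctuation moves all three EVT outputs together (captured by the first principal component), while a metering anomaly in one EVT shows up in the residual. The statistic and thresholding strategy below are illustrative, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA

def anomaly_statistic(readings):
    """readings: (n_samples, 3) secondary-voltage amplitudes, phases A/B/C."""
    X = readings - readings.mean(axis=0)
    pca = PCA(n_components=1).fit(X)
    common = pca.inverse_transform(pca.transform(X))  # common-mode variation
    residual = X - common                             # per-phase deviation
    return residual.std(axis=0)   # drift in one column suggests that EVT

# Usage idea: track the statistic over sliding windows and flag a phase
# whose residual spread grows beyond its historical baseline.
```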
Comprehensive comparative analysis of 5'-end RNA-sequencing methods.
Adiconis, Xian; Haber, Adam L; Simmons, Sean K; Levy Moonshine, Ami; Ji, Zhe; Busby, Michele A; Shi, Xi; Jacques, Justin; Lancaster, Madeline A; Pan, Jen Q; Regev, Aviv; Levin, Joshua Z
2018-06-04
Specialized RNA-seq methods are required to identify the 5' ends of transcripts, which are critical for studies of gene regulation, but these methods have not been systematically benchmarked. We directly compared six such methods, including the performance of five methods on a single human cellular RNA sample and a new spike-in RNA assay that helps circumvent challenges resulting from uncertainties in annotation and RNA processing. We found that the 'cap analysis of gene expression' (CAGE) method performed best for mRNA and that most of its unannotated peaks were supported by evidence from other genomic methods. We applied CAGE to eight brain-related samples and determined sample-specific transcription start site (TSS) usage, as well as a transcriptome-wide shift in TSS usage between fetal and adult brain.
NASA Astrophysics Data System (ADS)
Eliot, Michael H.
Students with learning disabilities (SWLDs) need to attain academic rigor to graduate from high school and college, as well as achieve success in life. Constructivist theories suggest that guided inquiry may provide the impetus for their success, yet little research has been done to support this premise. This study was designed to fill that gap. This quasi-experimental study compared didactic and guided inquiry-based teaching of science concepts to secondary SWLDs in SDC science classes. The study examined 38 students in four classes at two diverse, urban high schools. Participants were taught two science concepts using both teaching methods and posttested after each using paper-and-pencil tests and performance tasks. Data were compared to determine increases in conceptual understanding by teaching method, order of teaching method, and exposure to one or both teaching methods. A survey examined participants' perceived self-efficacy under each method. Also, a qualitative comparison of the two test formats examined their appropriate use with SWLDs. Results showed significantly higher scores after the guided inquiry method on the concept of volume, suggesting that guided inquiry does improve conceptual understanding over didactic instruction in some cases. Didactic teaching followed by guided inquiry resulted in higher scores than the reverse order, indicating that SWLDs may require direct instruction in basic facts and procedures related to a topic prior to engaging in guided inquiry. Also, application of both teaching methods resulted in significantly higher scores than a single method on the concept of density, suggesting that SWLDs may require the more in-depth instruction provided by using both methods. No differences in perceived self-efficacy were shown. Qualitative analysis of both assessments and participants' behaviors during testing supports the use of performance tasks over paper-and-pencil tests with SWLDs. Implications for education include the use of guided inquiry to increase SWLDs' conceptual understanding and process skills, while improving motivation and participation through hands-on learning. In addition, teachers may use performance tasks to better assess students' thought processes, problem solving skills, and conceptual understanding. However, constructivist teaching methods require extra training, pedagogical skills, subject matter knowledge, physical resources, and support from all stakeholders.
[High performance thin-layer chromatography in specific blood diagnosis (author's transl)].
Bernardelli, B; Masotti, G
1976-01-01
Furthering their research into the differentiation of various haemoglobins (both human and animal) with the use of thin layer chromatographic methods, the Authors have applied Kaiser's high performance thin layer chromatography (HPTLC) to the specific diagnosis of blood. Although the method was superior to ascending one-dimensional thin layer chromatography for its sensitivity, Rf reproducibility and much briefer migration times, it did not turn out to be suitable for application to the specific requirements of forensic haematology.
Laser Doppler velocimetry primer
NASA Technical Reports Server (NTRS)
Bachalo, William D.
1985-01-01
Advanced research in experimental fluid dynamics requires familiarity with sophisticated measurement techniques. In some cases, the development and application of new techniques is required for difficult measurements. Optical methods, and in particular the laser Doppler velocimeter (LDV), are now recognized as the most reliable means for performing measurements in complex turbulent flows. As such, the experimental fluid dynamicist should be familiar with the principles of operation of the method and the details associated with its application. Thus, the goals of this primer are to efficiently transmit the basic concepts of the LDV method to potential users and to provide references that describe the specific areas in greater detail.
The teratology testing of food additives.
Barrow, Paul C; Spézia, François
2013-01-01
The developmental and reproductive toxicity testing (including teratogenicity) of new foods and food additives is performed worldwide according to the guidelines given in the FDA Redbook. These studies are not required for substances that are generally recognized as safe, according to the FDA inventory. The anticipated cumulated human exposure level above which developmental or reproduction studies are required depends on the structure-alert category. For food additives of concern, both developmental (prenatal) and reproduction (multigeneration) studies are required. The developmental studies are performed in two species, usually the rat and the rabbit. The reproduction study is generally performed in the rat. The two rat studies are preferably combined into a single experimental design, if possible. The test methods described in the FDA Redbook are similar to those specified by the OECD for the reproductive toxicity testing of chemicals.
A technology development program for large space antennas
NASA Technical Reports Server (NTRS)
Russell, R. A.; Campbell, T. G.; Freeland, R. E.
1980-01-01
The design and application of the offset wrap rib and the maypole (hoop/column) antenna configurations are described. The NASA mission model that generically categorizes the classes of user requirements, as well as the methods used to determine critical technologies and requirements are discussed. Performance estimates for the mesh deployable antenna selected for development are presented.
ERIC Educational Resources Information Center
Zholdasbekova, S.; Karataev, G.; Yskak, A.; Zholdasbekov, A.; Nurzhanbaeva, J.
2015-01-01
This article describes the major components of required technological skills (TS) for future designers taught during the academic process of a college. It considers the choices in terms of the various logical operations required by the fashion industry including fabric processing, assembly charts, performing work operations, etc. The article…
40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests
Code of Federal Regulations, 2010 CFR
2010-07-01
... THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR part 60... the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...
40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR part 60... the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...
MRP (materiel requirements planning) II: successful implementation the hard way.
Grubbs, S C
1994-05-01
Many manufacturing companies embark on MRP II implementation projects as a method for improvement. In spite of an increasing body of knowledge regarding successful implementations, companies continue to attempt new approaches. This article reviews an actual implementation, featuring some of the mistakes made and the efforts required to still achieve "Class A" performance levels.
Tang, Shaojie; Yang, Yi; Tang, Xiangyang
2012-01-01
The interior tomography problem can be solved using the so-called differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can be of any shape, but most of the existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may cross a large area of the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering so that the small area with the a priori knowledge can be roughly round (e.g., a sinus or ventricle among other anatomic cavities in the human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. We specifically re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to ensure that only a small round area with the a priori knowledge is needed (namely, the radial DBP-POCS method henceforth). The performance of the practical DBP-POCS method with radial Hilbert filtering and a priori knowledge in a small round area is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way at high accuracy. In comparison to the implementations of the DBP-POCS method demanding the a priori knowledge in a horizontal or vertical strip, the radial DBP-POCS method requires the a priori knowledge within a small round area only. Such a relaxed requirement on the availability of a priori knowledge can be readily met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in the human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which the behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
Multiple nodes transfer alignment for airborne missiles based on inertial sensor network
NASA Astrophysics Data System (ADS)
Si, Fan; Zhao, Yan
2017-09-01
Transfer alignment is an important initialization method for airborne missiles because the alignment accuracy largely determines the performance of the missile. However, traditional alignment methods are limited by complicated and unknown flexure angle, and cannot meet the actual requirement when wing flexure deformation occurs. To address this problem, we propose a new method that uses the relative navigation parameters between the weapons and fighter to achieve transfer alignment. First, in the relative inertial navigation algorithm, the relative attitudes and positions are constantly computed in wing flexure deformation situations. Secondly, the alignment results of each weapon are processed using a data fusion algorithm to improve the overall performance. Finally, the feasibility and performance of the proposed method were evaluated under two typical types of deformation, and the simulation results demonstrated that the new transfer alignment method is practical and has high-precision.
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-08-21
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.
GPS/DR Error Estimation for Autonomous Vehicle Localization
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-01-01
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997
Diverse task scheduling for individualized requirements in cloud manufacturing
NASA Astrophysics Data System (ADS)
Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida
2018-03-01
Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through network and CMfg platforms. In CMfg system, task scheduling as an important means of finding suitable services for specific manufacturing tasks plays a key role in enhancing the system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First of all, a mathematical model of task scheduling is built based on analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aiming for diverse tasks, which enables each service demander to obtain desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to searching for optimal task scheduling solutions. The effectiveness of the scheduling method proposed is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method is able to achieve better performance than some usual algorithms such as simulated annealing and pattern search.
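As an illustration of the search step, the sketch below encodes one candidate-service index per subtask and lets a bare-bones genetic algorithm minimize a total-cost fitness. The cost table, operators, and parameters are illustrative assumptions; the paper's improved GA and its QoS model are richer than this.

```python
import random

def evolve(costs, pop_size=40, gens=200, p_mut=0.1):
    """costs[i][j] = cost of serving subtask i with its j-th candidate service."""
    n = len(costs)

    def fitness(ch):
        return sum(costs[i][g] for i, g in enumerate(ch))

    # random initial population: one candidate index per subtask
    pop = [[random.randrange(len(costs[i])) for i in range(n)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:       # point mutation
                i = random.randrange(n)
                child[i] = random.randrange(len(costs[i]))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Example: 4 subtasks, each with 3 candidate services (illustrative costs)
best = evolve([[3, 5, 2], [4, 1, 6], [2, 2, 5], [7, 3, 4]])
```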
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states to robot actions (94%), and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
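A hedged sketch of the actor-critic machinery for a two-action decision task: a softmax actor maps a neural-feature vector to action probabilities, and a linear critic supplies the TD error that adapts both. The feature size, learning rates, and linear function approximation are illustrative assumptions, not the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, lr_actor, lr_critic, gamma = 16, 0.05, 0.1, 0.9
w_actor = np.zeros((2, n_features))   # one weight row per action
w_critic = np.zeros(n_features)       # linear state-value estimate

def choose(state):
    """Sample an action from the softmax policy; return it with its probs."""
    logits = w_actor @ state
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(2, p=p)), p

def update(state, action, probs, reward, next_state):
    """One TD(0) actor-critic update after observing the reward."""
    td_error = reward + gamma * (w_critic @ next_state) - w_critic @ state
    w_critic[:] += lr_critic * td_error * state          # critic step
    grad = -np.outer(probs, state)                       # d log pi / d w
    grad[action] += state
    w_actor[:] += lr_actor * td_error * grad             # actor step
```

In the BMI setting, `state` would be the binned neural ensemble activity and `reward` the basic feedback signal the paper describes.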
Circling motion and screen edges as an alternative input method for on-screen target manipulation.
Ka, Hyun W; Simpson, Richard C
2017-04-01
To investigate a new alternative interaction method, called circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around the target. To specify a desired pointing command with the circling interface, each edge of the screen is used. The user selects a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing the performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). A circling interface is compatible with many types of pointing devices, not requiring physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface had a tendency to produce faster performance with a head-mounted mouse emulator than with a joystick mouse. The performance accuracy of the circling interface outperformed the dwelling interface. It was demonstrated that the circling interface has the potential as another alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The Circling interface can also work with AAC devices.
Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers
NASA Astrophysics Data System (ADS)
Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz
2017-10-01
The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation, version 5.4.0, at frequencies in the range of 5 Hz to 2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed with one setting changed relative to the original calibration. The alterations reflected negligence and failures with respect to the above-mentioned standard and operating guidelines - e.g., the sensor was not tightened or the appropriate coupling substance was not applied. The mounting method required by the standard was also modified: different kinds of wax, light oil, grease, and other assembly methods were used. The aim of the study was to verify the significance of the standard's requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relationship between the various mounting methods was demonstrated.
Nebot, C; Regal, P; Miranda, J; Cepeda, A; Fente, C
2012-05-01
Veterinary drugs are widely and legally used to treat and prevent disease in livestock. However, drugs are also used illegally as growth-promoting agents. To protect the health of consumers, maximum residue limits (MRL) in food of animal origin have been established and are listed in Regulation 37/2010. According to this regulation, residues of more than 300 drugs need to be controlled regularly in laboratories. A cost-effective analytical method is very important, which explains why the development of multi-residue methods is becoming popular in laboratories. The aim of this work is to describe a simple, rapid and economical high-performance liquid chromatography-tandem mass spectrometry method for the simultaneous identification and quantification of 21 veterinary drugs in pork muscle samples. The sample clean-up procedure is performed with acidified dichloromethane and does not require solid phase extraction. The method is applicable to nine sulfonamides and seven coccidiostats identified within 36 min. The calculated validation parameters, such as recoveries (from 72 to 126%), intra-day precision and intermediate precision (relative standard deviation below 40%) and decision limits (below 7 µg kg(-1)), were within the acceptable range and in compliance with the requirements of Commission Decision 2002/657/EC. © The Author [2012]. Published by Oxford University Press. All rights reserved.
Super-Resolution for Color Imagery
2017-09-01
...separately; however, it requires performing the super-resolution computation 3 times. We transform images in the default red, green, blue (RGB) color space... chrominance components based on ARL's alias-free image upsampling using Fourier-based windowing methods. A reverse transformation is performed on... [Figure-list residue: Fig. 1, Transformation from sRGB to CIELAB; Fig. 2, YCbCr mathematical coordinate transformation.]
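The luminance/chrominance split mentioned above can be sketched with the standard BT.601 YCbCr transform, used here as a stand-in: convert, super-resolve the channels of interest, then invert. The exact matrices are an illustrative assumption; the report's sRGB linearization and CIELAB comparisons are not reproduced.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array in [0, 1], shape (..., 3); BT.601 full-range."""
    m = np.array([[ 0.299,     0.587,     0.114    ],
                  [-0.168736, -0.331264,  0.5      ],
                  [ 0.5,      -0.418688, -0.081312 ]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 0.5            # center the chroma channels
    return ycc

def ycbcr_to_rgb(ycc):
    ycc = ycc.copy()
    ycc[..., 1:] -= 0.5
    m_inv = np.array([[1.0,  0.0,       1.402    ],
                      [1.0, -0.344136, -0.714136 ],
                      [1.0,  1.772,     0.0      ]])
    return ycc @ m_inv.T
```

Processing Y, Cb, and Cr instead of R, G, and B lets the expensive upsampling be concentrated on the perceptually dominant luminance channel.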
10 CFR 1049.8 - Training of SPR Protective Force Officers and qualification to carry firearms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... sufficient to maintain at least the minimum level of competency required for the successful performance of... competence to perform tasks associated with their responsibilities. The basic course shall include the...) Operating in such a manner as to preserve SPR sites and facilities; (9) Communications, including methods...
Estimation of channel parameters and background irradiance for free-space optical link.
Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk
2013-05-10
Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on perfect removal of the background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using simulation data as well as experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
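The minimum-value idea is simple to sketch: the optical signal is non-negative, so deep fades expose the background level, and the smallest received sample approximates it. The snippet below implements that estimator and the SI it feeds; the MP and ML formulations in the paper are not reproduced here.

```python
import numpy as np

def background_mv(samples):
    """Minimum-value background estimate from received irradiance samples."""
    return float(np.min(samples))

def scintillation_index(samples, background):
    """SI of the background-corrected signal: var(s) / mean(s)^2."""
    s = np.asarray(samples, dtype=float) - background
    return float(np.var(s) / np.mean(s) ** 2)

# Usage: remove the estimated background before computing the SI;
# a biased background estimate directly biases the SI.
```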
Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno
2016-01-22
Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, the diagnosis of such an imbalance is essential to adjust the statistical analysis if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small compared with the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for preselection of the covariates to be included in the PS model to enhance imbalance detection.
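A minimal sketch of the diagnostic: fit a propensity-score model predicting trial arm from baseline covariates and report its c-statistic (ROC AUC). A value near 0.5 suggests balance; values well above it flag global imbalance. The cluster structure and the paper's decision thresholds are not handled in this illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def imbalance_c_statistic(X, arm):
    """X: (n, p) baseline covariates; arm: 0/1 treatment allocation."""
    ps_model = LogisticRegression(max_iter=1000).fit(X, arm)
    ps = ps_model.predict_proba(X)[:, 1]   # estimated propensity scores
    return roc_auc_score(arm, ps)          # c-statistic of the PS model
```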
Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil
France, Brian; Bell, William; Chang, Emily; Scholten, Trudy
2015-01-01
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated, and spore recovery with composites was shown to match individual-sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315
Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps
NASA Technical Reports Server (NTRS)
Hord, J.
1974-01-01
The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation of the developed-cavity data for a venturi, a hydrofoil, and three scaled ogives is performed. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and a universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.
Aspheres for high speed cine lenses
NASA Astrophysics Data System (ADS)
Beder, Christian
2005-09-01
To fulfil the requirements of today's high-performance cine lenses, aspheres are an indispensable part of lens design. Besides making them manageable in shape and size, tolerancing aspheres is an essential part of the development process. The traditional method of tolerancing individual aspherical coefficients yields only theoretical figures of no practical use. In order to obtain viable parameters that can easily be dealt with in a production line, more refined techniques are required. In this presentation, a method of simulating characteristic manufacturing errors and deducing surface deviation and slope error tolerances will be shown.
ERIC Educational Resources Information Center
Foster, Bruce E., Ed.
Volume 1 contains all the invited papers accepted for the symposium. The subject matter covered in the papers includes physiological, anthropometrical, psychological, sociological, and economic human requirements and methods of evaluation; physical requirements and methods of evaluation in mechanical, acoustical, thermal, dimensional stability,…
Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces
Onishi, Akinari; Natsume, Kiyohisa
2014-01-01
A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550
Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.
Onishi, Akinari; Natsume, Kiyohisa
2014-01-01
A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.
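A rough sketch of overlapped partitioning: adjacent training partitions share a fraction of their samples, one LDA is trained per partition, and the ensemble averages the members' scores. The partition arithmetic and the 50% overlap below are illustrative assumptions (the data are assumed shuffled so every partition contains both classes).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_overlapped_ensemble(X, y, n_parts=5, overlap=0.5):
    """Train n_parts LDA classifiers on partitions sharing `overlap` of samples."""
    n = len(X)
    # partition size so that n_parts overlapping windows tile the data
    size = int(n / (n_parts - (n_parts - 1) * overlap))
    stride = int(size * (1 - overlap))
    models = []
    for k in range(n_parts):
        idx = np.arange(k * stride, min(k * stride + size, n))
        models.append(LinearDiscriminantAnalysis().fit(X[idx], y[idx]))
    return models

def ensemble_score(models, X):
    """Average the members' decision scores (binary target/non-target case)."""
    return np.mean([m.decision_function(X) for m in models], axis=0)
```

Because each sample contributes to more than one member, the ensemble extracts more from a small training set than naive (disjoint) partitioning, which matches the reduced-training-data finding reported above.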
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2012 CFR
2012-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
21 CFR 606.65 - Supplies and reagents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... solutions shall be tested on a regularly scheduled basis by methods described in the Standard Operating Procedures Manual to determine their capacity to perform as required: Reagent or solution Frequency of...
White, Paul J; Naidu, Som; Yuriev, Elizabeth; Short, Jennifer L; McLaughlin, Jacqueline E; Larson, Ian C
2017-11-01
Objective. To investigate the relationship between student engagement with the key elements of a flipped classroom approach (preparation and attendance), their attitudes to learning, including strategy development, and their performance on two types of examination questions (knowledge recall and providing rational predictions when faced with novel scenarios). Methods. This study correlated student engagement with the flipped classroom and student disposition to learning with student ability to solve novel scenarios in examinations. Results. Students who both prepared for and attended classes performed significantly better on examination questions that required analysis of novel scenarios than students who did not prepare and missed classes. However, there was no difference between the two groups on examination questions that required knowledge and comprehension. Student motivation and use of strategies correlated with higher examination scores on questions requiring novel scenario analysis. Conclusion. There is a synergistic relationship between class preparation and attendance. The benefit of combined preparation and attendance depended on assessment type; the relationship was apparent for questions requiring students to solve novel problems but not for questions requiring knowledge or comprehension.
Beam shuttering interferometer and method
Deason, V.A.; Lassahn, G.D.
1993-07-27
A method and apparatus resulting in the simplification of phase shifting interferometry by eliminating the requirement to know the phase shift between interferograms or to keep the phase shift between interferograms constant. The present invention provides a simple, inexpensive means to shutter each independent beam of the interferometer in order to facilitate the data acquisition requirements for optical interferometry and phase shifting interferometry. By eliminating the requirement to know the phase shift between interferograms or to keep the phase shift constant, a simple, economical means and apparatus for performing phase shifting interferometry is provided which, by thermally expanding a fiber-optic cable, changes the optical path distance of one incident beam relative to another.
Machine cost analysis using the traditional machine-rate method and ChargeOut!
E. M. (Ted) Bilek
2009-01-01
Forestry operations require ever more use of expensive capital equipment. Mechanization is frequently necessary to perform cost-effective and safe operations. Increased capital should mean more sophisticated capital costing methodologies. However the machine rate method, which is the costing methodology most frequently used, dates back to 1942. CHARGEOUT!, a recently...
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
Student Diversity Requires Different Approaches to College Teaching, Even in Math and Science.
ERIC Educational Resources Information Center
Nelson, Craig E.
1996-01-01
Asserts that traditional teaching methods are unintentionally biased towards the elite and against many non-traditional students. Outlines several easily accessible changes in teaching methods that have fostered dramatic changes in student performance with no change in standards. These approaches have proven effective even in the fields of…
40 CFR Table 2 to Subpart Jjjj of... - Requirements for Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... Interface Gas Chromatography/Mass Spectrometry as an alternative to EPA Method 18 for measuring total... portable analyzer. b You may use ASME PTC 19.10-1981, Flue and Exhaust Gas Analyses, for measuring the O2 content of the exhaust gas as an alternative to EPA Method 3B. c You may use EPA Method 18 of 40 CFR part...
40 CFR Table 2 to Subpart Jjjj of... - Requirements for Performance Tests
Code of Federal Regulations, 2010 CFR
2010-07-01
... Interface Gas Chromatography/Mass Spectrometry as an alternative to EPA Method 18 for measuring total... portable analyzer. b You may use ASME PTC 19.10-1981, Flue and Exhaust Gas Analyses, for measuring the O2 content of the exhaust gas as an alternative to EPA Method 3B. c You may use EPA Method 18 of 40 CFR part...
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Walton, E.; Aebker, E.; Mata, F.; Reilly, C.
1991-01-01
The final phase of a satellite synthesis project is described. Several methods for generating satellite positionings with improved aggregate carrier to interference characteristics were studied. Two general methods for modifying required separation values are presented. Also, two methods for improving aggregate carrier to interference (C/I) performance of given satellite synthesis solutions are presented. A perturbation of the World Administrative Radio Conference (WARC) synthesis is presented.
A method for experimental modal separation
NASA Technical Reports Server (NTRS)
Hallauer, W. L., Jr.
1977-01-01
A method is described for the numerical simulation of multiple-shaker modal survey testing using simulated experimental data to optimize the shaker force-amplitude distribution for the purpose of isolating individual modes of vibration. Inertia, damping, stiffness, and model data are stored on magnetic disks, available by direct access to the interactive FORTRAN programs which perform all computations required by this relative force amplitude distribution method.
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 7 2013-07-01 2013-07-01 false What test methods and other procedures... Internal Combustion Engines Testing Requirements for Owners and Operators § 60.4244 What test methods and...? Owners and operators of stationary SI ICE who conduct performance tests must follow the procedures in...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What test methods and other procedures... Internal Combustion Engines Testing Requirements for Owners and Operators § 60.4244 What test methods and...? Owners and operators of stationary SI ICE who conduct performance tests must follow the procedures in...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 7 2014-07-01 2014-07-01 false What test methods and other procedures... Internal Combustion Engines Testing Requirements for Owners and Operators § 60.4244 What test methods and...? Owners and operators of stationary SI ICE who conduct performance tests must follow the procedures in...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 6 2011-07-01 2011-07-01 false What test methods and other procedures... Internal Combustion Engines Testing Requirements for Owners and Operators § 60.4244 What test methods and...? Owners and operators of stationary SI ICE who conduct performance tests must follow the procedures in...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 7 2012-07-01 2012-07-01 false What test methods and other procedures... Internal Combustion Engines Testing Requirements for Owners and Operators § 60.4244 What test methods and...? Owners and operators of stationary SI ICE who conduct performance tests must follow the procedures in...
Evaluation of the use of nodal methods for MTR neutronic analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reitsma, F.; Mueller, E.Z.
1997-08-01
Although modern nodal methods are used extensively in the nuclear power industry, their use for research reactor analysis has been very limited. The suitability of nodal methods for material testing reactor analysis is investigated with the emphasis on the modelling of the core region (fuel assemblies). The nodal approach's performance is compared with that of the traditional finite-difference fine mesh approach. The advantages of using nodal methods coupled with integrated cross section generation systems are highlighted, especially with respect to data preparation, simplicity of use and the possibility of performing a great variety of reactor calculations subject to strict time limitations such as are required for the RERTR program.
End-to-end performance measurement of Internet based medical applications.
Dev, P; Harris, D; Gutierrez, D; Shah, A; Senger, S
2002-01-01
We present a method to obtain an end-to-end characterization of the performance of an application over a network. This method is not dependent on any specific application or type of network. The method requires characterization of network parameters, such as latency and packet loss, between the expected server or client endpoints, as well as characterization of the application's constraints on these parameters. A subjective metric is presented that integrates these characterizations and that operates over a wide range of applications and networks. We believe that this method may be of wide applicability as research and educational applications increasingly make use of computation and data servers that are distributed over the Internet.
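A minimal sketch of how such an end-to-end score might be assembled is shown below; the parameter names, limits, and worst-case aggregation rule are illustrative assumptions, not the subjective metric defined in the paper.

    def performance_score(measured, limits):
        """Map measured network parameters (e.g. latency in ms, loss in %)
        to [0, 1], where 1 means well within the application's tolerance.
        `limits` gives, per parameter, the value beyond which the
        application is assumed unusable."""
        scores = []
        for name, value in measured.items():
            scores.append(max(0.0, 1.0 - value / limits[name]))
        # The application is only as usable as its worst-behaved parameter.
        return min(scores)

    print(performance_score({"latency": 80.0, "loss": 0.5},
                            {"latency": 200.0, "loss": 2.0}))  # -> 0.6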
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
On-orbit calibration for star sensors without priori information.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang
2017-07-24
The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without priori information for on-orbit star sensors is proposed. First, a simplified back propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point and distortion. The proposed method benefits from self-initialization, and no attitude or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.
Adaptive optimal training of animal behavior
NASA Astrophysics Data System (ADS)
Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan
Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, the eigenvectors of the correlation matrix must be computed, which often incurs a high computational cost. For a moving source this becomes a crucial drawback, because the estimation must be conducted at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatial-temporal non-stationarity, the matrix has to be estimated from only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. The PAST does not require eigen-decomposition, and therefore the computational cost can be reduced. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
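For reference, one PAST iteration is only a few lines. The sketch below follows the standard recursive form of the algorithm with forgetting factor beta; the initialization comments are conventional choices, not prescriptions from the paper.

    import numpy as np

    def past_update(W, P, x, beta=0.97):
        """One PAST step: track the r-dimensional signal subspace W
        (m x r) from a new snapshot x (an (m, 1) column vector) without
        eigen-decomposition. P is the r x r inverse-correlation estimate
        carried between calls. Conventional initialization: W with random
        orthonormal columns, P = identity."""
        y = W.conj().T @ x                   # project snapshot onto subspace
        h = P @ y
        g = h / (beta + y.conj().T @ h)      # gain vector
        P = (P - g @ h.conj().T) / beta      # RLS-style update of P
        e = x - W @ y                        # projection error
        W = W + e @ g.conj().T               # rotate subspace toward x
        return W, P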
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
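The factorial estimate described above has the general shape of a maintenance term plus a growth term divided by a utilization efficiency. The sketch below illustrates only that structure; all coefficients are placeholder values, not those of the study's model.

    def sid_lys_requirement(bw_kg, prot_dep_g, maint_coef=0.036,
                            dep_coef=0.12, efficiency=0.72):
        """Factorial estimate of a pig's daily SID lysine requirement
        (g/day): maintenance scaled to metabolic body weight plus a
        growth term scaled to protein deposition, divided by the
        efficiency of lysine utilization. All coefficients here are
        illustrative placeholders."""
        maintenance = maint_coef * bw_kg ** 0.75   # g/day for body upkeep
        growth = dep_coef * prot_dep_g             # g/day fixed in new protein
        return (maintenance + growth) / efficiency

    # e.g. a 60 kg pig depositing 140 g protein/day (illustrative numbers)
    print(round(sid_lys_requirement(60.0, 140.0), 1))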
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
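A minimal sketch of the first, hash-based stage of such a matcher: patterns are bucketed by a fixed-length prefix so only a few candidates need full comparison. This conveys the general idea of combining hashing with matching from a fixed start; it is not the MH algorithm itself.

    def build_index(patterns, k=4):
        """Bucket patterns by their first k characters (assumes every
        pattern is at least k characters long)."""
        index = {}
        for p in patterns:
            index.setdefault(p[:k], []).append(p)
        return index

    def match_from_start(index, text, k=4):
        """Return all patterns matching `text` from position 0, as the
        HTTP/URL use case requires (matching from a fixed start)."""
        return [p for p in index.get(text[:k], []) if text.startswith(p)]

    idx = build_index(["/api/v1/", "/api/v2/", "/static/"])
    print(match_from_start(idx, "/api/v1/users"))  # ['/api/v1/']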
HO2 rovibrational eigenvalue studies for nonzero angular momentum
NASA Astrophysics Data System (ADS)
Wu, Xudong T.; Hayes, Edward F.
1997-08-01
An efficient parallel algorithm is reported for determining all bound rovibrational energy levels for the HO2 molecule for nonzero angular momentum values, J=1, 2, and 3. Performance tests on the CRAY T3D indicate that the algorithm scales almost linearly when up to 128 processors are used. Sustained performance levels of up to 3.8 Gflops have been achieved using 128 processors for J=3. The algorithm uses a direct product discrete variable representation (DVR) basis and the implicitly restarted Lanczos method (IRLM) of Sorensen to compute the eigenvalues of the polyatomic Hamiltonian. Since the IRLM is an iterative method, it does not require storage of the full Hamiltonian matrix—it only requires the multiplication of the Hamiltonian matrix by a vector. When the IRLM is combined with a formulation such as DVR, which produces a very sparse matrix, both memory and computation times can be reduced dramatically. This algorithm has the potential to achieve even higher performance levels for larger values of the total angular momentum.
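The same pattern is directly reproducible with standard tools: SciPy's eigsh wraps ARPACK, an implementation of the implicitly restarted Lanczos method, and needs only a matrix-vector product, so the sparse operator is never stored densely. The tridiagonal "Hamiltonian" below is a stand-in, not the HO2 DVR operator.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, eigsh

    n = 2000
    # Stand-in sparse "Hamiltonian": a 1D DVR-like tridiagonal operator.
    H = diags([np.full(n - 1, -1.0), np.linspace(0.0, 2.0, n),
               np.full(n - 1, -1.0)], offsets=[-1, 0, 1])

    # Only the product H @ v is required by the implicitly restarted
    # Lanczos method, exactly as in the paper.
    Hop = LinearOperator((n, n), matvec=lambda v: H @ v)

    # Lowest 10 eigenvalues without ever forming a dense matrix.
    vals = eigsh(Hop, k=10, which='SA', return_eigenvectors=False)
    print(np.sort(vals))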
Optimation and Determination of Fe-Oxinate Complex by Using High Performance Liquid Chromatography
NASA Astrophysics Data System (ADS)
Oktavia, B.; Nasra, E.; Sary, R. C.
2018-04-01
The growing need for iron drives industrial processes that require iron as a raw material, so control of industrial iron waste is very important. One method of iron analysis is to determine iron(III) ions indirectly by complexing them with 8-hydroxyquinoline (oxine). In this research, iron(III) ions were tested qualitatively and quantitatively in the form of their oxinate complex. The analysis was performed using HPLC at a wavelength of 470 nm with an ODS C18 column. Three methods of analysis were compared: (1) the Fe-oxinate complex was prepared in an ethanol solvent, so no further separation was needed; (2) the complex was prepared in chloroform, so a solvent extraction was required before the complex was injected into the column; and (3) the complex was formed in the column itself, the eluent containing the oxine, with the metal ions then injected. The resulting chromatograms show that the third approach provides the best chromatogram for iron analysis.
[Performance comparison of material tests for cadmium and lead in food contact plastics].
Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Ishii, Rie; Itoh, Yuko; Ohno, Hiroyuki; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kaneko, Reiko; Kawamura, Yoko; Shibata, Hiroshi; Sekido, Haruko; Sonobe, Hironori; Takasaka, Noriko; Tajima, Yoshiyasu; Tanaka, Aoi; Nomura, Chie; Hikida, Akinori; Matsuyama, Sigetomo; Murakami, Ryo; Yamaguchi, Miku; Wada, Takenari; Watanabe, Kazunari; Akiyama, Hiroshi
2014-01-01
Based on the Japanese Food Sanitation Law, the performances of official and alternative material test methods for cadmium (Cd) and lead (Pb) in food contact plastics were compared. Nineteen laboratories participated in an interlaboratory study and quantified Cd and Pb in three PVC pellets. In the official method, a sample is digested with H2SO4, taken up in HCl, evaporated to dryness on a water bath, and then measured by atomic absorption spectrometry (AAS) or inductively coupled plasma-optical emission spectrometry (ICP-OES). Statistical treatment revealed that the trueness, repeatability (RSDr) and reproducibility (RSDR) were 86-95%, 3.1-9.4% and 8.6-22.1%, respectively. The values of the performance parameters fulfilled the requirements, and the performances met the test specifications. The combination of evaporation to dryness on a hot plate and measurement by AAS or ICP-OES is applicable as an alternative method; however, its trueness and RSDr were inferior to those of the official method. The performance parameters obtained by using the microwave digestion method (MW method) to prepare the test solution were better than those of the official method, so the MW method is available as an alternative method. Inductively coupled plasma-mass spectrometry (ICP-MS) is also available as an alternative method; however, it is necessary to ensure complete digestion of the sample.
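For reference, trueness, RSDr and RSDR are conventionally obtained from interlaboratory replicates via a one-way ANOVA decomposition (ISO 5725 style). The sketch below shows that standard computation for a balanced design; it is not necessarily the exact statistical treatment used in the study.

    import numpy as np

    def interlab_stats(results, reference):
        """`results`: list of per-laboratory replicate arrays.
        Returns trueness (%), repeatability RSDr (%) and reproducibility
        RSDR (%) via one-way ANOVA across laboratories."""
        labs = [np.asarray(r, float) for r in results]
        grand = np.mean(np.concatenate(labs))
        n = np.mean([len(r) for r in labs])                 # replicates/lab
        s_r2 = np.mean([np.var(r, ddof=1) for r in labs])   # within-lab var.
        s_L2 = max(0.0, np.var([r.mean() for r in labs], ddof=1) - s_r2 / n)
        s_R2 = s_r2 + s_L2                                  # between + within
        return (100 * grand / reference,
                100 * np.sqrt(s_r2) / grand,
                100 * np.sqrt(s_R2) / grand)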
A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization
Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao
2016-01-01
The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a ferromagnetic target localization method well suited to real-time use. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position estimate leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed in which the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those from the traditional STAR method. PMID:27999322
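The quantity at the heart of STAR-type methods is the contraction of the measured gradient tensor, which for a dipole falls off as 1/r^4 and therefore supports single-point ranging. The sketch below shows that relationship; the constant k stands in for the orientation-dependent (asphericity) factor that the modified method compensates iteratively, and k=1 is a placeholder, not a calibrated value.

    import numpy as np

    def gradient_contraction(G):
        """Contraction of the 3x3 magnetic gradient tensor G (the
        quantity STAR builds on); for a dipole it scales as
        moment / r**4."""
        return np.sqrt(np.sum(G * G))

    def range_estimate(G, moment, k=1.0):
        """Single-point range estimate from the contraction. k absorbs
        units and the orientation-dependent (asphericity) factor; k=1
        is a placeholder."""
        return (k * moment / gradient_contraction(G)) ** 0.25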
Rossi, Patrizia; Pozio, Edoardo
2008-01-01
The European Community Regulation (EC) No. 2075/2005 lays down specific rules on official controls for the detection of Trichinella in fresh meat for human consumption, recommending the pooled-sample digestion method as the reference method. The aim of this document is to provide specific guidance for implementing an appropriate Trichinella digestion method in a laboratory accredited according to the ISO/IEC 17025:2005 international standard and performing microbiological testing following the EA-04/10:2002 international guideline. Technical requirements for the correct implementation of the method, such as personnel competence, specific equipment and reagents, validation of the method, reference materials, sampling, quality assurance of results and quality control of performance are provided, pointing out the critical control points for the correct implementation of the digestion method.
Blade design and analysis using a modified Euler solver
NASA Technical Reports Server (NTRS)
Leonard, O.; Vandenbraembussche, R. A.
1991-01-01
An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method converges rapidly and indicates how sensitive the flow is to small modifications of the blade geometry, which the classical iterative use of analysis methods might not be able to detect. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for off-design calculations. Examples illustrate the difficulty of designing blade shapes whose performance remains optimal away from the design point.
Cognition and procedure representational requirements for predictive human performance models
NASA Technical Reports Server (NTRS)
Corker, K.
1992-01-01
Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods, including procedural backtracking with concurrent search, temporal reasoning, and constraint checking for partial ordering of procedures. Finally, the representation is being linked to models of human decision making processes that include heuristic, propositional and prescriptive judgement models that are sensitive to the procedural context in which the valuative functions are being performed.
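The goals/procedures hierarchy described above lends itself to a compact object-oriented encoding. The following Python sketch is illustrative only; the class names, attributes, and timing model are assumptions, not the paper's actual representation. Goals decompose into sub-goals and primitive procedures that consume time and crew resources.

    from dataclasses import dataclass, field

    @dataclass
    class Procedure:
        name: str
        duration_s: float      # nominal execution time
        resources: list        # crew resources consumed, e.g. ["visual"]

    @dataclass
    class Goal:
        name: str
        children: list = field(default_factory=list)  # sub-goals/procedures

        def execute(self, t=0.0):
            """Walk the hierarchy in order, returning the finish time."""
            for child in self.children:
                t = child.execute(t) if isinstance(child, Goal) \
                    else t + child.duration_s
            return t

    approach = Goal("fly approach", [
        Goal("configure aircraft", [Procedure("set flaps", 4.0, ["manual"]),
                                    Procedure("gear down", 3.0, ["manual"])]),
        Procedure("verify glideslope capture", 2.0, ["visual"]),
    ])
    print(approach.execute())  # 9.0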
40 CFR 91.313 - Analyzers required.
Code of Federal Regulations, 2012 CFR
2012-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 90.313 - Analyzers required.
Code of Federal Regulations, 2013 CFR
2013-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 90.313 - Analyzers required.
Code of Federal Regulations, 2011 CFR
2011-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 91.313 - Analyzers required.
Code of Federal Regulations, 2014 CFR
2014-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 91.313 - Analyzers required.
Code of Federal Regulations, 2013 CFR
2013-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 91.313 - Analyzers required.
Code of Federal Regulations, 2011 CFR
2011-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 90.313 - Analyzers required.
Code of Federal Regulations, 2014 CFR
2014-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
40 CFR 90.313 - Analyzers required.
Code of Federal Regulations, 2012 CFR
2012-07-01
... condensation is acceptable. If water is removed by condensation, the sample gas temperature or sample dew point.... A water trap performing this function is an acceptable method. Means other than condensation may be...
Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, T.A.
1992-12-01
The practical use of Pulsed Laser Velocimetry (PLV) requires fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two methods for performing tracking are presented. One method tracks a particle through multiple sequential images (a minimum of four is required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms are tested on computer-generated synthetic data and on experimental data obtained with traditional PLV methods, allowing error analysis and testing of the algorithms on real engineering flows. A novel method is proposed which eliminates tedious, undesirable, manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. An experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force and, correspondingly, the drag coefficient. A horizontal, stratified flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques. The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results when the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.
ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document was created independently from ASHRAE and SSPC 90.1 and is neither sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers and implementers of “beyond code” energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond-code programs can use this document as a reference manual for interpreting requirements of the Performance Rating Method. Software developers creating tools for automated creation of the baseline model can use this reference manual as a guideline for developing the rules for the baseline model.
Results and Analysis from Space Suit Joint Torque Testing
NASA Technical Reports Server (NTRS)
Matty, Jennifer
2010-01-01
A space suit's mobility is critical to an astronaut's ability to perform work efficiently. As mobility increases, the astronaut can perform tasks for longer durations with less fatigue. Mobility can be broken down into two parts: range of motion (ROM) and torque. These two measurements describe how the suit moves and how much force it takes to move. Two methods were chosen to define mobility requirements for the Constellation Space Suit Element (CSSE). One method focuses on range of motion and the second method centers on joint torque. A joint torque test was conducted to determine a baseline for current advanced space suit joint torques. This test utilized the following space suits: Extravehicular Mobility Unit (EMU), Advanced Crew Escape Suit (ACES), I-Suit, D-Suit, Enhanced Mobility (EM)-ACES, and Mark III (MK-III). Data were collected from 16 different joint movements of each suit. The results were then reviewed and CSSE joint torque requirement values were selected. The focus of this paper is to discuss trends observed during data analysis.
Design and Analysis of Offshore Macroalgae Biorefineries.
Golberg, Alexander; Liberzon, Alexander; Vitkin, Edward; Yakhini, Zohar
2018-03-15
Displacing fossil fuels and their derivatives with renewables, and increasing sustainable food production, are among the major challenges facing the world in the coming decades. A possible, sustainable direction for addressing these challenges is the production of biomass and its conversion to the required products through a complex system coined a biorefinery. Terrestrial biomass and microalgae are possible sources; however, concerns over net energy balance, potable water use, environmental hazards, and uncertainty in the processing technologies raise questions regarding their actual potential to meet the anticipated food, feed, and energy challenges in a sustainable way. An alternative sustainable source for biorefineries is macroalgae grown and processed offshore. However, implementation of offshore biorefineries requires detailed analysis of their technological, economic, and environmental performance. In this chapter, the basic principles of marine biorefinery design are shown. The methods to integrate thermodynamic efficiency, investment, and environmental aspects are discussed. Performance improvements from new cultivation methods that fit macroalgae physiology and from new fermentation methods that address the unique chemical composition of macroalgae are shown.
Giske, Christian G.; Haldorsen, Bjørg; Matuschek, Erika; Schønning, Kristian; Leegaard, Truls M.; Kahlmeter, Gunnar
2014-01-01
Different antimicrobial susceptibility testing methods to detect low-level vancomycin resistance in enterococci were evaluated in a Scandinavian multicenter study (n = 28). A phenotypically and genotypically well-characterized diverse collection of Enterococcus faecalis (n = 12) and Enterococcus faecium (n = 18) strains with and without nonsusceptibility to vancomycin was examined blindly in Danish (n = 5), Norwegian (n = 13), and Swedish (n = 10) laboratories using the EUCAST disk diffusion method (n = 28) and the CLSI agar screen (n = 18) or the Vitek 2 system (bioMérieux) (n = 5). The EUCAST disk diffusion method (very major error [VME] rate, 7.0%; sensitivity, 0.93; major error [ME] rate, 2.4%; specificity, 0.98) and CLSI agar screen (VME rate, 6.6%; sensitivity, 0.93; ME rate, 5.6%; specificity, 0.94) performed significantly better (P = 0.02) than the Vitek 2 system (VME rate, 13%; sensitivity, 0.87; ME rate, 0%; specificity, 1). The performance of the EUCAST disk diffusion method was challenged by differences in vancomycin inhibition zone sizes as well as the experience of the personnel in interpreting fuzzy zone edges as an indication of vancomycin resistance. Laboratories using Oxoid agar (P < 0.0001) or Merck Mueller-Hinton (MH) agar (P = 0.027) for the disk diffusion assay performed significantly better than did laboratories using BBL MH II medium. Laboratories using Difco brain heart infusion (BHI) agar for the CLSI agar screen performed significantly better (P = 0.017) than did those using Oxoid BHI agar. In conclusion, both the EUCAST disk diffusion and CLSI agar screening methods performed acceptably (sensitivity, 0.93; specificity, 0.94 to 0.98) in the detection of VanB-type vancomycin-resistant enterococci with low-level resistance. Importantly, use of the CLSI agar screen requires careful monitoring of the vancomycin concentration in the plates. Moreover, disk diffusion methodology requires that personnel be trained in interpreting zone edges. PMID:24599985
IPAC-Inlet Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.
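One common ingredient of inlet performance estimates of this kind is a ram-recovery schedule giving total pressure recovery as a function of flight Mach number. The sketch below implements the widely used MIL-E-5008B correlation; whether IPAC uses this particular schedule is an assumption, not a statement about the code.

    def ram_recovery(mach):
        """Inlet total-pressure recovery vs. flight Mach number per the
        MIL-E-5008B schedule (commonly applied up to about Mach 5)."""
        if mach <= 1.0:
            return 1.0
        return 1.0 - 0.075 * (mach - 1.0) ** 1.35

    for m in (0.8, 1.5, 2.0, 3.0):
        print(m, round(ram_recovery(m), 3))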
NASA Technical Reports Server (NTRS)
Knoll, Richard H.; Stochl, Robert J.; Sanabria, Rafael
1991-01-01
The storage of cryogenic propellants such as liquid hydrogen (LH2) and liquid oxygen (LO2) for the future Space Exploration Initiative (SEI) will require lightweight, high performance thermal protection systems (TPSs). For the near-term lunar missions, the major weight element for most of the TPSs will be multilayer insulation (MLI) and/or the special structures/systems required to accommodate the MLI. Methods of applying MLI to LH2 tankage to avoid condensation or freezing of condensible gases such as nitrogen or oxygen while in the atmosphere are discussed. Because relatively thick layers of MLI will be required for storage times of a month or more, the transient performance from ground-hold to space-hold of the systems will become important in optimizing the TPSs for many of the missions. The ground-hold performance of several candidate systems are given as well as a qualitative assessment of the transient performance effects.
Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-01-01
Objective. To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Design. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Assessment. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. Conclusion. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting. PMID:25741027
Design and performance evaluation of the imaging payload for a remote sensing satellite
NASA Astrophysics Data System (ADS)
Abolghasemi, Mojtaba; Abbasi-Moghadam, Dariush
2012-11-01
In this paper, an analysis method and corresponding analytical tools for the design of the experimental imaging payload (IMPL) of a remote sensing satellite (SINA-1) are presented. We begin with top-level customer system performance requirements and constraints, derive the critical system and component parameters, and then analyze imaging payload performance until a preliminary design that meets customer requirements is reached. We consider the system parameters and components composing the image chain of the imaging payload, including aperture, focal length, field of view, image plane dimensions, pixel dimensions, detection quantum efficiency, and optical filter requirements. The performance analysis is accomplished by calculating the imaging payload's SNR (signal-to-noise ratio) and imaging resolution. The noise components include photon noise due to the signal scene and atmospheric background, the cold shield, out-of-band optical filter leakage, and electronic noise. System resolution is simulated through cascaded modulation transfer functions (MTFs) and includes effects due to optics, image sampling, and system motion. Calculation results for the SINA-1 satellite are also presented.
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation under the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The differing guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose, and the required performance characteristic tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. We therefore propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by ETAAS. In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
Cumulative sum control charts for assessing performance in arterial surgery.
Beiles, C Barry; Morton, Anthony P
2004-03-01
The Melbourne Vascular Surgical Association (Melbourne, Australia) undertakes surveillance of mortality following aortic aneurysm surgery, patency at discharge following infrainguinal bypass, and stroke and death following carotid endarterectomy. A quality improvement protocol employing the Deming cycle requires that the system for performing surgery first be analysed and optimized. Process and outcome data are then collected, and these data require careful analysis. There must be a mechanism by which the causes of unsatisfactory outcomes can be determined, and a good feedback mechanism must exist so that good performance is acknowledged and unsatisfactory performance corrected. A simple method for analysing these data that detects changes in average outcome rates is available using cumulative sum statistical control charts. Data were analysed both retrospectively from 1999 to 2001 and prospectively during 2002 using cumulative sum control methods. A pathway to deal with control chart signals has been developed. The standard of arterial surgery in Victoria, Australia, is high. In one case a safe and satisfactory outcome was achieved by following the pathway developed by the audit committee. Cumulative sum control charts are a simple and effective tool for the identification of variations in performance standards in arterial surgery. The establishment of a pathway to manage problem performance is a vital part of audit activity.
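For a stream of binary surgical outcomes, a log-likelihood (Bernoulli) CUSUM is a standard construction of such a chart. The following sketch is illustrative; the in-control rate p0, out-of-control rate p1, and decision limit h are placeholder values, not those of the Melbourne audit.

    import numpy as np

    def bernoulli_cusum(outcomes, p0=0.05, p1=0.10, h=3.5):
        """outcomes: iterable of 0 (success) / 1 (adverse event).
        Accumulate log-likelihood ratio weights for an in-control rate
        p0 vs. an out-of-control rate p1; signal when the sum crosses h."""
        w_fail = np.log(p1 / p0)             # weight added on a failure
        w_ok = np.log((1 - p1) / (1 - p0))   # (negative) weight on a success
        s, signals = 0.0, []
        for i, y in enumerate(outcomes):
            s = max(0.0, s + (w_fail if y else w_ok))
            if s >= h:
                signals.append(i)   # flag for review, then reset the chart
                s = 0.0
        return signals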
Toward performance portability of the Albany finite element analysis code using the Kokkos library
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.; ...
2018-02-05
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
What is the Final Verification of Engineering Requirements?
NASA Technical Reports Server (NTRS)
Poole, Eric
2010-01-01
This slide presentation reviews the process of development through the final verification of engineering requirements. The definition of the requirements is driven by basic needs and should be reviewed by both the supplier and the customer. All involved need to agree upon a formal requirements document, including any changes to the original requirements. After the requirements have been developed, the engineering team begins to design the system, and the final design is reviewed by other organizations. The final operational system must satisfy the original requirements, though many verifications should be performed during the process. The verification methods used are test, inspection, analysis, and demonstration. The plan for verification should be created once the system requirements are documented. The plan should include assurances that every requirement is formally verified, that the methods and the responsible organizations are specified, and that the plan is reviewed by all parties. The option of having the engineering team involved in all phases of development, as opposed to having some other organization continue the process once the design is complete, is discussed.
40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests
Code of Federal Regulations, 2014 CFR
2014-07-01
... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...
40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests
Code of Federal Regulations, 2012 CFR
2012-07-01
... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...
40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests
Code of Federal Regulations, 2013 CFR
2013-07-01
... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...
Donald L. Rockwood; Bijay Tamang; Matias Kirst; JY Zhu
2012-01-01
For several methods utilizing woody biomass for energy (Rockwood and others 2008), one of the challenges is the large, continuous fuel supply required. For example, proposed biomass plants in Florida may each require one million tons of biomass/year. When supplies of forest residues and urban wood wastes are limited, short rotation woody crops (SRWC) are a viable...
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance; however, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
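For orientation, a minimal numpy sketch of the basic CHO computation follows; it shows only the Hotelling template and observer SNR, not the LOOL covariance estimator or the shuffle-based AUC statistics studied in the paper, and the array shapes are hypothetical.

    import numpy as np

    def cho_snr(signal_imgs, noise_imgs, U):
        # signal_imgs, noise_imgs: (n_samples, n_pixels); U: (n_pixels, n_channels)
        vs = signal_imgs @ U                      # channelized signal-present data
        vn = noise_imgs @ U                       # channelized signal-absent data
        dmu = vs.mean(axis=0) - vn.mean(axis=0)   # mean channel-output difference
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))   # average class covariance
        w = np.linalg.solve(S, dmu)               # Hotelling template in channel space
        t_s, t_n = vs @ w, vn @ w                 # observer test statistics
        return (t_s.mean() - t_n.mean()) / np.sqrt(
            0.5 * (t_s.var(ddof=1) + t_n.var(ddof=1)))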
High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster
NASA Astrophysics Data System (ADS)
Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku
2015-01-01
High performance computing of the Meshless Time Domain Method (MTDM) on a multi-GPU cluster, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite difference time domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, making the method difficult to apply to problems with complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance. To reduce the computation time, the communication between the decomposed subdomains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53...
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds with specific pharmacodynamic, pharmacokinetic, or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
NASA Technical Reports Server (NTRS)
Vachon, Jacques; Curry, Robert E.
2010-01-01
Program Objectives: 1) Satellite Calibration and Validation: Provide methods to perform the cal/val requirements for Earth Observing System satellites. 2) New Sensor Development: Provide methods to reduce risk for new sensor concepts and algorithm development prior to committing sensors to operations. 3) Process Studies: Facilitate the acquisition of high spatial/temporal resolution focused measurements that are required to understand small atmospheric and surface structures which generate powerful Earth system effects. 4) Airborne Networking: Develop disruption-tolerant networking to enable integrated multiple scale measurements of critical environmental features. Dryden Capabilities include: a) Aeronautics history of aircraft developments and milestones. b) Extensive history and experience in instrument integration. c) Extensive history and experience in aircraft modifications. d) Strong background in international deployments. e) Long history of reliable and dependable execution of projects. f) Varied aircraft types providing different capabilities, performance and duration.
Electrostatic Evaluation: SCAPE Suit Materials
NASA Technical Reports Server (NTRS)
Buhler, Charles; Calle, Carlos
2005-01-01
The surface resistivity tests are performed per the requirements of the ESD Association Standard Test Method ESD STM11.11. These measurements are taken using a PRS-801 resistance system with an Electro Tech System (ETS) PRF-911 concentric ring resistance probe. The tests require a five-pound weight on top of cylindrical electrodes and were conducted at both ambient and low humidity conditions. In order for materials to "pass" the resistivity tests, the surface of the materials must be either conductive or statically dissipative; otherwise the materials "fail" ESD. Volume resistivity tests are also conducted to measure conductivity through the material, as opposed to conductivity along the surface. These tests are conducted using the same PRS-801 resistance system with the Electro Tech System PRF-911 concentric ring resistance probe, but are performed in accordance with ESD Association Standard Test Method ESD STM11.12.
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
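For orientation, the projection underlying CMRH can be sketched in GMRES-like notation (a schematic based on the cited references, not a complete statement of the algorithm):

    A L_k = L_{k+1} \bar{H}_k, \qquad x_k = x_0 + L_k y_k, \qquad y_k = \arg\min_y \| \beta e_1 - \bar{H}_k y \|_2

where the columns of L_k are the Hessenberg-process basis vectors (unit trapezoidal rather than orthonormal), \bar{H}_k is the (k+1) \times k Hessenberg matrix, and \beta scales the initial residual; because L_{k+1} is not orthonormal, the small least squares problem minimizes a quasi-residual rather than the true residual norm.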
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-11
...Recent EPA gas audit results indicate that some gas cylinders used to calibrate continuous emission monitoring systems on stationary sources do not meet EPA's performance specification. Reviews of stack test reports in recent years indicate that some stack testers do not properly follow EPA test methods or do not correctly calculate test method results. Therefore, EPA is proposing to amend its Protocol Gas Verification Program (PGVP) and the minimum competency requirements for air emission testing (formerly air emission testing body requirements) to improve the accuracy of emissions data. EPA is also proposing to amend other sections of the Acid Rain Program continuous emission monitoring system regulations by adding and clarifying certain recordkeeping and reporting requirements, removing the provisions pertaining to mercury monitoring and reporting, removing certain requirements associated with a class-approved alternative monitoring system, disallowing the use of a particular quality assurance option in EPA Reference Method 7E, adding an incorporation by reference that was inadvertently left out of the January 24, 2008 final rule, and clarifying the language and applicability of certain provisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belte, D.; Stratton, M.V.
1982-08-01
The United States Army Aviation Engineering Flight Activity conducted level flight performance tests of the OH-58C helicopter at Edwards AFB, California from 22 September to 20 November 1981, and at St. Paul, Minnesota, from 12 January to 9 February 1982. Nondimensional methods were used to identify effects of compressibility and blade stall on performance, and increased referred rotor speeds were used to supplement the range of currently available level flight data. Maximum differences in nondimensional power required attributed to compressibility effects varied from 6.5 to 11%. However, high actual rotor speed at a given condition can result in less power required than at low rotor speed, even with the compressibility penalty. The power required characteristics determined by these tests can be combined with engine performance to determine the most fuel efficient operating conditions.
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
The Detection Method of Escherichia coli in Water Resources: A Review
NASA Astrophysics Data System (ADS)
Nurliyana, M. R.; Sahdan, M. Z.; Wibowo, K. M.; Muslihati, A.; Saim, H.; Ahmad, S. A.; Sari, Y.; Mansor, Z.
2018-04-01
This article reviews several approaches for Escherichia coli (E. coli) detection, from conventional methods through emerging methods to biosensor-based techniques. Detection and enumeration of E. coli usually require a long time to obtain results, since laboratory-based approaches are normally used: culturing samples takes 24 to 72 hours after sampling before results are available. Although faster techniques for detecting E. coli in water, such as the Polymerase Chain Reaction (PCR) and Enzyme-Linked Immunosorbent Assay (ELISA), have been developed, they still require transporting samples from the water resource to the laboratory and involve high cost, complicated equipment, complex procedures, and skilled specialists, which limits their widespread use in water quality monitoring. Recently, the development of biosensor devices that are easy to operate, portable, and highly sensitive and selective has become indispensable for detecting extremely low concentrations of pathogenic E. coli bacteria in water samples.
RTM: Cost-effective processing of composite structures
NASA Technical Reports Server (NTRS)
Hasko, Greg; Dexter, H. Benson
1991-01-01
Resin transfer molding (RTM) is a promising method for cost effective fabrication of high strength, low weight composite structures from textile preforms. In this process, dry fibers are placed in a mold, resin is introduced either by vacuum infusion or pressure, and the part is cured. RTM has been used in many industries, including automotive, recreation, and aerospace. Each of these industries has different requirements for material strength, weight, reliability, environmental resistance, cost, and production rate. These requirements drive the selection of fibers and resins, fiber volume fractions, fiber orientations, mold design, and processing equipment. Research into applying RTM to primary aircraft structures, which require high strength and stiffness at low density, is described. The material requirements of various industries are discussed, along with methods of orienting and distributing fibers, mold configurations, and processing parameters. Processing and material parameters such as resin viscosity, preform compaction and permeability, and tool design concepts are discussed. Experimental methods to measure preform compaction and permeability are presented.
Eusebio, Lidia; Capelli, Laura; Sironi, Selena
2016-01-01
Despite initial enthusiasm towards electronic noses and their possible application in different fields, and quite a few promising results, several criticalities emerge from most published research studies and, as a matter of fact, the diffusion of electronic noses in real-life applications is still very limited. In general, a first step towards large-scale diffusion of an analysis method is standardization. The aim of this paper is to describe the experimental procedure adopted to evaluate electronic nose performance, with the final purpose of establishing minimum performance requirements, which is considered a first crucial step towards standardization in the specific case of electronic nose application for environmental odor monitoring at receptors. Based on the experimental results of the performance testing of a commercialized electronic nose type with respect to three criteria (i.e., response invariability to variable atmospheric conditions, instrumental detection limit, and odor classification accuracy), it was possible to hypothesize a logic that could be adopted for the definition of minimum performance requirements, according to the idea that these should be technologically achievable. PMID:27657086
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for the accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
77 FR 74144 - Federal Motor Vehicle Safety Standards; Event Data Recorders
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... submitted to NHTSA through one of the preceding methods and a copy should also be sent to the Office of... and Crash Test Performance Requirements D. NHTSA's Validation of and Reliance on EDR Data in Its Crash... for the purpose of post-crash assessment of vehicle safety system performance. EDR data are used to...
ERIC Educational Resources Information Center
Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta
2014-01-01
Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…
Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle
2011-01-17
Fast characterization of pulsed spontaneous parametric down conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in the setting up of SPDC sources and in the continuous verification of the quality of quantum communication links.
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the use of the Chinese Remainder Theorem. The technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic including multiplication, division, and the finding of an inverse element in the finite field.
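As background, single-table log/antilog multiplication, the baseline that Glover's Chinese Remainder Theorem decomposition splits into several smaller tables, can be sketched in a few lines of Python; the field GF(2^4) and its primitive polynomial are chosen here purely for illustration.

    # Log/antilog tables for GF(2^4) with primitive polynomial x^4 + x + 1.
    POLY = 0b10011                        # x^4 + x + 1
    antilog, log = [0] * 15, {}
    v = 1
    for i in range(15):                   # 15 nonzero elements
        antilog[i] = v
        log[v] = i
        v <<= 1                           # multiply by the generator x
        if v & 0x10:
            v ^= POLY                     # reduce modulo the primitive polynomial

    def gf16_mul(a, b):
        # a * b = g^((log a + log b) mod 15); zero handled separately.
        if a == 0 or b == 0:
            return 0
        return antilog[(log[a] + log[b]) % 15]

    assert gf16_mul(0b0010, 0b1001) == 1  # x * (x^3 + 1) = x^4 + x = 1 in this field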
Security Analysis and Improvements to the PsychoPass Method
2013-01-01
Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key positions apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
Trujillo-Rodríguez, María J; Nacham, Omprakash; Clark, Kevin D; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M
2016-08-31
This work describes the applicability of magnetic ionic liquids (MILs) in the analytical determination of a group of heavy polycyclic aromatic hydrocarbons. Three different MILs, namely, benzyltrioctylammonium bromotrichloroferrate (III) (MIL A), methoxybenzyltrioctylammonium bromotrichloroferrate (III) (MIL B), and 1,12-di(3-benzylbenzimidazolium) dodecane bis[(trifluoromethyl)sulfonyl]imide bromotrichloroferrate (III) (MIL C), were designed to exhibit hydrophobic properties, and their performance was examined in a microextraction method for hydrophobic analytes. The magnet-assisted approach with these MILs was performed in combination with high performance liquid chromatography and fluorescence detection. The study of the extraction performance showed that MIL A was the most suitable solvent for the extraction of polycyclic aromatic hydrocarbons; under optimum conditions the fast extraction step required ~20 μL of MIL A for 10 mL of aqueous sample, 24 mmol L(-1) NaOH, high ionic strength content of NaCl (25% (w/v)), 500 μL of acetone as dispersive solvent, and 5 min of vortexing. The desorption step required the aid of an external magnetic field from a strong NdFeB magnet (the separation takes a few seconds), two back-extraction steps for the polycyclic aromatic hydrocarbons retained in the MIL droplet with n-hexane, and evaporation and reconstitution with acetonitrile. The overall method presented limits of detection down to 5 ng L(-1), relative recoveries ranging from 91.5 to 119%, and inter-day reproducibility values (expressed as relative standard deviation) lower than 16.4% for a spiked level of 0.4 μg L(-1) (n = 9). The method was also applied to the analysis of real samples, including tap water, wastewater, and tea infusion.
Prediction of anthropometric accommodation in aircraft cockpits
NASA Astrophysics Data System (ADS)
Zehner, Gregory Franklin
Designing aircraft cockpits to accommodate the wide range of body sizes existing in the U.S. population has always been a difficult problem for Crewstation Engineers. The approach taken in the design of military aircraft has been to restrict the range of body sizes allowed into flight training, and then to develop standards and specifications to ensure that the majority of the pilots are accommodated. Accommodation in this instance is defined as the ability to: (1) Adequately see, reach, and actuate controls; (2) Have external visual fields so that the pilot can see to land, clear for other aircraft, and perform a wide variety of missions (ground support/attack or air to air combat); and (3) Finally, if problems arise, the pilot has to be able to escape safely. Each of these areas is directly affected by the body size of the pilot. Unfortunately, accommodation problems persist and may get worse. Currently the USAF is considering relaxing body size entrance requirements so that smaller and larger people could become pilots. This will make existing accommodation problems much worse. This dissertation describes a methodology for correcting this problem and demonstrates the method by predicting pilot fit and performance in the USAF T-38A aircraft based on anthropometric data. The methods described can be applied to a variety of design applications where fitting the human operator into a system is a major concern. A systematic approach is described which includes: defining the user population, setting functional requirements that operators must be able to perform, testing the ability of the user population to perform the functional requirements, and developing predictive equations for selecting future users of the system. Also described is a process for the development of new anthropometric design criteria and cockpit design methods that assure body size accommodation is improved in the future.
Yano, Kenji; Taminato, Mifue; Nomori, Michiko; Hosokawa, Ko
2017-01-01
Background: Autologous breast reconstruction can be performed for breasts with ptosis to a certain extent, but if patients desire to correct ptosis, mastopexy of the contralateral breast is indicated. However, accurate prediction of post-mastopexy breast shape is difficult to make, and symmetrical breast reconstruction requires certain experience. We have previously reported the use of three-dimensional (3D) imaging and printing technologies in deep inferior epigastric artery perforator (DIEP) flap breast reconstruction. In the present study, these technologies were applied to the reconstruction of breasts with ptosis. Methods: Eight breast cancer patients with ptotic breasts underwent two-stage unilateral DIEP flap breast reconstruction. In the initial surgery, tissue expander (TE) placement and contralateral mastopexy are performed simultaneously. Four to six months later, 3D bilateral breast imaging is performed after confirming that the shape of the contralateral breast (post-mastopexy) is somewhat stabilized, and a 3D-printed breast mold is created based on the mirror image of the shape of the contralateral breast acquired using analytical software. Then, DIEP flap surgery is performed, where the breast mold is used to determine the required flap volume and to shape the breast mound. Results: All flaps were engrafted without any major perioperative complications during both the initial and DIEP flap surgeries. Objective assessment of cosmetic outcome revealed that good breast symmetry was achieved in all cases. Conclusions: The method described here may allow even inexperienced surgeons to achieve reconstruction of symmetrical, non-ptotic breasts with ease and in a short time. While the requirement of two surgeries is a potential disadvantage, our method will be particularly useful in cases involving TEs, i.e., delayed reconstruction or immediate reconstruction involving significant skin resection. PMID:29184728
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koh, Chung-Yan; Light, Yooli Kim; Piccini, Matthew Ernest
Embodiments of the present invention are directed toward devices, systems, and methods for purifying nucleic acids to conduct polymerase chain reaction (PCR) assays. In one example, a method includes generating complexes of silica beads and nucleic acids in a lysis buffer, transporting the complexes through an immiscible fluid to remove interfering compounds from the complexes, further transporting the complexes into a density medium containing components required for PCR where the nucleic acids disassociate from the silica beads, and thermocycling the contents of the density medium to achieve PCR. Signal may be detected from labeling agents in the components required for PCR.
Cordeiro, Fernando; Robouch, Piotr; de la Calle, Maria Beatriz; Emteborg, Håkan; Charoud-Got, Jean; Schmitz, Franz
2011-01-01
A collaborative study, International Measurement Evaluation Programme-25a, was conducted in accordance with international protocols to determine the performance characteristics of an analytical method for the determination of dissolved bromate in drinking water. The method should fulfill the analytical requirements of Council Directive 98/83/EC (referred to in this work as the Drinking Water Directive; DWD). The new draft standard method under investigation is based on ion chromatography followed by post-column reaction and UV detection. The collaborating laboratories used the Draft International Organization for Standardization (ISO)/Draft International Standard (DIS) 11206 document. The existing standard method (ISO 15061:2001) is based on ion chromatography using suppressed conductivity detection, in which a preconcentration step may be required for the determination of bromate concentrations as low as 3 to 5 microg/L. The new method includes a dilution step that reduces matrix effects, thus allowing the determination of bromate concentrations down to 0.5 microg/L. Furthermore, the method aims to minimize any potential interference from chlorite ions. The collaborative study investigated different types of drinking water, such as soft, hard, and mineral water. Other types of water, such as raw water (untreated), swimming pool water, a blank (named river water), and a bromate standard solution, were included as test samples. All test matrixes except the swimming pool water were spiked with high-purity potassium bromate to obtain bromate concentrations ranging from 1.67 to 10.0 microg/L. Swimming pool water was not spiked, as this water was incurred with bromate. Test samples were dispatched to 17 laboratories from nine different countries. Sixteen participants reported results. The repeatability RSD (RSDr) ranged from 1.2 to 4.1%, while the reproducibility RSD (RSDR) ranged from 2.3 to 5.9%. These precision characteristics compare favorably with those of ISO 15061. A thorough comparison of the performance characteristics is presented in this report. All method performance characteristics obtained in the frame of this collaborative study indicate that the draft ISO/DIS 11206 standard method meets the requirements set down by the DWD. It can, therefore, be considered fit for its intended analytical purpose.
Referenceless MR thermometry-a comparison of five methods.
Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho
2017-01-07
Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.
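For context, all five methods share the standard PRFS temperature relation and differ only in how the baseline phase is obtained; in its usual textbook form (not quoted from this paper),

    \Delta T = \frac{\phi - \phi_{\mathrm{ref}}}{\alpha \, \gamma \, B_0 \, \mathrm{TE}}

where \alpha \approx -0.01 ppm/°C is the PRF thermal coefficient, \gamma the gyromagnetic ratio, B_0 the main field strength, and TE the echo time. Referenceless methods replace the measured baseline phase \phi_{\mathrm{ref}} with an estimate extrapolated from unheated tissue around the region of interest.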
Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques
2012-01-01
Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices that support activities in environments where work is complex and characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes a more difficult task. Therefore, this study presents a method for modeling work in complex systems, which allows improving knowledge of the way activities are performed where these activities do not simply happen by following procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method capable of giving a detailed and accurate view of how people perform their tasks, in order to apply information systems for supporting work in organizations.
NASA Technical Reports Server (NTRS)
Kovich, G.
1972-01-01
The cavitating performance of a stainless steel 80.6 degree flat-plate helical inducer was investigated in water over a range of liquid temperatures and flow coefficients. A semi-empirical prediction method was used to compare predicted values of required net positive suction head with experimental values obtained in water, and good agreement was obtained. The required net positive suction head in water decreased with increasing temperature and increased with flow coefficient, similar to the behavior observed for a like inducer in liquid hydrogen.
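For reference, net positive suction head is the margin of inlet head above the vapor head (the standard definition, not the report's semi-empirical correlation):

    \mathrm{NPSH} = \frac{p_t - p_v}{\rho g}

with p_t the inlet total pressure, p_v the liquid vapor pressure, \rho the density, and g the gravitational acceleration; the decrease of required NPSH with increasing temperature reflects the thermodynamic suppression of cavitation in warmer liquid.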
Increasing Efficiency of Fecal Coliform Testing Through EPA-Approved Alternate Method Colilert-18
NASA Technical Reports Server (NTRS)
Cornwell, Brian
2017-01-01
The Standard Methods (21st ed.) 9221 E multiple-tube fermentation method for fecal coliform analysis requires a large time and reagent investment for the performing laboratory. In late 2010, the EPA approved an alternative procedure for the determination of fecal coliforms designated Colilert-18. However, as of late 2016, only two VELAP-certified laboratories in the Commonwealth of Virginia have been certified in this method.
Integrals for IBS and beam cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burov, A.; /Fermilab
Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in stochastic cooling, wake fields, and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied to the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
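As an aside, a generic fast route to such dispersion integrals is an FFT-based discrete Hilbert transform; the scipy sketch below illustrates the transform itself and is not the original method proposed in the abstract.

    import numpy as np
    from scipy.signal import hilbert

    t = np.linspace(-5.0, 5.0, 1024)
    f = np.exp(-t**2)        # smooth test profile standing in for a beam distribution

    # scipy.signal.hilbert returns the analytic signal f + i*H[f], so the
    # imaginary part is the discrete Hilbert transform of f.
    Hf = np.imag(hilbert(f))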
NASA Astrophysics Data System (ADS)
Elliott, Jonathan T.; Wright, Eric A.; Tichauer, Kenneth M.; Diop, Mamadou; Morrison, Laura B.; Pogue, Brian W.; Lee, Ting-Yim; St. Lawrence, Keith
2012-12-01
In many cases, kinetic modeling requires that the arterial input function (AIF)—the time-dependent arterial concentration of a tracer—be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO2) following a bolus injection of a light-absorbing dye. In other words, the change in SaO2 can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.
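One plausible single-wavelength reading of the conversion, stated here as an illustrative assumption rather than the paper's exact relation: if the dye absorbs appreciably at only one of the two pulse-oximeter wavelengths \lambda, the excess absorption misattributed to hemoglobin shifts the apparent saturation, giving

    C_{\mathrm{dye}}(t) \approx \frac{\left[S_{\mathrm{app}}(t) - S_{\mathrm{true}}\right] c_{\mathrm{tHb}} \left[\epsilon_{\mathrm{HbO_2}}(\lambda) - \epsilon_{\mathrm{Hb}}(\lambda)\right]}{\epsilon_{\mathrm{dye}}(\lambda)}

which involves exactly the quantities the abstract lists: the chromophore-specific extinction coefficients \epsilon, the true arterial saturation S_{\mathrm{true}}, and the total hemoglobin concentration c_{\mathrm{tHb}}.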
Performance Basis for Airborne Separation
NASA Technical Reports Server (NTRS)
Wing, David J.
2008-01-01
Emerging applications of Airborne Separation Assistance System (ASAS) technologies make possible new and powerful methods in Air Traffic Management (ATM) that may significantly improve the system-level performance of operations in the future ATM system. These applications typically involve the aircraft managing certain components of its Four Dimensional (4D) trajectory within the degrees of freedom defined by a set of operational constraints negotiated with the Air Navigation Service Provider. It is hypothesized that reliable individual performance by many aircraft will translate into higher total system-level performance. To actually realize this improvement, the new capabilities must be attracted to high demand and complexity regions where high ATM performance is critical. Operational approval for use in such environments will require participating aircraft to be certified to rigorous and appropriate performance standards. Currently, no formal basis exists for defining these standards. This paper provides a context for defining the performance basis for 4D-ASAS operations. The trajectory constraints to be met by the aircraft are defined, categorized, and assessed for performance requirements. A proposed extension of the existing Required Navigation Performance (RNP) construct into a dynamic standard (Dynamic RNP) is outlined. Sample data is presented from an ongoing high-fidelity batch simulation series that is characterizing the performance of an advanced 4D-ASAS application. Data of this type will contribute to the evaluation and validation of the proposed performance basis.
Preliminary Sizing and Performance Evaluation of Supersonic Cruise Aircraft
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1976-01-01
The basic processes of a method that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. The input requirements of the associated computer program are defined and its output listings explained. Results obtained by applying the method to an advanced supersonic technology concept are discussed. These results include the effects of wing loading, thrust-to-weight ratio, and technology improvements on range performance, and possible gains in both range and payload capability that become available through growth versions of the baseline aircraft. Results from an in-depth contractual study that confirm the range gain predicted for a particular wing-loading/thrust-to-weight-ratio combination are also included.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
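To illustrate why analytic derivatives matter for gradient-based optimization, a generic scipy sketch follows; the toy objective stands in for a cycle model, and none of this is the Pycycle or OpenMDAO API.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):  # toy smooth objective (Rosenbrock)
        return (x[0] - 1.0)**2 + 100.0 * (x[1] - x[0]**2)**2

    def grad(x):  # analytic derivatives, analogous to what the tool provides
        return np.array([
            2.0 * (x[0] - 1.0) - 400.0 * x[0] * (x[1] - x[0]**2),
            200.0 * (x[1] - x[0]**2),
        ])

    res_exact = minimize(f, x0=[-1.2, 1.0], jac=grad, method="BFGS")
    res_fd = minimize(f, x0=[-1.2, 1.0], method="BFGS")  # finite-difference gradients
    print(res_exact.nfev, res_fd.nfev)  # FD typically needs many more function evaluations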
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
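For reference, the iteration in question has the standard form (textbook iteratively regularized Gauss-Newton, consistent with the setting described):

    x_{k+1} = x_k - \left(F'(x_k)^* F'(x_k) + \alpha_k I\right)^{-1} \left[F'(x_k)^* \left(F(x_k) - y^{\delta}\right) + \alpha_k (x_k - x_0)\right]

where F is the forward operator, y^{\delta} the noisy data, x_0 the initial guess, and \alpha_k a decreasing sequence of regularization parameters (for example \alpha_k = \alpha_0 q^k with 0 < q < 1); the heuristic rule studied in the paper chooses the stopping index k without knowledge of the noise level.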
40 CFR 63.1349 - Performance testing requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) THC emissions test. (i) If you are subject to limitations on THC emissions, you must operate a CEMS in... assurance evaluations for CEMS, the THC span value (as propane) is 50 ppmvd and the reference method (RM) is Method 25A of appendix A to part 60 of this chapter. (ii) Use the THC CEMS to conduct the initial...
40 CFR Table 1 to Subpart Bbbbb of... - Requirements for Performance Tests
Code of Federal Regulations, 2014 CFR
2014-07-01
... HAP used as the calibration gas must be the single organic HAP representing the largest percent of... determining compliance with a ppmv concentration limit. c. Conduct gas molecular weight analysis i. Method 3... York, NY 10016-5990) as an alternative to EPA Method 3B. d. Measure moisture content of the stack gas...
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using a focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
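To make the far-field step concrete, a generic numpy sketch of 1D MUSIC bearing estimation for a uniform linear array follows; it is illustrative only, since the focusing transformation is what produces the far-field-like snapshots this function assumes, and the geometry and grid are hypothetical.

    import numpy as np

    def music_spectrum(X, n_sources, d=0.5, grid_deg=np.linspace(-90, 90, 361)):
        # X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths
        n = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        _, V = np.linalg.eigh(R)                     # eigenvectors, ascending order
        En = V[:, : n - n_sources]                   # noise subspace
        k = np.arange(n)[:, None]
        A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(grid_deg)))  # steering
        denom = np.sum(np.abs(En.conj().T @ A)**2, axis=0)
        return grid_deg, 1.0 / denom                 # peaks at the source bearings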
Investigation of aged hot-mix asphalt pavements.
DOT National Transportation Integrated Search
2013-09-01
Over the lifetime of an asphalt concrete (AC) pavement, the roadway requires periodic resurfacing and rehabilitation to provide : acceptable performance. The most popular resurfacing method is an asphalt overlay over the existing roadway. In the desi...
Mapping High Dimensional Sparse Customer Requirements into Product Configurations
NASA Astrophysics Data System (ADS)
Jiao, Yao; Yang, Yu; Zhang, Hongshan
2017-10-01
Mapping customer requirements into product configurations is a crucial step in product design, but customers express their needs ambiguously and locally due to a lack of domain knowledge. The data mining process applied to customer requirements may therefore yield fragmentary information with high-dimensional sparsity, exposing the mapping procedure to risk, uncertainty, and complexity. Expert Judgment is widely applied against that background, since it imposes no formal requirements for systematic or structured data. However, there are concerns about the repeatability of, and bias in, Expert Judgment. In this study, an integrated method combining an adjusted Locally Linear Embedding (LLE) and a Naïve Bayes (NB) classifier is proposed to map high-dimensional sparse customer requirements to product configurations. The integrated method adjusts classical LLE to preprocess the high-dimensional sparse dataset so that it satisfies the prerequisites of NB for classifying different customer requirements into corresponding product configurations. Compared with Expert Judgment, the adjusted LLE with NB performs much better, in both accuracy and robustness, in a real-world Tablet PC design case.
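A rough sklearn sketch of the pipeline idea, using standard LLE and Gaussian Naive Bayes (the paper's adjusted LLE and its sparsity handling are not reproduced, and the data shapes are hypothetical):

    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # Dimensionality reduction feeding a Naive Bayes classifier that maps
    # requirement vectors to product-configuration labels.
    model = make_pipeline(
        LocallyLinearEmbedding(n_neighbors=10, n_components=5),
        GaussianNB(),
    )
    # X: (n_customers, n_requirement_features); y: configuration labels
    # model.fit(X_train, y_train); model.predict(X_new)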
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method, we used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be one to two key positions apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
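A back-of-the-envelope illustration of the core point, that length alone does not make a password strong; the figures are hypothetical and not taken from the paper's analysis.

    import math

    def entropy_bits(alphabet_size, length):
        # Idealized entropy, assuming independent and uniform key choices.
        return length * math.log2(alphabet_size)

    # 24 freely chosen printable-ASCII characters vs. 24 characters constrained
    # to adjacent keys (roughly 8 neighbors plus a repeat per step).
    print(entropy_bits(95, 24))   # about 158 bits
    print(entropy_bits(9, 24))    # about 76 bits: long, yet far weaker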
IGDS/TRAP Interface Program (ITIP). Software Design Document
NASA Technical Reports Server (NTRS)
Jefferys, Steve; Johnson, Wendell
1981-01-01
The preliminary design of the IGDS/TRAP Interface Program (ITIP) is described. ITIP is implemented on the PDP 11/70 and interfaces directly with the Interactive Graphics Design System and the Data Management and Retrieval System. The program provides an efficient method for developing a network flow diagram. Performance requirements, operational requirements, and design requirements are discussed, along with the sources and types of input and the destinations and types of output. Information processing functions and database requirements are also covered.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
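The idea can be illustrated with a toy power iteration, the deterministic analogue of the MC source iteration; this is illustrative only and not the Shift/Denovo implementation.

    import numpy as np

    def power_iteration(A, src, tol=1e-10, max_cycles=100000):
        # Converge the dominant eigenvector of A from an initial source guess.
        k_old = 0.0
        for cycles in range(1, max_cycles + 1):
            new = A @ src
            k = np.linalg.norm(new)        # eigenvalue estimate (k-effective analogue)
            src = new / k
            if abs(k - k_old) < tol:
                break
            k_old = k
        return k, src, cycles

    rng = np.random.default_rng(0)
    A = rng.random((50, 50))
    flat = np.ones(50) / np.sqrt(50.0)                    # uninformed initial source
    _, v, n_flat = power_iteration(A, flat)
    _, _, n_good = power_iteration(A, v + 0.01 * rng.random(50))  # head start
    print(n_flat, n_good)   # a better initial guess needs fewer "inactive" cycles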
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
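Schematically, the many-body expansion at the heart of CEEMBE decomposes the energy over groups of orbitals; in generic MBE notation (not the paper's exact partitioning),

    E \approx \sum_i E_i + \sum_{i<j} \Delta E_{ij} + \sum_{i<j<k} \Delta E_{ijk} + \cdots, \qquad \Delta E_{ij} = E_{ij} - E_i - E_j

truncated at low order and then corrected by the extrapolation described above.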
Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System
NASA Technical Reports Server (NTRS)
Douglas, Grace L.; Cooper, Maya; Bermudez-Aguirre, Daniela; Sirmons, Takiyah
2016-01-01
NASA is preparing for long duration manned missions beyond low-Earth orbit that will be challenged in several ways, including long-term exposure to the space environment, impacts to crew physiological and psychological health, limited resources, and no resupply. The food system is one of the most significant daily factors that can be altered to improve human health and performance during space exploration. Therefore, the paramount importance of determining the methods, technologies, and requirements to provide a safe, nutritious, and acceptable food system that promotes crew health and performance cannot be overstated. The processed and prepackaged food system is the main source of nutrition to the crew; therefore, significant losses in nutrition, either through degradation of nutrients during processing and storage or inadequate food intake due to low acceptability, variety, or usability, may significantly compromise the crew's health and performance. Shelf life studies indicate that key nutrients and quality factors in many space foods degrade to concerning levels within three years, suggesting that the food system will not meet the nutrition and acceptability requirements of a long duration mission beyond low-Earth orbit. Likewise, mass and volume evaluations indicate that the current food system is a significant resource burden. Alternative provisioning strategies, such as the inclusion of bioregenerative foods, are challenged by resource requirements and by food safety and scarcity concerns. Ensuring provisioning of an adequate food system relies not only upon determining technologies and requirements for nutrition, quality, and safety, but also upon establishing a food system that will support nutritional adequacy even with individual crew preference and self-selection. In short, the space food system is challenged to maintain safety, nutrition, and acceptability for all phases of an exploration mission within resource constraints. This document presents the evidence for the Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System and the gaps in relation to exploration, as identified by the NASA Human Research Program (HRP). The research reviewed here indicates strategies to establish methods, technologies, and requirements that increase food stability; support adequate nutrition, quality, and variety; enable supplementation with grow-pick-and-eat salad crops; ensure safety; and reduce resource use. Obtaining the evidence to establish an adequate food system is essential, as the resources allocated to the food system may be defined based on the data relating nutritional stability and food quality requirements to crew performance and health.
Picking vs Waveform based detection and location methods for induced seismicity monitoring
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Boese, Maren; Scarabello, Luca; Diehl, Tobias; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2017-04-01
Microseismic monitoring is a common operation in various industrial activities related to geo-resources, such as oil and gas operations, mining, or geothermal energy exploitation. In microseismic monitoring we generally deal with large datasets from dense monitoring networks that require robust automated analysis procedures. The seismic sequences being monitored are often characterized by many events with short inter-event times that can even produce overlapping seismic signatures. In these situations, traditional approaches that identify seismic events using dense seismic networks based on detections, phase identification, and event association can fail, leading to missed detections and/or reduced location resolution. In recent years, to improve the quality of automated catalogues, various waveform-based methods for the detection and location of microseismicity have been proposed. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Although this family of methods has been applied to different induced seismicity datasets, an extensive comparison with sophisticated pick-based detection and location methods is still lacking. We aim here to perform a systematic comparison, in terms of performance, between the waveform-based method LOKI and the pick-based detection and location methods SCAUTOLOC and SCANLOC, implemented within the SeisComP3 software package. SCANLOC is a new detection and location method specifically designed for seismic monitoring at the local scale. Although recent applications have proved promising, an extensive test with induced seismicity datasets has not yet been performed. This method is based on a cluster search algorithm to associate detections with one or many potential earthquake sources. On the other hand, SCAUTOLOC is a more "conventional" method and is the basic tool for seismic event detection and location in SeisComP3. This approach was specifically designed for regional and teleseismic applications, so its performance with microseismic data might be limited. We analyze the performance of the three methodologies on a synthetic dataset with realistic noise conditions as well as on the first hour of continuous waveform data, including the Ml 3.5 St. Gallen earthquake, recorded by a microseismic network deployed in the area. We finally compare the results obtained with all three methods against a manually revised catalogue.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e., we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Evaluation of Various Depainting Processes on Mechanical Properties of 2024-T3 Aluminum Substrate
NASA Technical Reports Server (NTRS)
McGill, P.
2001-01-01
Alternate alkaline and neutral chemical paint strippers have been identified that, with respect to corrosion requirements, perform as well as or better than a methylene chloride baseline. These chemicals also, in general, meet corrosion acceptance criteria as specified in SAE MA 4872. Alternate acid chemical paint strippers have been identified that, with respect to corrosion requirements, perform as well as or better than a methylene chloride baseline. However, these chemicals do not generally meet corrosion acceptance criteria as specified in SAE MA 4872, especially in the areas of non-clad material performance and hydrogen embrittlement. Media blast methods reviewed in the study do not, in general, adversely affect fatigue performance or crack detectability of 2024-T3 substrate. Sodium bicarbonate stripping exhibited a tendency towards inhibiting crack detectability. These generalizations are based on a limited sample size and additional testing should be performed to characterize the response of specific substrates to specific processes.
NASA Astrophysics Data System (ADS)
Rizqy Averous, Nurhan; Berthold, Anica; Schneider, Alexander; Schwimmbeck, Franz; Monti, Antonello; De Doncker, Rik W.
2016-09-01
The vast increase in wind turbine (WT) contributions to modern electrical grids has led to the development of grid connection requirements. In contrast to the conventional test method, testing power-electronics converters for WTs using a grid emulator at the Center for Wind Power Drives (CWD), RWTH Aachen University, offers more flexibility in conducting test scenarios. Further analysis of the performance of the device under test (DUT) is, however, required when testing with a grid emulator, since the characteristics of the grid emulator might influence the performance of the DUT. This paper focuses on the performance analysis of the DUT when tested using a grid emulator. Besides the issue of current harmonics, the performance during Fault Ride-Through (FRT) is discussed in detail. A power-hardware-in-the-loop setup is an attractive solution for conducting a comprehensive study on the interaction between power-electronics converters and electrical grids.
Pressure balance cross-calibration method using a pressure transducer as transfer standard
Olson, D; Driver, R. G.; Yang, Y
2016-01-01
Piston gauges or pressure balances are widely used to realize the SI unit of pressure, the pascal, and to calibrate pressure-sensing devices. However, their calibration is time consuming and requires considerable technical expertise. In this paper, we propose an alternate method of performing a piston gauge cross calibration that incorporates a pressure transducer as an immediate in-situ transfer standard. For a sufficiently linear transducer, the requirement to exactly balance the weights on the two pressure gauges under consideration is greatly relaxed. Our results indicate that this method can be employed without a significant increase in measurement uncertainty. Indeed, in the test case explored here, our results agreed with the traditional method within standard uncertainty, which was less than 6 parts per million. PMID:28303167
Method for compression of binary data
Berlin, Gary J.
1996-01-01
The disclosed method for compression of a series of data bytes, based on LZSS compression, provides faster decompression of the stored data. The method involves the creation of a flag-bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS compression. The flag-bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is required only when reading the flag bits, not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte-length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
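A minimal sketch of the flag-buffer idea follows. It is a generic LZSS coder, not the patented implementation: flags are buffered during coding and appended after the token stream (here behind a small length header), so the decoder performs bit manipulation only in the flag area; window size and match lengths are illustrative.

```python
# Generic LZSS with a separate flag buffer (sketch; parameters illustrative).
MIN_MATCH, MAX_MATCH, WINDOW = 3, 18, 4096

def compress(data: bytes) -> bytes:
    flags, tokens = [], bytearray()
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):      # naive longest-match search
            l = 0
            while (l < MAX_MATCH and i + l < len(data)
                   and data[j + l] == data[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= MIN_MATCH:
            flags.append(1)                          # pointer token
            tokens += best_off.to_bytes(2, "big") + bytes([best_len])
            i += best_len
        else:
            flags.append(0)                          # literal token
            tokens.append(data[i])
            i += 1
    packed = bytearray((len(flags) + 7) // 8)        # flag bits, appended last
    for n, bit in enumerate(flags):
        packed[n // 8] |= bit << (7 - n % 8)
    return len(tokens).to_bytes(4, "big") + bytes(tokens) + bytes(packed)

def decompress(blob: bytes) -> bytes:
    ntok = int.from_bytes(blob[:4], "big")
    tokens, flag_area = blob[4:4 + ntok], blob[4 + ntok:]
    out, ti, fi = bytearray(), 0, 0
    while ti < ntok:
        is_ptr = (flag_area[fi // 8] >> (7 - fi % 8)) & 1  # bit work only here
        fi += 1
        if is_ptr:
            off = int.from_bytes(tokens[ti:ti + 2], "big")
            for _ in range(tokens[ti + 2]):
                out.append(out[-off])                # word-style pointer read
            ti += 3
        else:
            out.append(tokens[ti])                   # whole-byte literal read
            ti += 1
    return bytes(out)

assert decompress(compress(b"abracadabra abracadabra")) == b"abracadabra abracadabra"
```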
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
Szostak, Katarzyna M.; Grand, Laszlo; Constandinou, Timothy G.
2017-01-01
Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to foreign body response to the implant, which is initiated by the surgical procedure and related to the probe structure and the material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording; describe the three different types of probes (microwire, micromachined, and polymer-based), their materials, and their fabrication methods; and discuss their characteristics and related challenges. PMID:29270103
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In current submm and mm cosmology experiments, the focal planes are populated by kilopixel arrays of transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays, and the resulting transient heating of the bath, reduce the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current, and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn.
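For orientation only, the zeroth-order circuit algebra behind such a bias-step estimate can be written down directly. The sketch below assumes an ideal shunt-biased TES at DC and ignores electrothermal feedback and reactances, which the actual ACT analysis (using the transition shape R(T,I)) does account for; all variable names and values are illustrative.

```python
# Zeroth-order %Rn estimate from a small bias step (assumptions: DC circuit,
# shunt-biased TES, R constant over the step, no electrothermal feedback).
def percent_rn(delta_I_bias, delta_I_tes, R_sh, R_n):
    """I_tes = I_bias * R_sh / (R_sh + R), so a small step gives
    R = R_sh * (dI_bias / dI_tes - 1)."""
    R_tes = R_sh * (delta_I_bias / delta_I_tes - 1.0)
    return 100.0 * R_tes / R_n

# e.g. a 10 uA bias step producing a 4 uA TES response, 0.5 mOhm shunt,
# 8 mOhm normal resistance (invented numbers):
print(percent_rn(10e-6, 4e-6, 0.5e-3, 8e-3))   # ~9.4 %Rn
```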
High current superconductors for tokamak toroidal field coils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fietz, W.A.
1976-01-01
Conductors rated at 10,000 A for 8 T and 4.2 K are being purchased for the first large coil segment tests at ORNL. Requirements for these conductors, in addition to the high current rating, are low pulse losses, cryostatic stability, and acceptable mechanical properties. The conductors are required to have losses less than 0.4 W/m under pulsed fields of 0.5 T with a rise time of 1 sec in an ambient 8-T field. Methods of calculating these losses and techniques for verifying the performance by direct measurement are discussed. Conductors stabilized by two different cooling methods, pool boiling and forced helium flow, have been proposed. Analysis of these conductors is presented and a proposed definition and test of stability is discussed. Mechanical property requirements, tensile and compressive, are defined and test methods are discussed.
Peterson, Shelby C; Brownell, Isaac; Wong, Sunny Y
2016-06-26
Cutaneous somatosensory nerves function to detect diverse stimuli that act upon the skin. In addition to their established sensory roles, recent studies have suggested that nerves may also modulate skin disorders including atopic dermatitis, psoriasis and cancer. Here, we describe protocols for testing the requirement for nerves in maintaining a cutaneous mechanosensory organ, the touch dome (TD). Specifically, we discuss methods for genetically labeling, harvesting and visualizing TDs by whole-mount staining, and for performing unilateral surgical denervation on mouse dorsal back skin. Together, these approaches can be used to directly compare TD morphology and gene expression in denervated as well as sham-operated skin from the same animal. These methods can also be readily adapted to examine the requirement for nerves in mouse models of skin pathology. Finally, the ability to repeatedly sample the skin provides an opportunity to monitor disease progression at different stages and times after initiation.
Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan
2004-11-19
A static headspace (HS) gas chromatographic method for the quantitative determination of residual solvents in a drug substance has been developed according to the European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as the sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran, and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and the system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision, and robustness were determined, and excellent results were obtained.
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This calls for a combination of analytical and testing methods, using two approaches. The first is to limit thermal testing to sub-elements of the total system, and only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
A parallel-vector algorithm for rapid structural analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1990-01-01
A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
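To make the variable-band idea concrete, here is a compact serial sketch (an assumed NumPy rendering; the paper's parallel outer loop and unrolled vector inner loop are not reproduced). Column heights are encoded as the row index of the first nonzero in each column, and the factorization never touches entries above them.

```python
# Skyline (variable-band) Cholesky sketch: first[j] is the row of the first
# nonzero in column j of A; operations above the column heights are skipped.
import numpy as np

def skyline_cholesky(A: np.ndarray, first: list) -> np.ndarray:
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - L[j, first[j]:j] @ L[j, first[j]:j]
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            if first[i] > j:                 # entry above the skyline: zero
                continue
            lo = max(first[i], first[j])     # only overlapping nonzeros matter
            L[i, j] = (A[i, j] - L[i, lo:j] @ L[j, lo:j]) / L[j, j]
    return L

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
L = skyline_cholesky(A, first=[0, 0, 1])     # column heights of this profile
assert np.allclose(L @ L.T, A)
```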
Tavčar, Eva; Turk, Erika; Kreft, Samo
2012-01-01
The most commonly used technique for water content determination is Karl-Fischer titration with electrometric detection, requiring specialized equipment. When appropriate equipment is not available, the method can be performed through visual detection of a titration endpoint, which does not enable an analysis of colored samples. Here, we developed a method with spectrophotometric detection of a titration endpoint, appropriate for moisture determination of colored samples. The reaction takes place in a sealed 4 ml cuvette. Detection is performed at 520 nm. Titration endpoint is determined from the graph of absorbance plotted against titration volume. The method has appropriate reproducibility (RSD = 4.3%), accuracy, and linearity (R² = 0.997). PMID:22567558
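As an illustration of reading the endpoint off the absorbance-versus-volume graph, a generic two-segment linear fit can be used (a common approach; the paper does not prescribe this exact routine):

```python
# Endpoint as the intersection of two lines fitted on either side of the
# best breakpoint in the absorbance-vs-volume curve (generic sketch).
import numpy as np

def titration_endpoint(vol, absorbance):
    """Fit two lines around every candidate breakpoint; return the volume
    where the best-fitting pair intersects."""
    best_res, best_fits = np.inf, None
    for k in range(2, len(vol) - 2):
        p1 = np.polyfit(vol[:k], absorbance[:k], 1)
        p2 = np.polyfit(vol[k:], absorbance[k:], 1)
        res = (np.sum((np.polyval(p1, vol[:k]) - absorbance[:k]) ** 2) +
               np.sum((np.polyval(p2, vol[k:]) - absorbance[k:]) ** 2))
        if res < best_res:
            best_res, best_fits = res, (p1, p2)
    (a1, b1), (a2, b2) = best_fits
    return (b2 - b1) / (a1 - a2)             # x where the two lines cross
```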
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
Manufacturing Methods and Technology for Microwave Stripline Circuits
1982-02-26
to the dielectric material so it does not peel during the etching and subsequent processing. The copper cladding requirements were defined by MIL-F...the B-stage, giving acceptable peel strengths per the military requirements. For PTFE substrate printed wiring boards that are laminated using a...examining multilayers for measles and delaminations, and analytically by performing peel tests and glass transition temperatures.
Variable Structure PID Control to Prevent Integrator Windup
NASA Technical Reports Server (NTRS)
Hall, C. E.; Hodel, A. S.; Hung, J. Y.
1999-01-01
PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
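For contrast with the variable-structure approach examined in the paper, the simplest widely used remedy, conditional integration, looks like this (a generic sketch, not the authors' method; gains and limits are placeholders):

```python
# Discrete PID with conditional-integration anti-windup: the integrator is
# frozen whenever the actuator would saturate, bounding short-term windup.
class PID:
    def __init__(self, kp, ki, kd, u_min, u_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max, self.dt = u_min, u_max, dt
        self.integral, self.prev_e = 0.0, 0.0

    def step(self, e):
        d = (e - self.prev_e) / self.dt
        self.prev_e = e
        u = self.kp * e + self.integral + self.kd * d
        if self.u_min < u < self.u_max:
            self.integral += self.ki * e * self.dt  # integrate only while unsaturated
            return u
        return max(self.u_min, min(u, self.u_max))  # clamp to actuator limits
```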
A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.
Giese, Timothy J; York, Darrin M
2018-03-13
There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg2+ → Ca2+ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., a single call to particle-mesh Ewald) leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU-acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle mesh Ewald contribution to the TI derivative evaluation without needing to perform two reciprocal space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double stranded RNA molecules and make comparison with experiments. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experimental results.
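The post-processing step lends itself to a compact sketch. Assuming MM parameters interpolate linearly, p(lam) = (1 - lam)*p_A + lam*p_B, and that an energy(frame, params) routine is available (a placeholder, not the AMBER API), the TI derivative can be estimated per frame by central differences and integrated by the trapezoid rule:

```python
# Sketch of TI post-processing over lambda windows (assumed interfaces).
import numpy as np

def ti_free_energy(frames_by_window, lambdas, p_A, p_B, energy, h=1e-4):
    """Trapezoidal TI; dU/dlam per frame by central difference on
    linearly interpolated parameters."""
    means = []
    for lam, frames in zip(lambdas, frames_by_window):
        p_hi = (1 - (lam + h)) * p_A + (lam + h) * p_B
        p_lo = (1 - (lam - h)) * p_A + (lam - h) * p_B
        dudl = [(energy(f, p_hi) - energy(f, p_lo)) / (2 * h) for f in frames]
        means.append(np.mean(dudl))              # frames reanalyzed independently
    means = np.array(means)
    lam = np.asarray(lambdas, dtype=float)
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lam)))
```

Because each frame is statistically independent, the inner loop is embarrassingly parallel, which is the high-throughput property the abstract highlights.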
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has improved rate-distortion performance but also better preserves the structure of the perceived light fields.
In-flight performance optimization for rotorcraft with redundant controls
NASA Astrophysics Data System (ADS)
Ozdemir, Gurbuz Taha
A conventional helicopter has limits on performance at high speeds because of limitations of the main rotor, such as compressibility issues on the advancing side or stall issues on the retreating side. Auxiliary lift and thrust components have been suggested to improve performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner-loop control of roll, pitch, yaw, heave, and rotor RPM. An outer-loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find the optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO method builds off the Adaptive Performance Optimization (APO) method of Gilyard, which performs low-frequency sweeps on a redundant control for a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulation in order to establish a schedule. The method has been expanded to search a two-dimensional control space. Simulation results demonstrate the ability to maximize range by optimizing stabilator deflection and an airspeed set point. Another set of results minimizes power required in high-speed flight by optimizing collective pitch and stabilator deflection. Results show that the control laws effectively hold the flight condition while the FTO method is effective at improving performance. Optimizations show there can be issues when the control laws regulating altitude push the collective control towards its limits, so a modification was made to the control law to regulate airspeed and altitude using propeller pitch and angle of attack while the collective is held fixed or used as an optimization variable. A dynamic trim limit avoidance algorithm is applied to avoid control saturation in other axes during optimization maneuvers. Range and power optimization FTO simulations are compared with comprehensive sweeps of trim solutions, and FTO optimization is shown to be effective and reliable in reaching an optimum when optimizing up to two redundant controls. Use of redundant controls is shown to be beneficial for improving performance. The search method takes almost 25 minutes of simulated flight for optimization to complete. The optimization maneuver itself can sometimes drive the power required to high values, so a power limit is imposed to restrict the search and avoid conditions where power is more than 5% higher than that of the initial trim state. With this modification, the time the optimization maneuver takes to complete is reduced to 21 minutes without any significant change in the optimal power value.
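The Golden Section step named above is standard and easy to sketch; here power_required stands in for the trim/power model and the RPM bracket is illustrative:

```python
# Golden-section search over one redundant control (e.g. rotor RPM fraction).
from math import sqrt

def golden_section_min(f, a, b, tol=1e-3):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    inv_phi = (sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                    # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                          # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# e.g. best_rpm_frac = golden_section_min(power_required, 0.85, 1.05)
```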
Achieving Innovation and Affordability Through Standardization of Materials Development and Testing
NASA Technical Reports Server (NTRS)
Bray, M. H.; Zook, L. M.; Raley, R. E.; Chapman, C.
2011-01-01
The successful expansion of development, innovation, and production within the aeronautics industry during the 20th century was facilitated by collaboration of government agencies with commercial aviation companies. One of the initial products conceived from this collaboration was the ANC-5 Bulletin, first published in 1937, which was intended to standardize the requirements of various government agencies in the design of aircraft structure. Its subsequent revisions and conversion to MIL-HDBK-5 and then MMPDS-01 established, and then expanded, standardized mechanical property design values and other related design information for metallic materials used in aircraft, missiles, and space vehicles; the handbook also includes guidance on standardization of composition, processing, and analytical methods for presentation and inclusion. This standardization enabled an expansion of the technologies to provide efficiency and reliability to consumers. The national space policy shift in priority for NASA, with an emphasis on transferring travel to low-Earth orbit to commercial space providers, highlights an opportunity and a need for the national and global space industries. The same collaboration and standardization that is documented and maintained by the industry within MIL-HDBK-5 (MMPDS-01) and MIL-HDBK-17 (nonmetallic mechanical properties) can also be exploited to standardize thermal performance properties, processing methods, test methods, and analytical methods for use in aircraft and spacecraft design and associated propulsion systems. In addition to the definition and standardization of thermal performance descriptions, standardization of test methods and analysis for extreme environments (high temperature, cryogenics, deep space radiation, etc.) would also be highly valuable to the industry. Many individual programs within government agencies have been burdened with development costs generated by nonstandard requirements. Without industry standardization and acceptance, programs are driven to shoulder the costs of determining design requirements and performance criteria, and then material qualification and certification. A significant investment that the industry could make, both to reduce individual program development costs and schedules and to expand commercial space flight capabilities, would be to standardize material performance properties for high temperature, cryogenic, and deep space environments for both metallic and nonmetallic materials.
Optimal design of a main driving mechanism for servo punch press based on performance atlases
NASA Astrophysics Data System (ADS)
Zhou, Yanhua; Xie, Fugui; Liu, Xinjun
2013-09-01
The servomotor-drive turret punch press is attracting more attention and being developed more intensively due to the advantages of high speed, high accuracy, high flexibility, high productivity, low noise, cleanliness, and energy savings. To effectively improve the performance and lower the cost, it is necessary to develop new mechanisms and establish a corresponding optimal design method with uniform performance indices. A new patented main driving mechanism and a new optimal design method are proposed. In the optimal design, the performance indices, namely the local motion/force transmission indices (ITI, OTI), the good transmission workspace (GTW), and the global transmission indices (GTIs), are defined. The non-dimensional normalization method is used to get all feasible solutions in dimensional synthesis. Thereafter, the performance atlases, which can present all possible design solutions, are depicted. As a result, a feasible solution of the mechanism with good motion/force transmission performance is obtained. The solution can be flexibly adjusted by the designer according to practical design requirements. The proposed mechanism is original, and the presented design method provides a feasible solution to the optimal design of the main driving mechanism for a servo punch press.
Reduction in training time of a deep learning model in detection of lesions in CT
NASA Astrophysics Data System (ADS)
Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji
2018-02-01
Deep learning (DL) emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study is aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model that could be used to detect polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive-training artificial neural network based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15 while maintaining classification performance equivalent to the model trained using the full training set. We compare the performance using the area under the receiver operating characteristic curve (AUC).
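The clustering step admits a short sketch. Assuming feature vectors for the candidate training samples are already extracted (the paper's feature choice and cluster count are not reproduced here), one keeps the sample nearest each k-means centroid:

```python
# Representative sample selection via k-means (generic sketch).
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(X, n_keep, seed=0):
    """Return indices of the training sample closest to each centroid."""
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=seed).fit(X)
    reps = []
    for c in range(n_keep):
        members = np.flatnonzero(km.labels_ == c)
        dist = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dist)])
    return np.array(reps)   # indices into the full training pool
```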
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) the French Broad River and (2) the Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Charles J.; Edwards, Thomas B.
2005-04-30
The wet chemistry digestion method development for providing process control elemental analyses of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) Melter Feed Preparation Vessel (MFPV) samples is divided into two phases: Phase I consists of: (1) optimizing digestion methods as a precursor to elemental analyses by ICP-AES techniques; (2) selecting methods with the desired analytical reliability and speed to support the nine-hour or less turnaround time requirement of the WTP; and (3) providing baseline comparison to the laser ablation (LA) sample introduction technique for ICP-AES elemental analyses that is being developed at the Savannah River National Laboratory (SRNL). Phase II consists of: (1) Time-and-Motion study of the selected methods from Phase I with actual Hanford waste or waste simulants in shielded cell facilities to ensure that the methods can be performed remotely and maintain the desired characteristics; and (2) digestion of glass samples prepared from actual Hanford Waste tank sludge for providing comparative results to the LA Phase II study. Based on the Phase I testing discussed in this report, a tandem digestion approach consisting of sodium peroxide fusion digestions carried out in nickel crucibles and warm mixed-acid digestions carried out in plastic bottles has been selected for Time-and-Motion study in Phase II. SRNL experience with performing this analytical approach in laboratory hoods indicates that well-trained cell operator teams will be able to perform the tandem digestions in five hours or less. The selected approach will produce two sets of solutions for analysis by ICP-AES techniques. Four hours would then be allocated for performing the ICP-AES analyses and reporting results to meet the nine-hour or less turnaround time requirement. The tandem digestion approach will need to be performed in two separate shielded analytical cells by two separate cell operator teams in order to achieve the nine-hour or less turnaround time. Because of the simplicity of the warm mixed-acid method, a well-trained cell operator team may in time be able to perform both sets of digestions. However, having separate shielded cells for each of the methods is prudent to avoid overcrowding problems that would impede a minimal turnaround time.
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
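A toy version of the k-space iteration can be written in a few lines. Here the temporal basis comes from an SVD of the k-space rows fully sampled in time, and each iteration is a subspace projection (a matrix multiplication, as in the paper) followed by re-insertion of the measured samples; shapes, rank, and iteration count are illustrative.

```python
# Sketch of k-space subspace completion for MRF (assumed array layout).
import numpy as np

def mrf_kspace_completion(Y, mask, calib, rank=5, n_iter=50):
    """Y: (n_k, n_t) undersampled k-space time series (zeros where unsampled);
    mask: boolean, True where samples were acquired;
    calib: (n_cal, n_t) k-space locations fully sampled in time."""
    _, _, Vh = np.linalg.svd(calib, full_matrices=False)
    Vr = Vh[:rank]                       # low-dimensional temporal basis
    X = Y.astype(complex).copy()
    for _ in range(n_iter):
        X = (X @ Vr.conj().T) @ Vr       # subspace projection (matmuls only)
        X[mask] = Y[mask]                # data consistency on acquired samples
    return X
```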
Controlled-Root Approach To Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Stephens, Scott A.; Thomas, J. Brooks
1995-01-01
Performance tailored more flexibly and directly to satisfy design requirements. Controlled-root approach improved method for analysis and design of digital phase-locked loops (DPLLs). Developed rigorously from first principles for fully digital loops, making DPLL theory and design simpler and more straightforward (particularly for third- or fourth-order DPLL) and controlling performance more accurately in case of high gain.
ERIC Educational Resources Information Center
Kiliç, Çigdem
2017-01-01
This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using find-a-pattern, a particular problem-solving strategy. After that,…
How to Recover a Qubit That Has Fallen into a Black Hole
NASA Astrophysics Data System (ADS)
Chatwin-Davies, Aidan; Jermyn, Adam S.; Carroll, Sean M.
2015-12-01
We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods require only the ability to perform measurements from outside the event horizon.
40 CFR 63.865 - Performance test requirements and test methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the chemical recovery system at the kraft or soda pulp mill, kg/Mg (lb/ton) of black liquor solids... the performance test, megagrams per day (Mg/d) (tons per day (ton/d)) of black liquor solids fired. ER1ref, SDT = reference emission rate of 0.10 kg/Mg (0.20 lb/ton) of black liquor solids fired for...
40 CFR 63.865 - Performance test requirements and test methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the chemical recovery system at the kraft or soda pulp mill, kg/Mg (lb/ton) of black liquor solids... the performance test, megagrams per day (Mg/d) (tons per day (ton/d)) of black liquor solids fired. ER1ref, SDT = reference emission rate of 0.10 kg/Mg (0.20 lb/ton) of black liquor solids fired for...
ERIC Educational Resources Information Center
Chiarini, Marc A.
2010-01-01
Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…
Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy
NASA Astrophysics Data System (ADS)
Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala
2017-08-01
Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems. Many metaheuristic-based algorithms have been proposed that consider various economic and trust QoS dimensions. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed based on a synergy between a violation handling method and a hybrid of PSO and the min-max heuristic. Simulation results showed a great potential of the LAPSO algorithm for handling user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed. This is achieved by removing the violation handling from the operation of the algorithm. The results show that LAPSO based on only the min-max method still outperforms the benchmark, even though the LAPSO with the violation handling performs significantly better.
Confronting uncertainty in wildlife management: performance of grizzly bear management.
Artelle, Kyle A; Anderson, Sean C; Cooper, Andrew B; Paquet, Paul C; Reynolds, John D; Darimont, Chris T
2013-01-01
Scientific management of wildlife requires confronting the complexities of natural and social systems. Uncertainty poses a central problem. Whereas the importance of considering uncertainty has been widely discussed, studies of the effects of unaddressed uncertainty on real management systems have been rare. We examined the effects of outcome uncertainty and components of biological uncertainty on hunt management performance, illustrated with grizzly bears (Ursus arctos horribilis) in British Columbia, Canada. We found that both forms of uncertainty can have serious impacts on management performance. Outcome uncertainty alone--discrepancy between expected and realized mortality levels--led to excess mortality in 19% of cases (population-years) examined. Accounting for uncertainty around estimated biological parameters (i.e., biological uncertainty) revealed that excess mortality might have occurred in up to 70% of cases. We offer a general method for identifying targets for exploited species that incorporates uncertainty and maintains the probability of exceeding mortality limits below specified thresholds. Setting targets in our focal system using this method at thresholds of 25% and 5% probability of overmortality would require average target mortality reductions of 47% and 81%, respectively. Application of our transparent and generalizable framework to this or other systems could improve management performance in the presence of uncertainty.
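The target-setting idea can be illustrated with a small Monte Carlo sketch: scan candidate targets from high to low and keep the largest whose simulated probability of exceeding the mortality limit stays below the chosen threshold. All distributions and numbers below are placeholders, not the study's fitted uncertainty estimates.

```python
# Monte Carlo target setting under biological and outcome uncertainty.
import numpy as np

rng = np.random.default_rng(1)

def max_safe_target(pop_mean, pop_sd, outcome_cv, limit_rate,
                    threshold=0.05, n_sim=100_000):
    """Largest target mortality whose probability of exceeding the limit
    (limit_rate * true population) stays below `threshold`."""
    for target in np.linspace(limit_rate * pop_mean, 0.0, 201):
        pop = rng.normal(pop_mean, pop_sd, n_sim)        # biological uncertainty
        realized = target * (1.0 + rng.normal(0.0, outcome_cv, n_sim))  # outcome
        if np.mean(realized > limit_rate * pop) <= threshold:
            return float(target)
    return 0.0

# e.g. max_safe_target(pop_mean=500, pop_sd=100, outcome_cv=0.3, limit_rate=0.06)
```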
HUDSON, PARISA; HUDSON, STEPHEN D.; HANDLER, WILLIAM B.; SCHOLL, TIMOTHY J.; CHRONIK, BLAINE A.
2010-01-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods was <15%. Comparison of wire patterns obtained using the two design algorithms show that minimum inductance designs tend to feature oscillations within the current density; while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method. PMID:20411157
Windowed multipole for cross section Doppler broadening
NASA Astrophysics Data System (ADS)
Josey, C.; Ducru, P.; Forget, B.; Smith, K.
2016-02-01
This paper presents an in-depth analysis of the accuracy and performance of the windowed multipole Doppler broadening method. The basic theory behind cross section data is described, along with the multipole formalism, followed by the approximations leading to the windowed multipole method and the algorithm used to efficiently evaluate Doppler broadened cross sections. The method is tested by simulating the BEAVRS benchmark with a windowed multipole library composed of 70 nuclides. Accuracy of the method is demonstrated on a single assembly case, where total neutron production rates and 238U capture rates agree to within 0.1% of ACE format files at the same temperature. With regards to performance, clock cycle counts and cache misses were measured for single-temperature ACE table lookup and for windowed multipole. The windowed multipole method was found to require 39.6% more clock cycles to evaluate, translating to a 7.9% performance loss overall. However, the algorithm has significantly better last-level cache performance, with 3 fewer misses per evaluation, or a 65% reduction in last-level misses. This is due to the small memory footprint of the windowed multipole method and the better memory access pattern of the algorithm.
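For context, the quantity being accelerated is the Doppler-broadened cross section. A naive numerical evaluation of the standard broadening (Solbrig) kernel, which the windowed multipole method replaces with a handful of Faddeeva-function evaluations per window, looks like this (a baseline sketch, not the paper's algorithm; grids and cutoffs are illustrative):

```python
# Brute-force Doppler broadening via the Solbrig kernel (baseline sketch).
import numpy as np

def broaden(sigma0, v, T, A, n=20001):
    """Effective cross section at neutron speed v [m/s] and temperature T [K]
    for target mass number A, given the 0 K cross section sigma0(u) as a
    function of relative speed u."""
    k_B, m_n = 1.380649e-23, 1.674927e-27         # J/K; neutron mass, kg
    beta = np.sqrt(2.0 * k_B * T / (A * m_n))     # most probable target speed
    u = np.linspace(1e-3 * v, 5.0 * v + 10.0 * beta, n)
    kern = np.exp(-((u - v) / beta) ** 2) - np.exp(-((u + v) / beta) ** 2)
    integrand = u ** 2 * sigma0(u) * kern
    return (np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
            / (v ** 2 * beta * np.sqrt(np.pi)))
```

The cost of this quadrature at every energy point is what motivates analytic pole-based broadening.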
EMC Test Report Electrodynamic Dust Shield
NASA Technical Reports Server (NTRS)
Carmody, Lynne M.; Boyette, Carl B.
2014-01-01
This report documents the Electromagnetic Interference (EMI) evaluation performed on the Electrodynamic Dust Shield (EDS), which is part of the MISSE-X system under the Electrostatics and Surface Physics Laboratory at Kennedy Space Center. Measurements are performed to document the emissions environment associated with the EDS units. The purpose of this report is to collect all information needed to reproduce the testing performed on the Electrodynamic Dust Shield units, document data gathered during testing, and present the results. This document presents information unique to the measurements performed on the Bioculture Express Rack payload, using test methods prepared to meet SSP 30238 requirements. It includes the information necessary to satisfy the needs of the customer per work order number 1037104. The information presented herein should only be used to meet the requirements for which it was prepared.
Evaluating markers for the early detection of cancer: overview of study designs and methods.
Baker, Stuart G; Kramer, Barnett S; McIntosh, Martin; Patterson, Blossom H; Shyr, Yu; Skates, Steven
2006-01-01
The field of cancer biomarker development has been evolving rapidly. New developments in both the biologic and statistical realms are providing increasing opportunities for the evaluation of markers for both early detection and diagnosis of cancer. This review covers the major conceptual and methodological issues in cancer biomarker evaluation, with an emphasis on recent developments in statistical methods, together with practical recommendations. We organized the review by type of study: preliminary performance, retrospective performance, prospective performance, and cancer screening evaluation. For each type of study, we discuss methodologic issues, provide examples, and discuss strengths and limitations. Preliminary performance studies are useful for quickly winnowing down the number of candidate markers; however, their results may not apply to the ultimate target population, asymptomatic subjects. If stored specimens from cohort studies with clinical cancer endpoints are available, retrospective studies provide a quick and valid way to evaluate the performance of markers, or changes in markers, prior to the onset of clinical symptoms. Prospective studies have a restricted role because they require large sample sizes and, if the endpoint is cancer on biopsy, may be subject to bias from overdiagnosis. Cancer screening studies require very large sample sizes and long follow-up, but are necessary for evaluating the marker as a trigger for early intervention.
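The winnowing step in a preliminary performance study is often framed as ranking candidate markers by a discrimination metric such as the area under the ROC curve. The sketch below shows that idea in Python; the data are synthetic, the 0.75 cutoff is arbitrary, and none of it comes from the review itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical case-control data: marker levels in 50 cancer cases
# and 50 controls for three candidate markers (synthetic values).
n = 50
labels = np.r_[np.ones(n), np.zeros(n)]  # 1 = case, 0 = control
markers = {
    "marker_A": np.r_[rng.normal(1.0, 1, n), rng.normal(0, 1, n)],
    "marker_B": np.r_[rng.normal(0.3, 1, n), rng.normal(0, 1, n)],
    "marker_C": np.r_[rng.normal(0.0, 1, n), rng.normal(0, 1, n)],
}

# Winnow candidates by AUC; only markers clearing the screening
# threshold advance to retrospective evaluation on stored specimens.
for name, values in markers.items():
    auc = roc_auc_score(labels, values)
    verdict = "advance" if auc >= 0.75 else "drop"
    print(f"{name}: AUC = {auc:.2f} -> {verdict}")
```

As the abstract cautions, an AUC estimated from symptomatic cases and convenience controls can overstate performance in the asymptomatic screening population, which is why markers that survive this step still need retrospective and prospective confirmation.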
Micro methods and micro apparatus for chemical pathology with special reference to paediatrics
Clayton, Barbara E.; Jenkins, P.
1966-01-01
This article describes methods and apparatus which permit the estimation of a particular substance without requiring more blood than can conveniently and safely be removed from a child by capillary puncture. No reference will be made to the use of methods on the Technicon Auto-Analyzer as that machine is not yet generally geared to paediatric work, although a few centres have made their own modifications to permit certain methods to be performed on capillary samples of blood. PMID:5937614
Investigation of aged hot-mix asphalt pavements : technical summary.
DOT National Transportation Integrated Search
2013-09-01
Over the lifetime of an asphalt concrete (AC) pavement, the roadway requires periodic resurfacing and rehabilitation to maintain acceptable performance. The most common resurfacing method is an asphalt overlay placed over the existing roadway. In the design...
Effects of aggregate angularity on mix design characteristics and pavement performance.
DOT National Transportation Integrated Search
2009-12-01
This research had two primary purposes: to assess current aggregate angularity test methods and to evaluate current aggregate angularity requirements in the Nebraska asphalt mixture/pavement specification. To meet the first research object...
U.S. GASOLINE COMPOSITION STUDY
This presentation summarizes results from a 2004/2005 study of U.S. gasoline composition. Differences in composition are driven by regulation, octane requirements, refining methods, and performance needs. Major differences in composition were traced to a few compounds: benzene, MTB...
40 CFR 89.309 - Analyzers required for gaseous emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... condensation is acceptable. A water trap performing this function and meeting the specifications in § 89.308(b) is an acceptable method. Means other than condensation may be used only with prior approval from the...
40 CFR 89.309 - Analyzers required for gaseous emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... condensation is acceptable. A water trap performing this function and meeting the specifications in § 89.308(b) is an acceptable method. Means other than condensation may be used only with prior approval from the...
40 CFR 89.309 - Analyzers required for gaseous emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... condensation is acceptable. A water trap performing this function and meeting the specifications in § 89.308(b) is an acceptable method. Means other than condensation may be used only with prior approval from the...