14 CFR 171.27 - Performance requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Performance requirements. 171.27 Section 171.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... Performance requirements. (a) The facility must meet the performance requirements set forth in the...
14 CFR 171.27 - Performance requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Performance requirements. 171.27 Section 171.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... Performance requirements. (a) The facility must meet the performance requirements set forth in the...
14 CFR 171.27 - Performance requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Performance requirements. 171.27 Section 171.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... Performance requirements. (a) The facility must meet the performance requirements set forth in the...
14 CFR 171.27 - Performance requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Performance requirements. 171.27 Section 171.27 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED... Performance requirements. (a) The facility must meet the performance requirements set forth in the...
14 CFR 171.157 - Performance requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Performance requirements. 171.157 Section... Performance requirements. (a) The DME must meet the performance requirements set forth in the “International... functional and performance characteristics of the DME transponder must be conducted in accordance with the...
14 CFR 171.157 - Performance requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Performance requirements. 171.157 Section... Performance requirements. (a) The DME must meet the performance requirements set forth in the “International... functional and performance characteristics of the DME transponder must be conducted in accordance with the...
14 CFR 171.157 - Performance requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Performance requirements. 171.157 Section... Performance requirements. (a) The DME must meet the performance requirements set forth in the “International... functional and performance characteristics of the DME transponder must be conducted in accordance with the...
14 CFR 171.157 - Performance requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Performance requirements. 171.157 Section... Performance requirements. (a) The DME must meet the performance requirements set forth in the “International... functional and performance characteristics of the DME transponder must be conducted in accordance with the...
14 CFR 171.157 - Performance requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Performance requirements. 171.157 Section... Performance requirements. (a) The DME must meet the performance requirements set forth in the “International... functional and performance characteristics of the DME transponder must be conducted in accordance with the...
14 CFR 171.207 - Performance requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Performance requirements. 171.207 Section...) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES VHF Marker Beacons § 171.207 Performance requirements. (a) VHF Marker Beacons must meet the performance requirements set forth in the “International...
14 CFR 171.207 - Performance requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Performance requirements. 171.207 Section...) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES VHF Marker Beacons § 171.207 Performance requirements. (a) VHF Marker Beacons must meet the performance requirements set forth in the “International...
14 CFR 171.207 - Performance requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Performance requirements. 171.207 Section...) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES VHF Marker Beacons § 171.207 Performance requirements. (a) VHF Marker Beacons must meet the performance requirements set forth in the “International...
NASA Technical Reports Server (NTRS)
Bithell, R. A.; Pence, W. A., Jr.
1972-01-01
The effect of two sets of performance requirements, commercial and military, on the design and operation of the space shuttle booster is evaluated. Critical thrust levels are established according to both sets of operating rules for the takeoff, cruise, and go-around flight modes, and the effect on engine requirements determined. Both flyback and ferry operations are considered. The impact of landing rules on potential shuttle flyback and ferry bases is evaluated. Factors affecting reserves are discussed, including winds, temperature, and nonstandard flight operations. Finally, a recommended set of operating rules is proposed for both flyback and ferry operations that allows adequate performance capability and safety margins without compromising design requirements for either flight phase.
NASA Astrophysics Data System (ADS)
Wilby, W. A.; Brett, A. R. H.
Frequency set-on techniques used in ECM applications include repeater jammers, frequency memory loops (RF and optical), coherent digital RF memories, and closed-loop VCO set-on systems. Closed-loop frequency set-on systems using analog phase and frequency locking are considered to have a number of cost and performance advantages. Their performance is discussed in terms of frequency accuracy, bandwidth, locking time, stability, and simultaneous signals. Some experimental results are presented which show typical locking performance. Future ECM systems might require a response to very short pulses; acousto-optic and fiber-optic pulse-stretching techniques can be used to meet such requirements.
Engineered Barrier System performance requirements systems study report. Revision 02
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balady, M.A.
This study evaluates the current design concept for the Engineered Barrier System (EBS), in concert with the current understanding of the geologic setting, to assess whether enhancements to the required performance of the EBS are necessary. The performance assessment calculations are performed by coupling the EBS with the geologic setting based on the models (some of which were updated for this study) and assumptions used for the 1995 Total System Performance Assessment (TSPA). The need for enhancements is determined by comparing the performance assessment results against the EBS-related performance requirements. Subsystem quantitative performance requirements related to the EBS include the requirement to allow no more than 1% of the waste packages (WPs) to fail before 1,000 years after permanent closure of the repository, as well as a requirement to control the release rate of radionuclides from the EBS. The EBS performance enhancements considered included additional engineered components as well as evaluating additional performance available from existing design features but for which no performance credit is currently being taken.
Higher speed freight truck design : performance requirements.
DOT National Transportation Integrated Search
2013-10-01
This proposed requirements document combines a set of requirements for high-speed freight car truck design and performance : from the generally accepted standards in the U.S. Code of Federal Regulation (CFR), the Association of American Railroads : (...
Jacobsen, Sonja; Patel, Pranav; Schmidt-Chanasit, Jonas; Leparc-Goffart, Isabelle; Teichmann, Anette; Zeller, Herve; Niedrig, Matthias
2016-03-01
Since the re-emergence of Chikungunya virus (CHIKV) in Reunion in 2005 and the recent outbreak in the Caribbean islands with expansion to the Americas, CHIK diagnostics have become very important. We evaluated the performance of laboratories worldwide in the molecular and serological diagnosis of CHIK. A panel of 12 samples for molecular testing and 13 samples for serology was provided to 60 laboratories in 40 countries to evaluate the sensitivity and specificity of molecular and serological testing. The panel for molecular diagnostic testing was analysed by 56 laboratories returning 60 data sets of results, whereas 56 and 60 data sets were returned for IgG and IgM diagnostics, respectively, by the participating laboratories. Twenty-three of the 60 data sets performed optimally, 7 acceptably, and 30 require improvement. Of the 50 data sets, only one laboratory showed optimal performance for IgM detection, followed by 9 data sets with acceptable performance, with the rest needing improvement. Of the 46 IgG serology data sets, 20 showed optimal and 2 acceptable performance, while 24 require improvement. The evaluation of some of the diagnostic performances allows linking the quality of results to the in-house methods or commercial assays used. This external quality assurance for CHIK diagnostics provides a good overview of laboratory performance regarding the sensitivity and specificity of the molecular and serological diagnostics required for quick and reliable analysis of suspected CHIK patients. Nearly half of the laboratories have to improve their diagnostic profile to achieve better performance. Copyright © 2016. Published by Elsevier B.V. All rights reserved.
13 CFR 126.700 - What are the performance of work requirements for HUBZone contracts?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What are the performance of work... ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.700 What are the performance of work... meet the performance of work requirements set forth in § 125.6(c) of this chapter. (b) In addition to...
5 CFR 9901.405 - Performance management system requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... management system for NSPS employees, subject to the requirements set forth in this subpart. (b) The NSPS performance management system— (1) Provides for the appraisal of the performance of each employee annually; (2... employees based on performance and contribution; (3) Foster and reward excellent performance; (4) Address...
Multi-angle Imaging Spectro Radiometer (MISR) Design Issues Influenced by Performance Requirements
NASA Technical Reports Server (NTRS)
Bruegge, C. J.; White, M. L.; Chrien, N. C. L.; Villegas, E. B.; Raouf, N.
1993-01-01
The design of an Earth Remote Sensing Sensor, such as the Multi-angle Imaging SpectroRadiometer (MISR), begins with a set of science requirements and is quickly followed by a set of instrument specifications.
5 CFR 9701.405 - Performance management system requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... performance management systems for DHS employees, subject to the requirements set forth in this subpart. (b) Each DHS performance management system must— (1) Specify the employees covered by the system(s); (2... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Performance management system...
5 CFR 9701.405 - Performance management system requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... performance management systems for DHS employees, subject to the requirements set forth in this subpart. (b) Each DHS performance management system must— (1) Specify the employees covered by the system(s); (2... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Performance management system...
5 CFR 9701.405 - Performance management system requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... performance management systems for DHS employees, subject to the requirements set forth in this subpart. (b) Each DHS performance management system must— (1) Specify the employees covered by the system(s); (2... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Performance management system...
5 CFR 9701.405 - Performance management system requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... performance management systems for DHS employees, subject to the requirements set forth in this subpart. (b) Each DHS performance management system must— (1) Specify the employees covered by the system(s); (2... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Performance management system...
48 CFR 1328.101-1 - Policy on use.
Code of Federal Regulations, 2010 CFR
2010-10-01
... REQUIREMENTS BONDS AND INSURANCE Bonds and Other Financial Protections 1328.101-1 Policy on use. The designee authorized to make a class waiver for the requirement to obtain a bid guarantee when a performance bond or a performance and payment bond is required is set forth in CAM 1301.70. ...
49 CFR 229.205 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Locomotive Crashworthiness Design... the minimum crashworthiness performance requirements set forth in Appendix E of this part. Compliance with those performance criteria must be established by: (1) Meeting an FRA-approved crashworthiness...
49 CFR 229.205 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Locomotive Crashworthiness Design... the minimum crashworthiness performance requirements set forth in Appendix E of this part. Compliance with those performance criteria must be established by: (1) Meeting an FRA-approved crashworthiness...
49 CFR 229.205 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Locomotive Crashworthiness Design... the minimum crashworthiness performance requirements set forth in Appendix E of this part. Compliance with those performance criteria must be established by: (1) Meeting an FRA-approved crashworthiness...
49 CFR 229.205 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Locomotive Crashworthiness Design... the minimum crashworthiness performance requirements set forth in Appendix E of this part. Compliance with those performance criteria must be established by: (1) Meeting an FRA-approved crashworthiness...
49 CFR 229.205 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Locomotive Crashworthiness Design... the minimum crashworthiness performance requirements set forth in Appendix E of this part. Compliance with those performance criteria must be established by: (1) Meeting an FRA-approved crashworthiness...
Space station needs, attributes, and architectural options: Brief analysis
NASA Technical Reports Server (NTRS)
Shepphird, F. H.
1983-01-01
A baseline set of model missions is thoroughly characterized in terms of support requirements, demands on the Space Station, operating regimes, payload properties, and statements of the mission goals and objectives. This baseline is a representative set of mission requirements covering the most likely extent of space station support requirements from which architectural options can be constructed and exercised. The baseline set of 90 missions is assessed collectively and individually in terms of economic, performance, and social benefits.
Relationship between Workplace Spatial Settings and Occupant-Perceived Support for Collaboration
ERIC Educational Resources Information Center
Hua, Ying; Loftness, Vivian; Heerwagen, Judith H.; Powell, Kevin M.
2011-01-01
The increasingly collaborative nature of knowledge-based work requires workplaces to support both dynamic interactions and concentrated work, both of which are critical for collaboration performance. Given the prevalence of open-plan settings, this requirement has created new challenges for workplace design. Therefore, an understanding of the…
Code of Federal Regulations, 2010 CFR
2010-04-01
... server or hard drive. Certificate Policy means a named set of rules that sets forth the applicability of..., prescribe specific performance requirements, practices, formats, communications protocols, etc., for...
Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping
1997-01-01
A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well-calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semi-analytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon 4 spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.
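The abstract does not define the RMS1 statistic; one common convention for ocean-color retrieval error, assumed here purely for illustration, is the root-mean-square difference of log10 chlorophyll. A minimal Python sketch under that assumption:

import numpy as np

def rms_log10_error(derived, measured):
    # RMS of log10(derived) - log10(measured); assumed form of the "RMS1" statistic
    derived = np.asarray(derived, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((np.log10(derived) - np.log10(measured)) ** 2)))

# toy check: a uniform 12% multiplicative error gives an RMS log error of about 0.05
print(rms_log10_error([0.3 * 1.12, 25.8 / 1.12], [0.3, 25.8]))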
41 CFR 109-38.5105 - Motor vehicle local use objectives.
Code of Federal Regulations, 2011 CFR
2011-01-01
... performance of daily work assignments, would have uniquely tailored use objectives, different from those set... future motor vehicle requirements, must be established and documented by the Organizational Motor Equipment Fleet Manager. The objectives should take into consideration past performance, future requirements...
Stolzenburg, Jens-Uwe; Kallidonis, Panagiotis; Oh, Min-A; Ghulam, Nabi; Do, Minh; Haefner, Tim; Dietel, Anja; Till, Holger; Sakellaropoulos, George; Liatsikos, Evangelos N
2010-02-01
Laparoendoscopic single-site surgery (LESS) represents the latest innovation in laparoscopic surgery. We compare, in dry and animal laboratories, the efficacy of recently introduced pre-bent instruments with that of conventional laparoscopic and flexible instruments in terms of time requirement, maneuverability, and ease of handling. Participants of varying laparoscopic experience were included in the study and divided into groups according to their experience. The participants performed predetermined tasks in the dry laboratory using all sets of instruments. An experienced laparoscopic surgeon performed 24 nephrectomies in 12 pigs using all sets of instruments. A single port was used for all instrument sets except for the conventional instruments, which were inserted through three ports. The time required for the performance of dry laboratory tasks and the porcine nephrectomies was recorded. Errors in the performance of dry laboratory tasks with each instrument type were also recorded. Pre-bent instruments had a significant advantage over flexible instruments in terms of the time required to accomplish tasks and procedures as well as maneuverability. Flexible instruments were more time-consuming than the conventional laparoscopic instruments during the performance of the tasks. There were no significant differences in the time required for the accomplishment of dry laboratory tasks or steps of nephrectomy using conventional instruments through the appropriate number of ports in comparison to pre-bent instruments through a single port. Pre-bent instruments were less time-consuming and offered better maneuverability in comparison to flexible instruments in experimental single-port access surgery. Further clinical investigations would elucidate the efficacy of pre-bent instruments.
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.
NASA Technical Reports Server (NTRS)
1985-01-01
The initial task in the Space Station Data System (SSDS) Analysis/Architecture Study is the definition of the functional and key performance requirements for the SSDS. The SSDS is the set of hardware and software, both on the ground and in space, that provides the basic data management services for Space Station customers and systems. The primary purpose of the requirements development activity was to provide a coordinated, documented requirements set as a basis for the system definition of the SSDS and for other subsequent study activities. These requirements should also prove useful to other Space Station activities in that they provide an indication of the scope of the information services and systems that will be needed in the Space Station program. The major results of the requirements development task are as follows: (1) identification of a conceptual topology and architecture for the end-to-end Space Station Information Systems (SSIS); (2) development of a complete set of functional requirements and design drivers for the SSIS; (3) development of functional requirements and key performance requirements for the Space Station Data System (SSDS); and (4) definition of an operating concept for the SSIS. The operating concept was developed both from a Space Station payload customer and operator perspective in order to allow a requirements practicality assessment.
User Account Passwords | High-Performance Computing | NREL
For NREL's high-performance computing (HPC) systems, learn about user account password requirements and how to set up, log in, and change passwords. Logging in the first time: after you request an HPC user account, you'll receive a temporary password. Set
Extending Participation in Standard Setting: An Online Judging Proposal
ERIC Educational Resources Information Center
MacCann, Robert G.; Stanley, Gordon
2010-01-01
In order for standard setting to retain public confidence, it will be argued there are two important requirements. One is that the judges' allocation of students to performance bands would yield results broadly consistent with the expectation of the wider educational community. Secondly, in the absence of any change in educational performance,…
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Chimczak, Grzegorz; Lemr, Karel
2017-02-01
We describe a direct method for experimental determination of the negativity of an arbitrary two-qubit state with 11 measurements performed on multiple copies of the two-qubit system. Our method is based on the experimentally accessible sequences of singlet projections performed on up to four qubit pairs. In particular, our method permits the application of the Peres-Horodecki separability criterion to an arbitrary two-qubit state. We explicitly demonstrate that measuring entanglement in terms of negativity requires three measurements more than detecting two-qubit entanglement. The reported minimal set of interferometric measurements provides a complete description of bipartite quantum entanglement in terms of two-photon interference. This set is smaller than the set of 15 measurements needed to perform a complete quantum state tomography of an arbitrary two-qubit system. Finally, we demonstrate that the set of nine Makhlin's invariants needed to express the negativity can be measured by performing 13 multicopy projections. We demonstrate both that these invariants are a useful theoretical concept for designing specialized quantum interferometers and that their direct measurement within the framework of linear optics does not require performing complete quantum state tomography.
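For context (standard textbook definitions, not taken from the abstract itself), the negativity and the Peres-Horodecki criterion referred to above can be written as:

\[
N(\rho) \;=\; \frac{\lVert \rho^{T_B} \rVert_1 - 1}{2} \;=\; \sum_{\lambda_i < 0} \lvert \lambda_i \rvert ,
\]

where \(\rho^{T_B}\) is the partial transpose of the two-qubit state with respect to subsystem B, \(\lVert\cdot\rVert_1\) is the trace norm, and \(\lambda_i\) are the eigenvalues of \(\rho^{T_B}\). The Peres-Horodecki criterion states that a two-qubit state \(\rho\) is separable if and only if \(\rho^{T_B} \ge 0\), i.e., if and only if \(N(\rho) = 0\).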
78 FR 71617 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... agencies that have prescription drug programs are required to perform prospective and retrospective drug... study to validate the core competency set among the workforce; (2) establishing the core competency set...
Lesson 7: From Requirements to Specific Solutions
CROMERR requirements set performance goals; they do not dictate specific system functions, operating procedures, system architecture, or technology. The task is to decide on a solution that meets the goals.
76 FR 77536 - Agency Information Collection Request. 60-Day Public Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... required data elements from applicants. The SF-424 Project/Performance Site Location(s) form is a part of Grants.gov 's mission to reduce duplication of similar or identical forms and data sets, establish... Project/Performance Site Location(s) form and data set that will serve as a common form for various grant...
45 CFR 305.61 - Penalty for failure to meet IV-D requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... HEALTH AND HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.61 Penalty for failure to meet IV-D requirements. (a) A State will be subject to a financial... order establishment and current collections performance measures as set forth in § 305.40 of this part...
13 CFR 125.6 - Prime contractor performance requirements (limitations on subcontracting).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Prime contractor performance requirements (limitations on subcontracting). 125.6 Section 125.6 Business Credit and Assistance SMALL BUSINESS... subcontracting). (a) In order to be awarded a full or partial small business set-aside contract, an 8(a) contract...
Quality Program Provisions for Aeronautical and Space System Contractors
NASA Technical Reports Server (NTRS)
1969-01-01
This publication sets forth quality program requirements for NASA aeronautical and space programs, systems, subsystems, and related services. These requirements provide for the effective operation of a quality program which ensures that quality criteria and requirements are recognized, definitized, and performed satisfactorily.
12 CFR Appendix B to Part 704 - Expanded Authorities and Requirements
Code of Federal Regulations, 2010 CFR
2010-01-01
... appendix if it meets the applicable requirements of Part 704 and appendix B, fulfills additional management... rate stress tests set forth in § 704.8(d)(1)(i), allow its NEV to decline as much as 20 percent. Part I... to 300 percent of capital. (c) In performing the rate stress tests set forth in § 704.8(d)(1)(i), the...
12 CFR Appendix B to Part 704 - Expanded Authorities and Requirements
Code of Federal Regulations, 2014 CFR
2014-01-01
... appendix if it meets the applicable requirements of part 704 and appendix B, fulfills additional management... stress tests set forth in 704.8(d)(1)(i), allow its NEV to decline as much as 20 percent. Part I (a) A... transaction. (b) In performing the rate stress tests set forth in § 704.8(d), the NEV of a corporate credit...
12 CFR Appendix B to Part 704 - Expanded Authorities and Requirements
Code of Federal Regulations, 2011 CFR
2011-01-01
... appendix if it meets the applicable requirements of part 704 and appendix B, fulfills additional management... rate stress tests set forth in § 704.8(d)(1)(i), allow its NEV to decline as much as 20 percent. Part I... to 300 percent of capital. (c) In performing the rate stress tests set forth in § 704.8(d)(1)(i), the...
22 CFR Appendix C to Part 513 - Certification Regarding Drug-Free Workplace Requirements
Code of Federal Regulations, 2012 CFR
2012-04-01
... is providing the certification set out below. 2. The certification set out below is a material... employees in each local unemployment office, performers in concert halls or radio studios). 7. If the...
Airport surface detection equipment ASDE-3 radar set : appendix I
DOT National Transportation Integrated Search
1973-02-01
This specification establishes the performance, design, development, and test requirements for the Airport Surface Detection Equipment, the ASDE-3 Radar Set, intended as a replacement for the currently FAA-commissioned ASDE-2. It provides improvement...
26 CFR 801.1 - Balanced performance measurement system; in general.
Code of Federal Regulations, 2011 CFR
2011-04-01
... (CONTINUED) INTERNAL REVENUE PRACTICE BALANCED SYSTEM FOR MEASURING ORGANIZATIONAL AND EMPLOYEE PERFORMANCE... and regulatory provisions require the IRS to set performance goals for organizational units and to... 26 Internal Revenue 20 2011-04-01 2011-04-01 false Balanced performance measurement system; in...
26 CFR 801.1 - Balanced performance measurement system; in general.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) INTERNAL REVENUE PRACTICE BALANCED SYSTEM FOR MEASURING ORGANIZATIONAL AND EMPLOYEE PERFORMANCE... and regulatory provisions require the IRS to set performance goals for organizational units and to... 26 Internal Revenue 20 2010-04-01 2010-04-01 false Balanced performance measurement system; in...
Strategic sustainability performance plan
DOT National Transportation Integrated Search
2010-06-01
In October 2009, President Obama signed Executive Order (EO) 13514 that sets sustainability : goals for Federal agencies and focuses on making improvements in environmental, energy and : economic performance. The Executive Order requires Federal agen...
Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1989-01-01
SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here is a class of problems characterized by computations on data points where the computation is identical, but the convergence rate is data-dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by any point. Using indirect addressing allows a processor to proceed to the next data point when it is done, reducing the overall number of iterations required to approach the mean convergence rate when a sufficiently large problem set is solved. Load balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot Sets indicate significant performance gains.
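As a rough illustration of the idea above (not the paper's actual SIMD implementation), the following Python sketch contrasts a lock-step model, in which each batch of lanes waits for its slowest point, with a queued model in which a lane pulls the next point as soon as its current point finishes; the function names and all parameters are invented for the example.

from collections import deque

MAX_ITER = 256

def escape_iterations(c, max_iter=MAX_ITER):
    # number of iterations before |z| exceeds 2 -- the data-dependent cost of one point
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def run_lockstep(points, lanes):
    # lock-step model: each batch of `lanes` points costs as much as its slowest member
    cost = 0
    for i in range(0, len(points), lanes):
        batch = points[i:i + lanes]
        cost += max(escape_iterations(c) for c in batch)
    return cost

def run_queued(points, lanes):
    # queued model: a lane fetches the next point as soon as it finishes (indirect addressing),
    # with new work always handed to the least-loaded lane (simple load balancing)
    queue = deque(points)
    lane_work = [0] * lanes
    while queue:
        idx = lane_work.index(min(lane_work))
        lane_work[idx] += escape_iterations(queue.popleft())
    return max(lane_work)

pts = [complex(-2 + 3 * x / 63, -1.5 + 3 * y / 63) for x in range(64) for y in range(64)]
print("lock-step cost:", run_lockstep(pts, lanes=32))
print("queued cost:   ", run_queued(pts, lanes=32))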
Task Analysis of Tactical Leadership Skills for Bradley Infantry Fighting Vehicle Leaders
1986-10-01
The Bradley Leader Trainer is conceptualized as a device or set of devices that can be used to teach Bradley leaders to perform their full set of...experts. The task list was examined to determine critical training requirements, requirements for training device support of this training, and...
Thinking within the box: The relational processing style elicited by counterfactual mind-sets.
Kray, Laura J; Galinsky, Adam D; Wong, Elaine M
2006-07-01
By comparing reality to what might have been, counterfactuals promote a relational processing style characterized by a tendency to consider relationships and associations among a set of stimuli. As such, counterfactual mind-sets were expected to improve performance on tasks involving the consideration of relationships and associations but to impair performance on tasks requiring novel ideas that are uninfluenced by salient associations. The authors conducted several experiments to test this hypothesis. In Experiments 1a and 1b, the authors determined that counterfactual mind-sets increase mental states and preferences for thinking styles consistent with relational thought. Experiment 2 demonstrated a facilitative effect of counterfactual mind-sets on an analytic task involving logical relationships; Experiments 3 and 4 demonstrated that counterfactual mind-sets structure thought and imagination around salient associations and therefore impaired performance on creative generation tasks. In Experiment 5, the authors demonstrated that the detrimental effect of counterfactual mind-sets is limited to creative tasks involving novel idea generation; in a creative association task involving the consideration of relationships between task stimuli, counterfactual mind-sets improved performance. Copyright 2006 APA, all rights reserved.
30 CFR 7.97 - Application requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... internal parts, exhaust inlet and outlet, sensors, and the exhaust gas path through the exhaust conditioner... temperature sensor setting and exhaust gas temperature sensor setting used to meet the performance... sensors, flame arresters, exhaust conditioner, emergency intake air shutoff device, automatic fuel shutoff...
30 CFR 7.97 - Application requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... internal parts, exhaust inlet and outlet, sensors, and the exhaust gas path through the exhaust conditioner... temperature sensor setting and exhaust gas temperature sensor setting used to meet the performance... sensors, flame arresters, exhaust conditioner, emergency intake air shutoff device, automatic fuel shutoff...
Deep Borehole Field Test Requirements and Controlled Assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest
2015-07-01
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. Acknowledgements: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
NASA Technical Reports Server (NTRS)
Akle, W.
1983-01-01
This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large deployable mesh antenna, and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems, applicable to LSS, were selected from existing or on-going technology developments. Laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground to orbit, and size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.
Effect of Single Setting versus Multiple Setting Training on Learning to Shop in a Department Store.
ERIC Educational Resources Information Center
Westling, David L.; And Others
1990-01-01
Fifteen students, age 13-21, with moderate to profound mental retardation received shopping skills training in either 1 or 3 department stores. A study of operational behaviors, social behaviors, number of settings in which criterion performance was achieved, and number of sessions required to achieve criterion found no significant differences…
NASA Technical Reports Server (NTRS)
Kofal, Allen E.
1987-01-01
The mission and system requirements for the concept definition and system analysis of the Orbital Transfer Vehicle (OTV) are established. The requirements set forth constitute the single authority for the selection, evaluation, and optimization of the technical performance and design of the OTV. This requirements document forms the basis for the Ground and Space Based OTV concept definition analyses and establishes the physical, functional, performance and design relationships to STS, Space Station, Orbital Maneuvering Vehicle (OMV), and payloads.
Robust Multivariable Optimization and Performance Simulation for ASIC Design
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific-integrated-circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
The Second SeaWiFS HPLC Analysis Round-Robin Experiment (SeaHARRE-2)
NASA Technical Reports Server (NTRS)
2005-01-01
Eight international laboratories specializing in the determination of marine pigment concentrations using high performance liquid chromatography (HPLC) were intercompared using in situ samples and a variety of laboratory standards. The field samples were collected primarily from eutrophic waters, although mesotrophic waters were also sampled to create a dynamic range in chlorophyll concentration spanning approximately two orders of magnitude (0.3 25.8 mg m-3). The intercomparisons were used to establish the following: a) the uncertainties in quantitating individual pigments and higher-order variables (sums, ratios, and indices); b) an evaluation of spectrophotometric versus HPLC uncertainties in the determination of total chlorophyll a; and c) the reduction in uncertainties as a result of applying quality assurance (QA) procedures associated with extraction, separation, injection, degradation, detection, calibration, and reporting (particularly limits of detection and quantitation). In addition, the remote sensing requirements for the in situ determination of total chlorophyll a were investigated to determine whether or not the average uncertainty for this measurement is being satisfied. The culmination of the activity was a validation of the round-robin methodology plus the development of the requirements for validating an individual HPLC method. The validation process includes the measurements required to initially demonstrate a pigment is validated, and the measurements that must be made during sample analysis to confirm a method remains validated. The so-called performance-based metrics developed here describe a set of thresholds for a variety of easily-measured parameters with a corresponding set of performance categories. The aggregate set of performance parameters and categories establish a) the overall performance capability of the method, and b) whether or not the capability is consistent with the required accuracy objectives.
32 CFR 101.6 - Criteria for satisfactory performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 1 2010-07-01 2010-07-01 false Criteria for satisfactory performance. 101.6..., MILITARY AND CIVILIAN PARTICIPATION IN RESERVE TRAINING PROGRAMS § 101.6 Criteria for satisfactory...) Shall require members to: (1) Meet the standards of satisfactory performance of training duty set forth...
NASA Technical Reports Server (NTRS)
Howard, H. T. (Editor)
1979-01-01
The functional and performance requirements for support of multimission radio science are established. The classes of radio science investigation are described and the needed data is discussed. This document is for a sliding ten year period and will be iterated as the mission set evolves.
Implementing Pay-for-Performance in the Neonatal Intensive Care Unit
Profit, Jochen; Zupancic, John A. F.; Gould, Jeffrey B.; Petersen, Laura A.
2011-01-01
Pay-for-performance initiatives in medicine are proliferating rapidly. Neonatal intensive care is a likely target for these efforts because of the high cost, available databases, and relative strength of evidence for at least some measures of quality. Pay-for-performance may improve patient care but requires valid measurements of quality to ensure that financial incentives truly support superior performance. Given the existing uncertainty with respect to both the effectiveness of pay-for-performance and the state of quality measurement science, experimentation with pay-for-performance initiatives should proceed with caution and in controlled settings. In this article, we describe approaches to measuring quality and implementing pay-for-performance in the NICU setting. PMID:17473099
Optimising the performance of an outpatient setting.
Sendi, Pedram; Al, Maiwenn J; Battegay, Manuel
2004-01-24
An outpatient setting typically includes experienced and novice resident physicians who are supervised by senior staff physicians. The performance of this kind of outpatient setting, for a given mix of experienced and novice resident physicians, is determined by the number of senior staff physicians available for supervision. The optimum mix of human resources may be determined using discrete-event simulation. An outpatient setting represents a system where concurrency and resource sharing are important. These concepts can be modelled by means of timed Coloured Petri Nets (CPN), which is a discrete-event simulation formalism. We determined the optimum mix of resources (i.e. the number of senior staff physicians needed for a given number of experienced and novice resident physicians) to guarantee efficient overall system performance. In an outpatient setting with 10 resident physicians, two staff physicians are required to guarantee a minimum level of system performance (42-52 patients are seen per 5-hour period). However, with 3 senior staff physicians system performance can be improved substantially (49-56 patients per 5-hour period). An additional fourth staff physician does not substantially enhance system performance (50-57 patients per 5-hour period). Coloured Petri Nets provide a flexible environment in which to simulate an outpatient setting and assess the impact of any staffing changes on overall system performance, to promote informed resource allocation decisions.
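The authors' Coloured Petri Net model is not reproduced in the abstract; purely to illustrate the resource-sharing question (how many senior staff physicians for a given resident pool), here is a toy discrete-event sketch in Python in which each visit is a resident-only consultation followed by a review that requires one senior. The model structure and all service times are invented assumptions, not the study's parameters.

import heapq
import random

def simulate_session(n_residents, n_seniors, session_minutes=300, seed=1):
    # toy model: each patient visit = resident-only consultation, then a review needing one senior
    rng = random.Random(seed)
    consult = lambda: rng.uniform(25, 45)   # minutes, invented
    review = lambda: rng.uniform(8, 15)     # minutes, invented
    events = []                             # (time, kind, resident_id)
    for r in range(n_residents):
        heapq.heappush(events, (consult(), "need_senior", r))
    free_seniors = n_seniors
    waiting = []                            # residents queued for a free senior
    completed = 0
    while events and events[0][0] <= session_minutes:
        t, kind, r = heapq.heappop(events)
        if kind == "need_senior":
            if free_seniors > 0:
                free_seniors -= 1
                heapq.heappush(events, (t + review(), "review_done", r))
            else:
                waiting.append(r)
        else:  # a review just finished: patient done, senior freed, resident starts the next patient
            completed += 1
            free_seniors += 1
            if waiting:
                free_seniors -= 1
                heapq.heappush(events, (t + review(), "review_done", waiting.pop(0)))
            heapq.heappush(events, (t + consult(), "need_senior", r))
    return completed

for seniors in (2, 3, 4):
    print(seniors, "senior staff:", simulate_session(10, seniors), "patients seen in a 5-hour session")

With these made-up service times the sketch reproduces the qualitative finding of the abstract: going from 2 to 3 seniors helps noticeably, while a fourth adds little.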
1992-09-01
to acquire or develop effective simulation tools to observe the behavior of a RISC implementation as it executes different types of programs. We choose... Computer performance is measured by the amount of time required to execute a program. Performance encompasses two types of time, elapsed time... and CPU time. Elapsed time is the time required to execute a program from start to finish. It includes latency of input/output activities such as
Analyzing the Interaction of Performance Appraisal Factors Using Interpretive Structural Modeling
ERIC Educational Resources Information Center
Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.
2010-01-01
In today's changed environment where the economy and industry are driven by customers, business is open to worldwide competition. Manufacturing firms have looked at employee performance improvement as a means to succeed. These findings advocate setting up priorities for employee performance improvement. This requires a continuous improvement…
48 CFR 1344.302 - Requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SUBCONTRACTING POLICIES AND PROCEDURES Contractors' Purchasing Systems Reviews § 1344.302 Requirements. The designee authorized to lower or raise the $25 million sales threshold for performing a review to determine if a contractor purchasing system review is needed is set forth in CAM 1301.70. ...
ERIC Educational Resources Information Center
Educational Services, Inc., Washington, DC.
Since 1975, the Head Start Program Performance Standards have defined the services that local programs are required to provide to enrolled children and families. With revisions effective in 1998, the Program Performance Standards translate the Head Start vision into quality practices implemented at the local level. This document is comprised of a…
Development of composite calibration standard for quantitative NDE by ultrasound and thermography
NASA Astrophysics Data System (ADS)
Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.
2018-04-01
Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time-intensive endeavor. Additional time spent during aircraft inspections translates to added cost to the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to a lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.
Performance measurement in cancer care: uses and challenges.
Lazar, G S; Desch, C E
1998-05-15
Unnecessary, inappropriate, and futile care is given in all areas of health care, including cancer care. Not only does such care increase costs and waste precious resources, but patients may have adverse outcomes when the wrong care is given. One of the ways to address this issue is to measure performance with the use of administrative data sets. Through performance measurement, the best providers can be chosen, providers can be rewarded on the basis of the quality of their performance, opportunities for improvement can be identified, and variation in practice can be minimized. Purchasers should take a leadership role in creating data sets that will enhance clinical performance. Specifically, purchasers should require the following from payers: 1) staging information; 2) requirements and/or incentives for proper International Classification of Diseases coding, including other important (comorbid) conditions; 3) incentives or requirements for proper data collection if the payer is using a reimbursement strategy that places the risk on the provider; and 4) a willingness to collect and report information to providers of care, with a view toward increasing quality and decreasing the costs of cancer care. Demanding better clinical performance can lead to better outcomes. Once good data is presented to patients and providers, better clinical behavior and improved cancer care systems will quickly follow.
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
Designing a Standard Model for Development and Execution of an Analysis Project Plan
2012-06-01
mitigations set forth are agreeable to all parties involved. 1.3 Document Risks, Issues, and Constraints 1.1 Gather Information 1.2 Develop...parent requirement into lower-level, objective, performance-based sibling actions. Collective accomplishment of the set of derived “sibling” actions
Final postflight hardware evaluation report RSRM-32 (STS-57)
NASA Technical Reports Server (NTRS)
Nielson, Greg
1993-01-01
This document is the final report for the postflight assessment of the RSRM-32 (STS-57) flight set. This report presents the disassembly evaluations performed at the Thiokol facilities in Utah and is a continuation of the evaluations performed at KSC (TWR-64239). The PEEP for this assessment is outlined in TWR-50051, Revision B. The PEEP defines the requirements for evaluating RSRM hardware. Special hardware issues pertaining to this flight set requiring additional or modified assessment are outlined in TWR-64237. All observed hardware conditions were documented on PFOR's which are included in Appendix A. Observations were compared against limits defined in the PEEP. Any observation that was categorized as reportable or had no defined limits was documented on a preliminary PFAR by the assessment engineers. Preliminary PFAR's were reviewed by the Thiokol SPAT Executive Board to determine if elevation to PFAR's was required.
Assessment of meteorological uncertainties as they apply to the ASCENDS mission
NASA Astrophysics Data System (ADS)
Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.
2011-12-01
Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on spatial scale comparable to the measurement of interest. These measurements normally require a separate sensor, which increases the overall size, weight, power and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design/characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One of the requirements as part of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in helping to constrain overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than can be provided by NWP models and analyses, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end we have used NWP models and reanalysis information to generate a set of atmospheric profiles which contain reasonable variability. This data consists of a "truth" set and a companion "measured" set of profiles. The truth set contains climatologically-relevant profiles of pressure, temperature and humidity with an accompanying surface pressure. The measured set consists of some number of instances of the truth set which have been perturbed to represent realistic measurement uncertainty for the truth profile using measurement error covariance matrices. The primary focus has been to develop matrices derived using information about the profile retrieval accuracy as documented for on-orbit sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty was derived from globally-compiled station pressure information. We generated an additional measurement set of profiles which represent the overall error within NWP models. These profile sets will allow for comprehensive trade studies for sensor system design and provide a basis for setting measurement requirements for co-located temperature, humidity sounders, determine the utility of NWP data to either replace or supplement collocated measurements, and to assess the overall end-to-end system performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in performing trade studies for sensor system concepts and designs.
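The study's actual error covariance matrices are not given in the abstract; as a minimal sketch of the "truth" versus "measured" profile construction described above, the following Python code perturbs an idealized temperature profile with vertically correlated noise drawn from an assumed covariance matrix. The 1 K per-level uncertainty and the correlation length are invented placeholders, not values from the ASCENDS study.

import numpy as np

def perturb_profiles(truth, sigma, corr_length_levels=3.0, n_realizations=100, seed=0):
    # draw "measured" profiles by adding correlated noise defined by an assumed
    # error covariance matrix (Gaussian inter-level correlation)
    rng = np.random.default_rng(seed)
    n = truth.size
    levels = np.arange(n)
    corr = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / corr_length_levels) ** 2)
    cov = np.outer(sigma, sigma) * corr + 1e-9 * np.eye(n)   # jitter for numerical stability
    noise = rng.multivariate_normal(np.zeros(n), cov, size=n_realizations)
    return truth[None, :] + noise

truth_T = np.linspace(288.0, 220.0, 30)   # idealized tropospheric temperature profile (K)
sigma_T = np.full(30, 1.0)                # assumed 1 K per-level retrieval uncertainty
measured_T = perturb_profiles(truth_T, sigma_T)
print(measured_T.shape, measured_T.std(axis=0)[:3])   # roughly 1 K scatter at each level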
Considerations for designing robotic upper limb rehabilitation devices
NASA Astrophysics Data System (ADS)
Nadas, I.; Vaida, C.; Gherman, B.; Pisla, D.; Carbone, G.
2017-12-01
The present study highlights the advantages of robotic systems for post-stroke rehabilitation of the upper limb. The latest demographic studies illustrate a continuous increase of the average life span, which leads to a continuous increase of stroke incidents and patients requiring rehabilitation. Some studies estimate that by 2030 the number of physical therapists will be insufficient for the patients requiring physical rehabilitation, imposing a shift in the current methodologies. A viable option is the implementation of robotic systems that assist the patient in performing rehabilitation exercises, the physical therapist role being to establish the therapeutic program for each patient and monitor their individual progress. Using a set of clinical measurements for the upper limb motions, the analysis of rehabilitation robotic systems provides a comparative study between the motions required by clinicians and the ones that robotic systems perform for different therapeutic exercises. A critical analysis of existing robots is performed using several classifications: mechanical design, assistance type, actuation and power transmission, control systems and human robot interaction (HRI) strategies. This classification will determine a set of pre-requirements for the definition of new concepts and efficient solutions for robotic assisted rehabilitation therapy.
Performance Measurement and Target-Setting in California's Safety Net Health Systems.
Hemmat, Shirin; Schillinger, Dean; Lyles, Courtney; Ackerman, Sara; Gourley, Gato; Vittinghoff, Eric; Handley, Margaret; Sarkar, Urmimala
Health policies encourage implementing quality measurement with performance targets. The 2010-2015 California Medicaid waiver mandated quality measurement and reporting. In 2013, California safety net hospitals participating in the waiver set a voluntary performance target (the 90th percentile for Medicare preferred provider organization plans) for mammography screening and cholesterol control in diabetes. They did not reach the target, and the difference-in-differences analysis suggested that there was no difference for mammography (P = .39) and low-density lipoprotein control (P = .11) performance compared to measures for which no statewide quality improvement initiative existed. California's Medicaid waiver was associated with improved performance on a number of metrics, but this performance was not attributable to target setting on specific health conditions. Performance may have improved because of secular trends or systems improvements related to waiver funding. Relying on condition-specific targets to measure performance may underestimate improvements and disadvantage certain health systems. Achieving ambitious targets likely requires sustained fiscal, management, and workforce investments.
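As a minimal illustration of the difference-in-differences comparison referenced above, the sketch below computes the estimate from pre/post rates; the numbers are invented and are not the waiver study's data.

# Minimal difference-in-differences sketch with invented rates (not study data).
pre_target, post_target = 0.62, 0.68      # measure with a statewide target
pre_control, post_control = 0.60, 0.65    # comparison measure without a target

did_estimate = (post_target - pre_target) - (post_control - pre_control)
print(f"Difference-in-differences estimate: {did_estimate:+.3f}")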
Training a whole-book LSTM-based recognizer with an optimal training set
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2018-04-01
Despite the recent progress in OCR technologies, whole-book recognition is still a challenging task, particularly for old and historical books, where unknown font faces and low-quality paper and print add to the difficulty. Pre-trained recognizers and generic methods therefore do not usually perform to the required standard, and performance usually degrades for larger-scale recognition tasks such as an entire book. Methods with reportedly low error rates turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that, by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network trained with randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively compared to an identical network trained on a set of 60 randomly selected pages of the book.
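A rough sketch of the sub-word cluster-center selection idea is given below; the use of k-means over flattened sub-word images, and all names and sizes, are assumptions for illustration rather than the authors' actual pipeline.

import numpy as np
from sklearn.cluster import KMeans

def select_training_subwords(subword_images, n_clusters=500, seed=0):
    """Cluster sub-word images and return the index of the sample nearest
    to each cluster center, forming a compact training set."""
    X = np.asarray([img.ravel() for img in subword_images], dtype=float)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    selected = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return sorted(selected)

# Usage sketch: synthetic 32x32 "sub-word" crops standing in for real segmentations.
rng = np.random.default_rng(0)
fake_subwords = rng.random((2000, 32, 32))
train_idx = select_training_subwords(fake_subwords, n_clusters=20)
print(len(train_idx), "sub-words selected for LSTM training")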
Laser data transfer flight experiment definition
NASA Technical Reports Server (NTRS)
Merritt, J. R.
1975-01-01
A set of laser communication flight experiments to be performed between a relay satellite, ground terminals, and space shuttles were synthesized and evaluated. Results include a definition of the space terminals, NASA ground terminals, test methods, and test schedules required to perform the experiments.
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
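For reference, a minimal single-threaded, pure-Python Aho-Corasick automaton is sketched below; it illustrates the trie-plus-failure-link structure that the parallel implementations build on, and is not any of the implementations compared in the paper.

from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton: builds a trie over the dictionary,
    adds failure links with a BFS, then scans the text in a single pass."""

    def __init__(self, patterns):
        self.goto = [{}]          # trie transitions per state
        self.fail = [0]           # failure links
        self.output = [[]]        # patterns ending at each state
        for pat in patterns:
            self._insert(pat)
        self._build_failure_links()

    def _insert(self, pattern):
        state = 0
        for ch in pattern:
            if ch not in self.goto[state]:
                self.goto.append({})
                self.fail.append(0)
                self.output.append([])
                self.goto[state][ch] = len(self.goto) - 1
            state = self.goto[state][ch]
        self.output[state].append(pattern)

    def _build_failure_links(self):
        queue = deque(self.goto[0].values())
        while queue:
            state = queue.popleft()
            for ch, nxt in self.goto[state].items():
                queue.append(nxt)
                f = self.fail[state]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                self.output[nxt] += self.output[self.fail[nxt]]

    def search(self, text):
        state, hits = 0, []
        for i, ch in enumerate(text):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for pat in self.output[state]:
                hits.append((i - len(pat) + 1, pat))
        return hits

ac = AhoCorasick(["he", "she", "his", "hers"])
print(ac.search("ushers"))   # [(1, 'she'), (2, 'he'), (2, 'hers')]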
Effect of Liquid Penetrant Sensitivity on Probability of Detection
NASA Technical Reports Server (NTRS)
Parker, Bradford H.
2011-01-01
The objective of the task is to investigate the effect of liquid penetrant sensitivity level on the probability of detection (POD) of cracks in various metals. NASA-STD-5009 currently requires the use of only sensitivity level 4 liquid penetrants for NASA Standard Level inspections. This requirement is based on the fact that the data used to establish the reliably detectable flaw sizes for penetrant inspection came from studies performed in the 1970s using penetrants deemed equivalent only to modern-day sensitivity level 4 penetrants. However, many NDE contractors supporting NASA Centers routinely use sensitivity level 3 penetrants. Because of the new NASA-STD-5009 requirement, these contractors will have to either shift to sensitivity level 4 penetrants or perform formal POD demonstration tests to qualify their existing process. We propose a study to compare the POD generated for two penetrant manufacturers, Sherwin and Magnaflux, and for the two most common penetrant inspection methods, water washable and post-emulsifiable hydrophilic. NDE vendors local to GSFC will be employed. A total of six inspectors will inspect a set of crack panels with a broad range of fatigue crack sizes. Each inspector will perform eight inspections of the panel set using the combinations of methods and sensitivity levels described above. At least one inspector will also perform multiple inspections using a fixed technique to investigate repeatability. The hit/miss data sets will be evaluated using both the NASA-generated DOEPOD software and the MIL-STD-1823 software.
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning, but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real-world data and compared its performance to statistical methods and to a number of artificial intelligence methods. We have used data and results from a prediction competition as a benchmark. The results show that the method performs well in single-step prediction, but its performance for multiple-step prediction needs to be improved. The method works well for a wide range of parameters.
SeaWiFS Technical Report Series. Volume 22: Prelaunch Acceptance Report for the SeaWiFS Radiometer
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Barnes, Robert A.; Barnes, William L.; Esaias, Wayne E.; Mcclain, Charles R.
1994-01-01
The final acceptance, or rejection, of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) will be determined by the instrument's on-orbit operation. There is, however, an extensive set of laboratory measurements describing the operating characteristics of the radiometer. Many of the requirements in the Ocean Color Data Mission (OCDM) specifications can be checked only by laboratory measurements. Here, the calibration review panel examines the laboratory characterization and calibration of SeaWiFS in the light of the OCDM performance specification. Overall, the performance of the SeaWiFS instrument meets or exceeds the requirements of the OCDM contract in all but a few unimportant details. The detailed results of this examination are presented here by following the outline of the specifications, as found in the Contract. The results are presented in the form of requirements and compliance pairs. These results give conclusions on many, but not all, of the performance specifications. The acceptance by this panel of the performance of SeaWiFS must only be considered as an intermediate conclusion. The ultimate acceptance (or rejection) of the SeaWiFS data set will rely on the measurements made by the instrument on orbit.
Goal setting dynamics that facilitate or impede a client-centered approach.
Kessler, Dorothy; Walker, Ian; Sauvé-Schenk, Katrine; Egan, Mary
2018-04-19
Client-centred goal setting is central to the process of enabling occupation. Yet there are multiple barriers to incorporating client-centred goal setting in practice. We sought to determine what might facilitate or impede the formation of client-centred goals in a context highly supportive of client-centred goal setting. Methods: We used conversation analysis to examine goal-setting conversations that took place during a pilot trial of Occupational Performance Coaching for stroke survivors. Twelve goal-setting sessions were purposively selected, transcribed, and analyzed according to conventions for conversation analysis. Two main types of interactions were observed: introductory actions and goal selection actions. Introductory actions set the context for goal setting and involved sharing information and seeking clarification related to goal requirements and clients' occupational performance competencies. Goal selection actions were a series of interactions whereby goals were explored, endorsed, or dropped. Client-centred occupational performance goals may be facilitated by placing goal setting in the context of life changes and lifelong development of goals, and by listening to clients' stories. Therapists may improve consistency in adopting client-suggested goals by clarifying the meaning attached to goals and being attuned to power dynamics and to underlying values and beliefs around risk and goal attainability.
Image Processing Using a Parallel Architecture.
1987-12-01
ENG/87D-25 Abstract: This study developed a set of low level image processing tools on a parallel computer that allows concurrent processing of images... environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than
Incremental wind tunnel testing of high lift systems
NASA Astrophysics Data System (ADS)
Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu
2016-06-01
Efficiency of trailing-edge high lift systems is essential for future long-range transport aircraft evolving in the direction of laminar wings, because these systems have to compensate for the low performance of the leading-edge devices. Modern high lift systems are subject to high performance requirements and are constrained to simple actuation combined with a reduced number of aerodynamic elements. Passive or active flow control is thus required for performance enhancement. An experimental investigation of a reduced-kinematics flap combined with passive flow control took place in a low speed wind tunnel. The most important features of the experimental setup are the relatively large size, corresponding to a Reynolds number of about 2 million, the sweep angle of 30 degrees, corresponding to long-range airliners with highly swept wings, and the large number of flap settings and mechanical vortex generators. The model description, flap settings, methodology, and results are presented.
42 CFR 431.610 - Relations with standard-setting and survey agencies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... and suppliers of services to participate in Medicare (see 42 CFR 405.1902). The requirement for... 42 Public Health 4 2012-10-01 2012-10-01 false Relations with standard-setting and survey agencies... specified in § 488.308 of this chapter. (3) Have qualified personnel perform on-site inspections— (i) At...
42 CFR 431.610 - Relations with standard-setting and survey agencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... and suppliers of services to participate in Medicare (see 42 CFR 405.1902). The requirement for... 42 Public Health 4 2013-10-01 2013-10-01 false Relations with standard-setting and survey agencies... specified in § 488.308 of this chapter. (3) Have qualified personnel perform on-site inspections— (i) At...
42 CFR 431.610 - Relations with standard-setting and survey agencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... and suppliers of services to participate in Medicare (see 42 CFR 405.1902). The requirement for... 42 Public Health 4 2014-10-01 2014-10-01 false Relations with standard-setting and survey agencies... specified in § 488.308 of this chapter. (3) Have qualified personnel perform on-site inspections— (i) At...
ERIC Educational Resources Information Center
Dick, Anthony Steven
2012-01-01
Two experiments examined processes underlying cognitive inflexibility in set-shifting tasks typically used to assess the development of executive function in children. Adult participants performed a Flexible Item Selection Task (FIST) that requires shifting from categorizing by one dimension (e.g., color) to categorizing by a second orthogonal…
Complex Instruction Set Quantum Computing
NASA Astrophysics Data System (ADS)
Sanders, G. D.; Kim, K. W.; Holton, W. C.
1998-03-01
In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions, and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
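The observation that a gate sequence collapses to a single same-sized unitary can be illustrated numerically; the sketch below uses standard single-qubit gates as an assumed example, not the coupled-qubit model system of the paper.

import numpy as np

# Elementary single-qubit gates (2x2 unitaries).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])       # pi/8 phase gate

# A "RISC-style" sequence of elementary gates...
sequence = [H, T, H, T, H]

# ...collapses to one "CISC-style" unitary of the same dimension,
# which a single optimally tailored pulse would have to realize.
U = np.eye(2, dtype=complex)
for gate in sequence:
    U = gate @ U   # later gates act after earlier ones

print(np.round(U, 3))
print("Unitary check:", np.allclose(U.conj().T @ U, np.eye(2)))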
HALE UAS Command and Control Communications: Step 1 - Functional Requirements Document. Version 4.0
NASA Technical Reports Server (NTRS)
2006-01-01
The High Altitude Long Endurance (HALE) unmanned aircraft system (UAS) communicates with an off-board pilot-in-command in all flight phases via the C2 data link, making that link a critical component for the UA to fly in the NAS safely and routinely. This is a new requirement in current FAA communications planning and monitoring processes. This document provides a set of comprehensive C2 communications functional requirements and performance guidelines to help facilitate the future FAA certification process for civil UAS operation in the NAS. The objective of the guidelines is to provide the ability to validate the functional requirements and, in the future, to be used to develop performance-level requirements.
7 CFR 245.12 - State agencies and direct certification requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS DETERMINING ELIGIBILITY FOR FREE AND... performance benchmarks set forth in paragraph (b) of this section for directly certifying children who are.... State agencies must meet performance benchmarks for directly certifying for free school meals children...
1980-05-01
performed on the TOM-T Turret Mock-Up. Those marked with a number symbol (#) can be done on the TOM-T Programmable Maintenance Trainer. Tasks with both...an asterisk and a number symbol (*/#) are those which can be performed partially on the turret mock-up and partially on the programmable trainer...time, the tasks serve as prerequisites to troubleshooting. With the exception of setting up, testing, and shutting down the STE/XMI test set, and
Multiyear Interactive Computer Almanac (MICA)
From the U.S. Naval Observatory. MICA can perform computations of ... and delta T. Twilight, rise, set, and transit times for major solar system bodies, selected bright
Code of Federal Regulations, 2013 CFR
2013-10-01
... RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General Provisions § 84.1 Purpose. The... construction, performance, and respiratory protection requirements set forth in this part; and (d) To specify...
Code of Federal Regulations, 2011 CFR
2011-10-01
... RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General Provisions § 84.1 Purpose. The... construction, performance, and respiratory protection requirements set forth in this part; and (d) To specify...
Code of Federal Regulations, 2012 CFR
2012-10-01
... RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General Provisions § 84.1 Purpose. The... construction, performance, and respiratory protection requirements set forth in this part; and (d) To specify...
Code of Federal Regulations, 2014 CFR
2014-10-01
... RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General Provisions § 84.1 Purpose. The... construction, performance, and respiratory protection requirements set forth in this part; and (d) To specify...
Code of Federal Regulations, 2010 CFR
2010-10-01
... RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General Provisions § 84.1 Purpose. The... construction, performance, and respiratory protection requirements set forth in this part; and (d) To specify...
Functional Analysis of Metabolomics Data.
Chagoyen, Mónica; López-Ibáñez, Javier; Pazos, Florencio
2016-01-01
Metabolomics aims at characterizing the repertoire of small chemical compounds in a biological sample. As the field becomes more data-intensive and larger sets of compounds are detected, a functional analysis is required to convert these raw lists of compounds into biological knowledge. The most common way of performing such analysis is "annotation enrichment analysis," also used in transcriptomics and proteomics. This approach extracts the annotations overrepresented in the set of chemical compounds arising in a given experiment. Here, we describe the protocols for performing such analysis as well as for visualizing a set of compounds in different representations of the metabolic networks, in both cases using freely accessible web tools.
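Annotation enrichment analysis of the kind described above commonly reduces to a hypergeometric test per annotation term; the sketch below shows that calculation with invented compound counts, and is not one of the web tools referenced in the protocol.

from scipy.stats import hypergeom

def enrichment_pvalue(annotated_in_set, set_size, annotated_in_background, background_size):
    """P(X >= observed) when drawing `set_size` compounds from the background
    without replacement, where X counts compounds carrying the annotation."""
    return hypergeom.sf(annotated_in_set - 1, background_size,
                        annotated_in_background, set_size)

# Illustrative numbers: 12 of 40 detected compounds map to a pathway that
# covers 80 of the 2000 background compounds.
p = enrichment_pvalue(12, 40, 80, 2000)
print(f"enrichment p-value: {p:.3e}")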
Moving Large Data Sets Over High-Performance Long Distance Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas
2011-04-01
In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.
Buchler, Norbou G; Hoyer, William J; Cerella, John
2008-06-01
Task-switching performance was assessed in young and older adults as a function of the number of task sets to be actively maintained in memory (varied from 1 to 4) over the course of extended training (5 days). Each of the four tasks required the execution of a simple computational algorithm, which was instantaneously cued by the color of the two-digit stimulus. Tasks were presented in pure (task set size 1) and mixed blocks (task set sizes 2, 3, 4), and the task sequence was unpredictable. By considering task switching beyond two tasks, we found evidence for a cognitive control system that is not overwhelmed by task set size load manipulations. Extended training eliminated age effects in task-switching performance, even when the participants had to manage the execution of up to four tasks. The results are discussed in terms of current theories of cognitive control, including task set inertia and production system postulates.
Task scheduling in dataflow computer architectures
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
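As a simplified illustration of scheduling a dataflow graph under a processor-count constraint, the sketch below implements a greedy topological list scheduler; it ignores communication time and is far simpler than the NASA modeling tools described above, with all task names and durations invented.

import heapq
from collections import defaultdict

def list_schedule(tasks, deps, durations, n_procs):
    """Greedy list scheduling of a dataflow graph onto n_procs processors.
    tasks: task names; deps: {task: [predecessors]}; durations: {task: time}.
    Returns {task: (processor, start, finish)}."""
    remaining = {t: len(deps.get(t, [])) for t in tasks}
    successors = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            successors[p].append(t)
    ready = [t for t in tasks if remaining[t] == 0]
    proc_free = [(0.0, p) for p in range(n_procs)]   # (time processor is free, proc id)
    heapq.heapify(proc_free)
    finish_time, schedule = {}, {}
    while ready:
        task = ready.pop(0)
        earliest = max((finish_time[p] for p in deps.get(task, [])), default=0.0)
        free_at, proc = heapq.heappop(proc_free)
        start = max(free_at, earliest)
        end = start + durations[task]
        schedule[task] = (proc, start, end)
        finish_time[task] = end
        heapq.heappush(proc_free, (end, proc))
        for s in successors[task]:
            remaining[s] -= 1
            if remaining[s] == 0:
                ready.append(s)
    return schedule

# Toy dataflow graph: A and B feed C; C feeds D.
sched = list_schedule(["A", "B", "C", "D"],
                      {"C": ["A", "B"], "D": ["C"]},
                      {"A": 2, "B": 3, "C": 1, "D": 2}, n_procs=2)
print(sched)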
46 CFR 160.024-3 - Materials, workmanship, construction, and performance requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
....29 mm (0.090 in.) and 2.67 mm (0.015 in.) in thickness. The centered primer shall be set below the surface of the base between 0.25 mm (0.010 in.) and 0.50 mm (0.020 in.). (d) Performance. Signals shall...
46 CFR 160.024-3 - Materials, workmanship, construction, and performance requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
....29 mm (0.090 in.) and 2.67 mm (0.015 in.) in thickness. The centered primer shall be set below the surface of the base between 0.25 mm (0.010 in.) and 0.50 mm (0.020 in.). (d) Performance. Signals shall...
Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Castellano, Timothy
1991-01-01
The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy Logic Control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets only allow full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
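The crisp-versus-fuzzy membership distinction drawn above can be made concrete with a simple membership function; the triangular shape and the range-to-target example below are illustrative assumptions, not the controllers discussed in the paper.

def crisp_member(x, low, high):
    """Crisp set: full membership (1) inside [low, high], none (0) outside."""
    return 1.0 if low <= x <= high else 0.0

def triangular_member(x, left, peak, right):
    """Fuzzy set: membership rises linearly to 1 at `peak`, then falls to 0."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Illustrative "close range" linguistic variable for a rendezvous distance (m).
for d in (5.0, 12.0, 20.0, 35.0):
    print(d, crisp_member(d, 0, 25), round(triangular_member(d, 0, 10, 30), 2))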
MIL-STD-1553 dynamic bus controller/remote terminal hybrid set
NASA Astrophysics Data System (ADS)
Friedman, S. N.
This paper describes the performance, physical, and electrical requirements of a Dual Redundant Bus Interface Unit (BIU) acting as a Bus Controller Interface Unit (BCIU) or Remote Terminal Unit (RTU) between a Motorola 68000 VME bus and a MIL-STD-1553B Multiplex Data Bus. A discussion of how the BIU Hybrid set is programmed and operates as a BCIU or RTU will be included. This paper will review Dynamic Bus Control and other Mode Code capabilities. The BIU Hybrid Set interfaces to a 68000 microprocessor with a VME bus using programmed I/O transfers. This special interface will be discussed, along with the internal Dual Access Memory (4K x 16) used to support the data exchanges between the CPU and the BIU Hybrid Set. The hybrid set's physical size and power requirements will be covered, including the Double Eurocard on which the BIU function is presently offered.
Requirements for the conceptual design of advanced underground coal extraction systems
NASA Technical Reports Server (NTRS)
Gangal, M. D.; Lavin, M. L.
1981-01-01
Conceptual design requirements are presented for underground coal mining systems having substantially improved performance in the areas of production cost and miner safety. Mandatory performance levels are also set for miner health, environmental impact, and coal recovery. In addition to mandatory design goals and constraints, a number of desirable system characteristics are identified which must be assessed in terms of their impact on production cost and their compatibility with other system elements. Although developed for the flat lying, moderately thick seams of Central Appalachia, these requirements are designed to be easily adaptable to other coals.
ERIC Educational Resources Information Center
Estrada, Brittany; Warren, Susan
2014-01-01
Efforts to support marginalized students require not only identifying systemic inequities, but providing a classroom infrastructure that supports the academic achievement of all students. This action research study examined the effects of implementing goal-setting strategies and emphasizing creativity in a culturally responsive classroom (CRC) on…
Neural activity in the hippocampus predicts individual visual short-term memory capacity.
von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter
2013-07-01
Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
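Cowan's K, the capacity estimate mentioned above, is conventionally computed from hit and false-alarm rates in a change-detection task; a sketch using the standard single-probe form K = N * (H - FA) is shown below with invented rates.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for a single-probe change-detection task:
    K = N * (H - FA), where N is the set size."""
    return set_size * (hit_rate - false_alarm_rate)

# Invented example rates at set size 6.
print(cowans_k(6, hit_rate=0.80, false_alarm_rate=0.15))  # -> 3.9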
NASA Astrophysics Data System (ADS)
Ruane, Garreth; Mawet, Dimitri; Mennesson, Bertrand; Jewell, Jeffrey; Shaklan, Stuart
2018-01-01
The Habitable Exoplanet Imaging Mission concept requires an optical coronagraph that provides deep starlight suppression over a broad spectral bandwidth, high throughput for point sources at small angular separation, and insensitivity to temporally varying, low-order aberrations. Vortex coronagraphs are a promising solution that performs optimally on off-axis, monolithic telescopes and may also be designed for segmented telescopes with minor losses in performance. We describe the key advantages of vortex coronagraphs on off-axis telescopes: (1) unwanted diffraction due to aberrations is passively rejected in several low-order Zernike modes, relaxing the wavefront stability requirements for imaging Earth-like planets from <10 to >100 pm rms; (2) stars with angular diameters >0.1 λ/D may be sufficiently suppressed; (3) the absolute planet throughput is >10%, even for unfavorable telescope architectures; and (4) broadband solutions (Δλ/λ > 0.1) are readily available for both monolithic and segmented apertures. The latter make use of grayscale apodizers in an upstream pupil plane to provide suppression of diffracted light from amplitude discontinuities in the telescope pupil without inducing additional stroke on the deformable mirrors. We set wavefront stability requirements on the telescope based on a stellar irradiance threshold at an angular separation of 3 ± 0.5 λ/D from the star, and discuss how some requirements may be relaxed by trading robustness to aberrations for planet throughput.
29 CFR 1908.8 - Consultant specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... assignment to work under an Agreement, and annually thereafter, that they meet the requirements set out in § 1908.8(b)(2), and that they have the ability to perform satisfactorily pursuant to the Cooperative... satisfy the RA that they have the ability to perform consultant duties independently may, with RA approval...
29 CFR 1908.8 - Consultant specifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... assignment to work under an Agreement, and annually thereafter, that they meet the requirements set out in § 1908.8(b)(2), and that they have the ability to perform satisfactorily pursuant to the Cooperative... satisfy the RA that they have the ability to perform consultant duties independently may, with RA approval...
29 CFR 1908.8 - Consultant specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... assignment to work under an Agreement, and annually thereafter, that they meet the requirements set out in § 1908.8(b)(2), and that they have the ability to perform satisfactorily pursuant to the Cooperative... satisfy the RA that they have the ability to perform consultant duties independently may, with RA approval...
29 CFR 1908.8 - Consultant specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... assignment to work under an Agreement, and annually thereafter, that they meet the requirements set out in § 1908.8(b)(2), and that they have the ability to perform satisfactorily pursuant to the Cooperative... satisfy the RA that they have the ability to perform consultant duties independently may, with RA approval...
Dynamic Cognitive Tracing: Towards Unified Discovery of Student and Cognitive Models
ERIC Educational Resources Information Center
Gonzalez-Brenes, Jose P.; Mostow, Jack
2012-01-01
This work describes a unified approach to two problems previously addressed separately in Intelligent Tutoring Systems: (i) Cognitive Modeling, which factorizes problem solving steps into the latent set of skills required to perform them; and (ii) Student Modeling, which infers students' learning by observing student performance. The practical…
Phase structure rewrite systems in information retrieval
NASA Technical Reports Server (NTRS)
Klingbiel, P. H.
1985-01-01
Operational level automatic indexing requires an efficient means of normalizing natural language phrases. Subject switching requires an efficient means of translating one set of authorized terms to another. A phrase structure rewrite system called a Lexical Dictionary is explained that performs these functions. Background, operational use, other applications and ongoing research are explained.
42 CFR 438.358 - Activities related to external quality review.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Mandatory activities. For each MCO and PIHP, the EQR must use information from the following activities: (1) Validation of performance improvement projects required by the State to comply with requirements set forth in § 438.240(b)(1) and that were underway during the preceding 12 months. (2) Validation of MCO or PIHP...
42 CFR 438.358 - Activities related to external quality review.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Validation of performance improvement projects required by the State to comply with requirements set forth in § 438.240(b)(1) and that were underway during the preceding 12 months. (2) Validation of MCO or PIHP... derived during the preceding 12 months from the following optional activities: (1) Validation of encounter...
42 CFR 438.358 - Activities related to external quality review.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Validation of performance improvement projects required by the State to comply with requirements set forth in § 438.240(b)(1) and that were underway during the preceding 12 months. (2) Validation of MCO or PIHP... derived during the preceding 12 months from the following optional activities: (1) Validation of encounter...
42 CFR 438.358 - Activities related to external quality review.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Validation of performance improvement projects required by the State to comply with requirements set forth in § 438.240(b)(1) and that were underway during the preceding 12 months. (2) Validation of MCO or PIHP... derived during the preceding 12 months from the following optional activities: (1) Validation of encounter...
42 CFR 438.358 - Activities related to external quality review.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Validation of performance improvement projects required by the State to comply with requirements set forth in § 438.240(b)(1) and that were underway during the preceding 12 months. (2) Validation of MCO or PIHP... derived during the preceding 12 months from the following optional activities: (1) Validation of encounter...
5 CFR 9901.406 - Setting and communicating performance expectations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... other work requirements, such as standard operating procedures, operating instructions, manuals... standards of conduct and behavior, such as civility and respect for others. (d) In addition to the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
KLARER,PAUL R.; BINDER,ALAN B.; LENARD,ROGER X.
A preliminary set of requirements for a robotic rover mission to the lunar polar region is described and assessed. Tasks to be performed by the rover include core drill sample acquisition, mineral and volatile soil content assay, and significant wide-area traverses. Assessment of the postulated requirements is performed using first-order estimates of energy, power, and communications throughput issues. Two potential rover system configurations are considered: a smaller rover envisioned as part of a group of multiple rovers, and a larger single rover envisioned along more traditional planetary surface rover concept lines.
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs, and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small scale problems. These examples give encouraging results. Directions for further research are indicated.
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Poteet, Carl C.; Chen, Roger R.; Wurster, Kathryn E.
2002-01-01
A technology development program was conducted to evolve an earlier metallic thermal protection system (TPS) panel design, with the goals of improving operations features, increasing adaptability (ease of attaching to a variety of tank shapes and structural concepts), and reducing weight. The resulting Adaptable Robust Metallic Operable Reusable (ARMOR) TPS system incorporates a high degree of design flexibility (allowing weight and operability to be traded and balanced) and can also be easily integrated with a large variety of tank shapes, airframe structural arrangements, and airframe structure/material concepts. An initial attempt has been made to establish a set of performance-based TPS design requirements. A set of general (FAR-type) requirements has been proposed, focusing on defining the categories that must be included for a comprehensive design. Load cases required for TPS design must reflect the full flight envelope, including a comprehensive set of limit loads. However, including additional loads, such as ascent abort trajectories as ultimate load cases and on-orbit debris/micrometeoroid hypervelocity impact as one of the discrete-source-damage load cases, will have a significant impact on system design and the resulting performance, reliability, and operability. Although these load cases have not been established, they are of paramount importance for reusable vehicles, and until properly included, all sizing results and assessments of reliability and operability must be considered optimistic at a minimum.
Prediction of pump cavitation performance
NASA Technical Reports Server (NTRS)
Moore, R. D.
1974-01-01
A method for predicting pump cavitation performance with various liquids, liquid temperatures, and rotative speeds is presented. Use of the method requires that two sets of test data be available for the pump of interest. Good agreement between predicted and experimental results of cavitation performance was obtained for several pumps operated in liquids which exhibit a wide range of properties. Two cavitation parameters which qualitatively evaluate pump cavitation performance are also presented.
Classification software technique assessment
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.; Atkinson, R.; Dasarathy, B. V.; Lybanon, M.; Ramapryian, H. K.
1976-01-01
A catalog of software options is presented for the use of local user communities to obtain software for analyzing remotely sensed multispectral imagery. The resources required to utilize a particular software program are described. Descriptions of how a particular program analyzes data and the performance of that program for an application and data set provided by the user are shown. An effort is made to establish a statistical performance base for various software programs with regard to different data sets and analysis applications, to determine the status of the state-of-the-art.
40 CFR 92.119 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... performed: (i) According to the procedures outlined in Society of Automotive Engineers (SAE) paper No... operating adjustments. (B) Set the oven temperature 5 °C hotter than the required sample-line temperature...
40 CFR 92.119 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... performed: (i) According to the procedures outlined in Society of Automotive Engineers (SAE) paper No... operating adjustments. (B) Set the oven temperature 5 °C hotter than the required sample-line temperature...
40 CFR 92.119 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... performed: (i) According to the procedures outlined in Society of Automotive Engineers (SAE) paper No... operating adjustments. (B) Set the oven temperature 5 °C hotter than the required sample-line temperature...
40 CFR 92.119 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... performed: (i) According to the procedures outlined in Society of Automotive Engineers (SAE) paper No... operating adjustments. (B) Set the oven temperature 5 °C hotter than the required sample-line temperature...
A comprehensive evaluation of strip performance in multiple blood glucose monitoring systems.
Katz, Laurence B; Macleod, Kirsty; Grady, Mike; Cameron, Hilary; Pfützner, Andreas; Setford, Steven
2015-05-01
Accurate self-monitoring of blood glucose is a key component of effective self-management of glycemic control. Accurate self-monitoring of blood glucose results are required for optimal insulin dosing and detection of hypoglycemia. However, blood glucose monitoring systems may be susceptible to error from test strip, user, environmental and pharmacological factors. This report evaluated 5 blood glucose monitoring systems that each use Verio glucose test strips for precision, effect of hematocrit and interferences in laboratory testing, and lay user and system accuracy in clinical testing according to the guidelines in ISO 15197:2013(E). Performance of OneTouch® VerioVue™ met or exceeded standards described in ISO 15197:2013 for precision, hematocrit performance and interference testing in a laboratory setting. Performance of OneTouch® Verio IQ™, OneTouch® Verio Pro™, OneTouch® Verio™, OneTouch® VerioVue™ and OmniPod each met or exceeded accuracy standards for user performance and system accuracy in a clinical setting set forth in ISO 15197:2013(E).
Low cost high efficiency GaAs monolithic RF module for SARSAT distress beacons
NASA Technical Reports Server (NTRS)
Petersen, W. C.; Siu, D. P.; Cook, H. F.
1991-01-01
Low cost high performance (5 Watts output) 406 MHz beacons are urgently needed to realize the maximum utilization of the Search and Rescue Satellite-Aided Tracking (SARSAT) system spearheaded in the U.S. by NASA. Although current technology can produce beacons meeting the output power requirement, power consumption is high due to the low efficiency of available transmitters. Field performance is currently unsatisfactory due to the lack of safe and reliable high density batteries capable of operation at -40 C. Low cost production is also a crucial but elusive requirement for the ultimate wide scale utilization of this system. Microwave Monolithics Incorporated (MMInc.) has proposed to make both the technical and cost goals for the SARSAT beacon attainable by developing a monolithic GaAs chip set for the RF module. This chip set consists of a high efficiency power amplifier and a bi-phase modulator. In addition to implementing the RF module in Monolithic Microwave Integrated Circuit (MMIC) form to minimize ultimate production costs, the power amplifier has a power-added efficiency nearly twice that attained with current commercial technology. A distress beacon built using this RF module chip set will be significantly smaller in size and lighter in weight due to a smaller battery requirement, since the 406 MHz signal source and the digital controller have far lower power consumption compared to the 5 watt power amplifier. All the program tasks have been successfully completed. The GaAs MMIC RF module chip set has been designed to be compatible with the present 406 MHz signal source and digital controller. A complete high performance low cost SARSAT beacon can be realized with only additional minor iteration and systems integration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potts, C.; Faber, M.; Gunderson, G.
The as-built lattice of the Rapid Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. 5 refs.
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
Use of software tools in the development of real time software systems
NASA Technical Reports Server (NTRS)
Garvey, R. C.
1981-01-01
The transformation of a preexisting software system into a larger and more versatile system with different mission requirements is discussed. The history of this transformation is used to illustrate the use of structured real time programming techniques and tools to produce maintainable and somewhat transportable systems. The predecessor system is a single ground diagnostic system; its purpose is to exercise a computer controlled hardware set prior to its deployment in its functional environment, as well as test the equipment set by supplying certain well known stimuli. The successor system (FTF) is required to perform certain testing and control functions while this hardware set is in its functional environment. Both systems must deal with heavy user input/output loads, and a new I/O requirement is included in the design of the FTF system. Human factors are enhanced by adding an improved console interface and a special function keyboard handler. The additional features require the inclusion of much new software to the original set from which FTF was developed. As a result, it is necessary to split the system into a dual programming configuration with high rates of interground communications. A generalized information routing mechanism is used to support this configuration.
WATCHMAN: A Data Warehouse Intelligent Cache Manager
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Shim, Junho; Vingralek, Radek
1996-01-01
Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
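The profit metric described for WATCHMAN, weighing a retrieved set's average reference rate and query execution cost against its size, can be sketched as a replacement policy that evicts the lowest-profit sets first; the formula used below (rate * cost / size) and the class interface are an illustrative reading of the description, not the paper's exact definitions.

import time

class ProfitCache:
    """Sketch of a profit-based cache for query result sets: evict the
    retrieved sets with the lowest (reference_rate * exec_cost / size)."""

    def __init__(self, capacity):
        self.capacity = capacity   # total size budget
        self.entries = {}          # query -> (result_set, size, cost, refs, first_seen)

    def _profit(self, q):
        _, size, cost, refs, first_seen = self.entries[q]
        rate = refs / max(time.time() - first_seen, 1e-6)   # average reference rate
        return rate * cost / size

    def get(self, q):
        if q in self.entries:
            rs, size, cost, refs, t0 = self.entries[q]
            self.entries[q] = (rs, size, cost, refs + 1, t0)
            return rs
        return None

    def admit(self, q, result_set, size, exec_cost):
        self.entries[q] = (result_set, size, exec_cost, 1, time.time())
        # Evict lowest-profit entries (other than the new one) until the budget is met.
        while sum(e[1] for e in self.entries.values()) > self.capacity and len(self.entries) > 1:
            victim = min((k for k in self.entries if k != q), key=self._profit)
            del self.entries[victim]

cache = ProfitCache(capacity=100)
cache.admit("Q1", result_set=["row1", "row2"], size=40, exec_cost=5.0)
print(cache.get("Q1"))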
Nieder, Alan M; Meinbach, David S; Kim, Sandy S; Soloway, Mark S
2005-12-01
We established a database on the incidence of intraoperative and postoperative complications associated with transurethral bladder tumor resection (TURBT) in an academic teaching setting, and we prospectively recorded all TURBTs performed by residents and fellows in our urology department. We prospectively evaluated all TURBTs performed between November 2003 and October 2004. All cases were performed at least in part by residents and fellows under direct attending supervision at a single academic medical center with 3 different teaching hospitals. Intraoperative complications were recorded by the resident and attending surgeon at the completion of the operative procedure. At patient discharge from the hospital the data sheet was reviewed, and length of stay, postoperative transfusions, and any other complications were recorded. A total of 173 consecutive TURBTs were performed by residents and fellows at 3 different teaching hospitals. There were 10 (5.8%) complications, including 4 (2.3%) cases of hematuria that required blood transfusion and 6 (3.5%) cases of bladder perforation. Of these 6 perforations, 4 were small extraperitoneal perforations requiring only prolonged catheter drainage. These perforations were caused by residents in their first or third year of urology training. Two perforations were intraperitoneal, caused by a senior resident or a fellow, 1 of which required abdominal exploration to control bleeding. TURBT is a reasonably safe procedure when performed by urologists in training under direct attending supervision. The complication rate was 5.8%; however, only 1 case required surgical intervention. Contrary to expected findings, more senior residents were involved in the complications, likely secondary to their disproportionate roles in more difficult resections.
Counselman, Francis L; Kowalenko, Terry; Marco, Catherine A; Joldersma, Kevin B; Korte, Robert C; Reisdorff, Earl J
2016-10-01
In 2003, the Accreditation Council for Graduate Medical Education (ACGME) instituted requirements that limited the number of hours residents could spend on duty, and in 2011, it revised these requirements. This study explored whether the implementation of the 2003 and 2011 duty hour limits was associated with a change in emergency medicine residents' performance on the American Board of Emergency Medicine (ABEM) Qualifying Examination (QE). Beginning with the 1999 QE and ending with the 2014 QE, candidates for whom all training occurred without duty hour requirements (Group A), candidates under the first set of duty hour requirements (Group C), and candidates under the second set of duty hour requirements (Group E) were compared. Comparisons included mean scores and pass rates. In Group A, 5690 candidates completed the examination, with a mean score of 82.8 and a 90.2% pass rate. In Group C, 8333 candidates had a mean score of 82.4 and a 90.5% pass rate. In Group E, there were 1269 candidates, with a mean score of 82.5 and an 89.4% pass rate. There was a small but statistically significant decrease in the mean scores (0.04, P < .001) after implementation of the first duty hour requirements, but this difference did not occur after implementation of the 2011 standards. There was no difference among pass rates for any of the study groups (χ² = 1.68, P = .43). We did not identify an association between the 2003 and 2011 ACGME duty hour requirements and performance of test takers on the ABEM QE.
The E-Balanced Scorecard (e-BSC) for Measuring Academic Staff Performance Excellence
ERIC Educational Resources Information Center
Yu, May Leen; Hamid, Suraya; Ijab, Mohamad Taha; Soo, Hsaio Pei
2009-01-01
This research paper is a pilot study that investigated the suitability of adopting an automated balanced scorecard for managing and measuring the performance excellence of academic staffs in the higher education setting. A comprehensive study of related literature with requirements elicited from the target population in a selected premier…
40 CFR Table 4 to Subpart Kkkkk of... - Requirements for Performance Tests
Code of Federal Regulations, 2010 CFR
2010-07-01
... block average pressure drop values for the three test runs, and determine and record the 3-hour block... limit for the limestone feeder setting Data from the limestone feeder during the performance test You must ensure that you maintain an adequate amount of limestone in the limestone hopper, storage bin...
Acts of Fabrication in the Performance Management of Teachers' Work
ERIC Educational Resources Information Center
Naidu, Sham
2012-01-01
"Performativity," it is argued, is a new mode of state regulation which makes it possible to govern in an "advanced liberal" way. It requires individual [teachers] to organize themselves as a response to targets, indicators and evaluations. To set aside personal beliefs and commitments and live an existence of calculation. The…
A Conceptual Framework for Assessing Performance in Games and Simulations. CRESST Report 771
ERIC Educational Resources Information Center
Koenig, Alan D.; Lee, John J.; Iseli, Markus; Wainess, Richard
2010-01-01
The military's need for high-fidelity games and simulations is substantial, as these environments can be valuable for demonstration of essential knowledge, skills, and abilities required in complex tasks. However, assessing performance in these settings can be difficult--particularly in non-linear simulations where more than one pathway to success…
Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.
Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X
2015-01-01
Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
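The comparison protocol sketched below is a loose illustration of the idea of evaluating classifiers under a fixed learning-time budget; it assumes scikit-learn is available, and the two classifiers, batch size, and time budget are arbitrary placeholders rather than the techniques or settings used in the paper.

```python
# Illustrative sketch: each model keeps consuming training batches until the
# time limit expires, then is scored on an unseen test set.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_within_budget(model, budget_s=2.0, batch=500):
    """Feed training batches until the budget runs out; return samples used
    and the accuracy on the held-out test set."""
    classes = sorted(set(y_train))
    used, start = 0, time.perf_counter()
    while used < len(X_train) and time.perf_counter() - start < budget_s:
        sl = slice(used, used + batch)
        model.partial_fit(X_train[sl], y_train[sl], classes=classes)
        used += batch
    return used, model.score(X_test, y_test)

for model in (SGDClassifier(random_state=0), GaussianNB()):
    n, acc = train_within_budget(model)
    print(f"{type(model).__name__}: used {n} samples, test accuracy {acc:.3f}")
```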
R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove
2016-01-01
The objective of large landscape conser vation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...
Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R
2006-01-01
In this paper we describe the approach to build a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and directly used to dynamically render frontend forms and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing performance drawbacks of pure EAV implementations to a minimum. The system currently is in use in an electronic patient record with focus on flexibility and a quality management application (www.healthgate.at) with high performance requirements.
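A toy illustration of the "mixed" EAV idea follows: rarely changing attributes sit in a conventional table, while flexible clinical items go into an entity-attribute-value table. The table and column names are invented for the sketch and are not the authors' schema.

```python
# Minimal "mixed" EAV sketch: conventional table for stable demographics,
# EAV table for clinical items that may be added without schema changes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        birth_year INTEGER,
        sex        TEXT
    );
    CREATE TABLE observation (
        patient_id INTEGER REFERENCES patient(patient_id),
        attribute  TEXT,   -- item name drawn from a data dictionary
        value      TEXT
    );
""")
con.execute("INSERT INTO patient VALUES (1, 1956, 'F')")
con.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                [(1, "hba1c_percent", "7.2"),
                 (1, "insulin_regimen", "basal-bolus")])

# Join the EAV rows back to the stable record, as a dynamic frontend form might.
rows = con.execute("""
    SELECT p.patient_id, p.sex, o.attribute, o.value
    FROM patient p JOIN observation o USING (patient_id)
""").fetchall()
print(rows)
```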
Hong, Eva; Barraud, Olivier; Bidet, Philippe; Bingen, Edouard; Blondiaux, Nicolas; Bonacorsi, Stéphane; Burucoa, Christophe; Carrer, Amélie; Fortineau, Nicolas; Couetdic, Gérard; Courcol, René; Garnier, Fabien; Hery-Arnaud, Geneviève; Lanotte, Philippe; Le Bars, Hervé; Legrand-Quillien, Marie-Christine; Lemée, Ludovic; Mereghetti, Laurent; Millardet, Chantal; Minet, Jacques; Plouzeau-Jayle, Chloé; Pons, Jean-Louis; Schneider, Jacqueline; Taha, Muhamed-Kheir
2012-01-01
Meningococcal meningitis requires rapid diagnosis and immediate management, which is enhanced by the use of PCR for the ascertainment of these infections. However, its use is still restricted to reference laboratories. We conducted an inter-laboratory study to assess the implementation and the performance of PCR in ten French hospital settings in 2010. Our data are in favour of this implementation. Although good performance was obtained in identifying Neisseria meningitidis positive samples, the main issue was reported in identifying other species (Streptococcus pneumoniae and Haemophilus influenzae) which are also involved in bacterial meningitis cases. Several recommendations are required and, mainly, PCR should target the major etiological agents (N. meningitidis, S. pneumoniae, and H. influenzae) of acute bacterial meningitis. Moreover, PCR should predict the most frequent serogroups of Neisseria meningitidis according to local epidemiology.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.
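The mode-pruning step can be pictured with a small sketch: keep only the most probable Intra prediction modes and run the expensive rate-distortion search over that reduced list. The probabilities below are placeholders; in the paper they are estimated with belief propagation, which is not reproduced here.

```python
# Sketch of probability-driven pruning of Intra prediction modes.
def reduced_mode_set(mode_probs, keep_mass=0.9):
    """Return the smallest set of modes whose probabilities sum to keep_mass."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        total += p
        if total >= keep_mass:
            break
    return kept

def best_mode(modes, rd_cost):
    # rd_cost(mode) is the expensive rate-distortion evaluation; it is only
    # called for the reduced candidate list.
    return min(modes, key=rd_cost)

# Placeholder probabilities for a handful of modes (not estimated values).
probs = {"DC": 0.35, "vertical": 0.25, "horizontal": 0.2,
         "diag_down_left": 0.1, "diag_down_right": 0.1}
candidates = reduced_mode_set(probs, keep_mass=0.8)
print(candidates)   # e.g. ['DC', 'vertical', 'horizontal']
```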
Forrest, Charlotte L D; Monsell, Stephen; McLaren, Ian P L
2014-07-01
Task-cuing experiments are usually intended to explore control of task set. But when small stimulus sets are used, they plausibly afford learning of the response associated with a combination of cue and stimulus, without reference to tasks. In 3 experiments we presented the typical trials of a task-cuing experiment: a cue (colored shape) followed, after a short or long interval, by a digit to which 1 of 2 responses was required. In a tasks condition, participants were (as usual) directed to interpret the cue as an instruction to perform either an odd/even or a high/low classification task. In a cue + stimulus → response (CSR) condition, to induce learning of mappings between cue-stimulus compound and response, participants were, in Experiment 1, given standard task instructions and additionally encouraged to learn the CSR mappings; in Experiment 2, informed of all the CSR mappings and asked to learn them, without standard task instructions; in Experiment 3, required to learn the mappings by trial and error. The effects of a task switch, response congruence, preparation, and transfer to a new set of stimuli differed substantially between the conditions in ways indicative of classification according to task rules in the tasks condition, and retrieval of responses specific to stimulus-cue combinations in the CSR conditions. Qualitative features of the latter could be captured by an associative learning network. Hence associatively based compound retrieval can serve as the basis for performance with a small stimulus set. But when organization by tasks is apparent, control via task set selection is the natural and efficient strategy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Effect of rest interval on strength recovery in young and old women.
Theou, Olga; Gareth, Jones R; Brown, Lee E
2008-11-01
This study compares the effects of rest intervals on isokinetic muscle torque recovery between sets of a knee extensor and flexor exercise protocol in physically active younger and older women. Twenty young (22.4 ± 1.7 years) and 16 older (70.7 ± 4.3 years) women performed three sets of eight maximum repetitions of knee extension/flexion at 60°·s⁻¹. The rest interval between sets was 15, 30, and 60 seconds and was randomly assigned across three testing days. No significant interaction of rest by set by age group was observed. There was a significant decline in mean knee extensor torque when 15- and 30-second rest intervals were used between sets, but not when a 60-second rest interval was applied for both the young and the old women. No significant decline for mean knee flexor torque was observed in the older women when a 30-second rest interval was used, whereas a longer 60-second rest interval was required in younger women. Active younger and older women require similar rest intervals between sets of a knee extensor exercise (60 seconds) for complete recovery. However, older women recovered faster (30 seconds) than younger women (60 seconds) between sets of a knee flexor exercise. The exercise-to-rest ratio for knee extensors was similar for young and old women (1:2). Old women required only a 1:1 exercise-to-rest ratio for knee flexor recovery, whereas younger women required a longer 1:2 exercise-to-rest ratio. The results of the present study are specific to isokinetic testing and training and are more applicable in rehabilitation and research settings. Practitioners should consider age and gender when prescribing rest intervals between sets.
Paul V. Ellefson; M.A. Kilgore; Kenneth E. Skog; Christopher D. Risbrudt
2007-01-01
The ability of forest products research and development organizations to contribute to a nation's well-being requires that they be well organized, effectively managed, and held to high standards of performance. In order to obtain a better understanding of how such organizations are structured and administered, and how they judge organizational performance, a review of...
System-Level Radiation Hardening
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2014-01-01
Although system-level radiation hardening can enable the use of high-performance components and enhance the capabilities of a spacecraft, hardening techniques can be costly and can compromise the very performance designers sought from the high-performance components. Moreover, such techniques often result in a complicated design, especially if several complex commercial microcircuits are used, each posing its own hardening challenges. The latter risk is particularly acute for Commercial-Off-The-Shelf components since high-performance parts (e.g. double-data-rate synchronous dynamic random access memories - DDR SDRAMs) may require other high-performance commercial parts (e.g. processors) to support their operation. For these reasons, it is essential that system-level radiation hardening be a coordinated effort, from setting requirements through testing up to and including validation.
Platts-Mills, James A; Amour, Caroline; Gratz, Jean; Nshama, Rosemary; Walongo, Thomas; Mujaga, Buliga; Maro, Athanasia; McMurry, Timothy L; Liu, Jie; Mduma, Estomih; Houpt, Eric R
2017-05-29
No data are available on the etiology of diarrhea requiring hospitalization after rotavirus vaccine introduction in Africa. The monovalent rotavirus vaccine was introduced in Tanzania on January 1, 2013. We performed a vaccine impact and effectiveness study as well as a qPCR-based etiology study at a rural Tanzanian hospital. We obtained data on admissions among children under 5 years to Haydom Lutheran Hospital between January 1, 2010 and December 31, 2015, and estimated the impact of vaccine introduction on all-cause diarrhea admissions. We then performed a vaccine effectiveness study using the test-negative design. Finally, we tested diarrheal specimens during 2015 by qPCR for a broad range of enteropathogens and calculated pathogen-specific attributable fractions. Vaccine introduction was associated with a 44.9% (95% CI 17.6 - 97.4) reduction in diarrhea admissions in 2015, as well as delay of the rotavirus season. The effectiveness of two doses of vaccine was 74.8% (-8.2 - 94.1) using an enzyme immunoassay-based case definition and 85.1% (26.5 - 97.0) using a qPCR-based case definition. Among 146 children enrolled in 2015, rotavirus remained the leading etiology of diarrhea requiring hospitalization (AF 25.8%, 95% CI: 24.4 - 26.7), followed by heat-stable enterotoxin-producing E. coli (18.4%, 12.9 - 21.9), Shigella/enteroinvasive E. coli (14.5%, 10.2 - 22.8), and Cryptosporidium (7.9%, 6.2 - 9.3). Despite the clear impact of vaccine introduction in this setting, rotavirus remained the leading etiology of diarrhea requiring hospitalization. Further efforts to maximize vaccine coverage and improve vaccine performance in these settings are warranted. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Effect of missing data on multitask prediction methods.
de la Vega de León, Antonio; Chen, Beining; Gillet, Valerie J
2018-05-22
There has been a growing interest in multitask prediction in chemoinformatics, helped by the increasing use of deep neural networks in this field. This technique is applied to multitarget data sets, where compounds have been tested against different targets, with the aim of developing models to predict a profile of biological activities for a given compound. However, multitarget data sets tend to be sparse; i.e., not all compound-target combinations have experimental values. There has been little research on the effect of missing data on the performance of multitask methods. We have used two complete data sets to simulate sparseness by removing data from the training set. Different models to remove the data were compared. These sparse sets were used to train two different multitask methods, deep neural networks and Macau, which is a Bayesian probabilistic matrix factorization technique. Results from both methods were remarkably similar and showed that the performance decrease because of missing data is at first small before accelerating after large amounts of data are removed. This work provides a first approximation to assess how much data is required to produce good performance in multitask prediction exercises.
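The sparsity simulation can be pictured with a short sketch that masks a fraction of a complete compound-by-target matrix. The uniformly random removal model shown is only one possible removal model; the matrix size and missing fractions are illustrative, not the study's data.

```python
# Sketch of simulating sparseness in a complete compound-by-target activity
# matrix by masking a fraction of the known values.
import numpy as np

rng = np.random.default_rng(0)
activity = rng.normal(size=(1000, 12))   # complete matrix: 1000 compounds, 12 targets

def make_sparse(matrix, missing_fraction):
    """Return a copy with roughly `missing_fraction` of entries set to NaN."""
    sparse = matrix.copy()
    mask = rng.random(matrix.shape) < missing_fraction
    sparse[mask] = np.nan                # removed (unmeasured) compound-target pairs
    return sparse

for frac in (0.2, 0.5, 0.8):
    train = make_sparse(activity, frac)
    observed = np.count_nonzero(~np.isnan(train))
    print(f"{frac:.0%} removed -> {observed} observed compound-target values")
```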
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
NASA Technical Reports Server (NTRS)
Mclees, Robert E.; Cohen, Gerald C.
1991-01-01
The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system generated using structured techniques. The requirements definition starts with a mission analysis to identify the high-level control system requirements and functions necessary to fly the mission. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies and, in particular, design-for-validation philosophies.
System administrator's guide to CDPS. Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Didier, B.T.; Portwood, M.H.
The System Administrator's Guide to CDPS is intended for those responsible for setting up and maintaining the hardware and software of a Common Mapping Standard (CMS) Data Production System (CDPS) installation. This guide assists the system administrator in performing typical administrative functions. It is not intended to replace the Ultrix Documentation Set that should be available for a CDPS installation. The Ultrix Documentation Set will be required to provide details on referenced Ultrix commands as well as procedures for performing Ultrix maintenance functions. There are six major sections in this guide. Section 1 introduces the system administrator to CDPS and describes the assumptions that are made by this guide. Section 2 describes the CDPS platform configuration. Section 3 describes the platform preparation that is required to install the CDPS software. Section 4 describes the CPS software and its installation procedures. Section 5 describes the CDS software and its installation procedures. Section 6 describes various operation and maintenance procedures. Four appendices are also provided. Appendix A contains a list of used acronyms. Appendix B provides a terse description of common Ultrix commands that are used in administrative functions. Appendix C provides sample CPS and CDS configuration files. Appendix D provides a required list and a recommended list of Ultrix software subsets for installation on a CDPS platform.
Muscle preservation in long duration space missions: The eccentric factor
NASA Technical Reports Server (NTRS)
Buchanan, Paul; Dudley, Gary A.; Tesch, Per A.; Hather, Bruce M.
1990-01-01
In our quest to understand, and eventually prevent, the loss of muscle strength and mass that occurs during prolonged periods in microgravity, we have organized our research approach by systems and useful terrestrial analogs. Our hypothesis was that: The eccentric movement, or lengthening component, of dynamic, resistive exercise, is required for the production of the greatest gains in strength and muscle hypertrophy in the most metabolically efficient, and time effective manner. The exercises selected were leg presses, leg (knee) extensions, and hamstring curls. In this 30 week study, 38 male subjects, between the ages of 25 and 50, were divided into four groups. One group performed 5 sets of 8-12 repetitions per set of conventional concentric/eccentric (CON/ECC) exercises. Another group performed only the concentric (CON) movement on the same schedule. The third group performed twice the number of sets in the concentric only mode (CON/CON), and the last group served as controls. We interpret these data as convincing evidence that the eccentric component of heavy resistance training is required along with the concentric for the most effective increase in strength and muscle fiber size in the least time. We also conclude that such heavy exercise of any such muscle group need not consume inordinately long periods of time, and is quite satisfactorily effective when performed on 72 hour centers.
ALGORITHM FOR SORTING GROUPED DATA
NASA Technical Reports Server (NTRS)
Evans, J. D.
1994-01-01
It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
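A rough modern analogue of the reduction step is sketched below: each multi-element record is collapsed to a single mixed-radix key built from each element's offset above its column minimum, and the records are then sorted on that key. This is an illustration of the idea, not a port of the BASIC program, and it assumes integer elements.

```python
# Sketch: reduce each grouped record of width n to one representative key,
# sort on the keys, and keep the original records.
def encode(record, mins, radices):
    """Mixed-radix key; ordering by the key matches ordering by the elements
    taken in priority order."""
    key = 0
    for value, lo, radix in zip(record, mins, radices):
        key = key * radix + (value - lo)
    return key

def sort_grouped(records):
    mins = [min(col) for col in zip(*records)]
    maxes = [max(col) for col in zip(*records)]
    radices = [hi - lo + 1 for lo, hi in zip(mins, maxes)]
    return sorted(records, key=lambda r: encode(r, mins, radices))

# Hypothetical 5-element records, loosely in the spirit of daily readings.
readings = [(21, 3, 7, 0, 5), (19, 8, 2, 4, 1), (21, 1, 9, 6, 2)]
print(sort_grouped(readings))
```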
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potts, C.; Faber, M.; Gunderson, G.
The as-built lattice of the Rapid-Cycling Synchrotron (RCS) had two sets of correction sextupoles and two sets of quadrupoles energized by dc power supplies to control the tune and the tune tilt. With this method of powering these magnets, adjustment of tune conditions during the accelerating cycle as needed was not possible. A set of dynamically programmable power supplies has been built and operated to provide the required chromaticity adjustment. The short accelerating time (16.7 ms) of the RCS and the inductance of the magnets dictated large transistor amplifier power supplies. The required time resolution and waveform flexibility indicated the desirability of computer control. Both the amplifiers and controls are described, along with resulting improvements in the beam performance. A set of octupole magnets and programmable power supplies with similar dynamic qualities have been constructed and installed to control the anticipated high-intensity transverse instability. This system will be operational in the spring of 1981.
ERIC Educational Resources Information Center
Sachs, Steven G.
Planning the activities for an instructional development unit and evaluating how well it has performed requires a set of standards against which the unit can be compared. This paper proposes a set of standards developed from a variety of references and personal experiences with instructional development units from across the country. Thirty-eight…
The CO₂ GAP Project--CO₂ GAP as a prognostic tool in emergency departments.
Shetty, Amith L; Lai, Kevin H; Byth, Karen
2010-12-01
To determine whether the CO₂ GAP [(a-ET) PCO₂] value differs consistently in patients presenting with shortness of breath to the ED who require ventilatory support, to determine a cut-off value of CO₂ GAP that is consistently associated with the measured outcome, and to compare its performance against other derived variables. This prospective observational study was conducted in the ED on a convenience sample of 412 of 759 patients who underwent concurrent arterial blood gas and ETCO₂ (end-tidal CO₂) measurement. They were randomized to a test sample of 312 patients and a validation set of 100 patients. The primary outcome of interest was the need for ventilatory support, and secondary outcomes were admission to a high dependency unit or death during the stay in the ED. The randomly selected training set was used to select cut-points for the possible predictors, that is, CO₂ GAP, CO₂ gradient, physiologic dead space and A-a gradient. The sensitivity, specificity and predictive values of these predictors were validated in the test set of 100 patients. Analysis of the receiver operating characteristic curves revealed that the CO₂ GAP performed significantly better than the arterial-alveolar gradient in patients requiring ventilator support (area under the curve 0.950 vs 0.726). A CO₂ GAP ≥10 was associated with assisted ventilation outcomes when applied to the validation test set (100% sensitivity, 70% specificity). The CO₂ GAP [(a-ET) PCO₂] differs significantly in patients requiring assisted ventilation when presenting with shortness of breath to EDs, and further research addressing the prognostic value of CO₂ GAP in this specific aspect is required. © 2010 The Authors. EMA © 2010 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
Diez-Martin, J; Moreno-Ortega, M; Bagney, A; Rodriguez-Jimenez, R; Padilla-Torres, D; Sanchez-Morla, E M; Santos, J L; Palomo, T; Jimenez-Arriero, M A
2014-01-01
To assess insight in a large sample of patients with schizophrenia and to study its relationship with set shifting as an executive function. The insight of a sample of 161 clinically stable, community-dwelling patients with schizophrenia was evaluated by means of the Scale to Assess Unawareness of Mental Disorder (SUMD). Set shifting was measured using the Trail-Making Test time required to complete part B minus the time required to complete part A (TMT B-A). Linear regression analyses were performed to investigate the relationships of TMT B-A with different dimensions of general insight. Regression analyses revealed a significant association between TMT B-A and two of the SUMD general components: 'awareness of mental disorder' and 'awareness of the efficacy of treatment'. The 'awareness of social consequences' component was not significantly associated with set shifting. Our results show a significant relation between set shifting and insight, but not in the same manner for the different components of the SUMD general score. Copyright © 2013 S. Karger AG, Basel.
Willardson, Jeffrey M; Simão, Roberto; Fontana, Fabio E
2012-11-01
The purpose of this study was to compare 4 different loading schemes for the free weight bench press, wide grip front lat pull-down, and free weight back squat to determine the extent of progressive load reductions necessary to maintain repetition performance. Thirty-two recreationally trained women (age = 29.34 ± 4.58 years, body mass = 59.61 ± 4.72 kg, height = 162.06 ± 4.04 cm) performed 4 resistance exercise sessions that involved 3 sets of the free weight bench press, wide grip front lat pull-down, and free weight back squat, performed in this exercise order during all 4 sessions. Each of the 4 sessions was conducted under different randomly ordered loading schemes, including (a) a constant 10 repetition maximum (RM) load for all 3 sets and for all 3 exercises, (b) a 5% reduction after the first and second sets for all the 3 exercises, (c) a 10% reduction after the first and second sets for all the 3 exercises, and (d) a 15% reduction after the first and second sets for all the 3 exercises. The results indicated that for the wide grip front lat pull-down and free weight back squat, a 10% load reduction was necessary after the first and second sets to accomplish 10 repetitions on all the 3 sets. For the free weight bench press, a load reduction between 10 and 15% was necessary; specifically, a 10% reduction was insufficient and a 15% reduction was excessive, as evidenced by significantly >10 repetitions on the second and third sets for this exercise (p ≤ 0.05). In conclusion, the results of this study indicate that a resistance training prescription that involves 1-minute rest intervals between multiple 10RM sets does require load reductions to maintain repetition performance. Practitioners might apply these results by considering an approximate 10% load reduction after the first and second sets for the exercises examined, when training women of similar characteristics as in this study.
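For readers who want the arithmetic, the short sketch below works through one plausible reading of the recommended adjustment, compounding a 10% reduction from the preceding set's load; the 10RM starting load is hypothetical and the compounding interpretation is an assumption, not a detail stated in the abstract.

```python
# Worked example of a 10% progressive load reduction across three 10RM sets.
ten_rm_load_kg = 60.0          # hypothetical 10RM load for the back squat
reduction = 0.10               # reduction applied after sets 1 and 2

loads = [ten_rm_load_kg]
for _ in range(2):             # loads for sets 2 and 3
    loads.append(round(loads[-1] * (1 - reduction), 1))
print(loads)                   # [60.0, 54.0, 48.6]
```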
1945-04-01
[Fragmentary OCR text from a Langley Memorial Aeronautical Laboratory (National Advisory Committee for Aeronautics) report on ground-effect tests.] The effect of rpm on the power required for sustentation at various altitudes in the ground-effect region is shown in figure 2, including a hovering point obtained at approximately 400 feet altitude.
Mendoza, Nohora Marcela; González, Nohora Elizabeth
2015-01-01
One of the most important activities for quality assurance of malaria diagnosis is performance assessment. In Colombia, performance assessment of malaria microscopists has been done through the external performance assessment and indirect external performance assessment programs. To assess the performance of malaria microscopists of public reference laboratories using slide sets, and to describe the methodology used for this purpose. This was a retrospective study to evaluate the concordance of senior microscopists regarding parasite detection, species identification and parasite count based on the results of the assessment of competences using two sets, one comprising 40 slides, and another one with 17 slides. The concordance for parasite detection was 96.9% (95% CI: 96.0-97.5) and 88.7% (95% CI: 86.6-90.5) for species identification. The average percentage of concordant slides in the group evaluated was 89.7% (95% CI: 87.5-91.6). Most of the senior microscopists in Colombia were classified in the two top categories in the performance assessment using slide sets. The most common difficulty encountered was the identification of parasite species. The use of this tool to assess individual performance of microscopists in the evaluation of samples with different degrees of difficulty allows for characterizing the members of the malaria diagnosis network and strengthening the abilities of those who require it.
Cyberhubs: Virtual Research Environments for Astronomy
NASA Astrophysics Data System (ADS)
Herwig, Falk; Andrassy, Robert; Annau, Nic; Clarkson, Ondrea; Côté, Benoit; D’Sa, Aaron; Jones, Sam; Moa, Belaid; O’Connell, Jericho; Porter, David; Ritter, Christian; Woodward, Paul
2018-05-01
Collaborations in astronomy and astrophysics are faced with numerous cyber-infrastructure challenges, such as large data sets, the need to combine heterogeneous data sets, and the challenge to effectively collaborate on those large, heterogeneous data sets with significant processing requirements and complex science software tools. The cyberhubs system is an easy-to-deploy package for small- to medium-sized collaborations based on the Jupyter and Docker technology, which allows web-browser-enabled, remote, interactive analytic access to shared data. It offers an initial step to address these challenges. The features and deployment steps of the system are described, as well as the requirements collection through an account of the different approaches to data structuring, handling, and available analytic tools for the NuGrid and PPMstar collaborations. NuGrid is an international collaboration that creates stellar evolution and explosion physics and nucleosynthesis simulation data. The PPMstar collaboration performs large-scale 3D stellar hydrodynamics simulations of interior convection in the late phases of stellar evolution. Examples of science that is currently performed on cyberhubs, in the areas of 3D stellar hydrodynamic simulations, stellar evolution and nucleosynthesis, and Galactic chemical evolution, are presented.
Analysis of high-throughput biological data using their rank values.
Dembélé, Doulaye
2018-01-01
High-throughput biological technologies are routinely used to generate gene expression profiling or cytogenetics data. To achieve high performance, methods available in the literature become more specialized and often require high computational resources. Here, we propose a new versatile method based on the data-ordering rank values. We use linear algebra and the Perron-Frobenius theorem, and we also extend a method presented earlier for searching differentially expressed genes to the detection of recurrent copy number aberrations. A result derived from the proposed method is a one-sample Student's t-test based on rank values. The proposed method is, to our knowledge, the only one that applies to both gene expression profiling and cytogenetics data sets. This new method is fast, deterministic, and requires a low computational load. Probabilities are associated with genes to allow a statistically significant subset selection in the data set. Stability scores are also introduced as quality parameters. The performance and comparative analyses were carried out using real data sets. The proposed method can be accessed through an R package available from the CRAN (Comprehensive R Archive Network) website: https://cran.r-project.org/web/packages/fcros.
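The rank-value idea can be illustrated with a short sketch: expression values are converted to fractional ranks within each sample, and a one-sample t-test asks whether a gene's mean rank departs from the value expected under no consistent change. This is only an illustration of the principle, not the fcros implementation; the synthetic data, the 0.5 reference value, and the test layout are assumptions.

```python
# Sketch of a one-sample t-test on rank values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
expr = rng.normal(size=(200, 8))     # 200 genes x 8 samples
expr[0] += 2.0                       # make gene 0 consistently high

# Fractional rank of each gene within each sample (column), in (0, 1].
frac_ranks = (np.argsort(np.argsort(expr, axis=0), axis=0) + 1) / expr.shape[0]

# For each gene, test whether its mean rank differs from ~0.5, the value
# expected for a gene with no consistent up/down tendency.
t, p = stats.ttest_1samp(frac_ranks, popmean=0.5, axis=1)
print(f"gene 0: mean rank {frac_ranks[0].mean():.2f}, p = {p[0]:.3g}")
```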
Effects of digital altimetry on pilot workload
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Glover, B. J.
1985-01-01
A series of VOR-DME instrument landing approaches was flown in the DC-9 full-workload simulator to compare pilot performance, scan behavior, and workload when using a computer-drum-pointer altimeter (CDPA) and a digital altimeter (DA). Six pilots executed two sets of instrument landing approaches, with a CDPA on one set and a DA on the other set. Pilot scanning parameters, flight performance, and subjective opinion data were evaluated. It is found that the processes of gathering information from the CDPA and the DA are different. The DA requires a higher mental workload than the CDPA for a VOR-DME type landing approach. Mental processing of altitude information after transitioning back to the attitude indicator is more evident with the DA than with the CDPA.
40 CFR 86.1430 - Certification Short Test sequence; general requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... test procedure. Fuel tank drain and fill is performed or a transient test procedure is performed, as... sets of test conditions identified in this subpart are based on the test fuel type present in the vehicle fuel tank and the ambient temperature during the test. Tables O-96-1 and O-96-2 outline the...
40 CFR 86.1430 - Certification Short Test sequence; general requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... test procedure. Fuel tank drain and fill is performed or a transient test procedure is performed, as... sets of test conditions identified in this subpart are based on the test fuel type present in the vehicle fuel tank and the ambient temperature during the test. Tables O-96-1 and O-96-2 outline the...
40 CFR 86.1430 - Certification Short Test sequence; general requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... test procedure. Fuel tank drain and fill is performed or a transient test procedure is performed, as... sets of test conditions identified in this subpart are based on the test fuel type present in the vehicle fuel tank and the ambient temperature during the test. Tables O-96-1 and O-96-2 outline the...
40 CFR 86.1430 - Certification Short Test sequence; general requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test procedure. Fuel tank drain and fill is performed or a transient test procedure is performed, as... sets of test conditions identified in this subpart are based on the test fuel type present in the vehicle fuel tank and the ambient temperature during the test. Tables O-96-1 and O-96-2 outline the...
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
Bianco, Antonino; Filingeri, Davide; Paoli, Antonio; Palma, Antonio
2015-04-01
The aim of this study was to evaluate a new method to perform the one repetition maximum (1RM) bench press test, by combining previously validated predictive and practical procedures. Eight young male and 7 female participants, with no previous experience of resistance training, performed a first set of repetitions to fatigue (RTF) with a workload corresponding to ⅓ of their body mass (BM) for a maximum of 25 repetitions. Following a 5-min recovery period, a second set of RTF was performed with a workload corresponding to ½ of participants' BM. The number of repetitions performed in this set was then used to predict the workload to be used for the 1RM bench press test using Mayhew's equation. Oxygen consumption, heart rate and blood lactate were monitored before, during and after each 1RM attempt. A significant effect of gender was found on the maximum number of repetitions achieved during the RTF set performed with ½ of participants' BM (males: 25.0 ± 6.3; females: 11.0 ± 10.6; t = 6.2; p < 0.001). The 1RM attempt performed with the workload predicted by Mayhew's equation resulted in females performing 1.2 ± 0.7 repetitions, while males performed 4.8 ± 1.9 repetitions. All participants reached their 1RM performance within 3 attempts, thus resulting in a maximum of 5 sets required to successfully perform the 1RM bench press test. We conclude that, by combining previously validated predictive equations with practical procedures (i.e. using a fraction of participants' BM to determine the workload for an RTF set), the new method we tested appeared safe, accurate (particularly in females) and time-effective in the practical evaluation of 1RM performance in inexperienced individuals. Copyright © 2014 Elsevier Ltd. All rights reserved.
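The workload-prediction step can be sketched as follows, using a commonly cited form of Mayhew's repetitions-to-fatigue equation; the coefficients and the example numbers here are assumptions to be checked against the original paper rather than the authors' exact procedure.

```python
# Sketch: predict a 1RM workload from a repetitions-to-fatigue set performed
# with half body mass, using a commonly cited form of Mayhew's equation.
import math

def mayhew_1rm(load_kg: float, reps: int) -> float:
    # 1RM = 100 * load / (52.2 + 41.9 * exp(-0.055 * reps))  (commonly cited form)
    return 100.0 * load_kg / (52.2 + 41.9 * math.exp(-0.055 * reps))

body_mass_kg = 70.0                  # hypothetical participant
rtf_load_kg = body_mass_kg / 2       # second RTF set uses half of body mass
reps_to_fatigue = 15                 # hypothetical outcome of that set
print(f"predicted 1RM ~ {mayhew_1rm(rtf_load_kg, reps_to_fatigue):.1f} kg")
```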
Soltis, Robert; Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-02-17
To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Schweikhard, W. G.
1974-01-01
A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, B net, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set, for each classification scheme, was assessed via a leave-one-out cross validation. Overall, the B net model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest data-set. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Cho, Jay; Freivalds, Andris; Rovniak, Liza S.
2017-01-01
This study investigated the feasibility of using a desk bike in an office setting. Workstation measurements were introduced to accommodate 95% of the general U.S. population in using desk bikes. Reading and typing performances were compared at three different cycling conditions (no cycling, 10 and 25 watts). Thirty healthy individuals (15 female and 15 male; Age mean: 23.1, σ: 4.19) were recruited based on 5/50/95th percentile stature. Participants were required to select preferred workstation settings and perform reading and typing tasks while pedaling. According to anthropometric measurements and variability from user preference, recommended adjustable ranges of workstation settings for the general U.S. population were derived. Repeated measures ANOVA showed that pedaling had no significant effect on reading comprehension (p > .05), but had significant effect on typing performance (p < .001). A preferred level of cycling intensity was determined (mean 17.3 watts, σ: 3.69). PMID:28166871
Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation
Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan
2015-01-01
Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. PMID:26673332
Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.
Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan
2015-09-16
Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015 by Kerman University of Medical Sciences.
Atmospheric Science Data Center
2018-03-14
... which makes it very easy to extract and use MISR data sets. Reading a parameter requires the user to simply specify a file, grid, field, ... Automatically stitch, unpack and unscale MISR data while reading Performing coordinate conversions between lat/lon, SOM x/y, ...
Airline service quality performance reports
DOT National Transportation Integrated Search
2002-01-01
The purpose of this federal regulation (Citation 14CFR234) is to set forth required data that certain air carriers must submit to the Department of Transportation and to computer reservations system vendors in computerized form, except as otherwise p...
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
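One of the metrics named above, the failure probability, admits a very small illustration: sample the uncertain parameters from an assumed probability model and count how often a closed-loop requirement is violated. The requirement function and the uncertainty model below are invented placeholders, not the paper's formulation.

```python
# Sketch of a Monte Carlo estimate of the failure probability.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder uncertainty model: two standard-normal uncertain parameters.
samples = rng.normal(size=(100_000, 2))

# Placeholder requirement: a damping-ratio-like quantity must stay above 0.3.
damping = 0.5 - 0.2 * samples[:, 0] + 0.1 * samples[:, 1] ** 2
failures = damping < 0.3

print(f"estimated failure probability: {failures.mean():.4f}")
```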
NASA Astrophysics Data System (ADS)
Stewart, Iris T.; Loague, Keith
2003-12-01
Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil texture based travel time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparison to process-based simulations show that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.
From Information Management to Information Visualization
Karami, Mahtab
2016-01-01
Objective: The development and implementation of a dashboard of medical imaging department (MID) performance indicators. Method: Several articles discussing performance measures of imaging departments were searched for this study, and all the related measures were extracted. Then, a panel of imaging experts was asked to rate these measures, with an open-ended question to seek further potential indicators. A second round was performed to confirm the performance ratings. The indicators and their ratings were then reviewed by an executive panel. Based on the final panel's rating, a list of indicators to be used was developed. A team of information technology consultants was asked to determine a set of user interface requirements for the building of the dashboard. In the first round, based on the panel's rating, a list of main features or requirements to be used was determined. Next, Qlikview was utilized to implement the dashboard to visualize a set of selected KPI metrics. Finally, an evaluation of the dashboard was performed. Results: 92 MID indicators were identified. On top of this, 53 main user interface requirements for building the prototype of the dashboard were determined. The project team then successfully implemented a prototype of radiology management dashboards at the study site. The visual display that was designed was rated highly by users. Conclusion: To develop a dashboard, management of information is essential. It is recommended that a quality map be designed for the MID; it can be used to specify the sequence of activities, their related indicators, and the data required for calculating these indicators. To achieve both an effective dashboard and a comprehensive view of operations, it is necessary to design a data warehouse for gathering data from a variety of systems. Utilizing interoperability standards for exchanging data among different systems can also be effective in this regard. PMID:27437043
NASA Astrophysics Data System (ADS)
Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.
2017-09-01
Since GPS tends to fail for indoor positioning purposes, alternative methods such as indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments filled with obstacles such as furniture, walls, people, and electronics that influence the signal propagation. The major factor influencing system performance, and the key to acquiring optimal positioning results, is the geometry of the beacons. That geometry is limited by the available infrastructure (number of beacons, base stations, and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment such that the geometry contributes to optimal positioning results? This paper proposes a statistical model that is able to select the optimal configuration satisfying the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 meters), the number of beacons, the possible user tag locations, and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modeled precision has been compared with observed precision results; the measurements were performed with a BlooLoc IPS at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
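As a rough illustration of how precision can be evaluated for a candidate configuration before any measurement, the sketch below propagates an assumed ranging noise through the design (geometry) matrix of a beacon set. The beacon coordinates, tag position, and noise level are invented for the example and do not come from the paper's model.

import numpy as np

beacons = np.array([[0.0, 0.0, 3.0],
                    [7.0, 0.0, 3.0],
                    [0.0, 10.0, 3.0],
                    [7.0, 10.0, 6.0]])        # assumed beacon positions [m]
tag = np.array([3.5, 5.0, 1.5])               # candidate user-tag position [m]
sigma_range = 0.5                             # assumed ranging std. dev. [m]

# Design (Jacobian) matrix of the range observations w.r.t. the tag coordinates
diff = tag - beacons
ranges = np.linalg.norm(diff, axis=1)
A = diff / ranges[:, None]

# Cofactor and covariance matrices of the estimated position
Q = np.linalg.inv(A.T @ A)
cov = sigma_range ** 2 * Q

print("PDOP               :", round(float(np.sqrt(np.trace(Q))), 2))
print("position sigma [m] :", np.sqrt(np.diag(cov)).round(2))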
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.
Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew
2012-08-08
Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
Strategic plans open for comment
NASA Astrophysics Data System (ADS)
Under the Government Performance and Results Act (the Results Act), agencies of the U.S. government are required to submit a 5-year strategic plan to the U.S. Congress by September 30, 1997, explaining how, when, and why they are spending tax dollars. Enacted in 1993, the Results Act is intended to "improve efficiency and effectiveness of Federal programs by establishing a system to set goals for program performance and to measure results." Thus, according to the U.S. House of Representatives Committee on Science, the aim is for agencies to measure their performance by the results of their tasks and services, not by the number of tasks and services performed. Toward that goal, the Act requires that federal entities complete the following 3-step process:
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
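A bare-bones version of the data-reduction step can be written as an LVQ1 codebook update: the full labeled training set is replaced by a much smaller set of codebook vectors that a network can then be trained on. The synthetic data, codebook size, learning rate, and epoch count below are assumptions for illustration and have nothing to do with the actual SSME signals.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                  # stand-in sensor patterns
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in labels

def lvq1_codebook(X, y, n_per_class=20, lr=0.05, epochs=10):
    # Initialize codebook vectors from random samples of each class
    proto, plab = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_per_class, replace=False)
        proto.append(X[idx].copy())
        plab.append(np.full(n_per_class, c))
    proto, plab = np.vstack(proto), np.concatenate(plab)
    # LVQ1: pull the best-matching prototype toward same-class samples,
    # push it away from samples of a different class
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(proto - X[i], axis=1)
            w = np.argmin(d)
            sign = 1.0 if plab[w] == y[i] else -1.0
            proto[w] += sign * lr * (X[i] - proto[w])
    return proto, plab

protos, proto_labels = lvq1_codebook(X, y)
print("compressed", len(X), "patterns to", len(protos), "codebook vectors")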
A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory
NASA Astrophysics Data System (ADS)
Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin
2015-09-01
Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. What is needed is a fully run-time configurable, high performance, dependable storage concept requiring only a minimal set of logic or software. The solution presented here is based on composite erasure coding and can be adjusted for altered mission durations or changing environmental conditions.
Acquisition of Cognitive Skill.
1981-08-03
applications to perform the task. Such a production still requires that the phone number be held in working memory. It is possible to eliminate this...be held in working memory and will apply in a time independent of memory set size. However, there still may be some effect of set size in the...theoretical speculation; unpublished work in our laboratory on effects of practice on memory retrieval has confirmed this relationship. There is a
NASA Astrophysics Data System (ADS)
Uslu, Faruk Sukru
2017-07-01
Oil spills on the ocean surface cause serious environmental, political, and economic problems. Therefore, these catastrophic threats to marine ecosystems require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information of materials. However, huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to get more gain. The proposed approach also aims to solve the small sample size problem, which is very important for processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second HSI data set, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
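The general idea of avoiding a single tuned kernel parameter can be sketched as an ensemble over a grid of kernel widths whose decision scores are averaged. scikit-learn's OneClassSVM is used here as a stand-in for SVDD, and the data and gamma grid are assumptions; this is not the paper's exact algorithm or classifier-selection scheme.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(500, 10))        # stand-in background pixels
test = np.vstack([rng.normal(0.0, 1.0, size=(95, 10)),
                  rng.normal(3.0, 1.0, size=(5, 10))])   # last 5 rows act as "targets"

gammas = np.logspace(-3, 1, 9)                           # ensemble instead of one tuned value
scores = np.zeros(len(test))
for g in gammas:
    model = OneClassSVM(kernel="rbf", gamma=g, nu=0.1).fit(background)
    scores += model.decision_function(test)              # higher = more "background-like"
scores /= len(gammas)

detections = np.argsort(scores)[:5]                      # most anomalous test pixels
print("flagged rows:", sorted(detections.tolist()))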
Quantity judgments of sequentially presented food items by capuchin monkeys (Cebus apella).
Evans, Theodore A; Beran, Michael J; Harris, Emily H; Rice, Daniel F
2009-01-01
Recent assessments have shown that capuchin monkeys, like chimpanzees and other Old World primate species, are sensitive to quantitative differences between sets of visible stimuli. In the present study, we examined capuchins' performance in a more sophisticated quantity judgment task that required the ability to form representations of food quantities while viewing the quantities only one piece at a time. In three experiments, we presented monkeys with the choice between two sets of discrete homogeneous food items and allowed the monkeys to consume the set of their choice. In Experiments 1 and 2, monkeys compared an entirely visible food set to a second set, presented item-by-item into an opaque container. All monkeys exhibited high accuracy in choosing the larger set, even when the entirely visible set was presented last, preventing the use of one-to-one item correspondence to compare quantities. In Experiment 3, monkeys compared two sets that were each presented item-by-item into opaque containers, but at different rates to control for temporal cues. Some monkeys performed well in this experiment, though others exhibited near-chance performance, suggesting that this species' ability to form representations of food quantities may be limited compared to previously tested species such as chimpanzees. Overall, these findings support the analog magnitude model of quantity representation as an explanation for capuchin monkeys' quantification of sequentially presented food items.
Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency
NASA Astrophysics Data System (ADS)
Soderquist, Peter; Leeser, Miriam E.
1999-01-01
Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.
NASA Astrophysics Data System (ADS)
Erickson, C. M.; Martinez, A.
1993-06-01
The 1992 Integrated Modular Engine (IME) design concept, proposed to the Air Force Space Systems Division as a candidate for a National Launch System (NLS) upper stage, emphasized a detailed Quality Functional Deployment (QFD) procedure which set the basis for its final selection. With a list of engine requirements defined and prioritized by the customer, a QFD procedure was implemented in which the characteristics of a number of engine and component configurations were assessed for degree of requirement satisfaction. The QFD process emphasized operability, cost, reliability, and performance, with relative importance specified by the customer. Existing technology and near-term advanced technology were surveyed to achieve the required design strategies. In the process, advanced nozzles, advanced turbomachinery, valves, controls, and operational procedures were evaluated. The integrated arrangement of three conventional bell nozzle thrust chambers with two advanced turbopump sets, selected as the configuration meeting all requirements, was rated significantly ahead of the other candidates, including the Aerospike and horizontal flow nozzle configurations.
Determining medical staffing requirements for humanitarian assistance missions.
Negus, Tracy L; Brown, Carrie J; Konoske, Paula
2010-01-01
The primary mission of hospital ships is to provide acute medical and surgical services to U.S. forces during military operations. Hospital ships also provide a hospital asset in support of disaster relief and humanitarian assistance (HA) operations. HA missions afford medical care to populations with vastly different sets of medical conditions from combat casualty care, which affects staffing requirements. Information from a variety of sources was reviewed to better understand hospital ship HA missions. Factors such as time on-site and location shape the mission and underlying goals. Patient encounter data from previous HA missions were used to determine expected patient conditions encountered in various HA operations. These data points were used to project the medical staffing required for future missions. Further data collection, along with goal setting, must be performed to accomplish successful future HA missions. Refining staffing requirements allows deployments to accomplish needed HA and effectively reach underserved areas.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
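To make the influence-curve idea concrete, the sketch below computes a cross-validated AUC from pooled cross-validated scores and a DeLong-style influence-curve variance for it. This is a simplified illustration with an assumed data set, model, and fold count; it is not necessarily the authors' exact estimator.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=10, method="predict_proba")[:, 1]

pos, neg = scores[y == 1], scores[y == 0]
n, p = len(y), y.mean()

def placement(s, other):
    # P(other < s) + 0.5 * P(other == s), the empirical "placement" value
    return (other < s).mean() + 0.5 * (other == s).mean()

v_pos = np.array([placement(s, neg) for s in pos])         # F0(s_i) for positives
v_neg = np.array([1.0 - placement(s, pos) for s in neg])   # G(s_j) for negatives
auc = v_pos.mean()

# Influence-curve values for each observation, then the variance of the AUC estimate
ic = np.empty(n)
ic[y == 1] = (v_pos - auc) / p
ic[y == 0] = (v_neg - auc) / (1.0 - p)
var_auc = (ic ** 2).mean() / n

print(f"CV AUC = {auc:.3f}, SE = {np.sqrt(var_auc):.4f}")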
Charnock, P; Jones, R; Fazakerley, J; Wilde, R; Dunn, A F
2011-09-01
Data are currently being collected from hospital radiology information systems in the North West of the UK for the purposes of both clinical audit and patient dose audit. Could these data also be used to satisfy quality assurance (QA) requirements according to UK guidance? From 2008 to 2009, 731 653 records were submitted from 8 hospitals in North West England. For automatic exposure control (AEC) QA, the protocol from the Institute of Physics and Engineering in Medicine (IPEM) Report 91 recommends that the milliampere-seconds (mAs) be monitored for repeatability and reproducibility using a suitable phantom at 70-81 kV. Abdomen AP and chest PA examinations were analysed to find the most common kilovoltage used; these records were then used to plot the average monthly mAs over time. IPEM Report 91 also recommends that a range of commonly used clinical settings be used to check output reproducibility and repeatability. For each tube, the dose area product values were plotted over time for the two most common exposure factor sets. The results show that it is possible to perform checks of AEC systems; however, more work is required to be able to monitor tube output performance. Procedurally, the management system requires work, and the benefits to the workflow would need to be demonstrated.
5 CFR 9701.406 - Setting and communicating performance expectations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... apply to all employees, such as standard operating procedures, handbooks, or other operating... organizational level; (2) Organizational, occupational, or other work requirements, such as standard operating procedures, operating instructions, administrative manuals, internal rules and directives, and/or other...
5 CFR 9701.406 - Setting and communicating performance expectations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... apply to all employees, such as standard operating procedures, handbooks, or other operating... organizational level; (2) Organizational, occupational, or other work requirements, such as standard operating procedures, operating instructions, administrative manuals, internal rules and directives, and/or other...
5 CFR 9701.406 - Setting and communicating performance expectations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... apply to all employees, such as standard operating procedures, handbooks, or other operating... organizational level; (2) Organizational, occupational, or other work requirements, such as standard operating procedures, operating instructions, administrative manuals, internal rules and directives, and/or other...
5 CFR 9701.406 - Setting and communicating performance expectations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... apply to all employees, such as standard operating procedures, handbooks, or other operating... organizational level; (2) Organizational, occupational, or other work requirements, such as standard operating procedures, operating instructions, administrative manuals, internal rules and directives, and/or other...
DfM requirements and ROI analysis for system-on-chip
NASA Astrophysics Data System (ADS)
Balasinski, Artur
2005-11-01
DfM (Design-for-Manufacturability) has become a staple requirement beyond the 100 nm technology node for efficient generation of mask data, cost reduction, and optimal circuit performance. Layout patterns have to comply with many requirements pertaining to database structure and complexity, suitability for image enhancement by optical proximity correction, and mask data pattern density and distribution over the image field. These requirements are of particular complexity for Systems-on-Chip (SoC). A number of macro-, meso-, and microscopic effects such as reticle macroloading, planarization dishing, and pattern bridging or breaking would compromise fab yield, device performance, or both. In order to determine the optimal set of DfM rules applicable to particular designs, Return-on-Investment (ROI) and Failure Mode and Effect Analysis (FMEA) are proposed.
2015-03-01
the providers in the deployed setting and include the Tactical Combat Casualty Care casualty card. Data are then coded for query and analysis. All...intubate, can’t ventilate” and disruption of head/neck anatomy. Of the four procedures performed in the ED setting, three patients survived to hospital...data from SAMMC are limited by the search methods and data extraction. We searched by Current Procedural Terminology code, which requires that the
2011-01-01
stealth features requiring specialised noise and vibration skills and propulsion plants requiring other unique skill sets. Personnel with these...analysis Acoustic, wake, thermal, electromagnetic, and other signature analysis Combat systems and ship control Combat system integration, combat system...to-diagnose flow-induced radiated noise Own-sensor performance degradation Note: Risks can be reduced for given designs using scale models
The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization
2015-03-01
compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of...optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important...Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec
Getting a handle on DNFB strategies for boosting performance.
2015-03-01
Keeping tabs on DNFB requires a commitment from multiple departments, including clinical documentation, health information management, utilization management, and patient financial services. Monitoring DNFB performance daily, weekly, and monthly can help an organization quickly resolve short-term problems and also identify and respond to more systemic issues. By leveraging historical and comparison data, including performance information from peer organizations, hospitals and health systems can set more realistic targets and further highlight improvement opportunities.
1985-07-01
requirements entails a coordinated set of activities 2. Realistic continuous combat training creates a need for integrated scheduling 3. For some...what ways might the performance degradation resulting from continuous combat create unexpected problems in Command coordination? * Uneven performance...visual experiences or hallucinations after 3 since the beginning of time. Man then is the "weak days, they were unable to communicate verbally, their
ERIC Educational Resources Information Center
Osborn, Robert G.; Meador, Darlene M.
1990-01-01
This study compared the performance of depressed and nondepressed males (ages 9-11) on tasks requiring overt rehearsal and free recall. The depressed children rehearsed less both in repetition of words and in the size of their rehearsal sets and recalled fewer words. It is concluded that depressed children have short-term memory processing…
ERIC Educational Resources Information Center
Loper, Wayne Robert
2012-01-01
This study examined the essential skill sets needed to effectively perform as a school business official in New York State. This study surveyed 132 practicing school business officials across New York State and created a needs-based assessment of the competencies required to successfully perform as a New York State school business official. In…
Integrated Job Skills and Reading Skills Training System. Final Report.
ERIC Educational Resources Information Center
Sticht, Thomas G.; And Others
An exploratory study was conducted to evaluate the feasibility of determining the reading demands of navy jobs, using a methodology that identifies both the type of reading tasks performed on the job and the level of general reading skill required to perform that set of reading tasks. Next, a survey was made of the navy's job skills training…
HodDB: Design and Analysis of a Query Processor for Brick.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fierro, Gabriel; Culler, David
Brick is a recently proposed metadata schema and ontology for describing building components and the relationships between them. It represents buildings as directed labeled graphs using the RDF data model. Using the SPARQL query language, building-agnostic applications query a Brick graph to discover the set of resources and relationships they require to operate. Latency-sensitive applications, such as user interfaces, demand response and model-predictive control, require fast queries, conventionally less than 100 ms. We benchmark a set of popular open-source and commercial SPARQL databases against three real Brick models using seven application queries and find that none of them meet this performance target. This lack of performance can be attributed to design decisions that optimize for queries over large graphs consisting of billions of triples, but give poor spatial locality and join performance on the small, dense graphs typical of Brick. We present the design and evaluation of HodDB, an RDF/SPARQL database for Brick built over a node-based index structure. HodDB performs Brick queries 3-700x faster than leading SPARQL databases and consistently meets the 100 ms threshold, enabling the portability of important latency-sensitive building applications.
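The kind of discovery query a building application issues against a Brick graph can be illustrated with rdflib (used here instead of HodDB itself). The Brick class and relationship names, and the model file name, are assumptions chosen for the example.

from rdflib import Graph

g = Graph()
g.parse("mybuilding_brick_model.ttl", format="turtle")   # hypothetical Brick model file

# "Which zone air temperature sensors are points of VAV boxes, and which VAV owns them?"
query = """
PREFIX brick: <https://brickschema.org/schema/Brick#>
SELECT ?sensor ?vav WHERE {
    ?vav    a               brick:VAV .
    ?sensor a               brick:Zone_Air_Temperature_Sensor .
    ?sensor brick:isPointOf ?vav .
}
"""
for sensor, vav in g.query(query):
    print(sensor, "->", vav)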
HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters
NASA Astrophysics Data System (ADS)
Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge
2015-12-01
In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.
An Integrated Approach to Exploration Launch Office Requirements Development
NASA Technical Reports Server (NTRS)
Holladay, Jon B.; Langford, Gary
2006-01-01
The proposed paper will focus on the Project Management and Systems Engineering approach utilized to develop a set of both integrated and cohesive requirements for the Exploration Launch Office, within the Constellation Program. A summary of the programmatic drivers which influenced the approach along with details of the resulting implementation will be discussed as well as metrics evaluating the efficiency and accuracy of the various requirements development activities. Requirements development activities will focus on the procedures utilized to ensure that technical content was valid and mature in preparation for the Crew Launch Vehicle and Constellation Systems Requirements Reviews. This discussion will begin at initial requirements development during the Exploration Systems Architecture Study and progress through formal development of the program structure. Specific emphasis will be given to development and validation of the requirements. This discussion will focus on approaches to garner the appropriate requirement owners (or customers), project infrastructure utilized to emphasize proper integration, and finally the procedure to technically mature, verify and validate the requirements. Examples of requirements being implemented on the Launch Vehicle (systems, interfaces, test & verification) will be utilized to demonstrate the various processes and also provide a top level understanding of the launch vehicle(s) performance goals. Details may also be provided on the approaches for verification, which range from typical aerospace hardware development (qualification/acceptance) through flight certification (flight test, etc.). The primary intent of this paper is to provide a demonstrated procedure for the development of a mature, effective, integrated set of requirements on a complex system, which also has the added intricacies of both heritage and new hardware development integration. Ancillary focus of the paper will include discussion of Test and Verification approaches along with top level systems/elements performance capabilities.
Requirements Flowdown for Prognostics and Health Management
NASA Technical Reports Server (NTRS)
Goebel, Kai; Saxena, Abhinav; Roychoudhury, Indranil; Celaya, Jose R.; Saha, Bhaskar; Saha, Sankalita
2012-01-01
Prognostics and Health Management (PHM) principles have considerable promise to change the game of lifecycle cost of engineering systems at high safety levels by providing a reliable estimate of future system states. This estimate is a key for planning and decision making in an operational setting. While technology solutions have made considerable advances, the tie-in into the systems engineering process is lagging behind, which delays fielding of PHM-enabled systems. The derivation of specifications from high level requirements for algorithm performance to ensure quality predictions is not well developed. From an engineering perspective some key parameters driving the requirements for prognostics performance include: (1) maximum allowable Probability of Failure (PoF) of the prognostic system to bound the risk of losing an asset, (2) tolerable limits on proactive maintenance to minimize missed opportunity of asset usage, (3) lead time to specify the amount of advanced warning needed for actionable decisions, and (4) required confidence to specify when prognosis is sufficiently good to be used. This paper takes a systems engineering view towards the requirements specification process and presents a method for the flowdown process. A case study based on an electric Unmanned Aerial Vehicle (e-UAV) scenario demonstrates how top level requirements for performance, cost, and safety flow down to the health management level and specify quantitative requirements for prognostic algorithm performance.
Ricci, Joseph A; Vargas, Christina R; Ho, Olivia A; Lin, Samuel J; Tobias, Adam M; Lee, Bernard T
2017-07-01
Postoperative free flap care has historically required intensive monitoring for 24 hours in an intensive care unit. Continuous monitoring with tissue oximetry has allowed earlier detection of vascular compromise, decreasing flap loss and improving salvage. This study aims to identify whether a fast-track postoperative paradigm can be safely used with tissue oximetry to decrease intensive monitoring and costs. All consecutive microsurgical breast reconstructions performed at a single institution were reviewed (2008-2014) and cases requiring return to the operating room were identified. Data evaluated included patient demographics, the take back time course, and complications of flap loss and salvage. A cost-benefit analysis was performed to analyse the utility of a postoperative intensive monitoring setting. There were 900 flaps performed and 32 required an unplanned return to the operating room. There were 16 flaps that required a reexploration within the first 24 hours; the standard length of intensive unit monitoring. After 4 hours, there were 7 flaps (44%) detected by tissue oximetry for reexploration. After 15 hours of intensive monitoring postoperatively, cost analysis revealed that the majority (15/16; 94%) of failing flaps had been identified and the cost of identifying each subsequent failing flap exceeded the cost of another hour of intensive monitoring. The postoperative paradigm for microsurgical flaps has historically required intensive unit monitoring. Using tissue oximetry, a fast-track pathway can reduce time spent in an intensive monitoring setting from 24 to 15 hours with significant cost savings and minimal risk of missing a failing free flap.
IAC level "O" program development
NASA Technical Reports Server (NTRS)
Vos, R. G.
1982-01-01
The current status of the IAC development activity is summarized. The listed prototype software and documentation were delivered, and detailed plans were made for development of the level 1 operational system. The planned end-product IAC is required to support LSST design analysis and performance evaluation, with emphasis on the coupling of the required technical disciplines. The long-term IAC effectively provides two distinct features: a specific set of analysis modules (thermal, structural, controls, antenna radiation performance and instrument optical performance) that will function together with the IAC supporting software in an integrated and user-friendly manner; and a general framework whereby new analysis modules can readily be incorporated into the IAC or be allowed to communicate with it.
Expanding the PACS archive to support clinical review, research, and education missions
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Frost, Meryll M.; Drane, Walter E.
1999-07-01
Designing an image archive and retrieval system that supports multiple users with many different requirements and patterns of use without compromising the performance and functionality required by diagnostic radiology is an intellectual and technical challenge. A diagnostic archive, optimized for performance when retrieving diagnostic images for radiologists, needed to be expanded to support a growing clinical review network, the University of Florida Brain Institute's demands for neuro-imaging, Biomedical Engineering's imaging sciences, and an electronic teaching file. Each of the groups presented a different set of problems for the designers of the system. In addition, the radiologists did not want to see any loss of performance as new users were added.
The Global Emergency Observation and Warning System
NASA Technical Reports Server (NTRS)
Bukley, Angelia P.; Mulqueen, John A.
1994-01-01
Based on an extensive characterization of natural hazards, and an evaluation of their impacts on humanity, a set of functional technical requirements for a global warning and relief system was developed. Since no technological breakthroughs are required to implement a global system capable of performing the functions required to provide sufficient information for prevention, preparedness, warning, and relief from natural disaster effects, a system is proposed which would combine the elements of remote sensing, data processing, information distribution, and communications support on a global scale for disaster mitigation.
SU-E-T-468: Implementation of the TG-142 QA Process for Seven Linacs with Enhanced Beam Conformance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woollard, J; Ayan, A; DiCostanzo, D
2015-06-15
Purpose: To develop a TG-142 compliant QA process for 7 Varian TrueBeam linear accelerators (linacs) with enhanced beam conformance and dosimetrically matched beam models. To ensure consistent performance of all 7 linacs, the QA process should include a common set of baseline values for use in routine QA on all linacs. Methods: The TG-142 report provides recommended tests, tolerances and frequencies for quality assurance of medical accelerators. Based on the guidance provided in the report, measurement tests were developed to evaluate each of the applicable parameters listed for daily, monthly and annual QA. These tests were then performed on each of our 7 new linacs as they came on line at our institution. Results: The tolerance values specified in TG-142 for each QA test are either absolute tolerances (i.e. ±2 mm) or require a comparison to a baseline value. The results of our QA tests were first used to ensure that all 7 linacs were operating within the suggested tolerance values provided in TG-142 for those tests with absolute tolerances and that the performance of the linacs was adequately matched. The QA test results were then used to develop a set of common baseline values for those QA tests that require comparison to a baseline value at routine monthly and annual QA. The procedures and baseline values were incorporated into spreadsheets for use in monthly and annual QA. Conclusion: We have developed a set of procedures for daily, monthly and annual QA of our linacs that are consistent with the TG-142 report. A common set of baseline values was developed for routine QA tests. The use of this common set of baseline values for comparison at monthly and annual QA will ensure consistent performance of all 7 linacs.
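A minimal sketch of the "compare measurements against common baselines" bookkeeping might look like the following. The parameter names, baseline values, and tolerances are placeholders invented for the example, not the institution's actual TG-142 values.

# Hypothetical baselines and tolerances shared across all matched linacs (assumed values)
BASELINES = {"output_6MV_cGy_per_MU": 1.000, "flatness_6MV_pct": 102.0}
TOLERANCES = {"output_6MV_cGy_per_MU": 0.02,   # +/-2%, relative
              "flatness_6MV_pct": 1.0}         # +/-1 percentage point, absolute

def check(parameter, measured):
    base = BASELINES[parameter]
    tol = TOLERANCES[parameter]
    # Relative deviation for output, absolute deviation for flatness (assumed convention)
    deviation = (measured - base) / base if "output" in parameter else measured - base
    status = "PASS" if abs(deviation) <= tol else "FAIL"
    return f"{parameter}: measured={measured:.3f}, baseline={base:.3f}, {status}"

for reading in [("output_6MV_cGy_per_MU", 1.012), ("flatness_6MV_pct", 103.4)]:
    print(check(*reading))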
Whitney, Paul; Hinson, John M.; Jackson, Melinda L.; Van Dongen, Hans P.A.
2015-01-01
Study Objectives: To better understand the sometimes catastrophic effects of sleep loss on naturalistic decision making, we investigated effects of sleep deprivation on decision making in a reversal learning paradigm requiring acquisition and updating of information based on outcome feedback. Design: Subjects were randomized to a sleep deprivation or control condition, with performance testing at baseline, after 2 nights of total sleep deprivation (or rested control), and following 2 nights of recovery sleep. Subjects performed a decision task involving initial learning of go and no go response sets followed by unannounced reversal of contingencies, requiring use of outcome feedback for decisions. A working memory scanning task and psychomotor vigilance test were also administered. Setting: Six consecutive days and nights in a controlled laboratory environment with continuous behavioral monitoring. Subjects: Twenty-six subjects (22–40 y of age; 10 women). Interventions: Thirteen subjects were randomized to a 62-h total sleep deprivation condition; the others were controls. Results: Unlike controls, sleep deprived subjects had difficulty with initial learning of go and no go stimuli sets and had profound impairment adapting to reversal. Skin conductance responses to outcome feedback were diminished, indicating blunted affective reactions to feedback accompanying sleep deprivation. Working memory scanning performance was not significantly affected by sleep deprivation. And although sleep deprived subjects showed expected attentional lapses, these could not account for impairments in reversal learning decision making. Conclusions: Sleep deprivation is particularly problematic for decision making involving uncertainty and unexpected change. Blunted reactions to feedback while sleep deprived underlie failures to adapt to uncertainty and changing contingencies. Thus, an error may register, but with diminished effect because of reduced affective valence of the feedback or because the feedback is not cognitively bound with the choice. This has important implications for understanding and managing sleep loss-induced cognitive impairment in emergency response, disaster management, military operations, and other dynamic real-world settings with uncertain outcomes and imperfect information. Citation: Whitney P, Hinson JM, Jackson ML, Van Dongen HPA. Feedback blunting: total sleep deprivation impairs decision making that requires updating based on feedback. SLEEP 2015;38(5):745–754. PMID:25515105
NASA Astrophysics Data System (ADS)
Gordon, Craig A.
This thesis examines the ability of a small, single-engine airplane to return to the runway following an engine failure shortly after takeoff. Two sets of trajectories are examined. One set of trajectories has the airplane fly a straight climb on the runway heading until engine failure. The other set of trajectories has the airplane perform a 90° turn at an altitude of 500 feet and continue until engine failure. Various combinations of wind speed, wind direction, and engine failure times are examined. The runway length required to complete the entire flight from the beginning of the takeoff roll to wheels stop following the return to the runway after engine failure is calculated for each case. The optimal trajectories following engine failure consist of three distinct segments: a turn back toward the runway using a large bank angle and angle of attack; a straight glide; and a reversal turn to align the airplane with the runway. The 90° turn results in much shorter required runway lengths at lower headwind speeds. At higher headwind speeds, both sets of trajectories are limited by the length of runway required for the landing rollout, but the straight climb cases generally require a lower angle of attack to complete the flight. The glide back to the runway is performed at an airspeed below the best glide speed of the airplane due to the need to conserve potential energy after the completion of the turn back toward the runway. The results are highly dependent on the rate of climb of the airplane during powered flight. The results of this study can aid the pilot in determining whether or not a return to the runway could be performed in the event of an engine failure given the specific wind conditions and runway length at the time of takeoff. The results can also guide the pilot in determining the takeoff profile that would offer the greatest advantage in returning to the runway.
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
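The preprocessing idea can be shown schematically: a large set of mode constraints is replaced by a much smaller transformed problem obtained from a truncated singular value decomposition, before the expensive inversion is run. The kernel matrix and data below are synthetic stand-ins, not a real helioseismic mode set.

import numpy as np

rng = np.random.default_rng(0)
n_modes, n_radii = 3000, 200
# Synthetic "kernel" matrix with rapidly decaying column scales, plus noisy data
A = rng.normal(size=(n_modes, n_radii)) @ np.diag(np.exp(-np.arange(n_radii) / 20.0))
d = A @ rng.normal(size=n_radii) + 0.01 * rng.normal(size=n_modes)

# Truncated SVD: keep only components whose singular values exceed a threshold
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))

# Transformed, much smaller problem (k constraints instead of n_modes)
A_red = np.diag(s[:k]) @ Vt[:k]
d_red = U[:, :k].T @ d

print(f"reduced {n_modes} mode constraints to {k} effective constraints")
# Any subsequent (e.g., OLA-style) inversion can now work on A_red and d_red.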
Automatic reactor model synthesis with genetic programming.
Dürrenmatt, David J; Gujer, Willi
2012-01-01
Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.
Helicopter roll control effectiveness criteria program summary
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Bourne, Simon M.; Mnich, Marc A.
1988-01-01
A study of helicopter roll control effectiveness is summarized for the purpose of defining military helicopter handling qualities requirements. The study is based on an analysis of pilot-in-the-loop task performance of several basic maneuvers. This is extended by a series of piloted simulations using the NASA Ames Vertical Motion Simulator and selected flight data. The main results cover roll control power and short-term response characteristics. In general the handling qualities requirements recommended are set in conjunction with desired levels of flight task and maneuver response which can be directly observed in actual flight. An important aspect of this, however, is that vehicle handling qualities need to be set with regard to some quantitative aspect of mission performance. Specific examples of how this can be accomplished include a lateral unmask/remask maneuver in the presence of a threat and an air tracking maneuver which recognizes the kill probability enhancement connected with decreasing the range to the target. Conclusions and recommendations address not only the handling qualities recommendations, but also the general use of flight simulators and the dependence of mission performance on handling qualities.
The guideline "consultation psychiatry" of the Netherlands Psychiatric Association.
Leentjens, Albert F G; Boenink, Annette D; Sno, Herman N; Strack van Schijndel, Rob J M; van Croonenborg, Joyce J; van Everdingen, Jannes J E; van der Feltz-Cornelis, Christina M; van der Laan, Niels C; van Marwijk, Harm; van Os, Titus W D P
2009-06-01
In 2008, the Netherlands Psychiatric Association authorized a guideline "consultation psychiatry." To set a standard for psychiatric consultations in nonpsychiatric settings. The main objective of the guideline is to answer three questions: Is psychiatric consultation effective and, if so, which forms are most effective? How should a psychiatric consultation be performed? What increases adherence to recommendations given by the consulting psychiatrist? Systematic literature review. Both in general practice and in hospital settings psychiatric consultation is effective. In primary care, the effectiveness of psychiatric consultation has been studied almost exclusively in the setting of "collaborative care." Procedural guidance is given on how to perform a psychiatric consultation. In this guidance, psychiatric consultation is explicitly looked upon as a complex activity that requires a broad frame of reference and adequate medical and pharmacological expertise and experience, and one that should be performed by doctors. Investing in a good relation with the general practitioner and the use of a "consultation letter" increased efficacy in general practice. In the hospital setting, investing in liaison activities and an active psychiatric follow-up of consultations increased adherence to advice. Psychiatric consultations are effective and constitute a useful contribution to the patients' treatment. By setting a standard, consultations will become more transparent and checkable. It is hoped that this will increase the quality of consultation psychiatry.
SETS. Set Equation Transformation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worrell, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
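For readers unfamiliar with this kind of reduction, the tiny example below shows a redundant term being absorbed in a fault-tree-style Boolean expression, using SymPy as a stand-in for SETS; the expression itself is invented.

from sympy import symbols
from sympy.logic.boolalg import And, Or, simplify_logic

A, B, C = symbols("A B C")

# TOP = (A & B) | (A & B & C) | (A & C): the middle term is absorbed by the first
top = Or(And(A, B), And(A, B, C), And(A, C))
reduced = simplify_logic(top, form="dnf")
print(reduced)   # prints the reduced DNF (A & B) | (A & C), i.e. cut sets {A,B} and {A,C}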
Wall, Stephen N.; Lee, Anne CC; Niermeyer, Susan; English, Mike; Keenan, William J.; Carlo, Wally; Bhutta, Zulfiqar A.; Bang, Abhay; Narayanan, Indira; Ariawan, Iwan; Lawn, Joy E.
2009-01-01
Background: Each year approximately 10 million babies do not breathe immediately at birth, of which about 6 million require basic neonatal resuscitation. The major burden is in low-income settings, where health system capacity to provide neonatal resuscitation is inadequate. Objective: To systematically review the evidence for neonatal resuscitation content, training and competency, equipment and supplies, cost, and key program considerations, specifically for resource-constrained settings. Results: Evidence from several observational studies shows that facility-based basic neonatal resuscitation may avert 30% of intrapartum-related neonatal deaths. Very few babies require advanced resuscitation (endotracheal intubation and drugs) and these newborns may not survive without ongoing ventilation; hence, advanced neonatal resuscitation is not a priority in settings without neonatal intensive care. Of the 60 million nonfacility births, most do not have access to resuscitation. Several trials have shown that a range of community health workers can perform neonatal resuscitation with an estimated effect of a 20% reduction in intrapartum-related neonatal deaths, based on expert opinion. Case studies illustrate key considerations for scale up. Conclusion: Basic resuscitation would substantially reduce intrapartum-related neonatal deaths. Where births occur in facilities, it is a priority to ensure that all birth attendants are competent in resuscitation. Strategies to address the gap for home births are urgently required. More data are required to determine the impact of neonatal resuscitation, particularly on long-term outcomes in low-income settings. PMID:19815203
A guide to performance management for the Health Information Manager.
Leggat, Sandra G
This paper provides a summary of human resource management practices that have been identified as being associated with better outcomes in performance management. In general, essential practices include transformational leadership and a coherent program of goal setting, performance monitoring and feedback. Some Health Information Managers may feel they require training assistance to develop the necessary skills in the establishment of meaningful work performance goals for staff and the provision of useful and timely feedback. This paper provides useful information to assist Health Information Managers in enhancing the performance of their staff.
Minnesota Department of Transportation Research Services : 2009 annual report.
DOT National Transportation Integrated Search
2010-01-01
The purpose of this report is to meet the requirements set forth by the Code of Federal Regulations, Part 420 (Planning and Research Program Administration), 420.117 2(e): Suitable reports that document the results of activities performed wit...
Administrative Support Occupations Skill Standards.
ERIC Educational Resources Information Center
Professional Secretaries International, Kansas City, MO.
This document establishes a set of performance expectations based on current practices in administrative support occupations. It is designed to assist individuals, training providers, employers, management personnel, and professional organizations in matching knowledge, abilities, and interests to knowledge and skills required for success in…
Exploration of GPS to enhance the safe transport of hazardous materials
DOT National Transportation Integrated Search
1997-12-01
The report (1) documents a set of requirements for the performance of location systems that utilize the Global Positioning System (GPS), (2) identifies potential uses of GPS in hazardous materials transport, (3) develops service descriptions for the ...
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.1 Purpose. The purpose of this part is to set...' quality of service can be made available to consumers of air transportation. This part also requires that service quality data be disclosed directly to consumers. ...
Vehicle information exchange needs for mobility applications : version 3.0.
DOT National Transportation Integrated Search
1996-06-01
The Evaluatory Design Document provides a unifying set of assumptions for other evaluations to utilize. Many of the evaluation activities require the definition of an actual implementation in order to be performed. For example, to cost the elements o...
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
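The flavor of the open-set idea can be illustrated by bounding the acceptance region of an ordinary linear classifier so that samples far from the training data are rejected as unknown. This sketch is a simplification for intuition only and is not the 1-vs-set machine itself; the data, band width, and classifier are assumptions.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
known_a = rng.normal([0.0, 0.0], 0.3, size=(200, 2))    # known class A
known_b = rng.normal([4.0, 0.0], 0.3, size=(200, 2))    # known class B
unknown = rng.normal([10.0, 0.0], 0.3, size=(50, 2))    # class never seen in training

X = np.vstack([known_a, known_b])
y = np.array([0] * 200 + [1] * 200)
clf = LinearSVC(C=1.0).fit(X, y)

def open_set_predict(samples, band=3.0):
    # Accept a sample only if its decision score stays inside a bounded band;
    # anything far outside the band is labeled "unknown" (-1). The band width is
    # an assumed parameter here, not a learned one.
    scores = clf.decision_function(samples)
    labels = np.where(scores > 0, 1, 0)
    return np.where(np.abs(scores) > band, -1, labels)

print("unknowns rejected:", (open_set_predict(unknown) == -1).mean())
print("known B accepted :", (open_set_predict(known_b) == 1).mean())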
Gaia challenging performances verification: combination of spacecraft models and test results
NASA Astrophysics Data System (ADS)
Ecale, Eric; Faye, Frédéric; Chassat, François
2016-08-01
To achieve the ambitious scientific objectives of the Gaia mission, extremely stringent performance requirements have been given to the spacecraft contractor (Airbus Defence and Space). For a set of those key performance requirements (e.g. end-of-mission parallax, maximum detectable magnitude, maximum sky density or attitude control system stability), this paper describes how they are engineered during the whole spacecraft development process, with a focus on the end-to-end performance verification. As far as possible, performances are usually verified by end-to-end tests on ground (i.e. before launch). However, the challenging Gaia requirements are not verifiable by such a strategy, principally because no test facility exists to reproduce the expected flight conditions. The Gaia performance verification strategy is therefore based on a mix between analyses (based on spacecraft models) and tests (used to directly feed the models or to correlate them). Emphasis is placed on how to maximize the test contribution to performance verification while keeping the test feasible within an affordable effort. In particular, the paper highlights the contribution of the Gaia Payload Module Thermal Vacuum test to the performance verification before launch. Eventually, an overview of the in-flight payload calibration and in-flight performance verification is provided.
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, the significant expansion of high performance networking, and the increased complexity of application data sets, the requirement for high performance, large capacity, reliable, secure, and, most of all, affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on ever-increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused on the high end of the storage ladder, considerable effort will be extended this year and next to server-class and mid-range storage systems.
ERIC Educational Resources Information Center
Boursicot, Katharine
2006-01-01
In this era of audit and accountability, there is an imperative to demonstrate and document that appropriate standards have been set in professional education. In medicine, stakeholders want assurance that graduates have attained the required level of competence to be awarded a provisional licence to practise. To investigate the results of a…
Assessment on the performance of electrode arrays using image processing technique
NASA Astrophysics Data System (ADS)
Usman, N.; Khiruddin, A.; Nawawi, Mohd
2017-08-01
Interpreting an inverted resistivity section is time-consuming, tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged together in order to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth than the Wenner-Schlumberger and pole-dipole arrays. Image processing serves as a good post-inversion tool in geophysical data processing.
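As a minimal sketch of the clip-and-merge step described above (written with NumPy rather than PCI Geomatica, whose API is not shown here; grid sizes, origins, and the regular-grid assumption are all illustrative), three gridded layers can be clipped to a common coordinate window and stacked so that one index addresses the same ground position in every layer:

```python
# Align three gridded layers to a shared window for pixel-to-pixel analysis.
import numpy as np

def clip_to_window(grid, x0, y0, window, origin, cell_size):
    """Clip a 2D grid to a square window whose upper-left corner is (x0, y0).
    `origin` is the (x, y) of the grid's upper-left cell; `cell_size` is the
    ground distance per pixel.  Names and layout are illustrative only."""
    col = int(round((x0 - origin[0]) / cell_size))
    row = int(round((origin[1] - y0) / cell_size))
    return grid[row:row + window, col:col + window]

rng = np.random.default_rng(1)
layers = [rng.lognormal(mean=3.0, sigma=0.5, size=(150, 220)) for _ in range(3)]
origins = [(500000.0, 4200000.0), (500010.0, 4199990.0), (499990.0, 4200005.0)]

# Clip every layer to the same 100 x 100 pixel window, then stack them.
aligned = np.stack([clip_to_window(g, 500020.0, 4199980.0, 100, o, cell_size=1.0)
                    for g, o in zip(layers, origins)])
print(aligned.shape)                        # (3, 100, 100)
print(np.corrcoef(aligned.reshape(3, -1)))  # pixel-to-pixel correlation between layers
```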
Cognitive flexibility: A distinct element of performance impairment due to sleep deprivation.
Honn, K A; Hinson, J M; Whitney, P; Van Dongen, H P A
2018-03-14
In around-the-clock operations, reduced alertness due to circadian misalignment and sleep loss causes performance impairment, which can lead to catastrophic errors and accidents. There is mounting evidence that performance on different tasks is differentially affected, but the general principles underlying this differentiation are not well understood. One factor that may be particularly relevant is the degree to which tasks require executive control, that is, control over the initiation, monitoring, and termination of actions in order to achieve goals. A key aspect of this is cognitive flexibility, i.e., the deployment of cognitive control resources to adapt to changes in events. Loss of cognitive flexibility due to sleep deprivation has been attributed to "feedback blunting," meaning that feedback on behavioral outcomes has reduced salience, and that feedback is therefore less effective at driving behavior modification under changing circumstances. The cognitive mechanisms underlying feedback blunting are as yet unknown. Here we present data from an experiment that investigated the effects of sleep deprivation on performance after an unexpected reversal of stimulus-response mappings, requiring cognitive flexibility to maintain good performance. Nineteen healthy young adults completed a 4-day in-laboratory study. Subjects were randomized to either a total sleep deprivation condition (n = 11) or a control condition (n = 8). A three-phase reversal learning decision task was administered at baseline, and again after 30.5 h of sleep deprivation or at the matching time point in the well-rested control condition. The task was based on a go/no go task paradigm, in which stimuli were assigned to either a go (response) set or a no go (no response) set. Each phase of the task included four stimuli (two in the go set and two in the no go set). After each stimulus presentation, subjects could make a response within 750 ms or withhold their response. They were then shown feedback on the accuracy of their response. In phase 1 of the task, subjects were explicitly told which stimuli were assigned to the go and no go sets. In phases 2 and 3, new stimuli were used that were different from those used in phase 1. Subjects were not explicitly told the go/no go mappings and were instead required to use accuracy feedback to learn which stimuli were in the go and no go sets. Phase 3 continued directly from phase 2 and retained the same stimuli as in phase 2, but there was an unannounced reversal of the stimulus-response mappings. Task results confirmed that sleep deprivation resulted in loss of cognitive flexibility through feedback blunting, and that this effect was not produced solely by (1) general performance impairment because of overwhelming sleep drive; (2) reduced working memory resources available to perform the task; (3) incomplete learning of stimulus-response mappings before the unannounced reversal; or (4) interference with stimulus identification through lapses in vigilant attention. Overall, the results suggest that sleep deprivation causes a fundamental problem with dynamic attentional control. This element of performance impairment due to sleep deprivation appears to be distinct from vigilant attention deficits, and represents a particularly significant challenge for fatigue risk management.
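A schematic sketch of the task structure described above may help; the stimulus names, trial counts, and toy feedback-learning rule below are illustrative stand-ins, not the study's protocol or analysis:

```python
# Toy simulation of a go/no-go reversal phase driven by accuracy feedback.
import random

random.seed(2)

def run_phase(go_set, stimuli, n_trials, belief):
    correct = 0
    for _ in range(n_trials):
        s = random.choice(stimuli)
        respond = belief.get(s, 0.5) > 0.5          # "go" if s is believed to be a go stimulus
        accurate = respond == (s in go_set)
        correct += accurate
        # Accuracy feedback nudges the belief about this stimulus (feedback-driven learning).
        target = 1.0 if s in go_set else 0.0
        belief[s] = belief.get(s, 0.5) + 0.3 * (target - belief.get(s, 0.5))
    return correct / n_trials

stimuli = ["s1", "s2", "s3", "s4"]
belief = {}
print("phase 2 accuracy:", run_phase({"s1", "s2"}, stimuli, 60, belief))
# Phase 3: same stimuli, but the go/no-go mapping is reversed without warning,
# so earlier beliefs must be unlearned from feedback alone.
print("phase 3 accuracy:", run_phase({"s3", "s4"}, stimuli, 60, belief))
```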
WFIRST: Update on the Coronagraph Science Requirements
NASA Astrophysics Data System (ADS)
Douglas, Ewan S.; Cahoy, Kerri; Carlton, Ashley; Macintosh, Bruce; Turnbull, Margaret; Kasdin, Jeremy; WFIRST Coronagraph Science Investigation Teams
2018-01-01
The WFIRST Coronagraph instrument (CGI) will enable direct imaging and low resolution spectroscopy of exoplanets in reflected light and imaging polarimetry of circumstellar disks. The CGI science investigation teams were tasked with developing a set of science requirements which advance our knowledge of exoplanet occurrence and atmospheric composition, as well as the composition and morphology of exozodiacal debris disks, cold Kuiper Belt analogs, and protoplanetary systems. We present the initial content, rationales, validation, and verification plans for the WFIRST CGI, informed by detailed and still-evolving instrument and observatory performance models. We also discuss our approach to the requirements development and management process, including the collection and organization of science inputs, open source approach to managing the requirements database, and the range of models used for requirements validation. These tools can be applied to requirements development processes for other astrophysical space missions, and may ease their management and maintenance. These WFIRST CGI science requirements allow the community to learn about and provide insights and feedback on the expected instrument performance and science return.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the necessary understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and the MRNet scalability infrastructure.
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
The results of the economic analysis of the AIDS 3 system design are presented. AIDS 3 evaluated a set of economic feasibility measures including life cycle cost, implementation cost, annual operating expenditures and annual capital expenditures. The economic feasibility of AIDS 3 was determined by comparing the evaluated measures with the same measures, where applicable, evaluated for the current system. A set of future work load scenarios was constructed using JPL's environmental evaluation study of the fingerprint identification system. AIDS 3 and the current system were evaluated for each of the economic feasibility measures for each of the work load scenarios. They were compared for a set of performance measures, including response time and accuracy, and for a set of cost/benefit ratios, including cost per transaction and cost per technical search. Benefit measures related to the economic feasibility of the system are also presented, including the required number of employees and the required employee skill mix.
Periodical capacity setting methods for make-to-order multi-machine production systems
Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert
2014-01-01
The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study the methods' impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer required lead time distribution perform best. The results found in this paper can help practitioners to make efficient use of their flexible capacity. PMID:27226649
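One possible periodical rule in the spirit described above (a hedged sketch, not the paper's exact methods; horizon length, hour limits, and the backlog are illustrative assumptions) sets each week's provided capacity to the cumulated processing-time demand of orders due within a planning horizon, clamped to the flexible working-hours range:

```python
# Illustrative weekly capacity-setting rule based on cumulated capacity demand.
from dataclasses import dataclass

@dataclass
class Order:
    processing_hours: float
    due_in_weeks: int        # customer required lead time remaining

def set_weekly_capacity(orders, horizon_weeks=2, min_hours=30.0, max_hours=50.0):
    """Cumulated demand of orders due inside the horizon, clamped to the
    flexible working-hours range.  All parameter values are illustrative."""
    demand = sum(o.processing_hours for o in orders if o.due_in_weeks <= horizon_weeks)
    return min(max_hours, max(min_hours, demand))

backlog = [Order(8.0, 1), Order(12.5, 2), Order(6.0, 4), Order(20.0, 1)]
print(set_weekly_capacity(backlog))   # 40.5 hours: demand due within two weeks
```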
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Gill, Larry
2004-01-01
Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to have an understanding of the vehicle operating environments to properly size the pressure vessel. Proper sizing of the pressure vessel requires an understanding of the space vehicle's life cycle and compares the physical design optimization (weight and launch "cost") to downstream operational complexity and total life cycle cost. This paper will provide an overview of some major environmental design drivers and provide examples for calculating the optimal design pressure versus a selected set of design parameters related to thermal and environmental perspectives. In addition, this paper will provide a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst case environmental effects for a variety of launch / landing sites. Finally, several examples are included to highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.
Customer-experienced rapid prototyping
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Zhang, Fu; Li, Anbo
2008-12-01
In order to describe accurately and comprehend quickly the ideal GIS requirements, this article integrates the ideas of QFD (Quality Function Deployment) and UML (Unified Modeling Language), analyzes the deficiencies of the prototype development model, and proposes the idea of Customer-Experienced Rapid Prototyping (CE-RP), describing in detail the process and framework of the CE-RP from the perspective of the characteristics of modern GIS. The CE-RP is mainly composed of Customer Tool-Sets (CTS), Developer Tool-Sets (DTS) and a Barrier-Free Semantic Interpreter (BF-SI), and is performed by the two roles of customer and developer. The main purpose of the CE-RP is to produce unified and authorized requirements data models between customer and software developer.
NASA Technical Reports Server (NTRS)
Bulfin, R. L.; Perdue, C. A.
1994-01-01
The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource-constrained scheduling. For each experiment, a computer model with detailed information on the steps involved in running the experiment, including crew requirements, processing times, and resource requirements, is created. These models are then loaded into the Experiment Scheduling Program (ESP) which attempts to create a schedule which satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). When there are two criteria, schedules which perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion, and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by some sort of optimization problem. Generally there are two approaches: the first is a hierarchical approach while the second requires optimizing a weighting or scoring function.
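The dominance test stated above is easy to make concrete; the sketch below uses hypothetical schedule names and scores (both criteria treated as "higher is better") and is not the ESP scoring function itself:

```python
# Keep only nondominated schedules under two criteria.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(schedules):
    return [s for s in schedules
            if not any(dominates(t["scores"], s["scores"])
                       for t in schedules if t is not s)]

schedules = [
    {"name": "timeline-1", "scores": (0.90, 0.60)},   # (science score, crew-time score)
    {"name": "timeline-2", "scores": (0.85, 0.75)},
    {"name": "timeline-3", "scores": (0.80, 0.55)},   # dominated by timeline-1 and -2
]
print([s["name"] for s in nondominated(schedules)])   # ['timeline-1', 'timeline-2']
```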
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated performance and time needed for fusing MRI and SPECT images using dedicated semiautomated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where needed, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.
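The idea behind entropy-minimizing registration can be sketched in a few lines (this is a hedged, generic illustration with synthetic images, not the Syngo workstation's algorithm): when two images are correctly aligned their joint intensity histogram is more peaked, so its entropy is lower, and a registration search minimizes that entropy over candidate transforms.

```python
# Joint-histogram entropy as an alignment metric, with a 1-D translation search.
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
fixed = rng.random((128, 128))
fixed[40:90, 40:90] += 2.0                  # a bright structure visible in both "modalities"
moving = np.roll(fixed, shift=7, axis=1) + 0.1 * rng.random((128, 128))

# The entropy minimum should sit near the true 7-pixel shift.
shifts = range(-15, 16)
best = min(shifts, key=lambda s: joint_entropy(fixed, np.roll(moving, -s, axis=1)))
print("estimated shift:", best)
```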
ERIC Educational Resources Information Center
Brown, Frank L.; Jacobs, T. O.
The paper covers the performances, skills, and kinds of knowledge demanded of an infantry rifle squad leader to maintain an organized and effective fighting unit under campaign conditions and to set an example as a leader for his men. It covers personal hygiene and field sanitation, the maintenance of minimal fighting and existence loads, water…
Wootton, Richard; Vladzymyrskyy, Anton; Zolfo, Maria; Bonnardot, Laurent
2011-01-01
Telemedicine has been used for many years to support doctors in the developing world. Several networks provide services in different settings and in different ways. However, to draw conclusions about which telemedicine networks are successful requires a method of evaluating them. No general consensus or validated framework exists for this purpose. To define a basic method of performance measurement that can be used to improve and compare teleconsultation networks; to employ the proposed framework in an evaluation of three existing networks; to make recommendations about the future implementation and follow-up of such networks. Analysis based on the experience of three telemedicine networks (in operation for 7-10 years) that provide services to doctors in low-resource settings and which employ the same basic design. Although there are many possible indicators and metrics that might be relevant, five measures for each of the three user groups appear to be sufficient for the proposed framework. In addition, from the societal perspective, information about clinical- and cost-effectiveness is also required. The proposed performance measurement framework was applied to three mature telemedicine networks. Despite their differences in terms of activity, size and objectives, their performance in certain respects is very similar. For example, the time to first reply from an expert is about 24 hours for each network. Although all three networks had systems in place to collect data from the user perspective, none of them collected information about the coordinator's time required or about ease of system usage. They had only limited information about quality and cost. Measuring the performance of a telemedicine network is essential in understanding whether the network is working as intended and what effect it is having. Based on long-term field experience, the suggested framework is a practical tool that will permit organisations to assess the performance of their own networks and to improve them by comparison with others. All telemedicine systems should provide information about setup and running costs because cost-effectiveness is crucial for sustainability.
Wootton, Richard; Vladzymyrskyy, Anton; Zolfo, Maria; Bonnardot, Laurent
2011-01-01
Background Telemedicine has been used for many years to support doctors in the developing world. Several networks provide services in different settings and in different ways. However, to draw conclusions about which telemedicine networks are successful requires a method of evaluating them. No general consensus or validated framework exists for this purpose. Objective To define a basic method of performance measurement that can be used to improve and compare teleconsultation networks; to employ the proposed framework in an evaluation of three existing networks; to make recommendations about the future implementation and follow-up of such networks. Methods Analysis based on the experience of three telemedicine networks (in operation for 7–10 years) that provide services to doctors in low-resource settings and which employ the same basic design. Findings Although there are many possible indicators and metrics that might be relevant, five measures for each of the three user groups appear to be sufficient for the proposed framework. In addition, from the societal perspective, information about clinical- and cost-effectiveness is also required. The proposed performance measurement framework was applied to three mature telemedicine networks. Despite their differences in terms of activity, size and objectives, their performance in certain respects is very similar. For example, the time to first reply from an expert is about 24 hours for each network. Although all three networks had systems in place to collect data from the user perspective, none of them collected information about the coordinator's time required or about ease of system usage. They had only limited information about quality and cost. Conclusion Measuring the performance of a telemedicine network is essential in understanding whether the network is working as intended and what effect it is having. Based on long-term field experience, the suggested framework is a practical tool that will permit organisations to assess the performance of their own networks and to improve them by comparison with others. All telemedicine systems should provide information about setup and running costs because cost-effectiveness is crucial for sustainability. PMID:22162965
Application-Program-Installer Builder
NASA Technical Reports Server (NTRS)
Wolgast, Paul; Demore, Martha; Lowik, Paul
2007-01-01
A computer program builds application programming interfaces (APIs) and related software components for installing and uninstalling application programs in any of a variety of computers and operating systems that support the Java programming language in its binary form. This program is partly similar in function to commercial (e.g., Install-Shield) software. This program is intended to enable satisfaction of a quasi-industry-standard set of requirements for a set of APIs that would enable such installation and uninstallation and that would avoid the pitfalls that are commonly encountered during installation of software. The requirements include the following: 1) Properly detecting prerequisites to an application program before performing the installation; 2) Properly registering component requirements; 3) Correctly measuring the required hard-disk space, including accounting for prerequisite components that have already been installed; and 4) Correctly uninstalling an application program. Correct uninstallation includes (1) detecting whether any component of the program to be removed is required by another program, (2) not removing that component, and (3) deleting references to requirements of the to-be-removed program for components of other programs so that those components can be properly removed at a later time.
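The uninstallation requirement above is essentially reference counting over shared components. The sketch below illustrates that behaviour in Python rather than the Java described in the record; class and component names are invented for illustration:

```python
# A shared component is only removed once no installed program still requires it.
class InstallRegistry:
    def __init__(self):
        self.requirements = {}          # program name -> set of component names

    def install(self, program, components):
        self.requirements[program] = set(components)

    def uninstall(self, program):
        needed = self.requirements.pop(program, set())
        still_required = set().union(*self.requirements.values()) if self.requirements else set()
        removed = needed - still_required        # safe to delete from disk
        retained = needed & still_required       # required by another program; keep for later
        return removed, retained

reg = InstallRegistry()
reg.install("viewer", {"runtime", "codec"})
reg.install("editor", {"runtime", "spellcheck"})

print(reg.uninstall("viewer"))   # ({'codec'}, {'runtime'})  -- 'runtime' kept for editor
print(reg.uninstall("editor"))   # ({'runtime', 'spellcheck'}, set())  -- now removable
```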
NASA Astrophysics Data System (ADS)
Crowe, B.; Black, P.; Tauxe, J.; Yucel, V.; Rawlinson, S.; Colarusso, A.; DiSanza, F.
2001-12-01
The National Nuclear Security Administration, Nevada Operations Office (NNSA/NV) operates and maintains two active facilities on the Nevada Test Site (NTS) that dispose of Department of Energy (DOE) defense-generated low-level radioactive waste (LLW), mixed radioactive waste, and classified waste in shallow trenches, pits and large-diameter boreholes. The operation and maintenance of the LLW disposal sites are self-regulated under DOE Order 435.1, which requires review of a Performance Assessment for four performance objectives: 1) all pathways, 25 mrem/yr limit; 2) atmospheric pathways, 10 mrem/yr limit; 3) radon flux density of 20 pCi/m²/s; and 4) groundwater resource protection (Safe Drinking Water Act; 4 mrem/yr limit). The inadvertent human intruder is protected under a dual 500- and 100-mrem limit (acute and chronic exposure). In response to Defense Nuclear Facilities Safety Board Recommendation 94-2, a composite analysis is required that must examine all interacting sources for compliance against both 30 and 100 mrem/yr limits. A small component of classified transuranic waste is buried at intermediate depths in 3-meter diameter boreholes at the Area 5 LLW disposal facility and is assessed, through DOE agreement, against the requirements of the Environmental Protection Agency (EPA)'s 40 CFR 191. The hazardous components of mixed LLW are assessed against RCRA requirements. The NTS LLW sites fall directly under three sets of federal regulations, and the regulatory differences result not only in organizational challenges, but also in different decision objectives and technical paths to completion. The DOE regulations require deterministic analysis for a 1,000-year compliance assessment supplemented by probabilistic analysis under a long-term maintenance program. The EPA regulations for TRU waste are probabilistically based for a compliance interval of 10,000 years. Multiple steps in the assessments are strongly dependent on assumptions for long-term land use policies. Integrating the different requirements into coherent and consistent sets of conceptual models of the disposal setting, alternative scenarios, and system models of fate, transport and dose-based assessments is technically challenging. Environmental assessments for these sites must be broad-based and flexible to accommodate the multiple objectives.
Designing Incentives for Marine Corps Cyber Workforce Retention
2014-12-01
transformation, which Burke and Litwin (1992) describe as distinct sets of organizational dynamics that are required for genuine change in... information-security-analysts.htm. Burke, W. Warner, and George H. Litwin. 1992. “A Causal Model of Organizational Performance and Change.” Journal...
ERIC Educational Resources Information Center
Bowman, Richard F.
2017-01-01
Initial teacher licensing is intended to provide public assurance of core competence in classroom settings. Core competence's implicit vulnerability is mediocrity. In daily practice, many educators appear satisfied in reaching a merely acceptable level of performance, thus minimizing the period of effortful skill acquisition required to attain…
Ikeda, Hiroshi; Furukawa, Hisataka
2015-04-01
This study examined the interactive effect of management by group goals and job interdependence on employees' activities in terms of task and contextual performance. A survey was conducted among 140 Japanese employees. Results indicated that management by group goals was related only to contextual performance. Job interdependence, however, had a direct effect on both task and contextual performance. Moreover, moderated regression analyses revealed that for work groups requiring higher interdependence among employees, management by group goals had a positive relation to contextual performance but not to task performance. When interdependence was not necessarily required, however, management by group goals had no relation to contextual performance and even negatively impacted task performance. These results show that management by group goals affects task and contextual performance, and that this effect is moderated by job interdependence. This provides a theoretical extension as well as a practical application to the setting and management of group goals.
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
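A small worked example may help connect the heuristics above to the familiar Johnson's-criteria style of reasoning; the cycle thresholds below are the commonly quoted textbook values, the one-cycle-per-two-pixels (Nyquist) assumption and all sensor/target numbers are illustrative assumptions, and none of this is the paper's own quantification of information content:

```python
# Resolvable cycles across a target versus commonly quoted Johnson-style thresholds.
JOHNSON_CYCLES = {"detection": 1.0, "recognition": 4.0, "identification": 6.4}

def cycles_on_target(critical_dim_m, range_m, ifov_rad):
    """Resolvable cycles across the target's critical dimension, assuming one
    cycle per two pixels (Nyquist).  Inputs are illustrative only."""
    pixels = critical_dim_m / (range_m * ifov_rad)
    return pixels / 2.0

n = cycles_on_target(critical_dim_m=2.3, range_m=2000.0, ifov_rad=0.1e-3)
print(f"{n:.1f} cycles across target")
for task, needed in JOHNSON_CYCLES.items():
    print(f"  {task:14s}: {'yes' if n >= needed else 'no'} (needs {needed})")
```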
Conductor requirements for high-temperature superconducting utility power transformers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pleva, E. F.; Mehrotra, V.; Schwenterly, S W
High-temperature superconducting (HTS) coated conductors in utility power transformers must satisfy a set of operating requirements that are driven by two major considerations: HTS transformers must be economically competitive with conventional units, and the conductor must be robust enough to be used in a commercial manufacturing environment. The transformer design and manufacturing process will be described in order to highlight the various requirements that it imposes on the HTS conductor. Spreadsheet estimates of HTS transformer costs allow estimates of the conductor cost required for an HTS transformer to be competitive with a similarly performing conventional unit.
Large Deployable Reflector (LDR) feasibility study update
NASA Technical Reports Server (NTRS)
Alff, W. H.; Banderman, L. W.
1983-01-01
In 1982 a workshop was held to refine the science rationale for large deployable reflectors (LDR) and develop technology requirements that support the science rationale. At the end of the workshop, a set of LDR consensus systems requirements was established. The subject study was undertaken to update the initial LDR study using the new systems requirements. The study included mirror materials selection and configuration, thermal analysis, structural concept definition and analysis, dynamic control analysis and recommendations for further study. The primary emphasis was on the dynamic controls requirements and the sophistication of the controls system needed to meet LDR performance goals.
Verlinden, Nathan; Kruger, Nicholas; Carroll, Ailey; Trumbo, Tiffany
2015-01-01
Objective. To determine if the process-oriented guided inquiry learning (POGIL) teaching strategy improves student performance and engages higher-level thinking skills of first-year pharmacy students in an Introduction to Pharmaceutical Sciences course. Design. Overall examination scores and scores on questions categorized as requiring either higher-level or lower-level thinking skills were compared in the same course taught over 3 years using traditional lecture methods vs the POGIL strategy. Student perceptions of the latter teaching strategy were also evaluated. Assessment. Overall mean examination scores increased significantly when POGIL was implemented. Performance on questions requiring higher-level thinking skills was significantly higher, whereas performance on questions requiring lower-level thinking skills was unchanged when the POGIL strategy was used. Student feedback on use of this teaching strategy was positive. Conclusion. The use of the POGIL strategy increased student overall performance on examinations, improved higher-level thinking skills, and provided an interactive class setting. PMID:25741027
Tocco-Tussardi, I.; Presman, B.; Cherubino, M.; Garusi, C.; Bassetto, F.
2016-01-01
Summary: Post-burn contractures account for up to 50% of the workload of a plastic surgery team volunteering in developing nations. The best possible outcome most likely requires extensive surgery. However, extensive approaches such as microsurgery are generally discouraged in these settings. We report two successful cases of severe hand contractures reconstructed with free flaps on a surgical mission in Kenya. Microsurgery can be safely performed in the humanitarian setting by an integration of: personal skills; technical means; education of local personnel; follow-up services; and an effective network for communication. PMID:27857655
RAVE: Rapid Visualization Environment
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos
1994-01-01
Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.
LANDFIRE 2001 and 2008 Refresh Geographic Area Report--Pacific Southwest
Bastion, Henry; Long, Don; Lundberg, Brenda; Kost, Jay; Natharius, Jeffrey A.; Kreilick, Heather; Martin, Charley; Smail, Tobin; Napoli, James; Hann, Wendel
2011-01-01
In this report, we (1) address the background and provide details pertaining to why there are two Refresh data sets, (2) explain the requirements, planning, and procedures behind the completion and delivery of the updated products for each of the data product sets, (3) show and describe results, and (4) provide case studies illustrating the performance of LANDFIRE National, LANDFIRE 2001 Refresh and LANDFIRE 2008 Refresh (LF_1.1.0) data products on some example wildland fires.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
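A hedged sketch of the kind of screening such a tool performs (not TRAM itself; the variable names, failure criterion, and ranking statistic are all illustrative) is to rank Monte Carlo input variables by how differently they are distributed in failing versus passing runs:

```python
# Rank dispersed inputs by standardized mean difference between failing and passing runs.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
inputs = {
    "wind_speed":  rng.normal(0.0, 1.0, n),
    "mass_error":  rng.normal(0.0, 1.0, n),
    "sensor_bias": rng.normal(0.0, 1.0, n),
}
# Synthetic failure criterion, mostly driven by wind_speed (illustrative only).
failed = inputs["wind_speed"] + 0.2 * inputs["mass_error"] + rng.normal(0, 0.3, n) > 2.0

def driver_ranking(inputs, failed):
    scores = {}
    for name, x in inputs.items():
        # Standardized difference between the failing and passing populations.
        scores[name] = abs(x[failed].mean() - x[~failed].mean()) / x.std()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in driver_ranking(inputs, failed):
    print(f"{name:12s} {score:.2f}")        # wind_speed should rank first
```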
Does working memory load facilitate target detection?
Fruchtman-Steinbok, Tom; Kessler, Yoav
2016-02-01
Previous studies demonstrated that increasing working memory (WM) load delays performance of a concurrent task, by distracting attention and thus interfering with encoding and maintenance processes. The present study used a version of the change detection task with a target detection requirement during the retention interval. In contrast to the above prediction, target detection was faster following a larger set-size, specifically when presented shortly after the memory array (up to 400 ms). The effect of set-size on target detection was also evident when no memory retention was required. The set-size effect was also found using different modalities. Moreover, it was only observed when the memory array was presented simultaneously, but not sequentially. These results were explained by increased phasic alertness exerted by the larger visual display. The present study offers new evidence of ongoing attentional processes in the commonly-used change detection paradigm.
Earth resources mission performance studies. Volume 1: Requirements definition
NASA Technical Reports Server (NTRS)
1974-01-01
The need for a realistic set of earth resources collection requirements to test and maximize the data gathering capabilities of the EOS remote sensor systems is considered. The collection requirements will be derived from established user requirements. In order to confine and bound the requirements study, some baseline assumptions were established. These are: (1) image acquisition is confined to the contiguous United States, (2) the fundamental data users are select participating federal agencies, (3) the acquired data will be applied to generating information necessary or in support of existing federal agency charters, and (4) the most pressing or desired federal agency earth resources data requirements have been defined, suggested, or implied in current available literature.
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
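The automated model building and hyper-parameter selection described above can be sketched generically; the example below is written against scikit-learn rather than the AZOrange API (whose interfaces are not shown in the record), and the descriptor matrix and activity values are synthetic stand-ins:

```python
# Generic automated hyper-parameter selection for a QSAR-style regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(5)
X = rng.random((400, 20))                                    # stand-in molecular descriptors
y = X[:, 0] * 3.0 - X[:, 1] + 0.1 * rng.normal(size=400)     # stand-in activity values

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 8]},
    cv=5, scoring="r2",
)
search.fit(X_train, y_train)
print("selected hyper-parameters:", search.best_params_)
print("held-out R^2:", round(search.score(X_test, y_test), 3))
```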
Hysong, Sylvia J; Thomas, Candice L; Spitzmüller, Christiane; Amspoker, Amber B; Woodard, LeChauncy; Modi, Varsha; Naik, Aanand D
2016-01-15
Team coordination within clinical care settings is a critical component of effective patient care. Less is known about the extent, effectiveness, and impact of coordination activities among professionals within VA Patient-Aligned Care Teams (PACTs). This study will address these gaps by describing the specific, fundamental tasks and practices involved in PACT coordination, their impact on performance measures, and the role of coordination task complexity. First, we will use a web-based survey of coordination practices among 1600 PACTs in the national VHA. Survey findings will characterize PACT coordination practices and assess their association with clinical performance measures. Functional job analysis, using 6-8 subject matter experts who are 3rd and 4th year residents in VA Primary Care rotations, will be utilized to identify the tasks involved in completing clinical performance measures to standard. From this, expert ratings of coordination complexity will be used to determine the level of coordinative complexity required for each of the clinical performance measures drawn from the VA External Peer Review Program (EPRP). For objective 3, data collected from the first two methods will evaluate the effect of clinical complexity on the relationships between measures of PACT coordination and their ratings on the clinical performance measures. Results from this study will support successful implementation of coordinated team-based work in clinical settings by providing knowledge regarding which aspects of care require the most complex levels of coordination and how specific coordination practices impact clinical performance.
Guastello, Stephen J; Reiter, Katherine; Shircel, Anton; Timm, Paul; Malon, Matthew; Fabisch, Megan
2014-07-01
This study examined the relationship between performance variability and actual performance of financial decision makers who were working under experimental conditions of increasing workload and fatigue. The rescaled range statistic, also known as the Hurst exponent (H), was used as an index of variability. Although H is defined as having a range between 0 and 1, 45% of the H values from the 172 time series generated by undergraduates were negative. Participants in the study chose the optimum investment out of sets of 3 to 5 options that were presented in a series of 350 displays. The sets of options varied in both the complexity of the options and the number of options under simultaneous consideration. One experimental condition required participants to make their choices within 15 sec, and the other condition required them to choose within 7.5 sec. Results showed that (a) negative H was possible and not a result of psychometric error; (b) negative H was associated with negative autocorrelations in a time series; (c) H was the best predictor of performance of the variables studied; (d) three other significant predictors were scores on an anagrams test and ratings of physical demands and performance demands; and (e) persistence as evidenced by the autocorrelations was associated with ratings of greater time pressure. It was concluded, furthermore, that persistence and overall performance were correlated, that 'healthy' variability only exists within a limited range, and that other individual differences related to ability and resistance to stress or fatigue are also involved in the prediction of performance.
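For readers unfamiliar with the rescaled range statistic, a textbook estimate of H takes the slope of log(R/S) against log(window size); the sketch below is that standard procedure applied to synthetic noise, not the scaling analysis used in the study:

```python
# Minimal rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            seg = series[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(6)
white_noise = rng.normal(size=350)               # 350 trials, as in the task above
print(round(hurst_rs(white_noise), 2))           # roughly 0.5; short series bias it upward
```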
Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Oostdyk, Rebecca
2010-01-01
The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.
Application-Controlled Demand Paging for Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)
1997-01-01
In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
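The sub-cube storage and application-controlled loading described above can be sketched with an in-memory stand-in for on-disk bricks (a hedged illustration only; the class, brick size, and traversal are invented, and real systems would read bricks from files rather than a dictionary):

```python
# Demand-load fixed-size bricks of a 3-D field, touching only the bricks a traversal visits.
import numpy as np

class BrickedVolume:
    def __init__(self, volume, brick=32):
        self.brick = brick
        self.store = {}                   # stand-in for brick files on disk
        b = brick
        for i in range(0, volume.shape[0], b):
            for j in range(0, volume.shape[1], b):
                for k in range(0, volume.shape[2], b):
                    self.store[(i // b, j // b, k // b)] = volume[i:i+b, j:j+b, k:k+b].copy()
        self.cache = {}
        self.reads = 0

    def sample(self, i, j, k):
        key = (i // self.brick, j // self.brick, k // self.brick)
        if key not in self.cache:         # demand-load the brick on first touch
            self.cache[key] = self.store[key]
            self.reads += 1
        b = self.brick
        return self.cache[key][i % b, j % b, k % b]

vol = np.arange(128**3, dtype=np.float32).reshape(128, 128, 128)
paged = BrickedVolume(vol)
for t in range(0, 128, 4):                # a sparse traversal, e.g. along a streamline
    paged.sample(t, 64, 64)
print("bricks read:", paged.reads, "of", len(paged.store))   # far fewer than all 64
```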
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
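Why a level-set description handles topology changes "for free" can be shown with a toy example (a hedged illustration only; the two-disc shape function below is not the study's eddy-promoter parameterization): the inclusion is wherever the level-set function is negative, so splitting one body into two requires no special logic, only different parameter values.

```python
# One body or two, from the same level-set description.
import numpy as np
from scipy import ndimage

x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-1, 1, 200))

def phi(separation, radius=0.35):
    """Signed-distance-like function for two discs centred separation/2 apart."""
    d1 = np.hypot(x - separation / 2, y) - radius
    d2 = np.hypot(x + separation / 2, y) - radius
    return np.minimum(d1, d2)          # union of the two discs

for sep in (0.3, 1.2):                 # overlapping discs vs. separated discs
    inside = phi(sep) < 0
    _, n_bodies = ndimage.label(inside)
    print(f"separation {sep}: {n_bodies} body/bodies")
# separation 0.3 -> 1 body; separation 1.2 -> 2 bodies, with no change to the description.
```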
Current trends in breast reconstruction: survey of American Society of Plastic Surgeons 2010.
Gurunluoglu, Raffi; Gurunluoglu, Aslin; Williams, Susan A; Tebockhorst, Seth
2013-01-01
We conducted a retrospective survey of American Society of Plastic Surgeons members to ascertain the current trends in breast reconstruction (BR). Surveys for the year 2010 were sent by e-mail to 2250 active American Society of Plastic Surgeons members, with a cover letter including a Survey Monkey link. In all, 489 surveys (a response rate of 21.7%) were returned. Three hundred fifty-eight surveys from respondents performing BR in their practices were included in the study. The survey included questions on surgeon demographics, practice characteristics, BR after mastectomy, number of BRs per year, type and timing of BR, use of acellular dermal matrix, reconstructive choices in the setting of previous irradiation and in patients requiring postmastectomy radiation therapy, timing of contralateral breast surgery, fat grafting, techniques used for nipple-areola reconstruction, the complications, and physician satisfaction and physician-reported patient satisfaction. Returned responses were tabulated and assessed. Sixteen percent of BRs were performed after prophylactic mastectomy. In all, 81.2% of plastic surgeons predominantly performed immediate BR. In patients requiring postmastectomy radiation therapy, 81% did not perform immediate BR. Regardless of practice setting and laterality of reconstruction, 82.7% of respondents predominantly performed implant-based BR. Half of the plastic surgeons performing prosthetic BR used acellular dermal matrix. Only 14% of plastic surgeons predominantly performed autologous BR. Surgeons in solo, plastic surgery group practices, and multispecialty group practices preferred implant-based BR for both unilateral and bilateral cases more frequently than those in academic practices (P < 0.05). Overall, plastic surgeons in academic settings preferred autologous BR more frequently than those in other practice locations (P < 0.05). Of total respondents, 64.8% did not perform microsurgical BR at all; 28% reported performing deep inferior epigastric perforator flap BR. Pedicled transverse rectus abdominis myocutaneous flap was the most often used option for unilateral autologous reconstruction, whereas deep inferior epigastric perforator flap was the most commonly used technique for bilateral BR. The overall complication rate reported by respondents was 11%. The survey provides an insight into the current trends in BR practice with respect to surgeon and practice setting characteristics. Although not necessarily the correct best practices, the survey does demonstrate a likely portrayal of what is being practiced in the United States in the area of BR.
Entrepreneurial Leadership Practices and School Innovativeness
ERIC Educational Resources Information Center
Akmaliah, Zaidatol; Pihie, Lope; Asimiran, Soaib; Bagheri, Afsaneh
2014-01-01
Entrepreneurial leadership, as a distinctive type of leadership required for dealing with challenges and crises of current organizational settings, has increasingly been applied to improve school performance. However, there is limited research on the impact of school leaders' entrepreneurial leadership practices on school innovativeness. The main…
30 CFR 810.1 - Scope.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Scope. 810.1 Section 810.1 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR PERMANENT PROGRAM... subchapter sets forth the minimum performance standards and design requirements to be adopted and implemented...
40 CFR 160.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (b) The written standard operating procedures required under § 160.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 160.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (b) The written standard operating procedures required under § 160.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 160.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (b) The written standard operating procedures required under § 160.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 160.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (b) The written standard operating procedures required under § 160.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 160.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (b) The written standard operating procedures required under § 160.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
Row-crop planter requirements to support variable-rate seeding of maize
USDA-ARS?s Scientific Manuscript database
Current planting technology possesses the ability to increase crop productivity and improve field efficiency by precisely metering and placing crop seeds. Planter performance depends on using the correct planter and technology setup which consists of determining optimal settings for different planti...
42 CFR 493.1 - Basis and scope.
Code of Federal Regulations, 2010 CFR
2010-10-01
... AND CERTIFICATION LABORATORY REQUIREMENTS General Provisions § 493.1 Basis and scope. This part sets forth the conditions that all laboratories must meet to be certified to perform testing on human specimens under the Clinical Laboratory Improvement Amendments of 1988 (CLIA). It implements sections 1861...
42 CFR 493.1 - Basis and scope.
Code of Federal Regulations, 2011 CFR
2011-10-01
... AND CERTIFICATION LABORATORY REQUIREMENTS General Provisions § 493.1 Basis and scope. This part sets forth the conditions that all laboratories must meet to be certified to perform testing on human specimens under the Clinical Laboratory Improvement Amendments of 1988 (CLIA). It implements sections 1861...
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
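A minimal sketch of the transform-plus-bit-plane idea described above, using an 8x8 block DCT in place of the MLT/DCT hybrid; the block size, bit depth, and random test frame are assumptions for illustration, not the flight algorithm.

```python
# Block DCT followed by bit-plane extraction; truncating the plane list sets the rate.
import numpy as np
from scipy.fft import dctn

def block_dct(image, block=8):
    """Orthonormal 2-D DCT applied independently to each block x block tile."""
    out = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            out[r:r + block, c:c + block] = dctn(image[r:r + block, c:c + block], norm="ortho")
    return out

def bit_planes(coeffs, n_planes=12):
    """Yield magnitude bit planes of the rounded coefficients, most significant first."""
    mags = np.abs(np.rint(coeffs)).astype(np.int64)
    for p in range(n_planes - 1, -1, -1):
        yield ((mags >> p) & 1).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 4096, (64, 64)).astype(float)  # 12-bit test frame
planes = list(bit_planes(block_dct(frame)))
# Sending only the first k planes gives an embedded stream; more planes refine the image.
```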
Human Research Program Requirements Document (Revision C)
NASA Technical Reports Server (NTRS)
Vargas, Paul R.
2009-01-01
The purpose of this document is to define, document, and allocate the Human Research Program (HRP) requirements to the HRP Program Elements. It establishes the flow-down of requirements from the Exploration Systems Mission Directorate (ESMD) and the Office of the Chief Health and Medical Officer (OCHMO) to the various Program Elements of the HRP so that human research and technology countermeasure investments are made so as to ensure the delivery of countermeasures and technologies that satisfy ESMD's and OCHMO's exploration mission requirements. Requirements driving the HRP work and deliverables are derived from the exploration architecture, as well as Agency standards regarding the maintenance of human health and performance. Agency human health and performance standards will define acceptable risk for each type and duration of exploration mission. It is critical to have the best available scientific and clinical evidence in setting and validating these standards. In addition, it is imperative that the best available evidence on preventing and mitigating human health and performance risks is incorporated into exploration mission and vehicle designs. These elements form the basis of the HRP research and technology development requirements and highlight the importance of HRP investments in enabling NASA's exploration missions. This PRD defines the requirements of the HRP, which comprises the following major Program Elements: Behavioral Health and Performance (BHP), Exploration Medical Capability (ExMC), Human Health Countermeasures (HHC), ISS Medical Project (ISSMP), Space Human Factors and Habitability (SHFH), and Space Radiation (SR).
Portable, one-step, and rapid GMR biosensor platform with smartphone interface.
Choi, Joohong; Gani, Adi Wijaya; Bechstein, Daniel J B; Lee, Jung-Rok; Utz, Paul J; Wang, Shan X
2016-11-15
Quantitative immunoassay tests in clinical laboratories require trained technicians, take hours to complete with multiple steps, and the instruments used are generally immobile; patient samples have to be sent to the labs for analysis. This prevents quantitative immunoassay tests from being performed outside laboratory settings. A portable, quantitative immunoassay device would be valuable in rural and resource-limited areas, where access to healthcare is scarce or far away. We have invented the Eigen Diagnosis Platform (EDP), a portable quantitative immunoassay platform based on Giant Magnetoresistance (GMR) biosensor technology. The platform does not require a trained technician to operate and requires only one-step user involvement. It displays quantitative results in less than 15 min after sample insertion, and each test costs less than US$4. The GMR biosensor employed in EDP is capable of detecting multiple biomarkers in one test, enabling a wide array of immune diagnostics to be performed simultaneously. In this paper, we describe the design of EDP and demonstrate its capability. Multiplexed assay of human immunoglobulin G and M (IgG and IgM) antibodies with EDP achieves sensitivities down to 0.07 and 0.33 nanomolar, respectively. The platform will allow lab testing to be performed in remote areas and open up applications of immunoassay testing in other non-clinical settings, such as home, school, and office. Copyright © 2016 Elsevier B.V. All rights reserved.
Inherent Safety Characteristics of Advanced Fast Reactors
NASA Astrophysics Data System (ADS)
Bochkarev, A. S.; Korsun, A. S.; Kharitonov, V. S.; Alekseev, P. N.
2017-01-01
The study presents SFR transient performance for ULOF events initiated by pump trip and by pump seizure, with simultaneous failure of all shutdown systems in both cases. The most severe cases, leading to pin cladding rupture and possible sodium boiling, are demonstrated. The impact of various design features on SFR inherent safety performance during ULOF events was analysed; a decrease in the hydraulic resistance of the primary loop and an increase in the primary pump coast-down time were investigated. The analysis resulted in a set of recommendations on varying parameters for the purpose of enhancing the inherent safety performance of the SFR. In order to prevent safety-barrier rupture during ULOF events, a set of thermal-hydraulic criteria defining the ULOF transient dynamics, together with requirements on these criteria, was recommended based on the results achieved: the dip of the primary sodium flow below the natural-circulation asymptotic level and the natural-circulation rise time.
Irregular Applications: Architectures & Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, John T.; Villa, Oreste; Tumeo, Antonino
Irregular applications are characterized by irregular data structures, control, and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists, and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Initial data sets for the Schwarzschild spacetime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Lobo, Alfonso Garcia-Parrado; Kroon, Juan A. Valiente; School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS
2007-01-15
A characterization of initial data sets for the Schwarzschild spacetime is provided. This characterization is obtained by performing a 3+1 decomposition of a certain invariant characterization of the Schwarzschild spacetime given in terms of concomitants of the Weyl tensor. This procedure renders a set of necessary conditions--which can be written in terms of the electric and magnetic parts of the Weyl tensor and their concomitants--for an initial data set to be a Schwarzschild initial data set. Our approach also provides a formula for a static Killing initial data set candidate--a KID candidate. Sufficient conditions for an initial data set to be a Schwarzschild initial data set are obtained by supplementing the necessary conditions with the requirement that the initial data set possesses a stationary Killing initial data set of the form given by our KID candidate. Thus, we obtain an algorithmic procedure of checking whether a given initial data set is Schwarzschildean or not.
A canonical correlation neural network for multicollinearity and functional data.
Gou, Zhenkun; Fyfe, Colin
2004-03-01
We review a recent neural implementation of Canonical Correlation Analysis and show, using ideas suggested by Ridge Regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which not only performs as well on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network is capable of performing Partial Least Squares regression (at one extreme) to Canonical Correlation Analysis (at the other) and every intermediate operation between the two. On multicollinear data, the parameter setting is shown to be important, but on more general data no particular parameter setting is required. Finally, we develop a second penalty term which acts on such data as a smoother, in that the resulting weight vectors are much smoother and more interpretable than the weights without the robustification term. We illustrate our algorithms on both artificial and real data.
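The single-parameter interpolation between Partial Least Squares and CCA can be illustrated with a closed-form regularized CCA rather than the paper's neural implementation; the ridge mixing parameter `lam` and the synthetic, deliberately collinear data below are assumptions for the sketch.

```python
# Regularized CCA: lam = 0 gives plain CCA, lam = 1 gives a PLS-like covariance criterion.
import numpy as np

def regularized_cca(X, Y, lam=0.5):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Rxx = (1.0 - lam) * Cxx + lam * np.eye(X.shape[1])   # ridge-smoothed covariances
    Ryy = (1.0 - lam) * Cyy + lam * np.eye(Y.shape[1])
    # Leading canonical direction on the X side from the associated eigenproblem.
    M = np.linalg.solve(Rxx, Cxy) @ np.linalg.solve(Ryy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    wx = np.real(vecs[:, np.argmax(np.real(vals))])
    wy = np.linalg.solve(Ryy, Cxy.T @ wx)
    return wx / np.linalg.norm(wx), wy / np.linalg.norm(wy)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Y = X[:, :3] + 0.1 * rng.normal(size=(200, 3))           # collinear with X by construction
wx, wy = regularized_cca(X, Y, lam=0.3)
```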
Error mapping controller: a closed loop neuroprosthesis controlled by artificial neural networks.
Pedrocchi, Alessandra; Ferrante, Simona; De Momi, Elena; Ferrigno, Giancarlo
2006-10-09
The design of an optimal neuroprosthesis controller and its clinical use present several challenges. First, the physiological system is characterized by highly inter-subject varying properties and by non-stationary behaviour over time, due to conditioning level and fatigue. Second, ease of use in routine clinical practice is essential, whereas current approaches require experienced operators. Therefore, feedback controllers that avoid long setting procedures are required. The error mapping controller (EMC) proposed here uses artificial neural networks (ANNs) both for the design of an inverse model and of a feedback controller. A neuromuscular model is used to validate the performance of the controllers in simulations. The EMC performance is compared to a Proportional Integral Derivative (PID) controller included in an anti-windup scheme (called PIDAW) and to a controller with an ANN as inverse model and a PID in the feedback loop (NEUROPID). In addition, tests of the EMC robustness in response to variations of the plant parameters and to mechanical disturbances are carried out. The EMC shows improvements with respect to the other controllers in tracking accuracy, capability to prolong exercise while managing fatigue, robustness to parameter variations, and resistance to mechanical disturbances. Unlike the other controllers, the EMC is capable of balancing between tracking accuracy and mapping of fatigue during the exercise. In this way, it avoids overstressing muscles and allows a considerable prolongation of the movement. The collection of the training sets does not require any particular experimental setting and can be introduced into routine clinical practice.
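For reference, a minimal sketch of a discrete PID with anti-windup clamping in the spirit of the PIDAW baseline mentioned above; the class name, gains, sampling period, and saturation limits are illustrative assumptions, not the controller used in the study.

```python
# Discrete PID with conditional integration as a simple anti-windup mechanism.
class PIDAW:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        derivative = (err - self.prev_err) / self.dt
        u_unsat = self.kp * err + self.ki * self.integral + self.kd * derivative
        u = min(max(u_unsat, self.u_min), self.u_max)
        # Only keep integrating when the output is unsaturated, or when the error
        # would drive the output back toward the admissible range.
        if u == u_unsat or (u_unsat > self.u_max and err < 0) or (u_unsat < self.u_min and err > 0):
            self.integral += err * self.dt
        self.prev_err = err
        return u

controller = PIDAW(kp=2.0, ki=1.0, kd=0.1, dt=0.01, u_min=0.0, u_max=1.0)
u = controller.step(err=0.2)   # one control update per sampling period
```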
Free DICOM de-identification tools in clinical research: functioning and safety of patient privacy.
Aryanto, K Y E; Oudkerk, M; van Ooijen, P M A
2015-12-01
To compare non-commercial DICOM toolkits for their de-identification ability in removing a patient's personal health information (PHI) from a DICOM header. Ten DICOM toolkits were selected for de-identification tests. Tests were performed by using the system's default de-identification profile and, subsequently, the tools' best adjusted settings. We aimed to eliminate fifty elements considered to contain identifying patient information. The tools were also examined for their respective methods of customization. Only one tool was able to de-identify all required elements with the default setting. Not all of the toolkits provide a customizable de-identification profile. Six tools allowed changes by selecting the provided profiles, giving input through a graphical user interface (GUI) or configuration text file, or providing the appropriate command-line arguments. Using adjusted settings, four of those six toolkits were able to perform full de-identification. Only five tools could properly de-identify the defined DICOM elements, and in four cases, only after careful customization. Therefore, free DICOM toolkits should be used with extreme care to prevent the risk of disclosing PHI, especially when using the default configuration. In case optimal security is required, one of the five toolkits is proposed. • Free DICOM toolkits should be carefully used to prevent patient identity disclosure. • Each DICOM tool produces its own specific outcomes from the de-identification process. • In case optimal security is required, using one DICOM toolkit is proposed.
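A hedged sketch of header de-identification with pydicom (not one of the ten toolkits evaluated in the study); the short keyword list is an illustrative subset, whereas the study assessed fifty elements.

```python
# Blank a handful of PHI-bearing elements and strip private tags with pydicom.
import pydicom

PHI_KEYWORDS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
                "ReferringPhysicianName", "InstitutionName"]

def deidentify(path_in, path_out):
    ds = pydicom.dcmread(path_in)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:
            ds.data_element(keyword).value = ""   # blank rather than delete, keeping a valid header
    ds.remove_private_tags()                      # private tags frequently carry PHI as well
    ds.save_as(path_out)

# deidentify("study.dcm", "study_deid.dcm")
```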
Error mapping controller: a closed loop neuroprosthesis controlled by artificial neural networks
Pedrocchi, Alessandra; Ferrante, Simona; De Momi, Elena; Ferrigno, Giancarlo
2006-01-01
Background The design of an optimal neuroprosthesis controller and its clinical use present several challenges. First, the physiological system is characterized by highly inter-subject varying properties and by non-stationary behaviour over time, due to conditioning level and fatigue. Second, ease of use in routine clinical practice is essential, whereas current approaches require experienced operators. Therefore, feedback controllers that avoid long setting procedures are required. Methods The error mapping controller (EMC) proposed here uses artificial neural networks (ANNs) both for the design of an inverse model and of a feedback controller. A neuromuscular model is used to validate the performance of the controllers in simulations. The EMC performance is compared to a Proportional Integral Derivative (PID) controller included in an anti-windup scheme (called PIDAW) and to a controller with an ANN as inverse model and a PID in the feedback loop (NEUROPID). In addition, tests of the EMC robustness in response to variations of the plant parameters and to mechanical disturbances are carried out. Results The EMC shows improvements with respect to the other controllers in tracking accuracy, capability to prolong exercise while managing fatigue, robustness to parameter variations, and resistance to mechanical disturbances. Conclusion Unlike the other controllers, the EMC is capable of balancing between tracking accuracy and mapping of fatigue during the exercise. In this way, it avoids overstressing muscles and allows a considerable prolongation of the movement. The collection of the training sets does not require any particular experimental setting and can be introduced into routine clinical practice. PMID:17029636
Incorporating Research Findings into Standards and Requirements for Space Medicine
NASA Technical Reports Server (NTRS)
Duncan, J. Michael
2006-01-01
The Vision for Exploration has been the catalyst for NASA to refocus its life sciences research. In the future, life sciences research funded by NASA will be focused on answering questions that directly impact setting physiological standards and developing effective countermeasures to the undesirable physiological and psychological effects of spaceflight for maintaining the health of the human system. This, in turn, will contribute to the success of exploration class missions. We will show how research will impact setting physiologic standards, such as exposure limits, outcome limits, and accepted performance ranges. We will give examples of how a physiologic standard can eventually be translated into an operational requirement, then a functional requirement, and eventually spaceflight hardware or procedures. This knowledge will be important to the space medicine community as well as to vehicle contractors who, for the first time, must now consider the human system in developing and constructing a vehicle that can achieve the goal of success.
NASA Technical Reports Server (NTRS)
Mckendry, M. S.
1985-01-01
The notion of 'atomic actions' has been considered in recent work on data integrity and reliability. It has been found that the standard database operations of 'read' and 'write' carry with them severe performance limitations. For this reason, systems are now being designed in which actions operate on 'objects' through operations with more-or-less arbitrary semantics. An object (i.e., an instance of an abstract data type) comprises data, a set of operations (procedures) to manipulate the data, and a set of invariants. An 'action' is a unit of work. It appears to be primitive to its surrounding environment, and 'atomic' to other actions. Attention is given to the conventional model of nested actions, ordering requirements, the maximum possible visibility (full visibility) for items which must be controlled by ordering constraints, item management paradigms, and requirements for blocking mechanisms which provide the required visibility.
14 CFR 234.1 - Purpose.
Code of Federal Regulations, 2013 CFR
2013-01-01
... REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.1 Purpose. The purpose of this part is to set... system vendors in computerized form, except as otherwise provided, so that information on air carriers' quality of service can be made available to consumers of air transportation. This part also requires that...
14 CFR 234.1 - Purpose.
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.1 Purpose. The purpose of this part is to set... system vendors in computerized form, except as otherwise provided, so that information on air carriers' quality of service can be made available to consumers of air transportation. This part also requires that...
14 CFR 234.1 - Purpose.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.1 Purpose. The purpose of this part is to set... system vendors in computerized form, except as otherwise provided, so that information on air carriers' quality of service can be made available to consumers of air transportation. This part also requires that...
14 CFR 234.1 - Purpose.
Code of Federal Regulations, 2012 CFR
2012-01-01
... REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.1 Purpose. The purpose of this part is to set... system vendors in computerized form, except as otherwise provided, so that information on air carriers' quality of service can be made available to consumers of air transportation. This part also requires that...
48 CFR 242.002 - Interagency agreements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Interagency agreements... agreements. (b)(i) DoD requires reimbursement, at a rate set by the Under Secretary of Defense (Comptroller... administration, and audit services provided under a no-charge reciprocal agreement; (B) Services performed under...
40 CFR 792.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... standardized. (b) The written standard operating procedures required under § 792.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 792.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standardized. (b) The written standard operating procedures required under § 792.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
21 CFR 58.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... standardized. (b) The written standard operating procedures required under § 58.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 792.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standardized. (b) The written standard operating procedures required under § 792.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
21 CFR 58.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... standardized. (b) The written standard operating procedures required under § 58.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
21 CFR 58.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... standardized. (b) The written standard operating procedures required under § 58.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
21 CFR 58.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... standardized. (b) The written standard operating procedures required under § 58.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 792.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... standardized. (b) The written standard operating procedures required under § 792.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
40 CFR 792.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... standardized. (b) The written standard operating procedures required under § 792.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
21 CFR 58.63 - Maintenance and calibration of equipment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... standardized. (b) The written standard operating procedures required under § 58.81(b)(11) shall set forth in... maintenance operations were routine and followed the written standard operating procedures. Written records... operating procedures shall designate the person responsible for the performance of each operation. (c...
Capsule Performance Optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O L; MacGowan, B J; Haan, S W
2009-10-13
A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Capsule performance optimization in the national ignition campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; MacGowan, B. J.; Haan, S. W.; Edwards, J.
2010-08-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [1] to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Towards operational multisensor registration
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.
1991-01-01
To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.
Interchangeable end effector tools utilized on the protoflight manipulator arm
NASA Technical Reports Server (NTRS)
1987-01-01
A subset of teleoperator end effector tools was designed, fabricated, delivered, and successfully demonstrated on the Marshall Space Flight Center (MSFC) protoflight manipulator arm (PFMA). The tools delivered included a rotary power tool with interchangeable collets and two fluid coupling mate/demate tools, one for a Fairchild coupling and the other for a Purolator coupling. An electrical interface connector was also provided for the rotary power tool. A tool set for performing on-orbit satellite maintenance, from which the subset was selected, was identified and conceptually designed. Maintenance requirements were synthesized, evaluated, and prioritized to develop design requirements for a set of end effector tools representative of those needed to provide on-orbit maintenance of satellites to be flown in the 1986 to 2000 timeframe.
Risthaus, Tobias; Grimme, Stefan
2013-03-12
A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the largeness of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.
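For orientation, the nonadditive Axilrod-Teller-Muto three-body term mentioned above has the form E = C9 (3 cos a cos b cos c + 1) / (r_ab r_bc r_ca)^3. The sketch below evaluates it for a single atom triple with an illustrative C9 and no damping function, unlike the damped D3-type corrections used in the paper.

```python
# Undamped Axilrod-Teller-Muto three-body dispersion energy for one atom triple.
import numpy as np

def atm_energy(ra, rb, rc, c9):
    dab = np.linalg.norm(rb - ra)
    dbc = np.linalg.norm(rc - rb)
    dca = np.linalg.norm(ra - rc)
    # Interior angles of the triangle from the law of cosines.
    cos_a = (dab**2 + dca**2 - dbc**2) / (2.0 * dab * dca)
    cos_b = (dab**2 + dbc**2 - dca**2) / (2.0 * dab * dbc)
    cos_c = (dbc**2 + dca**2 - dab**2) / (2.0 * dbc * dca)
    return c9 * (3.0 * cos_a * cos_b * cos_c + 1.0) / (dab * dbc * dca) ** 3

E3 = atm_energy(np.zeros(3), np.array([0.0, 0.0, 3.0]), np.array([0.0, 3.0, 0.0]), c9=100.0)
```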
Task-set inertia and memory-consolidation bottleneck in dual tasks.
Koch, Iring; Rumiati, Raffaella I
2006-11-01
Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.
Identification of Technology Terms in Patents (Open Access, Published Version)
2014-05-31
large set of human-annotated examples of the target class(es) along with their textual contexts to serve as training examples for generating a machine...perform the equivalent function in German and Chinese. 2.2. Manual annotation of terms. Supervised learning requires a gold set of manually anno ...(Npr, prev Jpr, prev J). These were intended to capture, for example, the verb (and any prepositions/articles) for which the term is the object.
Set-Based Approach to Design under Uncertainty and Applications to Shaping a Hydrofoil
2016-01-01
given requirements. This notion of set-based design was pioneered by Toyota and adopted by the U.S. Navy [1]. It responds to most real-world design...in such a way that all desired shape variations are allowed both on the suction and pressure side. Figure 2 gives a schematic representation of the...of the hydrofoil. The control points of the pressure side have been changed in different ways to ensure the overall hydrodynamic performance
Future thinking improves prospective memory performance and plan enactment in older adults.
Altgassen, Mareike; Rendell, Peter G; Bernhard, Anka; Henry, Julie D; Bailey, Phoebe E; Phillips, Louise H; Kliegel, Matthias
2015-01-01
Efficient intention formation might improve prospective memory by reducing the need for resource-demanding strategic processes during the delayed performance interval. The present study set out to test this assumption and provides the first empirical assessment of whether imagining a future action improves prospective memory performance equivalently at different stages of the adult lifespan. Thus, younger (n = 40) and older (n = 40) adults were asked to complete the Dresden Breakfast Task, which required them to prepare breakfast in accordance with a set of rules and time restrictions. All participants began by generating a plan for later enactment; however, after making this plan, half of the participants were required to imagine themselves completing the task in the future (future thinking condition), while the other half received standard instructions (control condition). As expected, overall younger adults outperformed older adults. Moreover, both older and younger adults benefited equally from future thinking instructions, as reflected in a higher proportion of prospective memory responses and more accurate plan execution. Thus, for both younger and older adults, imagining the specific visual-spatial context in which an intention will later be executed may serve as an easy-to-implement strategy that enhances prospective memory function in everyday life.
Groundwater Remediation using Bayesian Information-Gap Decision Theory
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2016-12-01
Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are always simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. The Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate the shortcomings of the models and probabilistic decision analyses by leveraging a non-probabilistic decision theory - information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?". The decision that allows these sets to be the largest is considered to be the best option. In this way, BIGDT enables robust decision support for groundwater remediation problems. Here we apply BIGDT in a representative groundwater remediation scenario where different options for hydraulic containment and pump & treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems but are applicable to real-world problems such as LANL site contamination. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).
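A toy sketch of the info-gap robustness question posed above ("how large can the uncertainty sets be before a design fails?"); the transport model, uncertainty structure, and cleanup threshold are invented placeholders, and the sketch is in Python rather than the Julia/MADS implementation referenced in the abstract.

```python
# Info-gap robustness: the largest uncertainty horizon h that still meets the requirement.
import numpy as np

def worst_case_concentration(pump_rate, h, nominal_k=1.0, n_samples=2000, seed=0):
    """Worst outcome over an interval uncertainty set of half-width h around nominal_k."""
    rng = np.random.default_rng(seed)
    k = nominal_k + h * (2.0 * rng.random(n_samples) - 1.0)
    conc = 10.0 * np.exp(-np.clip(k, 1e-6, None) * pump_rate)   # toy transport model
    return conc.max()

def robustness(pump_rate, threshold=1.0, h_grid=np.linspace(0.0, 0.99, 200)):
    """Largest h for which even the worst case still satisfies the cleanup requirement."""
    feasible = [h for h in h_grid if worst_case_concentration(pump_rate, h) <= threshold]
    return max(feasible) if feasible else 0.0

# The remediation design with the larger robustness is preferred under info-gap reasoning.
print(robustness(pump_rate=2.0), robustness(pump_rate=3.0))
```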
Couto, Thomaz Bittencourt; Kerrey, Benjamin T; Taylor, Regina G; FitzGerald, Michael; Geis, Gary L
2015-04-01
Pediatric emergencies require effective teamwork. These skills are developed and demonstrated in actual emergencies and in simulated environments, including simulation centers (in center) and the real care environment (in situ). Our aims were to compare teamwork performance across these settings and to identify perceived educational strengths and weaknesses between simulated settings. We hypothesized that teamwork performance in actual emergencies and in situ simulations would be higher than for in-center simulations. A retrospective, video-based assessment of teamwork was performed in an academic, pediatric level 1 trauma center, using the Team Emergency Assessment Measure (TEAM) tool (range, 0-44) among emergency department providers (physicians, nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists). A survey-based, cross-sectional assessment was conducted to determine provider perceptions regarding simulation training. One hundred thirty-two videos, 44 from each setting, were reviewed. Mean total TEAM scores were similar and high in all settings (31.2 actual, 31.1 in situ, and 32.3 in-center, P = 0.39). Of 236 providers, 154 (65%) responded to the survey. For teamwork training, in situ simulation was considered more realistic (59% vs. 10%) and more effective (45% vs. 15%) than in-center simulation. In a video-based study in an academic pediatric institution, ratings of teamwork were relatively high among actual resuscitations and 2 simulation settings, substantiating the influence of simulation-based training on instilling a culture of communication and teamwork. On the basis of survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ program.
COLLABORATE©, Part IV: Ramping Up Competency-Based Performance Management.
Treiger, Teresa M; Fink-Samnick, Ellen
The purpose of this fourth part of the COLLABORATE© article series is to provide an expansion and application of previously presented concepts pertaining to the COLLABORATE paradigm of professional case management practice. The model is built upon a value-driven foundation. PRIMARY PRACTICE SETTING(S): Applicable to all health care sectors where case management is practiced. As an industry, health care continues to evolve. Terrain shifts and new influences continually surface to challenge professional case management practice. The need for top-performing and nimble professionals who are knowledgeable and proficient in the workplace continues to challenge human resource departments. In addition to care-setting knowledge, professional case managers must continually invest in their practice competence toolbox to grow skills and abilities that transcend policies and processes. These individuals demonstrate agility in framing (and reframing) their professional practice to facilitate the best possible outcomes for their clients. Therefore, the continued emphasis on practice competence conveyed through the performance management cycle is an essential ingredient of performance management focused on customer service excellence and organizational improvement. Professional case management transcends professional disciplines, educational levels, and practice settings. Business objectives continue to drive work processes and priorities in many practice settings. However, competencies that align with regulatory and accreditation requirements should be the critical driver for consistent, high-quality case management practice. Although there is inherent value in what various disciplines bring to the table, this advanced model unifies behind case management's unique, strengths-based identity instead of continuing to align within traditional divisions (e.g., discipline, work setting, population served). This model fosters case management's expanding career advancement opportunities.
A Gold Standards Approach to Training Instructors to Evaluate Crew Performance
NASA Technical Reports Server (NTRS)
Baker, David P.; Dismukes, R. Key
2003-01-01
The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to each event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations: (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for the level of performance skill required. In this paper we provide a method to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be reused to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed-signal, and digital circuits such as charge-sensitive amplifiers, comparators, delay elements, radiation-tolerant dual interlocked (DICE) flip-flops, and two-of-three voter gates.
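The corner-sweep cost evaluation that such a framework automates can be sketched as follows; the corner values, toy delay/power model, weights, and constraint threshold are all illustrative assumptions standing in for calls into a real simulator test bench.

```python
# Evaluate a design's worst-case cost over process/voltage/temperature corners.
from itertools import product

corners = {"process": ["ss", "tt", "ff"], "vdd": [1.62, 1.8, 1.98], "temp_C": [-55, 27, 125]}

def simulate(process, vdd, temp_C, params):
    # Placeholder for a call into the circuit-simulator test bench.
    delay = params["bias"] * (1.0 if process == "tt" else 1.2) * (1.8 / vdd) * (1 + 0.002 * (temp_C - 27))
    power = params["bias"] * vdd
    return {"delay_ns": delay, "power_mw": power}

def cost(params):
    """Worst-case weighted cost across all corners; hard constraints become penalties."""
    worst = 0.0
    for p, v, t in product(*corners.values()):
        m = simulate(p, v, t, params)
        penalty = 1e3 if m["delay_ns"] > 2.0 else 0.0
        worst = max(worst, m["delay_ns"] + 0.5 * m["power_mw"] + penalty)
    return worst

print(cost({"bias": 1.0}))
```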
Study on kinematic and compliance test of suspension
NASA Astrophysics Data System (ADS)
Jing, Lixin; Wu, Liguang; Li, Xuepeng; Zhang, Yu
2017-09-01
Chassis performance development is a major difficulty in vehicle research and development and is the main factor restricting the independent development of vehicles in China. In recent years, through a large number of studies, chassis engineers have found that the suspension K&C characteristics, as quasi-static characteristics of the suspension, provide a technical route for suspension performance R&D, and the suspension K&C test has become an important means of vehicle benchmarking, optimization, and verification. However, research on suspension K&C testing is limited in China, and the test conditions and setting requirements vary greatly from OEM to OEM. In this paper, the influence of different settings on the characteristics of the suspension is obtained through experiments, and the causes of the differences are analyzed; in order to fully reflect the suspension characteristics, the authors recommend appropriate test cases and settings.
Growth requirements for multidiscipline research and development on the evolutionary space station
NASA Technical Reports Server (NTRS)
Meredith, Barry; Ahlf, Peter; Saucillo, Rudy; Eakman, David
1988-01-01
The NASA Space Station Freedom is being designed to facilitate on-orbit evolution and growth to accommodate changing user needs and future options for U.S. space exploration. In support of the Space Station Freedom Program Preliminary Requirements Review, The Langley Space Station Office has identified a set of resource requirements for Station growth which is deemed adequate for the various evolution options. As part of that effort, analysis was performed to scope requirements for Space Station as an expanding, multidiscipline facility for scientific research, technology development and commercial production. This report describes the assumptions, approach and results of the study.
SP2Bench: A SPARQL Performance Benchmark
NASA Astrophysics Data System (ADS)
Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg
A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
AORN Ergonomic Tool 5: Tissue Retraction in the Perioperative Setting.
Spera, Patrice; Lloyd, John D; Hernandez, Edward; Hughes, Nancy; Petersen, Carol; Nelson, Audrey; Spratt, Deborah G
2011-07-01
Manual retraction, a task performed to expose the surgical site, poses a high risk for musculoskeletal disorders that affect the hands, arms, shoulders, neck, and back. In recent years, minimally invasive and laparoscopic procedures have led to the development of multifunctional instruments and retractors capable of performing these functions, which in many cases has eliminated the need for manual retraction. During surgical procedures that are not performed endoscopically, the use of self-retaining retractors enables the assistant to handle tissue and use exposure techniques that do not require prolonged manual retraction. Ergonomic Tool #5: Tissue Retraction in the Perioperative Setting provides an algorithm for perioperative care providers to determine when and under what circumstances manual retraction of tissue is safe and when the use of a self-retaining retractor should be considered. Published by Elsevier Inc.
ISO 9000 Quality Management System
NASA Astrophysics Data System (ADS)
Hadjicostas, Evsevios
The ISO 9000 series describes a quality management system applicable to any organization. In this chapter we present the requirements of the standard in a way that is as close as possible to the needs of analytical laboratories. The sequence of the requirements follows that in the ISO 9001:2008 standard. In addition, the guidelines for performance improvement set out in the ISO 9004 are reviewed. Both standards should be used as a reference as well as the basis for further elaboration.
Fronto-striatal contribution to lexical set-shifting.
Simard, France; Joanette, Yves; Petrides, Michael; Jubault, Thomas; Madjar, Cécile; Monchi, Oury
2011-05-01
Fronto-striatal circuits in set-shifting have been examined in neuroimaging studies using the Wisconsin Card Sorting Task (WCST) that requires changing the classification rule for cards containing visual stimuli that differ in color, shape, and number. The present study examined whether this fronto-striatal contribution to the planning and execution of set-shifts is similar in a modified sorting task in which lexical rules are applied to word stimuli. Young healthy adults were scanned with functional magnetic resonance imaging while performing the newly developed lexical version of the WCST: the Wisconsin Word Sorting Task. Significant activation was found in a cortico-striatal loop that includes area 47/12 of the ventrolateral prefrontal cortex (PFC), and the caudate nucleus during the planning of a set-shift, and in another that includes the posterior PFC and the putamen during the execution of a set-shift. However, in the present lexical task, additional activation peaks were observed in area 45 of the ventrolateral PFC area during both matching periods. These results provide evidence that the functional contributions of the various fronto-striatal loops are not dependent on the modality of the information to be manipulated but rather on the specific executive processes required.
Defining a reference set to support methodological research in drug safety.
Ryan, Patrick B; Schuemie, Martijn J; Welebob, Emily; Duke, Jon; Valentine, Sarah; Hartzema, Abraham G
2013-10-01
Methodological research to evaluate the performance of methods requires a benchmark to serve as a referent comparison. In drug safety, the performance of analyses of spontaneous adverse event reporting databases and observational healthcare data, such as administrative claims and electronic health records, has been limited by the lack of such standards. The objective was to establish a reference set of test cases that contains both positive and negative controls, which can serve as the basis for methodological research in evaluating methods' performance in identifying drug safety issues. Systematic literature review and natural language processing of structured product labeling were performed to identify evidence to support the classification of drugs as either positive controls or negative controls for four outcomes: acute liver injury, acute kidney injury, acute myocardial infarction, and upper gastrointestinal bleeding. Three hundred ninety-nine test cases comprising 165 positive controls and 234 negative controls were identified across the four outcomes. The majority of positive controls for acute kidney injury and upper gastrointestinal bleeding were supported by randomized clinical trial evidence, while the majority of positive controls for acute liver injury and acute myocardial infarction were supported only by published case reports. Literature estimates for the positive controls show substantial variability that limits the ability to establish a reference set with known effect sizes. A reference set of test cases can be established to facilitate methodological research in drug safety. Creating a sufficient sample of drug-outcome pairs with binary classification of having no effect (negative controls) or having an increased effect (positive controls) is possible and can enable estimation of predictive accuracy through discrimination. Since the magnitude of the positive effects cannot be reliably obtained and the quality of evidence may vary across outcomes, assumptions are required to use the test cases in real data for purposes of measuring bias, mean squared error, or coverage probability.
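A brief sketch of how such a reference set is typically used to measure discrimination: label each drug-outcome pair as a positive or negative control, score it with the method under evaluation, and compute the area under the ROC curve. The scores below are synthetic; only the 165/234 control counts come from the abstract.

```python
# Discrimination of a candidate method against the positive/negative control labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
labels = np.concatenate([np.ones(165), np.zeros(234)])     # 165 positive, 234 negative controls
scores = np.concatenate([rng.normal(1.0, 1.0, 165),        # toy effect estimates from a method
                         rng.normal(0.0, 1.0, 234)])
print("AUC:", roc_auc_score(labels, scores))
```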
Mars Microprobe Entry Analysis
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Mitcheltree, Robert A.; Cheatwood, F. McNeil
1998-01-01
The Mars Microprobe mission will provide the first opportunity for subsurface measurements, including water detection, near the south pole of Mars. In this paper, performance of the Microprobe aeroshell design is evaluated through development of a six-degree-of-freedom (6-DOF) aerodynamic database and flight dynamics simulation. Numerous mission uncertainties are quantified and a Monte-Carlo analysis is performed to statistically assess mission performance. Results from this 6-DOF Monte-Carlo simulation demonstrate that, in a majority of the cases (approximately 2-sigma), the penetrator impact conditions are within current design tolerances. Several trajectories are identified in which the current set of impact requirements are not satisfied. From these cases, critical design parameters are highlighted and additional system requirements are suggested. In particular, a relatively large angle-of-attack range near peak heating is identified.
2009-03-01
Set negative pixel values = 0 (remove bad pixels): [m,n] = size(data_matrix_new); for i = 1:m, for j = 1:n, if ... everything from packaging toothpaste to high-speed fluid dynamics. While future engagements will continue to require the development of specialized
WASP (Write a Scientific Paper) using Excel - 10: Contingency tables.
Grech, Victor
2018-06-01
Contingency tables may be required to perform chi-squared test analyses. This paper provides pointers on how to do this in Microsoft Excel and explains how to set up methods to calculate confidence intervals for proportions, including proportions with zero numerators. Copyright © 2018 Elsevier B.V. All rights reserved.
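Although the paper demonstrates the workflow in Excel, the same calculations can be sketched in Python; the 2x2 counts below are invented, and the exact (Clopper-Pearson) interval is shown as one common way to handle proportions with zero numerators (the paper's own Excel formulas may differ).

    # Chi-squared test on a 2x2 contingency table and an exact confidence
    # interval for a binomial proportion, including the zero-numerator case.
    from scipy.stats import chi2_contingency, beta

    table = [[12, 30],   # hypothetical counts: exposed cases / non-cases
             [ 5, 45]]   # unexposed cases / non-cases
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

    def clopper_pearson(k, n, alpha=0.05):
        """Exact CI for a binomial proportion; handles k = 0 and k = n."""
        lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
        return lo, hi

    print(clopper_pearson(0, 20))  # zero numerator: (0.0, ~0.168)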
40 CFR 86.505-2004 - Introduction; structure of subpart.
Code of Federal Regulations, 2011 CFR
2011-07-01
... procedures and the test fuel described in subpart B of this part for diesel-fueled light-duty vehicles. PM... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.505-2004 Introduction; structure of... perform exhaust emission tests on motorcycles. Subpart E sets forth the testing requirements and test...
40 CFR 86.505-2004 - Introduction; structure of subpart.
Code of Federal Regulations, 2013 CFR
2013-07-01
... procedures and the test fuel described in subpart B of this part for diesel-fueled light-duty vehicles. PM... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.505-2004 Introduction; structure of... perform exhaust emission tests on motorcycles. Subpart E sets forth the testing requirements and test...
40 CFR 86.505-2004 - Introduction; structure of subpart.
Code of Federal Regulations, 2010 CFR
2010-07-01
... procedures and the test fuel described in subpart B of this part for diesel-fueled light-duty vehicles. PM... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.505-2004 Introduction; structure of... perform exhaust emission tests on motorcycles. Subpart E sets forth the testing requirements and test...
40 CFR 86.505-2004 - Introduction; structure of subpart.
Code of Federal Regulations, 2012 CFR
2012-07-01
... procedures and the test fuel described in subpart B of this part for diesel-fueled light-duty vehicles. PM... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.505-2004 Introduction; structure of... perform exhaust emission tests on motorcycles. Subpart E sets forth the testing requirements and test...
40 CFR 86.505-2004 - Introduction; structure of subpart.
Code of Federal Regulations, 2014 CFR
2014-07-01
... procedures and the test fuel described in subpart B of this part for diesel-fueled light-duty vehicles. PM... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.505-2004 Introduction; structure of... perform exhaust emission tests on motorcycles. Subpart E sets forth the testing requirements and test...
Optical simulations for experimental networks: lessons from MONET
NASA Astrophysics Data System (ADS)
Richards, Dwight H.; Jackel, Janet L.; Goodman, Matthew S.; Roudas, Ioannis; Wagner, Richard E.; Antoniades, Neophytos
1999-08-01
We have used optical simulations as a means of setting component requirements, assessing component compatibility, and designing experiments in the MONET (Multiwavelength Optical Networking) Project. This paper reviews the simulation method, gives some examples of the types of simulations that have been performed, and discusses the validation of the simulations.
45 CFR 2551.72 - Is a written volunteer assignment plan required for each volunteer?
Code of Federal Regulations, 2010 CFR
2010-10-01
... such services; and (5) Is used to review the status of the Senior Companion's services in working with... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE SENIOR COMPANION PROGRAM Senior Companion...) All Senior Companions performing direct services to individual clients in home settings and individual...
Problem-Based Learning: An Experiential Strategy for English Language Teacher Education in Chile
ERIC Educational Resources Information Center
Muñoz Campos, Diego
2017-01-01
The Chilean education system requires English language teachers to be equipped with non-conventional teaching strategies that can foster meaningful learning and assure successful learners' performances in diverse and complex settings. This exploratory, descriptive, research study aimed at discovering the perceptions of 54 pre-service teachers…
Horticultural Training for Adolescent Special Education Students.
ERIC Educational Resources Information Center
Airhart, Douglas L.; And Others
1987-01-01
A horticultural training program was developed in conjunction with a prevocational program designed for students with limited ability to perform in a normal high school setting due to moderate intellectual impairment or socialization problems. Prior appraisal by the job developer of a client's adaptability to the program was required to provide…
Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...
42 CFR 438.207 - Assurances of adequate capacity and services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Quality Assessment and Performance Improvement Access Standards § 438.207 Assurances of adequate capacity and services. (a) Basic rule. The State... with the State's requirements for availability of services, as set forth in § 438.206. (e) CMS' right...
Assessing Pragmatics: DCTS and Retrospective Verbal Reports
ERIC Educational Resources Information Center
Beltrán-Palanques, Vicente
2016-01-01
Assessing pragmatic knowledge in the instructed setting is seen as a complex but necessary task, which requires the design of appropriate research methodologies to examine pragmatic performance. This study discusses the use of two different research methodologies, namely those of Discourse Completion Tests/Tasks (DCTs) and verbal reports. Research…
49 CFR 180.417 - Reporting and record retention requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... with the National Board, or copy the information contained on the cargo tank's identification and ASME.... (b) Test or inspection reporting. Each person performing a test or inspection as specified in § 180... (type of device, set to discharge pressure, pressure at which device opened, pressure at which device re...
49 CFR 180.417 - Reporting and record retention requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... with the National Board, or copy the information contained on the cargo tank's identification and ASME.... (b) Test or inspection reporting. Each person performing a test or inspection as specified in § 180... (type of device, set to discharge pressure, pressure at which device opened, pressure at which device re...
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set–proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646
1995-01-01
possible to determine communication points. For this version, a C program spawning Posix threads and using semaphores to synchronize would have to...performance such as the time required for network communication and synchronization as well as issues of asynchrony and memory hierarchy. For example...enhances reusability. Process (or task) parallel computations can also be succinctly expressed with a small set of process creation and synchronization
Range Safety for an Autonomous Flight Safety System
NASA Technical Reports Server (NTRS)
Lanzi, Raymond J.; Simpson, James C.
2010-01-01
The Range Safety Algorithm software encapsulates the various constructs and algorithms required to accomplish Time Space Position Information (TSPI) data management from multiple tracking sources, autonomous mission mode detection and management, and flight-termination mission rule evaluation. The software evaluates various user-configurable rule sets that govern the qualification of TSPI data sources, provides a prelaunch autonomous hold-launch function, performs the flight-monitoring-and-termination functions, and performs end-of-mission safing.
The Identification of Software Failure Regions
1990-06-01
be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs ...failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect...is the termination of the ability of a functional unit to perform its required function. (Glossary, 1983) The presence of faults in program code
Paul V. Ellefson; Michael A. Kilgore; Kenneth E. Skog; Christopher D. Risbrudt
2006-01-01
The ability of forest products research programs to contribute to a nation's well-being requires that research organizations be well organized, effectively managed, and held to high standards of performance. In 2004-2005, a review of forest products and related research organizations beyond the boundaries of the United States was carried out. The intent was to obtain a...
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration will require a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
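The authors' PDA algorithm is not reproduced here; the following is only a generic sketch of the underlying idea (projecting both a large historical data set and an ultra-small current sample set into a shared principal-component space before calibrating a decoder), with made-up array shapes and a plain ridge regressor standing in for the actual decoder.

    # Generic sketch of PCA-based alignment between historical and current
    # neural data before decoder calibration (not the authors' exact PDA code).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X_hist, y_hist = rng.normal(size=(2000, 96)), rng.normal(size=2000)  # large historical set
    X_curr, y_curr = rng.normal(size=(10, 96)), rng.normal(size=10)      # ultra-small current set

    # Project both data sets onto principal components learned from history.
    pca = PCA(n_components=20).fit(X_hist)
    Z_hist, Z_curr = pca.transform(X_hist), pca.transform(X_curr)

    # Calibrate a simple decoder mostly from historical data, then refine it
    # with the handful of current trials (a crude stand-in for domain adaptation).
    decoder = Ridge(alpha=1.0).fit(np.vstack([Z_hist, Z_curr]),
                                   np.concatenate([y_hist, y_curr]))
    print(decoder.predict(Z_curr[:3]))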
Complex extreme learning machine applications in terahertz pulsed signals feature sets.
Yin, X-X; Hadjiloucas, S; Zhang, Y
2014-11-01
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although they have fairly indistinguishable features in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude as well as the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object as would be required within a tomographic setting and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
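As a rough illustration of the extreme learning machine idea (random, fixed hidden-layer weights with only the output layer solved in closed form), here is a minimal sketch that accepts complex-valued features; it is not the authors' algorithm or kernel choice, and the data are random placeholders rather than THz spectra.

    # Minimal extreme learning machine sketch with complex-valued hidden weights:
    # random input weights are fixed, and only the output weights are obtained by
    # least squares (pseudo-inverse). Placeholder data, not terahertz signals.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 50)) + 1j * rng.normal(size=(200, 50))  # complex features
    y = rng.integers(0, 2, size=200)                                  # binary labels

    n_hidden = 100
    W = rng.normal(size=(50, n_hidden)) + 1j * rng.normal(size=(50, n_hidden))
    H = np.tanh(np.abs(X @ W))              # hidden activations (magnitude nonlinearity)
    beta = np.linalg.pinv(H) @ y            # closed-form output weights
    pred = (H @ beta > 0.5).astype(int)
    print("training accuracy:", (pred == y).mean())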
Configuration and Sizing of a Test Fixture for Panels Under Combined Loads
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2006-01-01
Future air and space structures are expected to utilize composite panels that are subjected to combined mechanical loads, such as bi-axial compression/tension, shear and pressure. Therefore, the ability to accurately predict the buckling and strength failures of such panels is important. While computational analysis can provide tremendous insight into panel response, experimental results are necessary to verify predicted performances of these panels to judge the accuracy of computational methods. However, application of combined loads is an extremely difficult task due to the complex test fixtures and set-up required. Presented herein is a comparison of several test set-ups capable of testing panels under combined loads. Configurations compared include a D-box, a segmented cylinder and a single panel set-up. The study primarily focuses on the preliminary sizing of a single panel test configuration capable of testing flat panels under combined in-plane mechanical loads. This single panel set-up appears to be best suited to the testing of both strength critical and buckling critical panels. Required actuator loads and strokes are provided for various square, flat panels.
Skill Assessment in Ocean Biological Data Assimilation
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Friedrichs, Marjorie A. M.; Robinson, Allan R.; Rose, Kenneth A.; Schlitzer, Reiner; Thompson, Keith R.; Doney, Scott C.
2008-01-01
There is growing recognition that rigorous skill assessment is required to understand the ability of ocean biological models to represent ocean processes and distributions. Statistical analysis of model results with observations represents the most quantitative form of skill assessment, and this principle serves as well for data assimilation models. However, skill assessment for data assimilation requires special consideration. This is because there are three sets of information: the free-run model, the data, and the assimilation model, which uses information from both the free-run model and the data. Intercomparison of results among the three sets of information is important and useful for assessment, but is not conclusive since the three information sets are intertwined. An independent data set is necessary for an objective determination. Other useful measures of ocean biological data assimilation assessment include responses of unassimilated variables to the data assimilation, performance outside the prescribed region/time of interest, forecasting, and trend analysis. Examples of each approach from the literature are provided. A comprehensive list of ocean biological data assimilation efforts and their applications of skill assessment, in both ecosystem/biogeochemical and fisheries efforts, is summarized.
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus
2011-11-01
A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces. It is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. This utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, thus the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have been applied to such data, yet. Hence, we see the need for appropriate algorithms that are suitable to support forensic investigations.
Minion, Jessica; Pai, Madhukar; Ramsay, Andrew; Menzies, Dick; Greenaway, Christina
2011-01-01
Introduction Light emitting diode fluorescence microscopes have many practical advantages over conventional mercury vapour fluorescence microscopes, which would make them the preferred choice for laboratories in both low- and high-resource settings, provided performance is equivalent. Methods In a nested case-control study, we compared diagnostic accuracy and time required to read slides with the Zeiss PrimoStar iLED, LW Scientific Lumin, and a conventional fluorescence microscope (Leica DMLS). Mycobacterial culture was used as the reference standard, and subgroup analysis by specimen source and organism isolated were performed. Results There was no difference in sensitivity or specificity between the three microscopes, and agreement was high for all comparisons and subgroups. The Lumin and the conventional fluorescence microscope were equivalent with respect to time required to read smears, but the Zeiss iLED was significantly time saving compared to both. Conclusions Light emitting diode microscopy should be considered by all tuberculosis diagnostic laboratories, including those in high income countries, as a replacement for conventional fluorescence microscopes. Our findings provide support to the recent World Health Organization policy recommending that conventional fluorescence microscopy be replaced by light emitting diode microscopy using auramine staining in all settings where fluorescence microscopy is currently used. PMID:21811622
Performance Basis for Airborne Separation
NASA Technical Reports Server (NTRS)
Wing, David J.
2008-01-01
Emerging applications of Airborne Separation Assistance System (ASAS) technologies make possible new and powerful methods in Air Traffic Management (ATM) that may significantly improve the system-level performance of operations in the future ATM system. These applications typically involve the aircraft managing certain components of its Four Dimensional (4D) trajectory within the degrees of freedom defined by a set of operational constraints negotiated with the Air Navigation Service Provider. It is hypothesized that reliable individual performance by many aircraft will translate into higher total system-level performance. To actually realize this improvement, the new capabilities must be attracted to high demand and complexity regions where high ATM performance is critical. Operational approval for use in such environments will require participating aircraft to be certified to rigorous and appropriate performance standards. Currently, no formal basis exists for defining these standards. This paper provides a context for defining the performance basis for 4D-ASAS operations. The trajectory constraints to be met by the aircraft are defined, categorized, and assessed for performance requirements. A proposed extension of the existing Required Navigation Performance (RNP) construct into a dynamic standard (Dynamic RNP) is outlined. Sample data is presented from an ongoing high-fidelity batch simulation series that is characterizing the performance of an advanced 4D-ASAS application. Data of this type will contribute to the evaluation and validation of the proposed performance basis.
Waldau, Susanne
2015-09-01
Transparent priority setting in health care based on specific ethical principles has been requested by the Swedish Parliament since 1997. Implementation has been limited. In this case, transparent priority setting was performed for a second time and engaged an entire health care organisation. Objectives were to refine a bottom-up priority setting process, reach a political decision on service limits to make reallocation towards higher prioritised services possible, and raise systems knowledge. An action research approach was chosen. The national model for priority setting was used with the addition of the dimensions of costs, volumes, gender distribution and feasibility. The intervention included a three-step process and specific procedures for each step, which were created, revised and evaluated regarding factual and functional aspects. Evaluation methods included analyses of documents, recordings and surveys. Vertical and horizontal priority setting occurred and resources were reallocated. Participants' attitudes remained positive, although less so than in the first priority setting round. Identifying low-priority services was perceived as difficult, causing resentment and strategic behaviour. The horizontal stage served to raise the quality of the knowledge base, level out differences in ranking of services and raise systems knowledge. Existing health care management systems do not meet institutional requirements for transparent priority setting. Introducing transparent priority setting constitutes a complex institutional reform, which needs to be driven by management/administration. Strong managerial commitment is required. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Colonoscopy in the office setting is safe, and financially sound ... for now.
Luchtefeld, Martin A; Kim, Donald G
2006-03-01
In 2000, the Centers for Medicare & Medicaid Services announced a plan to allow for enhanced reimbursement for office endoscopy. This change in reimbursement was phased in over three years. The purpose of this study was to evaluate the fiscal outcomes and quality measures in the first two and one-half years of performing endoscopy in an office setting under the new Centers for Medicare & Medicaid Services guidelines. The following financial parameters were gathered: number of endoscopies, expenses (divided into salaries and operational), net revenue, and margin for endoscopies performed in the office compared with the hospital. All endoscopies were performed by endoscopists with advanced training (gastroenterology fellowship or colon and rectal surgery residency). Monitoring equipment included continuous SaO2 and automated blood pressure in all patients and continuous electrocardiographic monitors in selected patients. Quality/safety data have been tracked in a prospective manner and include number of transfers to the hospital, perforations, bleeding requiring transfusion or hospitalization, and cardiorespiratory arrest. The financial outcomes are as follows: 13,285 endoscopies performed from the opening of the unit through December 2003; net revenue of $504 per case; expense per case dropped from $205 to $145; the overall financial benefit of performing endoscopy in the office compared with the hospital was an additional $28 to $143 per case depending on the insurance carrier. The quality outcomes since inception of the unit include the following: 13,285 endoscopies; 0 hospital transfers; 0 cardiorespiratory arrests; 0 perforations; and 1 bleeding episode that required hospitalization. Endoscopy performed in the office setting is safe when done with appropriate monitoring and in the proper patient population. At the time of this study, office endoscopy also is financially rewarding, but changes in Centers for Medicare & Medicaid Services reimbursement threaten the ability to retain any financial benefit.
Performance, emissions, and physical characteristics of a rotating combustion aircraft engine
NASA Technical Reports Server (NTRS)
Berkowitz, M.; Hermes, W. L.; Mount, R. E.; Myers, D.
1976-01-01
The RC2-75, a liquid cooled two chamber rotary combustion engine (Wankel type), designed for aircraft use, was tested and representative baseline (212 KW, 285 BHP) performance and emissions characteristics established. The testing included running fuel/air mixture control curves and varied ignition timing to permit selection of desirable and practical settings for running wide open throttle curves, propeller load curves, variable manifold pressure curves covering cruise conditions, and EPA cycle operating points. Performance and emissions data were recorded for all of the points run. In addition to the test data, information required to characterize the engine and evaluate its performance in aircraft use is provided over a range from one half to twice its present power. The exhaust emissions results are compared to the 1980 EPA requirements. Standard day take-off brake specific fuel consumption is 356 g/KW-HR (.585 lb/BHP-HR) for the configuration tested.
Progress toward a performance based specification for diamond grinding wheels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Piscotty, M.S.; Blaedel, K.L.
1996-11-12
This work sought to improve the communication between users and makers of fine diamond grinding wheels. A promising avenue for this is to formulate a voluntary product standard that comprises performance indicators that bridge the gap between specific user requirements and the details of wheel formulations. We propose a set of performance specifiers, or figures-of-merit, that might be assessed by straightforward and traceable testing methods but do not compromise proprietary information of the wheel user or wheel maker. One such performance indicator might be wheel hardness. In addition, we consider technologies that might be required to realize the benefits of optimized grinding wheels. A non-contact wheel-to-workpiece proximity sensor may provide a means of monitoring wheel wear and thus wheel position, for wheels that exhibit high wear rates in exchange for improved surface finish.
Crew Exploration Vehicle Launch Abort Controller Performance Analysis
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Raney, David L.
2007-01-01
This paper covers the simulation and evaluation of a controller design for the Crew Module (CM) Launch Abort System (LAS), to measure its ability to meet the abort performance requirements. The controller used in this study is a hybrid design, including features developed by the Government and the Contractor. Testing is done using two separate 6-degree-of-freedom (DOF) computer simulation implementations of the LAS/CM throughout the ascent trajectory: 1) executing a series of abort simulations along a nominal trajectory for the nominal LAS/CM system; and 2) using a series of Monte Carlo runs with perturbed initial flight conditions and perturbed system parameters. The performance of the controller is evaluated against a set of criteria, which is based upon the current functional requirements of the LAS. Preliminary analysis indicates that the performance of the present controller meets (with the exception of a few cases) the evaluation criteria mentioned above.
The 30/20 GHz communications system functional requirements
NASA Technical Reports Server (NTRS)
Siperko, C. M.; Frankfort, M.; Markham, R.; Wall, M.
1981-01-01
The characteristics of 30/20 GHz usage in satellite systems to be used in support of projected communication requirements of the 1990's are defined. A requirements analysis which develops projected market demand for satellite services by general and specialized carriers and an analysis of the impact of propagation and system constraints on 30/20 GHz operation are included. A set of technical performance characteristics for the 30/20 GHz systems which can serve the resulting market demand and the experimental program necessary to verify technical and operational aspects of the proposed systems is also discussed.
Space Station Furnace Facility Preliminary Project Implementation Plan (PIP). Volume 2, Appendix 2
NASA Technical Reports Server (NTRS)
Perkey, John K.
1992-01-01
The Space Station Furnace Facility (SSFF) is an advanced facility for materials research in the microgravity environment of the Space Station Freedom and will consist of Core equipment and various sets of Furnace Module (FM) equipment in a three-rack configuration. This Project Implementation Plan (PIP) document was developed to satisfy the requirements of Data Requirement Number 4 for the SSFF study (Phase B). This PIP shall address the planning of the activities required to perform the detailed design and development of the SSFF for the Phase C/D portion of this contract.
Requirements Modeling with Agent Programming
NASA Astrophysics Data System (ADS)
Dasgupta, Aniruddha; Krishna, Aneesh; Ghose, Aditya K.
Agent-oriented conceptual modeling notations are highly effective in representing requirements from an intentional stance and answering questions such as what goals exist, how key actors depend on each other, and what alternatives must be considered. In this chapter, we review an approach to executing i* models by translating these into a set of interacting agents implemented in the CASO language and suggest how we can perform reasoning with requirements modeled (both functional and non-functional) using i* models. In this chapter, we particularly incorporate deliberation into the agent design. This allows us to benefit from the complementary representational capabilities of the two frameworks.
NASA Technical Reports Server (NTRS)
Oza, Nikunji C.
2005-01-01
Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
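Online bagging is commonly described as replacing bootstrap resampling with per-example Poisson(1) weights, so that each base model sees each incoming example a random number of times; the sketch below illustrates that idea with scikit-learn partial_fit learners. The base learner and the streamed data are placeholders, and this is a simplified rendering rather than the paper's exact algorithms.

    # Simplified online-bagging sketch: each incoming example updates each base
    # model k times, where k ~ Poisson(1), approximating bootstrap resampling.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(2)
    ensemble = [SGDClassifier() for _ in range(10)]
    classes = np.array([0, 1])

    def online_bagging_update(x, y):
        for model in ensemble:
            k = rng.poisson(1.0)                  # how many times this model sees x
            for _ in range(k):
                model.partial_fit(x.reshape(1, -1), [y], classes=classes)

    # Stream of (placeholder) examples, processed one at a time.
    for _ in range(500):
        x = rng.normal(size=5)
        y = int(x.sum() > 0)
        online_bagging_update(x, y)

Only one pass over the data is needed, which is the practical advantage the abstract highlights over batch bagging.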
Martins, Ruben; Simard, France; Provost, Jean-Sebastien; Monchi, Oury
2012-06-01
Some older individuals seem to use compensatory mechanisms to maintain high-level performance when submitted to cognitive tasks. However, whether and how these mechanisms affect fronto-striatal activity has never been explored. The purpose of this study was to investigate how aging affects brain patterns during the performance of a lexical analog of the Wisconsin Card Sorting Task, which has been shown to strongly depend on fronto-striatal activity. In the present study, both younger and older individuals revealed significant fronto-striatal loop activity associated with planning and execution of set-shifts, though age-related striatal activity reduction was observed. Most importantly, while the younger group showed the involvement of a "cognitive loop" during the receiving negative feedback period (which indicates that a set-shift will be required to perform the following trial) and the involvement of a "motor loop" during the matching after negative feedback period (when the set-shift must be performed), older participants showed significant activation of both loops during the matching after negative feedback period only. These findings are in agreement with the "load-shift" model postulated by Velanova et al. (Velanova K, Lustig C, Jacoby LL, Buckner RL. 2007. Evidence for frontally mediated controlled processing differences in older adults. Cereb Cortex. 17:1033-1046.) and indicate that the model is not limited to memory retrieval but also applies to executive processes relying on fronto-striatal regions.
Clay, Alison S; Ming, David Y; Knudsen, Nancy W; Engle, Deborah L; Grochowski, Colleen O'Connor; Andolsek, Kathryn M; Chudgar, Saumil M
2017-03-01
Despite the importance of self-directed learning (SDL) in the field of medicine, individuals are rarely taught how to perform SDL or receive feedback on it. Trainee skill in SDL is limited by difficulties with self-assessment and goal setting. Ninety-two graduating fourth-year medical students from Duke University School of Medicine completed an individualized learning plan (ILP) for a transition-to-residency Capstone course in spring 2015 to help foster their skills in SDL. Students completed the ILP after receiving a personalized report from a designated faculty coach detailing strengths and weaknesses on specific topics (e.g., pulmonary medicine) and clinical skills (e.g., generating a differential diagnosis). These were determined by their performance on 12 Capstone Problem Sets of the Week (CaPOWs) compared with their peers. Students used transitional-year milestones to self-assess their confidence in SDL. SDL was successfully implemented in a Capstone course through the development of required clinically oriented problem sets. Coaches provided guided feedback on students' performance to help them identify knowledge deficits. Students' self-assessment of their confidence in SDL increased following course completion. However, students often chose Capstone didactic sessions according to factors other than their CaPOW performance, including perceived relevance to planned specialty and session timing. Future Capstone curriculum changes may further enhance SDL skills of graduating students. Students will receive increased formative feedback on their CaPOW performance and be incentivized to attend sessions in areas of personal weakness.
Performance analysis of medical video streaming over mobile WiMAX.
Alinejad, Ali; Philip, N; Istepanian, R H
2010-01-01
Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are considered bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present the performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described together with the performance of the relevant medical quality of service (m-QoS) metrics.
Study of roles of remote manipulator systems and EVA for shuttle mission support, volume 1
NASA Technical Reports Server (NTRS)
Malone, T. B.; Micocci, A. J.
1974-01-01
Alternate extravehicular activity (EVA) and remote manipulator system (RMS) configurations were examined for their relative effectiveness in performing an array of representative shuttle and payload support tasks. Initially, a comprehensive analysis was performed of payload and shuttle support missions required to be conducted exterior to a pressurized enclosure. A set of task selection criteria was established, and study tasks were identified. The EVA and RMS modes were evaluated according to their applicability for each task and task condition. The results are summarized in tabular form, showing the modes which are chosen as most effective or as feasible for each task/condition. Conclusions concerning the requirements and recommendations for each mode are presented.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and achieve an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes would always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements of minimum required PPD, maximum allowable POF, requirements on flaw-size tolerance about the mean flaw size, and flaw-size detectability requirements. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
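The binomial logic behind the 29-flaw point estimate can be checked in a few lines: for a zero-miss demonstration of n flaws, the probability of passing is simply p to the power n, and n = 29 is the smallest n for which a true POD of 0.90 gives a passing probability below 0.05 (the usual 90/95 argument). The helper below is only an illustration of that arithmetic, not the paper's optimization procedure.

    # Probability of passing a zero-miss POD demonstration with n flaws,
    # as a function of the true probability of detection p: P(pass) = p**n.
    def prob_pass_zero_miss(p, n=29):
        return p ** n

    for p in (0.85, 0.90, 0.95, 0.99):
        print(f"true POD {p:.2f}: P(pass 29/29) = {prob_pass_zero_miss(p):.3f}")

    # At p = 0.90, P(pass) ~= 0.047 < 0.05, which is why detecting 29 of 29
    # flaws supports a 90% POD claim at roughly 95% confidence (a90/95PE).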
Delaney, Declan T.; O’Hare, Gregory M. P.
2016-01-01
No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929
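A heavily simplified sketch of the selection step the abstract describes (predict a performance metric for each candidate solution in the current environment, then pick the best) is shown below; the environment features, models, and solution names are all hypothetical placeholders, not part of the authors' framework.

    # Hypothetical sketch: choose the network solution whose performance model
    # predicts the best metric (e.g., packet delivery ratio) for the current
    # deployment environment. The models here are trivial stand-ins.
    environment = {"node_count": 40, "link_quality": 0.7, "traffic_rate": 5.0}

    performance_models = {
        "solution_A": lambda e: 0.90 * e["link_quality"] - 0.010 * e["node_count"] / 100,
        "solution_B": lambda e: 0.80 * e["link_quality"] - 0.002 * e["traffic_rate"],
        "solution_C": lambda e: 0.95 * e["link_quality"] - 0.020 * e["node_count"] / 100,
    }

    predicted = {name: model(environment) for name, model in performance_models.items()}
    best = max(predicted, key=predicted.get)
    print(predicted, "->", best)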
X-Ray Phantom Development For Observer Performance Studies
NASA Astrophysics Data System (ADS)
Kelsey, C. A.; Moseley, R. D.; Mettler, F. A.; Parker, T. W.
1981-07-01
The requirements for radiographic imaging phantoms for observer performance testing include realistic tasks which mimic at least some portion of the diagnostic examination presented in a setting which approximates clinically derived images. This study describes efforts to simulate chest and vascular diseases for evaluation of conventional and digital radiographic systems. Images of lung nodules, pulmonary infiltrates, as well as hilar and mediastinal masses are generated with a conventional chest phantom to make up chest disease test series. Vascular images are simulated by hollow tubes embedded in tissue density plastic with widening and narrowing added to mimic aneurysms and stenoses. Both sets of phantoms produce images which allow simultaneous determination of true positive and false positive rates as well as complete ROC curves.
MIL-STD-1553B Marconi LSI chip set in a remote terminal application
NASA Astrophysics Data System (ADS)
Dimarino, A.
1982-11-01
Marconi Avionics is utilizing the MIL-STD-1553B LSI Chip Set in the SCADC Air Data Computer application to perform all of the required remote terminal MIL-STD-1553B protocol functions. Basic components of the RTU are the dual redundant chip set, CT3231 Transceivers, 256 x 16 RAM and a Z8002 microprocessor. Basic transfers are to/from the RAM on command of the bus controller or Z8002 processor. During transfers from the processor to the RAM, the chip set busy bit is set for a period not exceeding 250 microseconds. When the transfer is complete, the busy bit is released and transfers to the data bus occur on command. The LSI Chip Set word count lines are used to locate each data word in the local memory, and 4 mode codes are used in the application: reset remote terminal, transmit status word, transmitter shutdown, and override transmitter shutdown.
Nasim, Sajid; Maharaj, Chrisen H; Butt, Ihsan; Malik, Muhammad A; O' Donnell, John; Higgins, Brendan D; Harte, Brian H; Laffey, John G
2009-01-01
Background Paramedics are frequently required to perform tracheal intubation, a potentially life-saving manoeuvre in severely ill patients, in the prehospital setting. However, direct laryngoscopy is often more difficult in this environment, and failed tracheal intubation constitutes an important cause of morbidity. Novel indirect laryngoscopes, such as the Airtraq® and Truview® laryngoscopes may reduce this risk. Methods We compared the efficacy of these devices to the Macintosh laryngoscope when used by 21 Paramedics proficient in direct laryngoscopy, in a randomized, controlled, manikin study. Following brief didactic instruction with the Airtraq® and Truview® laryngoscopes, each participant took turns performing laryngoscopy and intubation with each device, in an easy intubation scenario and following placement of a hard cervical collar, in a SimMan® manikin. Results The Airtraq® reduced the number of optimization manoeuvres and reduced the potential for dental trauma when compared to the Macintosh, in both the normal and simulated difficult intubation scenarios. In contrast, the Truview® increased the duration of intubation attempts, and required a greater number of optimization manoeuvres, compared to both the Macintosh and Airtraq® devices. Conclusion The Airtraq® laryngoscope performed more favourably than the Macintosh and Truview® devices when used by Paramedics in this manikin study. Further studies are required to extend these findings to the clinical setting. PMID:19216776
Performance evaluation of a retrofit digital detector-based mammography system.
Marshall, Nicholas W; van Ongeval, Chantal; Bosmans, Hilde
2016-02-01
A retrofit flat panel detector was integrated with a GE DMR+ analog mammography system and characterized using detective quantum efficiency (DQE). Technical system performance was evaluated using the European Guidelines protocol, followed by a limited evaluation of clinical image quality for 20 cases using image quality criteria in the European Guidelines. Optimal anode/filter selections were established using signal difference-to-noise ratio measurements. Only small differences in peak DQE were seen between the three anode/filter settings, with an average value of 0.53. For poly(methyl methacrylate) (PMMA) thicknesses above 60 mm, the Rh/Rh setting was the optimal anode/filter setting. The system required a mean glandular dose of 0.54 mGy at 30 kV Rh/Rh to reach the Acceptable gold thickness limit for 0.1 mm details. Imaging performance of the retrofit unit with the GE DMR+ is notably better than that of powder-based computed radiography systems and is comparable to current flat panel FFDM systems. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Behavioral and biological interactions with small groups in confined microsocieties
NASA Technical Reports Server (NTRS)
Brady, J. V.; Emurian, H. H.
1982-01-01
Requirements for high levels of human performance in the unfamiliar and stressful environments associated with space missions necessitate the development of research-based technological procedures for maximizing the probability of effective functioning at all levels of personnel participation. Where the successful accomplishment of such missions requires the coordinated contributions of several individuals collectively identified with the achievement of a common objective, the conditions for characterizing a team, crew, or functional group are operationally defined. For the most part, studies of group performances under operational conditions which emphasize relatively long exposure to extended mission environments have been limited by the constraints imposed on experimental manipulations to identify critical effectiveness factors. On the other hand, laboratory studies involving relatively brief exposures to contrived task situations have been considered of questionable generality to operational settings requiring realistic group objectives.
Validation and verification of expert systems
NASA Technical Reports Server (NTRS)
Gilstrap, Lewey
1991-01-01
Validation and verification (V&V) are procedures used to evaluate system structure or behavior with respect to a set of requirements. Although expert systems are often developed as a series of prototypes without requirements, it is not possible to perform V&V on any system for which requirements have not been prepared. In addition, there are special problems associated with the evaluation of expert systems that do not arise in the evaluation of conventional systems, such as verification of the completeness and accuracy of the knowledge base. The criticality of most NASA missions makes it important to be able to certify the performance of the expert systems used to support these missions. Recommendations for the most appropriate method for integrating V&V into the Expert System Development Methodology (ESDM) and suggestions for the most suitable approaches for each stage of ESDM development are presented.
snpGeneSets: An R Package for Genome-Wide Study Annotation
Mei, Hao; Li, Lianna; Jiang, Fan; Simino, Jeannette; Griswold, Michael; Mosley, Thomas; Liu, Shijian
2016-01-01
Genome-wide studies (GWS) of SNP associations and differential gene expressions have generated abundant results; next-generation sequencing technology has further boosted the number of variants and genes identified. Effective interpretation requires massive annotation and downstream analysis of these genome-wide results, a computationally challenging task. We developed the snpGeneSets package to simplify annotation and analysis of GWS results. Our package integrates local copies of knowledge bases for SNPs, genes, and gene sets, and implements wrapper functions in the R language to enable transparent access to low-level databases for efficient annotation of large genomic data. The package contains functions that execute three types of annotations: (1) genomic mapping annotation for SNPs and genes and functional annotation for gene sets; (2) bidirectional mapping between SNPs and genes, and genes and gene sets; and (3) calculation of gene effect measures from SNP associations and performance of gene set enrichment analyses to identify functional pathways. We applied snpGeneSets to type 2 diabetes (T2D) results from the NHGRI genome-wide association study (GWAS) catalog, a Finnish GWAS, and a genome-wide expression study (GWES). These studies demonstrate the usefulness of snpGeneSets for annotating and performing enrichment analysis of GWS results. The package is open-source, free, and can be downloaded at: https://www.umc.edu/biostats_software/. PMID:27807048
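The gene set enrichment step mentioned in the abstract is often computed as a hypergeometric (one-sided Fisher) test on the overlap between study genes and a gene set. The snpGeneSets package itself is an R package with its own functions; the sketch below is only a generic illustration of that calculation in Python, with made-up counts, and does not reflect the package's API.

    # Generic gene-set enrichment calculation (hypergeometric test): given N
    # background genes, K genes in the set, n study genes, and k overlapping
    # genes, compute P(overlap >= k).
    from scipy.stats import hypergeom

    N, K, n, k = 20000, 150, 400, 12   # made-up counts
    p_enrich = hypergeom.sf(k - 1, N, K, n)
    print(f"enrichment p-value: {p_enrich:.2e}")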
Managing bond proceeds improves financial performance.
Mates, W J
1989-04-01
Healthcare organizations must actively manage tax-exempt bond proceeds after they are initially invested at the time of financing or refinancing. The Tax Reform Act of 1986 imposes serious penalties on issuers who fail to comply with its complex requirements. An active program of bond proceeds management enables organizations to avoid this pitfall and take advantage of legal investment opportunities. Such a program must start with a set of clear guidelines on permitted investments, target rates of return, acceptable levels of risk, and liquidity requirements.
Comparison of RF BPM Receivers for NSLS-II Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinayev,I.; Singh, O.
2009-05-04
The NSLS-II Light Source being built at Brookhaven National Laboratory requires submicron stability of the electron orbit in the storage ring in order to fully utilize the very small emittances and electron beam sizes. This sets high stability requirements for beam position monitors, and a program has been initiated for the purpose of characterizing RF beam position monitor (BPM) receivers in use at other light sources. Present state-of-the-art performance will be contrasted with more recently available technologies.
General lighting requirements for photosynthesis
NASA Technical Reports Server (NTRS)
Geiger, Donald R.
1994-01-01
This paper presents data that suggests some criteria for evaluating growth chamber and greenhouse lighting. A review of the general lighting requirements for photosynthesis reveals that four aspects of light are important: irradiance, quality, timing, and duration. Effective lighting should produce plants that perform according to the goals of the project. For example, for physiological studies the plants probably should exhibit morphology and physiology similar to that found in field-grown plants. For other projects the criteria will obviously be set according to the reason for raising the plants.
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
The implicit shape-based reconstruction method in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than the image-based reconstruction method. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and then the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, the a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required in the proposed method is much smaller than in the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
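To illustrate the kind of substitution the abstract describes, one plausible cosine-based smoothed indicator (shown only as an assumed example, not the paper's exact function) replaces the sharp Heaviside step with a smooth transition of half-width epsilon:

    H_eps(phi) =
        0,                                                phi < -eps
        (1/2) * (1 - cos( pi * (phi + eps) / (2*eps) )),  |phi| <= eps
        1,                                                phi > eps

This rises smoothly from 0 to 1 over [-eps, eps] with zero slope at both ends, which is the sort of differentiable indicator that makes a Levenberg-Marquardt update well defined.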
Design of partially supervised classifiers for multispectral image data
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David
1993-01-01
A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
Design of a multi-arm randomized clinical trial with no control arm.
Magaret, Amalia; Angus, Derek C; Adhikari, Neill K J; Banura, Patrick; Kissoon, Niranjan; Lawler, James V; Jacob, Shevin T
2016-01-01
Clinical trial designs that include multiple treatments are currently limited to those that perform pairwise comparisons of each investigational treatment to a single control. However, there are settings, such as the recent Ebola outbreak, in which no treatment has been demonstrated to be effective and, therefore, no standard of care exists that would serve as an appropriate control. For illustrative purposes, we focused on the care of critically ill patients presenting in austere settings with 'sepsis-like' syndromes. Our approach involves a novel algorithm for comparing mortality among arms without requiring a single fixed control. The algorithm allows poorly-performing arms to be dropped during interim analyses. Consequently, the study may be completed earlier than planned. We used simulation to determine operating characteristics for the trial and to estimate the required sample size. We present a potential study design targeting a minimal effect size of a 23% relative reduction in mortality between any pair of arms. Using estimated power and spurious significance rates from the simulated scenarios, we show that such a trial would require 2550 participants. Over a range of scenarios, our study has 80 to 99% power to select the optimal treatment. Using a fixed control design, if the control arm is least efficacious, 640 subjects would be enrolled into the least efficacious arm, while our algorithm would enroll between 170 and 430. This simulation method can be easily extended to other settings or other binary outcomes. Early dropping of arms is efficient and ethical when conducting clinical trials with multiple arms. Copyright © 2015 Elsevier Inc. All rights reserved.
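A toy simulation in the same spirit (block sizes, the drop margin, and the mortality rates below are assumed for illustration and are not the published operating characteristics): arms whose observed mortality is clearly worse than the current best arm are dropped at each interim look.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(true_mortality, block=50, max_n=2550, drop_margin=0.10):
    """Enroll in blocks, drop clearly worse arms at each interim analysis."""
    arms = list(range(len(true_mortality)))
    deaths = np.zeros(len(true_mortality))
    n = np.zeros(len(true_mortality))
    while len(arms) > 1 and n.sum() < max_n:
        for a in arms:                                    # equal allocation this block
            deaths[a] += rng.binomial(block, true_mortality[a])
            n[a] += block
        rates = deaths[arms] / n[arms]
        best = rates.min()
        arms = [a for a, r in zip(arms, rates) if r < best + drop_margin]
    return arms, int(n.sum())

print(simulate_trial([0.40, 0.35, 0.31, 0.40]))           # surviving arms, total enrolled
```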
Katz, Lee S.; Griswold, Taylor; Williams-Newkirk, Amanda J.; Wagner, Darlene; Petkau, Aaron; Sieffert, Cameron; Van Domselaar, Gary; Deng, Xiangyu; Carleton, Heather A.
2017-01-01
Modern epidemiology of foodborne bacterial pathogens in industrialized countries relies increasingly on whole genome sequencing (WGS) techniques. As opposed to profiling techniques such as pulsed-field gel electrophoresis, WGS requires a variety of computational methods. Since 2013, United States agencies responsible for food safety, including the CDC, FDA, and USDA, have been performing WGS on all Listeria monocytogenes found in clinical, food, and environmental samples. Each year, more genomes of other foodborne pathogens such as Escherichia coli, Campylobacter jejuni, and Salmonella enterica are being sequenced. Comparing thousands of genomes across an entire species requires a fast method with coarse resolution; however, capturing the fine details of highly related isolates requires a computationally heavy and sophisticated algorithm. Most L. monocytogenes investigations employing WGS depend on being able to identify an outbreak clade whose inter-genomic distances are less than an empirically determined threshold. When a difference of only a few single nucleotide polymorphisms (SNPs) must distinguish genomes that are likely outbreak-associated from those that are less likely to be associated, a fine-resolution method is required. To achieve this level of resolution, we have developed Lyve-SET, a high-quality SNP pipeline. We evaluated Lyve-SET by retrospectively investigating 12 outbreak data sets along with four other SNP pipelines that have been used in outbreak investigation or similar scenarios. To compare these pipelines, several distance and phylogeny-based comparison methods were applied, which collectively showed that multiple pipelines were able to identify most outbreak clusters and strains. Currently in the US PulseNet system, whole genome multi-locus sequence typing (wgMLST) is the preferred primary method for foodborne WGS cluster detection and outbreak investigation due to its ability to name standardized genomic profiles, its central database, and its ability to be run in a graphical user interface. However, creating a functional wgMLST scheme requires extended up-front development and subject-matter expertise. When a scheme does not exist or when the highest resolution is needed, SNP analysis is used. Using three Listeria outbreak data sets, we demonstrated the concordance between Lyve-SET SNP typing and wgMLST. Availability: Lyve-SET can be found at https://github.com/lskatz/Lyve-SET. PMID:28348549
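For illustration only (this is not Lyve-SET, and the threshold below is a placeholder for the empirically determined value): pairwise SNP distances can be computed from a matrix of variant calls and isolates grouped by single-linkage under the threshold to suggest a candidate outbreak clade.

```python
import numpy as np
from itertools import combinations

def snp_distance(a, b):
    """Number of variant sites at which two isolates differ."""
    return int(np.sum(a != b))

def cluster_by_threshold(snps, threshold=10):
    """Single-linkage grouping of isolates whose SNP distance is <= threshold."""
    n = len(snps)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if snp_distance(snps[i], snps[j]) <= threshold:
            parent[find(i)] = find(j)              # union: same putative clade
    return [find(i) for i in range(n)]

snps = np.array([[0, 1, 1, 0, 0], [0, 1, 1, 0, 1], [1, 0, 0, 1, 1]])
print(cluster_by_threshold(snps, threshold=2))     # first two isolates cluster together
```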
Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra
2012-10-01
The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Retrospective cohort. University teaching hospital. The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Request for APS consultation. A cohort of machine-learning classifiers was compared according to its ability or inability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as the reduced feature set that was optimized using a merit-based dimensional reduction strategy. Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
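A minimal sketch of the general workflow on synthetic data (the study used features from the surgical scheduling system and Weka implementations such as NaiveBayesUpdateable; the scikit-learn GaussianNB below is a stand-in, not the authors' pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

# Synthetic, imbalanced stand-in for surgical cases (about 10% require an APS consult).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GaussianNB().fit(X_tr, y_tr)                       # fast-training Bayesian classifier
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```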
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
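A software sketch of residue arithmetic itself (the moduli are illustrative, and the paper implements this optically rather than in software): arithmetic is carried out independently in each small modulus channel, which bounds the dynamic range each channel must handle, and the Chinese Remainder Theorem recovers the full-range result.

```python
from math import prod

MODULI = (5, 7, 9, 11)                 # pairwise coprime; dynamic range = 3465

def to_residues(x):
    return tuple(x % m for m in MODULI)

def from_residues(res):
    """Chinese Remainder Theorem reconstruction (Python 3.8+ for 3-argument pow)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi modulo m
    return x % M

a, b = 123, 27
# Multiply channel-by-channel, then decode; matches the ordinary product.
res = tuple((ra * rb) % m for ra, rb, m in zip(to_residues(a), to_residues(b), MODULI))
print(from_residues(res), a * b)       # both 3321
```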
Wong, Lily R; Flynn-Evans, Erin; Ruskin, Keith J
2018-04-01
Long duty periods and overnight call shifts impair physicians' performance on measures of vigilance, psychomotor functioning, alertness, and mood. Anesthesiology residents typically work between 64 and 70 hours per week and are often required to work 24 hours or overnight shifts, sometimes taking call every third night. Mitigating the effects of sleep loss, circadian misalignment, and sleep inertia requires an understanding of the relationship among work schedules, fatigue, and job performance. This article reviews the current Accreditation Council for Graduate Medical Education guidelines for resident duty hours, examines how anesthesiologists' work schedules can affect job performance, and discusses the ramifications of overnight and prolonged duty hours on patient safety and resident well-being. We then propose countermeasures that have been implemented to mitigate the effects of fatigue and describe how training programs or practice groups who must work overnight can adapt these strategies for use in a hospital setting. Countermeasures include the use of scheduling interventions, strategic naps, microbreaks, caffeine use during overnight and extended shifts, and the use of bright lights in the clinical setting when possible or personal blue light devices when the room lights must be turned off. Although this review focuses primarily on anesthesiology residents in training, many of the mitigation strategies described here can be used effectively by physicians in practice.
Cultural differences in self-recognition: the early development of autonomous and related selves?
Ross, Josephine; Yilmaz, Mandy; Dale, Rachel; Cassidy, Rose; Yildirim, Iraz; Suzanne Zeedyk, M
2017-05-01
Fifteen- to 18-month-old infants from three nationalities were observed interacting with their mothers and during two self-recognition tasks. Scottish interactions were characterized by distal contact, Zambian interactions by proximal contact, and Turkish interactions by a mixture of contact strategies. These culturally distinct experiences may scaffold different perspectives on self. In support, Scottish infants performed best in a task requiring recognition of the self in an individualistic context (mirror self-recognition), whereas Zambian infants performed best in a task requiring recognition of the self in a less individualistic context (body-as-obstacle task). Turkish infants performed similarly to Zambian infants on the body-as-obstacle task, but outperformed Zambians on the mirror self-recognition task. Verbal contact (a distal strategy) was positively related to mirror self-recognition and negatively related to passing the body-as-obstacle task. Directive action and speech (proximal strategies) were negatively related to mirror self-recognition. Self-awareness performance was best predicted by cultural context; autonomous settings predicted success in mirror self-recognition, and related settings predicted success in the body-as-obstacle task. These novel data substantiate the idea that cultural factors may play a role in the early expression of self-awareness. More broadly, the results highlight the importance of moving beyond the mark test, and designing culturally sensitive tests of self-awareness. © 2016 John Wiley & Sons Ltd.
An Enclosed Laser Calibration Standard
NASA Astrophysics Data System (ADS)
Adams, Thomas E.; Fecteau, M. L.
1985-02-01
We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded world-wide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. Also, we detail our evaluation and calibration of the LTSC, in particular, the LTSC system standards. We conclude with a review of our error analysis from which uncertainties were assigned to the LTSC calibration functions.
A hybrid life cycle inventory of nano-scale semiconductor manufacturing.
Krishnan, Nikhil; Boyd, Sarah; Somani, Ajay; Raoux, Sebastien; Clark, Daniel; Dornfeld, David
2008-04-15
The manufacturing of modern semiconductor devices involves a complex set of nanoscale fabrication processes that are energy and resource intensive, and generate significant waste. It is important to understand and reduce the environmental impacts of semiconductor manufacturing because these devices are ubiquitous components in electronics. Furthermore, the fabrication processes used in the semiconductor industry are finding increasing application in other products, such as microelectromechanical systems (MEMS), flat panel displays, and photovoltaics. In this work we develop a library of typical gate-to-gate materials and energy requirements, as well as emissions associated with a complete set of fabrication process models used in manufacturing a modern microprocessor. In addition, we evaluate upstream energy requirements associated with chemicals and materials using both existing process life cycle assessment (LCA) databases and an economic input-output (EIO) model. The result is a comprehensive data set and methodology that may be used to estimate and improve the environmental performance of a broad range of electronics and other emerging applications that involve nano and micro fabrication.
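A schematic of the hybrid accounting with entirely made-up numbers (the real inventory covers many more processes and flows): gate-to-gate process energy per wafer is added to upstream energy estimated from chemical spend via assumed EIO intensity factors.

```python
# All figures below are illustrative placeholders, not data from the study.
process_energy_kwh = {"CVD": 12.0, "etch": 8.5, "litho": 15.0}        # per wafer, assumed
chemical_spend_usd = {"silane": 3.2, "photoresist": 7.5}              # per wafer, assumed
eio_intensity_kwh_per_usd = {"silane": 2.1, "photoresist": 1.4}       # assumed EIO factors

gate_to_gate = sum(process_energy_kwh.values())
upstream = sum(spend * eio_intensity_kwh_per_usd[c]
               for c, spend in chemical_spend_usd.items())
print("hybrid energy per wafer (kWh):", gate_to_gate + upstream)
```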
Evaluating Fatigue in Operational Settings: The NASA Ames Fatigue Countermeasures Program
NASA Technical Reports Server (NTRS)
Rosekind, Mark R.; Gregory, Kevin; Miller, Donna; Webbon, Lissa; Oyung, Ray
1996-01-01
In response to a 1980 Congressional request, NASA Ames initiated a program to examine fatigue in flight operations. The Program objectives are to examine fatigue, sleep loss, and circadian disruption in flight operations; determine the effects of these factors on flight crew performance; and develop fatigue countermeasures. The NASA Ames Fatigue Countermeasures Program conducts controlled laboratory experiments, full-mission flight simulations, and field studies. A range of subjective, behavioral, performance, physiological, and environmental measures are used depending on study objectives. The Program has developed substantial expertise in gathering data during actual flight operations and in other work settings. This has required the development of ambulatory and other measures that can be carried throughout the world and used at 41,000 feet in aircraft cockpits. The NASA Ames Fatigue Countermeasures Program has examined fatigue in shorthaul, longhaul, overnight cargo, and helicopter operations. A recent study of planned cockpit rest periods demonstrated the effectiveness of a brief inflight nap to improve pilot performance and alertness. This study involved inflight reaction time/vigilance performance testing and EEG/EOG measures of physiological alertness. The NASA Ames Fatigue Countermeasures Program has applied scientific findings to the development of education and training materials on fatigue countermeasures, input to federal regulatory activities on pilot flight, duty, and rest requirements, and support of National Transportation Safety Board accident investigations. Current activities are examining fatigue in nonaugmented longhaul flights, regional/commuter flight operations, corporate/business aviation, and psychophysiological variables related to performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheib, J.; Pless, S.; Torcellini, P.
NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.
Capsule Performance Optimization for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Landen, Otto
2009-11-01
The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that will set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid Deuterium-filled "keyhole" targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these get reduced after a number of shots and iterations to meet an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning technique validation shots performed at NIF at near full-scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [1] E. Dewald et al., Rev. Sci. Instrum. 79 (2008) 10E903. [2] T.R. Boehly et al., Phys. Plasmas 16 (2009) 056302. [3] G. Kyrala et al., BAPS 53 (2008) 247. [4] D. Hicks et al., BAPS 53 (2008) 2.
Comparative Study of Sport Mental Toughness between Soccer Officials
ERIC Educational Resources Information Center
Miçoogullari, Bülent Okan; Gümüsdag, Hayrettin; Ödek, Ugur; Beyaz, Özkan
2017-01-01
Gucciardi et al. (2009) suggest that mental toughness is more a function of environment than domains, and as such, mental toughness is potentially important in any environment that requires performance setting, challenges, and adversities. Due to vital importance of mental toughness in sports and particularly in soccer, this paper focused on the…
System Concept in Education. Professional Paper No. 20-74.
ERIC Educational Resources Information Center
Smith, Robert G., Jr.
In its most general sense, a system is a group of components integrated to accomplish a purpose. The heart of an educational system is the instructional system. An instructional system is an integrated set of media, equipment, methods, and personnel performing efficiently those functions required to accomplish one or more learning objectives. An…
Modeling Human Performance in Restless Bandits with Particle Filters
ERIC Educational Resources Information Center
Yi, Sheng Kung M.; Steyvers, Mark; Lee, Michael
2009-01-01
Bandit problems provide an interesting and widely-used setting for the study of sequential decision-making. In their most basic form, bandit problems require people to choose repeatedly between a small number of alternatives, each of which has an unknown rate of providing reward. We investigate restless bandit problems, where the distributions of…
Assessing Collaborative Learning: Big Data, Analytics and University Futures
ERIC Educational Resources Information Center
Williams, Peter
2017-01-01
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent…
The Wisdom of the Crowd in Combinatorial Problems
ERIC Educational Resources Information Center
Yi, Sheng Kung Michael; Steyvers, Mark; Lee, Michael D.; Dry, Matthew J.
2012-01-01
The "wisdom of the crowd" phenomenon refers to the finding that the aggregate of a set of proposed solutions from a group of individuals performs better than the majority of individual solutions. Most often, wisdom of the crowd effects have been investigated for problems that require single numerical estimates. We investigate whether the effect…
The Motivational Effects of Participation Versus Goal Setting on Performance.
1982-01-01
the premeasure. Upon completing the two circles the subject was asked to reload the stapler to ensure that it would be full for the pretest and to..., and immediately reload the stapler and continue working. The experimenter would then automatically stop and restart the stopwatch as required so that
Using the Visual and Performing Arts to Complement Young Adolescents' "Close Reading" of Texts
ERIC Educational Resources Information Center
McDermott, Peter; Falk-Ross, Francine; Medow, Sharon
2017-01-01
The educational needs of young adolescents require that curricula include a more expanded set of multiple integrative approaches, including new literacies, and that it be "challenging, exploratory, integrative, and relevant" (National Middle School Association, 2010). Although educators are now focusing on the addition of digital formats…
The Influence of Task Instruction on Action Coding: Constraint Setting or Direct Coding?
ERIC Educational Resources Information Center
Wenke, Dorit; Frensch, Peter A.
2005-01-01
In 3 experiments, the authors manipulated response instructions for 2 concurrently performed tasks. Specifically, the authors' instructions described left and right keypresses on a manual task either as left versus right or as blue versus green keypresses and required either "left" versus "right" or "blue" versus "green" concurrent verbalizations.…
Degree Progress Measures for Community Colleges: Analyzing the Maryland Model
ERIC Educational Resources Information Center
Boughan, Karl; Clagett, Craig
2008-01-01
Over a two-year period beginning in March 2004, community colleges in Maryland developed a revised set of accountability indicators for a state-mandated Performance Accountability Report first required by a 1988 statute. A major innovation was a new model for assessing student degree progress. This article explains the development and components…
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
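A bare-bones sketch of the Tikhonov step with a second-derivative operator (the kernel below is a toy, not the DEER kernel, and α is fixed by hand rather than selected by AIC or GCV as in the paper):

```python
import numpy as np

def second_derivative_operator(n):
    """Finite-difference second-derivative operator L of shape (n-2, n)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov(K, y, alpha, L):
    """Solve min_x ||K x - y||^2 + alpha^2 ||L x||^2 via the normal equations."""
    A = K.T @ K + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, K.T @ y)

n = 80
r = np.linspace(1.5, 8.0, n)                  # distance axis (assumed units)
t = np.linspace(0.0, 2.0, 120)                # time axis (assumed units)
K = np.exp(-np.outer(t, 1.0 / r**3))          # toy kernel, not the DEER kernel
x_true = np.exp(-0.5 * ((r - 4.0) / 0.3)**2)  # synthetic distance distribution
y = K @ x_true + 0.01 * np.random.randn(len(t))
x_hat = tikhonov(K, y, alpha=0.1, L=second_derivative_operator(n))
```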
Ortiz-Domínguez, Maki E; Garrido-Latorre, Francisco; Orozco, Ricardo; Pineda-Pérez, Dayana; Rodríguez-Salgado, Marlenne
2011-01-01
To assess health care quality provided to type-2 diabetic and hypertensive patients in primary care settings from the Mexican Ministry of Health and to evaluate whether accredited clinics providing services to the Mexican Seguro Popular performed better in terms of metabolic control of those patients compared to the non-accredited. Cross-sectional study performed in 2008. Clinical measures from the previous year were obtained from the records of 5 444 diabetic and 5 827 hypertensive patients. Factors associated with adequate metabolic control (glucose <110 mg/dl for diabetes and blood pressure <140/90 mmHg for hypertension) were assessed by multiple-multilevel logistic regression methods. Patients attending accredited clinics were more likely to be controlled; however, metabolic control was not constant over time since accreditation. Additional efforts are required to monitor accredited clinics' performance in order to maintain both metabolic control and clinical assessment of patients.
Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination
NASA Astrophysics Data System (ADS)
Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel
2012-06-01
The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras. Therefore, an automated system is desirable. In the literature, several methods have been proposed, but their robustness against varying viewpoints and illumination is limited. Hence, performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains sufficient variety of viewpoints and illumination to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
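A generic appearance-matching baseline for context, not the authors' descriptor or dataset: person crops are compared by normalized HSV colour histograms and the best-matching gallery entry is returned. OpenCV is assumed to be available, and crops are assumed to be HxWx3 BGR arrays.

```python
import numpy as np
import cv2   # OpenCV assumed available

def hsv_histogram(bgr_crop, bins=16):
    """Hue/saturation histogram as a simple appearance descriptor."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def reidentify(query_crop, gallery_crops):
    """Return index of the gallery crop most similar to the query, plus its similarity."""
    q = hsv_histogram(query_crop)
    descriptors = [hsv_histogram(g) for g in gallery_crops]
    sims = [float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-9))
            for d in descriptors]
    return int(np.argmax(sims)), max(sims)
```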
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
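A minimal out-of-core sketch in Python (the flat file layout is assumed, and this is not SIproc itself, which combines GPU computing with adaptive data streaming): a large on-disk hyperspectral cube is reduced block by block through a memory map, so only a small window ever resides in RAM.

```python
import numpy as np

def band_means(path, n_pixels, n_bands, rows_per_block=4096, dtype=np.float32):
    """Mean spectrum of an (n_pixels x n_bands) cube stored flat on disk."""
    cube = np.memmap(path, dtype=dtype, mode='r', shape=(n_pixels, n_bands))
    total = np.zeros(n_bands, dtype=np.float64)
    for r0 in range(0, n_pixels, rows_per_block):
        block = np.asarray(cube[r0:r0 + rows_per_block])   # only this block is loaded
        total += block.sum(axis=0)
    return total / n_pixels
```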
Fuel governor for controlled autoignition engines
Jade, Shyam; Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li
2016-06-28
Methods and systems for controlling combustion performance of an engine are provided. A desired fuel quantity for a first combustion cycle is determined. One or more engine actuator settings are identified that would be required during a subsequent combustion cycle to cause the engine to approach a target combustion phasing. If the identified actuator settings are within a defined acceptable operating range, the desired fuel quantity is injected during the first combustion cycle. If not, an attenuated fuel quantity is determined and the attenuated fuel quantity is injected during the first combustion cycle.
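A hedged pseudologic sketch of the governor idea (actuator names, limits, and the attenuation rule are invented for illustration and are not taken from the patent): inject the desired fuel only if the actuator settings predicted to hold the target combustion phasing stay inside their acceptable range; otherwise inject an attenuated quantity.

```python
def governor(desired_fuel_mg, predict_actuators, limits, attenuation=0.8):
    """Return the fuel quantity to inject this cycle."""
    settings = predict_actuators(desired_fuel_mg)           # e.g. valve timing, EGR fraction
    within_range = all(limits[k][0] <= v <= limits[k][1] for k, v in settings.items())
    return desired_fuel_mg if within_range else attenuation * desired_fuel_mg

# Illustrative actuator model and limits (entirely hypothetical).
limits = {"intake_valve_closing_deg": (-60, 60), "egr_fraction": (0.0, 0.5)}
settings_model = lambda fuel: {"intake_valve_closing_deg": 0.6 * fuel,
                               "egr_fraction": 0.004 * fuel}
print(governor(80.0, settings_model, limits))    # 0.6*80 = 48 deg, in range -> inject 80
print(governor(120.0, settings_model, limits))   # 0.6*120 = 72 deg, out of range -> attenuate
```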
NECAP: NASA's Energy-Cost Analysis Program. Part 1: User's manual
NASA Technical Reports Server (NTRS)
Henninger, R. H. (Editor)
1975-01-01
The NECAP is a sophisticated building design and energy analysis tool which has embodied within it all of the latest ASHRAE state-of-the-art techniques for performing thermal load calculation and energy usage predictions. It is a set of six individual computer programs which include: response factor program, data verification program, thermal load analysis program, variable temperature program, system and equipment simulation program, and owning and operating cost program. Each segment of NECAP is described, and instructions are set forth for preparing the required input data and for interpreting the resulting reports.
A miniature Hopkinson experiment device based on multistage reluctance coil electromagnetic launch.
Huang, Wenkai; Huan, Shi; Xiao, Ying
2017-09-01
A seven-stage reluctance-coil electromagnetic launcher for a miniaturized Hopkinson bar has been developed in this paper. With the characteristics of high precision, small size, and little noise pollution, the device complies with the requirements of a miniaturized Hopkinson bar for high strain rates. The launcher is a seven-stage accelerating device capable of reaching 65.5 m/s. A high performance microcontroller is used to accurately control the discharge of capacitor sets, by means of which the outlet velocity of the projectile can be controlled within a certain velocity range.
NSTAR Ion Thrusters and Power Processors
NASA Technical Reports Server (NTRS)
Bond, T. A.; Christensen, J. A.
1999-01-01
The purpose of the NASA Solar Electric Propulsion Technology Applications Readiness (NSTAR) project is to validate ion propulsion technology for use on future NASA deep space missions. This program, which was initiated in September 1995, focused on the development of two sets of flight quality ion thrusters, power processors, and controllers that provided the same performance as engineering model hardware and also met the dynamic and environmental requirements of the Deep Space 1 Project. One of the flight sets was used for primary propulsion for the Deep Space 1 spacecraft which was launched in October 1998.
Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Arnegard, Ruth J.; Comstock, J. R., Jr.
1991-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
The multi-attribute task battery for human operator workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Arnegard, Ruth J.
1992-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large size data sets without increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
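A small sketch of one such local model (the submachine, events, and durations are illustrative, not from the paper): a state set, an event alphabet, an initial state, a partial transition function, and a duration per event.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class LocalModel:
    states: set
    events: set
    initial: str
    transitions: Dict[Tuple[str, str], str] = field(default_factory=dict)  # (state, event) -> next state
    durations: Dict[str, float] = field(default_factory=dict)              # event -> time required

    def step(self, state, event):
        nxt = self.transitions.get((state, event))   # partial function: may be undefined
        return nxt, self.durations.get(event, 0.0)

# Hypothetical welding submachine in an in-space construction ensemble.
welder = LocalModel(
    states={"idle", "welding"},
    events={"start_weld", "finish_weld"},
    initial="idle",
    transitions={("idle", "start_weld"): "welding", ("welding", "finish_weld"): "idle"},
    durations={"start_weld": 1.0, "finish_weld": 30.0},
)
print(welder.step("idle", "start_weld"))    # ('welding', 1.0)
```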
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Etingov, Pavel V.; Ren, Huiying
This paper describes a probabilistic look-ahead contingency analysis application that incorporates smart sampling and high-performance computing (HPC) techniques. Smart sampling techniques are implemented to effectively represent the structure and statistical characteristics of uncertainty introduced by different sources in the power system. They can significantly reduce the data set size required for multiple look-ahead contingency analyses, and therefore reduce the time required to compute them. High-performance-computing (HPC) techniques are used to further reduce computational time. These two techniques enable a predictive capability that forecasts the impact of various uncertainties on potential transmission limit violations. The developed package has been tested with real world data from the Bonneville Power Administration. Case study results are presented to demonstrate the performance of the applications developed.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a topical problem in computer vision. In this paper we propose an approach to estimating the orientation of objects lacking axial symmetry. The proposed algorithm estimates the orientation of a specific known 3D object, so a 3D model is required for learning. The algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere. The viewpoints are distributed according to the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
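A sketch of the viewpoint-sampling step only (base icosahedron vertices; the geosphere described in the paper presumably subdivides this further, and the rendering and descriptor steps are left as a comment):

```python
import numpy as np

def icosahedron_viewpoints():
    """The 12 icosahedron vertices as unit view directions, nearly evenly spaced on the sphere."""
    phi = (1 + 5 ** 0.5) / 2
    v = []
    for a in (-1, 1):
        for b in (-phi, phi):
            v += [(0, a, b), (a, b, 0), (b, 0, a)]
    v = np.array(v, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

for direction in icosahedron_viewpoints():
    pass  # render the 3D model from `direction`, then compute a contour descriptor for it
```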
Building an Evaluation Scale using Item Response Theory.
Lalor, John P; Wu, Hao; Yu, Hong
2016-11-01
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.
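A compact two-parameter-logistic (2PL) sketch with invented item parameters (the paper fits IRT models to large sets of human responses on Recognizing Textual Entailment; this only shows how an ability estimate is scored against item difficulty and discrimination):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response given ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

a = np.array([1.2, 0.8, 2.0])          # discrimination per item (assumed)
b = np.array([-1.0, 0.0, 1.5])         # difficulty per item (assumed)
responses = np.array([1, 1, 0])        # observed right/wrong pattern

# Crude maximum-likelihood ability estimate over a grid of theta values.
grid = np.linspace(-4, 4, 801)
loglik = [np.sum(responses * np.log(p_correct(t, a, b)) +
                 (1 - responses) * np.log(1 - p_correct(t, a, b))) for t in grid]
print("theta_hat =", grid[int(np.argmax(loglik))])
```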
Mohr, Johannes A; Jain, Brijnesh J; Obermayer, Klaus
2008-09-01
Quantitative structure-activity relationship (QSAR) analysis is traditionally based on extracting a set of molecular descriptors and using them to build a predictive model. In this work, we propose a QSAR approach based directly on the similarity between the 3D structures of a set of molecules measured by a so-called molecule kernel, which is independent of the spatial prealignment of the compounds. Predictors can be built using the molecule kernel in conjunction with the potential support vector machine (P-SVM), a recently proposed machine learning method for dyadic data. The resulting models make direct use of the structural similarities between the compounds in the test set and a subset of the training set and do not require an explicit descriptor construction. We evaluated the predictive performance of the proposed method on one classification and four regression QSAR datasets and compared its results to the results reported in the literature for several state-of-the-art descriptor-based and 3D QSAR approaches. In this comparison, the proposed molecule kernel method performed better than the other QSAR methods.
Using dynamic programming to improve fiducial marker localization
NASA Astrophysics Data System (ADS)
Wan, Hanlin; Ge, Jiajia; Parikh, Parag
2014-04-01
Fiducial markers are used in a wide range of medical imaging applications. In radiation therapy, they are often implanted near tumors and used as motion surrogates that are tracked with fluoroscopy. We propose a novel and robust method based on dynamic programming (DP) for retrospectively localizing radiopaque fiducial markers in fluoroscopic images. Our method was compared to template matching (TM) algorithms on 407 data sets from 24 patients. We found that the performance of TM varied dramatically depending on the template used (ranging from 47% to 92% of data sets with a mean error <1 mm). DP by itself requires no template and performed as well as the best TM method, localizing the markers in 91% of the data sets with a mean error <1 mm. Finally, by combining DP and TM, we were able to localize the markers in 99% of the data sets with a mean error <1 mm, regardless of the template used. Our results show that DP can be a powerful tool for analyzing tumor motion, capable of accurately locating fiducial markers in fluoroscopic images regardless of marker type, shape, and size.
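A generic dynamic-programming (Viterbi-style) sketch, not the authors' implementation: candidate marker detections in each frame are linked into the path that maximizes total detection score while penalizing large frame-to-frame jumps.

```python
import numpy as np

def dp_track(candidates, scores, jump_penalty=0.05):
    """candidates[t]: (k_t, 2) array of (x, y); scores[t]: (k_t,) detection scores.
    Returns the index of the chosen candidate in each frame."""
    T = len(candidates)
    best = [scores[0].astype(float)]
    back = [None]
    for t in range(1, T):
        # dist[i, j]: distance from candidate j in frame t-1 to candidate i in frame t.
        dist = np.linalg.norm(candidates[t][:, None, :] - candidates[t - 1][None, :, :], axis=2)
        total = best[-1][None, :] - jump_penalty * dist + scores[t][:, None]
        back.append(np.argmax(total, axis=1))
        best.append(np.max(total, axis=1))
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```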
Requirements for a Hydrogen Powered All-Electric Manned Helicopter
NASA Technical Reports Server (NTRS)
Datta, Anubhav
2012-01-01
The objective of this paper is to set propulsion system targets for an all-electric manned helicopter of ultra-light utility class to achieve performance comparable to combustion engines. The approach is to begin with a current two-seat helicopter (Robinson R 22 Beta II-like), design an all-electric power plant as replacement for its existing piston engine, and study performance of the new all-electric aircraft. The new power plant consists of high-pressure Proton Exchange Membrane fuel cells, hydrogen stored in 700 bar type-4 tanks, lithium-ion batteries, and an AC synchronous permanent magnet motor. The aircraft and the transmission are assumed to remain the same. The paper surveys the state of the art in each of these areas, synthesizes a power plant using best available technologies in each, examines the performance achievable by such a power plant, identifies key barriers, and sets future technology targets to achieve performance at par with current internal combustion engines.
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Zeng-Treitler, Qing; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty.
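A hedged sketch of the distance-to-boundary flavour of active learning on generic feature vectors (not the exact DIST, DIV, or CMB algorithms or their clinical datasets): each round, the unlabeled document whose prediction is closest to the decision boundary is queried and added to the training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X, y_oracle, n_init=20, n_rounds=50, seed=0):
    """Query the most uncertain unlabeled sample each round; y_oracle plays the annotator."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), n_init, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X[labeled], y_oracle[labeled])
        margins = np.abs(clf.predict_proba(X[pool])[:, 1] - 0.5)  # distance to boundary
        pick = pool.pop(int(np.argmin(margins)))                  # most uncertain sample
        labeled.append(pick)                                      # "annotate" it
    return clf, labeled
```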
Trading Robustness Requirements in Mars Entry Trajectory Design
NASA Technical Reports Server (NTRS)
Lafleur, Jarret M.
2009-01-01
One of the most important metrics characterizing an atmospheric entry trajectory in preliminary design is the size of its predicted landing ellipse. Often, requirements for this ellipse are set early in design and significantly influence both the expected scientific return from a particular mission and the cost of development. Requirements typically specify a certain probability level (σ-level) for the prescribed ellipse, and frequently this latter requirement is taken at 3σ. However, searches for the justification of 3σ as a robustness requirement suggest it is an empirical rule of thumb borrowed from non-aerospace fields. This paper presents an investigation into the sensitivity of trajectory performance to varying robustness (σ-level) requirements. The treatment of robustness as a distinct objective is discussed, and an analysis framework is presented involving the manipulation of design variables to effect trades between performance and robustness objectives. The scenario for which this method is illustrated is the ballistic entry of an MSL-class Mars entry vehicle. Here, the design variable is entry flight path angle, and objectives are parachute deploy altitude performance and error ellipse robustness. Resulting plots show the sensitivities between these objectives and trends in the entry flight path angles required to design to these objectives. Relevance to the trajectory designer is discussed, as are potential steps for further development and use of this type of analysis.
Huy, Nguyen Tien; Thao, Nguyen Thanh Hong; Tuan, Nguyen Anh; Khiem, Nguyen Tuan; Moore, Christopher C.; Thi Ngoc Diep, Doan; Hirayama, Kenji
2012-01-01
Background and Purpose Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to maximize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, while several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous test and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours. Methods A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and McNemar test for specificity comparison. Results Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938 but was not significantly greater than other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that possess both sensitivity and specificity higher than 85–90%. Conclusions No clinical decision rules provided an acceptable specificity (>50%) with 100% sensitivity when applying our data set in children. More studies in Vietnam and developing countries are required to develop and/or validate clinical rules and more very good biomarkers are required to develop such a perfect rule. PMID:23209715
Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines
Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram
2014-01-01
When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
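A generic graph-based semi-supervised baseline for context (scikit-learn's LabelSpreading on toy data; this is not the prototype vector machine, which is designed precisely to avoid this full-graph cost on large data sets):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y = make_moons(n_samples=400, noise=0.1, random_state=0)
y_semi = np.full_like(y, -1)                 # -1 marks unlabeled points
for cls in (0, 1):
    y_semi[np.where(y == cls)[0][:5]] = cls  # keep only 5 labels per class

# Labels are propagated over a Gaussian-kernel affinity graph built on all points.
model = LabelSpreading(kernel='rbf', gamma=20).fit(X, y_semi)
print("transductive accuracy:", (model.transduction_ == y).mean())
```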
Invalid before impaired: an emerging paradox of embedded validity indicators.
Erdodi, Laszlo A; Lichtenstein, Jonathan D
Embedded validity indicators (EVIs) are cost-effective psychometric tools to identify non-credible response sets during neuropsychological testing. As research on EVIs expands, assessors are faced with an emerging contradiction: the range of credible impairment disappears between the 'normal' and 'invalid' ranges of performance. We labeled this phenomenon the invalid-before-impaired paradox. This study was designed to explore the origin of this psychometric anomaly, subject it to empirical investigation, and generate potential solutions. Archival data were analyzed from a mixed clinical sample of 312 (M Age = 45.2; M Education = 13.6) patients medically referred for neuropsychological assessment. The distributions of scores on eight subtests of the third and fourth editions of the Wechsler Adult Intelligence Scale (WAIS) were examined in relation to the standard normal curve and two performance validity tests (PVTs). Although WAIS subtests varied in their sensitivity to non-credible responding, they were all significant predictors of performance validity. While subtests previously identified as EVIs (Digit Span, Coding, and Symbol Search) were comparably effective at differentiating credible and non-credible response sets, their classification accuracy was driven by their base rate of low scores, requiring different cutoffs to achieve comparable specificity. Invalid performance had a global effect on WAIS scores. Genuine impairment and non-credible performance can co-exist, are often intertwined, and may be psychometrically indistinguishable. A compromise between the alpha and beta bias on PVTs, based on a balanced, objective evaluation of the evidence that requires concessions from both sides, is needed to maintain/restore the credibility of performance validity assessment.
Whitney, Paul; Hinson, John M; Jackson, Melinda L; Van Dongen, Hans P A
2015-05-01
To better understand the sometimes catastrophic effects of sleep loss on naturalistic decision making, we investigated effects of sleep deprivation on decision making in a reversal learning paradigm requiring acquisition and updating of information based on outcome feedback. Subjects were randomized to a sleep deprivation or control condition, with performance testing at baseline, after 2 nights of total sleep deprivation (or rested control), and following 2 nights of recovery sleep. Subjects performed a decision task involving initial learning of go and no go response sets followed by unannounced reversal of contingencies, requiring use of outcome feedback for decisions. A working memory scanning task and psychomotor vigilance test were also administered. Six consecutive days and nights in a controlled laboratory environment with continuous behavioral monitoring. Twenty-six subjects (22-40 y of age; 10 women). Thirteen subjects were randomized to a 62-h total sleep deprivation condition; the others were controls. Unlike controls, sleep-deprived subjects had difficulty with initial learning of go and no go stimuli sets and had profound impairment adapting to reversal. Skin conductance responses to outcome feedback were diminished, indicating blunted affective reactions to feedback accompanying sleep deprivation. Working memory scanning performance was not significantly affected by sleep deprivation. Although sleep-deprived subjects showed the expected attentional lapses, these could not account for impairments in reversal learning decision making. Sleep deprivation is particularly problematic for decision making involving uncertainty and unexpected change. Blunted reactions to feedback while sleep deprived underlie failures to adapt to uncertainty and changing contingencies. Thus, an error may register, but with diminished effect because of reduced affective valence of the feedback or because the feedback is not cognitively bound with the choice. This has important implications for understanding and managing sleep loss-induced cognitive impairment in emergency response, disaster management, military operations, and other dynamic real-world settings with uncertain outcomes and imperfect information. © 2015 Associated Professional Sleep Societies, LLC.
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the setup and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute at different locations and have imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS-compute workflow framework to enable customization with additional computational steps, or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
A machine learning evaluation of an artificial immune system.
Glickman, Matthew; Balthrop, Justin; Forrest, Stephanie
2005-01-01
ARTIS is an artificial immune system framework which contains several adaptive mechanisms. LISYS is a version of ARTIS specialized for the problem of network intrusion detection. The adaptive mechanisms of LISYS are characterized in terms of their machine-learning counterparts, and a series of experiments is described, each of which isolates a different mechanism of LISYS and studies its contribution to the system's overall performance. The experiments were conducted on a new data set, which is more recent and realistic than earlier data sets. The network intrusion detection problem is challenging because it requires one-class learning in an on-line setting with concept drift. The experiments confirm earlier experimental results with LISYS, and they study in detail how LISYS achieves success on the new data set.
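LISYS-type systems detect intrusions by maintaining a population of detectors that, by construction, fail to match normal ('self') traffic; a detector that later matches live traffic flags an anomaly. The abstract does not reproduce the algorithm, so the following is only a generic sketch of the negative-selection and r-contiguous-bit matching ideas commonly associated with LISYS; the activation thresholds, tolerization and other adaptive mechanisms the paper actually studies are omitted, and the data are toy bit strings.

```python
import random

def r_contiguous_match(a, b, r):
    """True if bit strings a and b agree on at least r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r, seed=0):
    """Negative selection: keep random detectors that match no 'self' string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample matched by any detector is flagged as non-self (anomalous)."""
    return any(r_contiguous_match(sample, d, r) for d in detectors)

# Toy usage: 'self' stands in for signatures of normal network traffic.
rng = random.Random(42)
self_set = [tuple(rng.randint(0, 1) for _ in range(16)) for _ in range(5)]
detectors = generate_detectors(self_set, n_detectors=20, length=16, r=8)
sample = tuple(rng.randint(0, 1) for _ in range(16))
print("anomalous" if is_anomalous(sample, detectors, r=8) else "self-like")
```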
Empirical performance of the multivariate normal universal portfolio
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Pang, Sook Theng
2013-09-01
Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long running time and a large amount of computer memory to implement. With the objective of reducing memory and running time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.
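For readers unfamiliar with universal portfolios, the core update is wealth-weighted averaging over a family of constant-rebalanced portfolios. The sketch below implements the plain two-asset Cover-type universal portfolio on a uniform grid; it is not the authors' multivariate normal or finite-order variant, which changes the weighting measure over candidate portfolios but keeps the same averaging idea. All data are synthetic.

```python
import numpy as np

def universal_portfolio(price_relatives, n_grid=101):
    """Two-asset Cover-type universal portfolio on a uniform grid of CRPs.

    price_relatives: (T, 2) array of day-over-day price ratios for two assets.
    Returns the terminal wealth of the wealth-weighted-average strategy.
    """
    w = np.linspace(0.0, 1.0, n_grid)
    candidates = np.stack([w, 1.0 - w], axis=1)   # constant-rebalanced portfolios
    wealth = np.ones(n_grid)                      # running wealth of each candidate
    total_wealth = 1.0
    for x in price_relatives:
        b = (wealth @ candidates) / wealth.sum()  # wealth-weighted average portfolio
        total_wealth *= float(b @ x)              # grow actual wealth with today's relatives
        wealth *= candidates @ x                  # update every candidate's wealth
    return total_wealth

# Synthetic price relatives standing in for the stock-price data sets.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.01, size=(250, 2))
print(f"terminal wealth of the universal portfolio: {universal_portfolio(x):.4f}")
```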
2013-01-01
Background The use of teams is a well-known approach in a variety of settings, including health care, in both developed and developing countries. Team performance comprises teamwork and taskwork, and ascertaining whether a team is performing as expected to achieve the desired outcome has rarely been done in health care settings in resource-limited countries. Measuring teamwork requires identifying dimensions of teamwork or processes that comprise the teamwork construct, while measuring taskwork requires identifying specific team functions. Since 2008 a community-based project in rural Zambia has teamed community health workers (CHWs) and traditional birth attendants (TBAs), supported by Neighborhood Health Committees (NHCs), to provide essential newborn and continuous curative care for children 0–59 months. This paper describes the process of developing a measure of teamwork and taskwork for community-based health teams in rural Zambia. Methods Six group discussions and pile-sorting sessions were conducted with three NHCs and three groups of CHW-TBA teams. Each session comprised six individuals. Results We selected 17 factors identified by participants as relevant for measuring teamwork in this rural setting. Participants endorsed seven functions as important to measure taskwork. To explain team performance, we assigned 20 factors into three sub-groups: personal, community-related and service-related. Conclusion Community and culturally relevant processes, functions and factors were used to develop a tool for measuring teamwork and taskwork in this rural community, and the tool was quite distinct from tools used in developed countries. PMID:23802766
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Juan; Liefer, Nathan C.; Busho, Colin R.
Here, the need for improved Critical Infrastructure and Key Resource (CIKR) security is unquestioned and there has been minimal emphasis on Level-0 (PHY Process) improvements. Wired Signal Distinct Native Attribute (WS-DNA) Fingerprinting is investigated here as a non-intrusive PHY-based security augmentation to support an envisioned layered security strategy. Results are based on experimental response collections from Highway Addressable Remote Transducer (HART) Differential Pressure Transmitter (DPT) devices from three manufacturers (Yokogawa, Honeywell, Endress+Hauser) installed in an automated process control system. Device discrimination is assessed using Time Domain (TD) and Slope-Based FSK (SB-FSK) fingerprints input to Multiple Discriminant Analysis, Maximum Likelihood (MDA/ML) and Random Forest (RndF) classifiers. For 12 different classes (two devices per manufacturer at two distinct set points), both classifiers performed reliably and achieved an arbitrary performance benchmark of average cross-class percent correct of %C > 90%. The least challenging cross-manufacturer results included near-perfect %C ≈ 100%, while the more challenging like-model (serial number) discrimination results included 90% < %C < 100%, with TD Fingerprinting marginally outperforming SB-FSK Fingerprinting; SB-FSK benefits from having less stringent response alignment and registration requirements. The RndF classifier was most beneficial and enabled reliable selection of dimensionally reduced fingerprint subsets that minimize data storage and computational requirements. The RndF-selected feature sets contained 15% of the full-dimensional feature sets and only suffered a worst case %CΔ = 3% to 4% performance degradation.
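The feature extraction behind TD and SB-FSK fingerprints cannot be reproduced from the abstract, but the classification-plus-dimensionality-reduction workflow can be sketched. The code below is a hedged illustration using scikit-learn (an assumed toolchain, not necessarily the authors'): a Random Forest is trained on placeholder fingerprint features, then refit on the top 15% of features ranked by impurity-based importance, mirroring the reported reduced-dimensional subsets. Because the data are random placeholders, the printed accuracies are chance-level and meaningless; only the workflow is the point.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for TD / SB-FSK fingerprint features.
rng = np.random.default_rng(1)
X = rng.normal(size=(1200, 200))        # 200 fingerprint features per collected response
y = rng.integers(0, 12, size=1200)      # 12 device classes (2 devices x 3 makers x 2 set points)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print(f"full-dimensional %C: {100 * rf.score(X_te, y_te):.1f}%")

# Keep the top 15% of features by impurity-based importance, mirroring the
# dimensionally reduced subsets reported above, and refit on that subset.
k = int(0.15 * X.shape[1])
top = np.argsort(rf.feature_importances_)[::-1][:k]
rf_small = RandomForestClassifier(n_estimators=300, random_state=0)
rf_small.fit(X_tr[:, top], y_tr)
print(f"reduced-dimensional %C: {100 * rf_small.score(X_te, y_te):.1f}%")
```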
NASA Technical Reports Server (NTRS)
Vane, Gregg; Porter, Wallace M.; Reimer, John H.; Chrien, Thomas G.; Green, Robert O.
1988-01-01
Results are presented of the assessment of AVIRIS performance during the 1987 flight season by the AVIRIS project and the earth scientists who were chartered by NASA to conduct an independent data quality and sensor performance evaluation. The AVIRIS evaluation program began in late June 1987 with the sensor meeting most of its design requirements except for signal-to-noise ratio in the fourth spectrometer, which was about half of the required level. Several events related to parts failures and design flaws further reduced sensor performance over the flight season. Substantial agreement was found between the assessments by the project and the independent investigators of the effects of these various factors. A summary of the engineering work that is being done to raise AVIRIS performance to its required level is given. In spite of degrading data quality over the flight season, several exciting scientific results were obtained from the data. These include the mapping of the spatial variation of atmospheric precipitable water, detection of environmentally-induced shifts in the spectral red edge of stressed vegetation, detection of spectral features related to pigment, leaf water and ligno-cellulose absorptions in plants, and the identification of many diagnostic mineral absorption features in a variety of geological settings.
Advanced life support control/monitor instrumentation concepts for flight application
NASA Technical Reports Server (NTRS)
Heppner, D. B.; Dahlhausen, M. J.; Fell, R. B.
1986-01-01
Development of regenerative Environmental Control/Life Support Systems requires instrumentation characteristics which evolve with successive development phases. As the development phase moves toward flight hardware, system availability becomes an important design aspect which requires high reliability and maintainability. This program was directed toward instrumentation designs which incorporate features compatible with anticipated flight requirements. The first task consisted of the design, fabrication and test of a Performance Diagnostic Unit. In interfacing with a subsystem's instrumentation, the Performance Diagnostic Unit is capable of determining faulty operation and components within a subsystem, performing on-line diagnostics of what maintenance is needed, and accepting historical status on subsystem performance as such information is retained in the memory of a subsystem's computerized controller. The second focus was development and demonstration of analog signal conditioning concepts which reduce the weight, power, volume, cost and maintenance and improve the reliability of this key assembly of advanced life support instrumentation. The approach was to develop a generic set of signal conditioning elements or cards which can be configured to fit various subsystems. Four generic sensor signal conditioning cards were identified as being required to handle more than 90 percent of the sensors encountered in life support systems. Under company funding, these were detail designed, built and successfully tested.
Evaluation of a Conductive Elastomer Seal for Spacecraft
NASA Technical Reports Server (NTRS)
Daniels, C. C.; Mather, J. L.; Oravec, H. A.; Dunlap, P. H., Jr.
2016-01-01
An electrically conductive elastomer was evaluated as a material candidate for a spacecraft seal. The elastomer used electrically conductive constituents as a means to reduce the resistance between mating interfaces of a sealed joint to meet spacecraft electrical bonding requirements. The compound's outgassing levels were compared against published NASA requirements. The compound was formed into a hollow O-ring seal and its compression set was measured. The O-ring seal was placed into an interface and the electrical resistance and leak rate were quantified. The amount of force required to fully compress the test article in the sealing interface and the force needed to separate the joint were also measured. The outgassing and resistance measurements were below the maximum allowable levels. The room temperature compression set and leak rates were fairly high when compared against other typical spacecraft seal materials, but were not excessive. The compression and adhesion forces were desirably low. Overall, the performance of the elastomer compound was sufficient to be considered for future spacecraft seal applications.
Validation of a short-term memory test for the recognition of people and faces.
Leyk, D; Sievert, A; Heiss, A; Gorges, W; Ridder, D; Alexander, T; Wunderlich, M; Ruther, T
2008-08-01
Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant and high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, margins of error are narrow and inadequate performance may have grave consequences. However, conventional short-term memory tests frequently use abstract settings with little relevance to working situations. They may thus be unable to capture task-specific decrements. The aim of the study was to devise and validate a new test, better reflecting job specifics and employing appropriate stimuli. After 1.5 s (short) or 4.5 s (long) presentation, a set of seven portraits of faces had to be memorised for comparison with two control stimuli. Stimulus appearance followed 2 s (first item) and 8 s (second item) after set presentation. Twenty eight subjects (12 male, 16 female) were tested at seven different times of day, 3 h apart. Recognition rates were above 60% even for the least favourable condition. Recognition was significantly better in the 'long' condition (+10%) and for the first item (+18%). Recognition time showed significant differences (10%) between items. Minor effects of learning were found for response latencies only. Based on occupationally relevant metrics, the test displayed internal and external validity, consistency and suitability for further use in test/retest scenarios. In public security, especially where access to restricted areas is monitored, margins of error are narrow and operator performance must remain high and level. Appropriate schedules for personnel, based on valid test results, are required. However, task-specific data and performance tests, permitting the description of task specific decrements, are not available. Commonly used tests may be unsuitable due to undue abstraction and insufficient reference to real-world conditions. Thus, tests are required that account for task-specific conditions and neurophysiological characteristics.
The carry-over effect of competition in task-sharing: evidence from the joint Simon task.
Iani, Cristina; Anelli, Filomena; Nicoletti, Roberto; Rubichi, Sandro
2014-01-01
The Simon effect, that is the advantage of the spatial correspondence between stimulus and response locations when stimulus location is a task-irrelevant dimension, occurs even when the task is performed together by two participants, each performing a go/no-go task. Previous studies showed that this joint Simon effect, considered by some authors as a measure of self-other integration, does not emerge when during task performance co-actors are required to compete. The present study investigated whether and for how long competition experienced during joint performance of one task can affect performance in a following joint Simon task. In two experiments, we required pairs of participants to perform together a joint Simon task, before and after jointly performing together an unrelated non-spatial task (the Eriksen flanker task). In Experiment 1, participants always performed the joint Simon task under neutral instructions, before and after performing the joint flanker task in which they were explicitly required either to cooperate with (i.e., cooperative condition) or to compete against a co-actor (i.e., competitive condition). In Experiment 2, they were required to compete during the joint flanker task and to cooperate during the subsequent joint Simon task. Competition experienced in one task affected the way the subsequent joint task was performed, as revealed by the lack of the joint Simon effect, even though, during the Simon task participants were not required to compete (Experiment 1). However, prior competition no longer affected subsequent performance if a new goal that created positive interdependence between the two agents was introduced (Experiment 2). These results suggest that the emergence of the joint Simon effect is significantly influenced by how the goals of the co-acting individuals are related, with the effect of competition extending beyond the specific competitive setting and affecting subsequent interactions.
Piloted Well Clear Performance Evaluation of Detect and Avoid Systems with Suggestive Guidance
NASA Technical Reports Server (NTRS)
Mueller, Eric; Santiago, Confesor; Watza, Spencer
2016-01-01
Regulations to establish operational and performance requirements for unmanned aircraft systems (UAS) are being developed by a consortium of government, industry and academic institutions (RTCA, 2013). Those requirements will apply to the new detect-and-avoid (DAA) systems and other equipment necessary to integrate UAS with the United States (U.S.) National Airspace System (NAS) and will be determined according to their contribution to the overall safety case. That safety case requires demonstration that DAA-equipped UAS collectively operating in the NAS meet an airspace safety threshold (AST). Several key gaps must be closed in order to link equipment requirements to an airspace safety case. Foremost among these is calculation of the system's risk ratio, the degree to which a particular system mitigates violation of an aircraft separation standard (FAA, 2013). The risk ratio of a DAA system, in combination with risk ratios of other collision mitigation mechanisms, will determine the overall safety of the airspace measured in terms of the number of collisions per flight hour. It is not known how effective a pilot-in-the-loop DAA system is, or even which parameters of the DAA system most improve the pilot's ability to maintain separation. The relationship between the DAA system design and the overall effectiveness of the DAA system that includes the pilot, expressed as a risk ratio, must be determined before DAA operational and performance requirements can be finalized. Much research has been devoted to integrating UAS into non-segregated airspace (Dalamagkidis, 2009, Ostwald, 2007, Gillian, 2012, Hesselink, 2011, Santiago, 2015, Rorie 2015 and 2016). Several traffic displays intended for use as part of a DAA system have gone through human-in-the-loop simulation and flight-testing. Most of these evaluations were part of development programs to produce a deployable system, so it is unclear how to generalize particular aspects of those designs to general requirements for future traffic displays (Calhoun, 2014). Other displays have undergone testing to collect data that may generalize to new displays, but have not been evaluated in the context of the development of an overall safety case for UAS equipped with DAA systems in the NAS (Bell, 2012). Other research efforts focus on DAA surveillance performance and separation standards. Together with this work, they are expected to facilitate validation of the airspace safety case (Park, 2014 and Johnson, 2015). The contribution of the present work is to quantify the effectiveness of the pilot-automation system to remain well clear as a function of display features and surveillance sensor error. This quantification will help enable selection of a minimum set of DAA design features that meets the AST, a set that may not be unique for all UAS platforms. A second objective is to collect and analyze pilot performance parameters that will improve the modeling of overall DAA system performance in non-human-in-the-loop simulations. Simulating the DAA-equipped UAS in such batch experiments will allow investigation of a much larger number of encounters than is possible in human simulations. This capability is necessary to demonstrate that a particular set of DAA requirements meets the AST under all foreseeable operational conditions.
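The relationship the abstract describes between individual risk ratios and the airspace-level collision rate is multiplicative when the mitigations are treated as independent: the unmitigated collision rate is scaled by the product of the risk ratios and compared against the AST. The sketch below is illustrative arithmetic only; the rates, ratios and threshold are made-up placeholders rather than values from the RTCA/FAA safety case, and the independence assumption is ours.

```python
# Illustrative arithmetic only: the unmitigated rate, risk ratios and threshold are
# made-up placeholders, not values from the RTCA/FAA safety case.

unmitigated_collisions_per_fh = 1e-4          # hypothetical rate with no mitigations
risk_ratios = {
    "DAA (pilot-in-the-loop)": 0.1,           # system leaves 10% of the unmitigated risk
    "ATC separation services": 0.2,
    "see-and-avoid / big sky": 0.5,
}

residual = unmitigated_collisions_per_fh
for name, rr in risk_ratios.items():
    residual *= rr                            # mitigations assumed independent

airspace_safety_threshold = 1e-6              # hypothetical collisions per flight hour
verdict = "meets" if residual <= airspace_safety_threshold else "misses"
print(f"residual rate: {residual:.2e} collisions per flight hour ({verdict} the AST)")
```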
Norfolk, Tim; Siriwardena, A Niroshan
2013-01-01
This discussion paper describes a new and comprehensive model for diagnosing the causes of individual medical performance problems: SKIPE (skills, knowledge, internal, past and external factors). This builds on a previous paper describing a unifying theory of clinical practice, the RDM-p model, which captures the primary skill sets required for effective medical performance (relationship, diagnostics and management), and the professionalism that needs to underpin them. The SKIPE model is currently being used, in conjunction with the RDM-p model, for the in-depth assessment and management of doctors whose performance is a cause for concern.
1991-05-01
or may not bypass the editing function. At present, editing rules beyond those required for translation have not been stipulated. When explicit... editing rules become defined, the editor at a site LGN may perform two levels of edit checking: warning, which would insert blanks or pass as submitted... position image transactions into a transaction set. This low-level edit checking is performed at the site LGN to reduce transmission costs and to...
Thermal and structural analysis of the GOES scan mirror's on orbit performance
NASA Technical Reports Server (NTRS)
Zurmehly, G. E.; Hookman, R. A.
1991-01-01
The on-orbit performance of the GOES satellite's scan mirror has been predicted by means of thermal, structural, and optical models. A simpler-than-conventional thermal model was used to reduce the time required to obtain orbital predictions, and the structural model was used to predict on-earth gravity sag and on-orbit distortions. The transfer of data from the thermal model to the structural model was automated for a given set of thermal nodes and structural grids.
Capsule performance optimization in the National Ignition Campaign
NASA Astrophysics Data System (ADS)
Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.
2010-05-01
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
Capsule performance optimization in the National Ignition Campaign
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landen, O. L.; Bradley, D. K.; Braun, D. G.
2010-05-15
A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
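Both abstracts above refer to a "roll-up" of random and systematic uncertainties against a required budget but do not give the combination rule. A common convention is to root-sum-square independent random terms and add systematic terms linearly; the sketch below shows that style of budget check with entirely placeholder numbers, so both the convention and the values are assumptions, not NIC data.

```python
import math

# Hedged sketch of an error-budget roll-up; the terms, values and budget are
# placeholders, and the RSS-plus-systematics convention is an assumption, not NIC data.

random_terms = {                 # independent 1-sigma random errors on a tuned parameter (%)
    "measurement": 1.5,
    "calibration": 1.0,
    "cross-coupling": 0.8,
    "surrogacy": 1.2,
    "scale-up": 0.7,
}
systematic_terms = {"model offset": 0.5}

rss_random = math.sqrt(sum(v ** 2 for v in random_terms.values()))
total = rss_random + sum(systematic_terms.values())
budget = 3.0
status = "within" if total <= budget else "exceeds"
print(f"rolled-up uncertainty: {total:.2f}% against a {budget:.1f}% budget ({status})")
```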
Advanced Wet Tantalum Capacitors: Design, Specifications and Performance
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2016-01-01
Insertion of new types of commercial, high volumetric efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that have been developed for capacitors manufactured to MIL-PRF-39006 requirements. A particular concern with wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in a build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high value advanced wet tantalum capacitors, but it has not been properly evaluated yet. This presentation gives a review of specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce risks of failures have been suggested.
Advanced Wet Tantalum Capacitors: Design, Specifications and Performance
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2017-01-01
Insertion of new types of commercial, high volumetric efficiency wet tantalum capacitors in space systems requires reassessment of the existing quality assurance approaches that have been developed for capacitors manufactured to MIL-PRF-39006 requirements. A particular concern with wet electrolytic capacitors is that leakage currents flowing through the electrolyte can cause gas generation, resulting in a build-up of internal gas pressure and rupture of the case. The risk associated with excessive leakage currents and increased pressure is greater for high value advanced wet tantalum capacitors, but it has not been properly evaluated yet. This presentation gives a review of specifics of the design, performance, and potential reliability risks associated with advanced wet tantalum capacitors. Problems related to setting adequate requirements for DPA, leakage currents, hermeticity, stability at low and high temperatures, ripple currents for parts operating in vacuum, and random vibration testing are discussed. Recommendations for screening and qualification to reduce risks of failures have been suggested.
Strategic adaptation to performance objectives in a dual-task setting.
Janssen, Christian P; Brumby, Duncan P
2010-11-01
How do people interleave attention when multitasking? One dominant account is that the completion of a subtask serves as a cue to switch tasks. But what happens if switching solely at subtask boundaries led to poor performance? We report a study in which participants manually dialed a UK-style telephone number while driving a simulated vehicle. If the driver were to exclusively return his or her attention to driving after completing a subtask (i.e., using the single break in the xxxxx-xxxxxx representational structure of the number), then we would expect to see a relatively poor driving performance. In contrast, our results show that drivers choose to return attention to steering control before the natural subtask boundary. A computational modeling analysis shows that drivers had to adopt this strategy to meet the required performance objective of maintaining an acceptable lateral position in the road while dialing. Taken together these results support the idea that people can strategically control the allocation of attention in multitask settings to meet specific performance criteria. Copyright © 2010 Cognitive Science Society, Inc.
Patterson, Olga V; Forbush, Tyler B; Saini, Sameer D; Moser, Stephanie E; DuVall, Scott L
2015-01-01
In order to measure the level of utilization of colonoscopy procedures, identifying the primary indication for the procedure is required. Colonoscopies may be utilized not only for screening, but also for diagnostic or therapeutic purposes. To determine whether a colonoscopy was performed for screening, we created a natural language processing system to identify colonoscopy reports in the electronic medical record system and extract indications for the procedure. A rule-based model and three machine-learning models were created using 2,000 manually annotated clinical notes of patients cared for in the Department of Veterans Affairs. Performance of the models was measured and compared. Analysis of the models on a test set of 1,000 documents indicates that the rule-based system's performance stays fairly constant when evaluated on the training and testing sets. However, the machine-learning model without feature selection showed a significant decrease in performance. Therefore, the rule-based classification system appears to be more robust than a machine-learning system in cases where no feature selection is performed.
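The abstract contrasts a rule-based indication classifier with machine-learning models but shows neither. The sketch below is a hedged, toy illustration of both flavors on invented note snippets, using scikit-learn as an assumed toolchain; the keyword triggers, labels and example notes are placeholders, not the study's annotation scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented note snippets and labels; the study's 2,000 annotated VA notes are not public.
notes = [
    "average risk screening colonoscopy, no symptoms",
    "colonoscopy for surveillance of prior adenomatous polyps",
    "diagnostic colonoscopy for iron deficiency anemia and melena",
    "screening exam, family history of colorectal cancer",
]
labels = ["screening", "surveillance", "diagnostic", "screening"]

def rule_based_indication(text):
    """Rule-based flavor: keyword triggers decide the indication."""
    t = text.lower()
    if "surveillance" in t or "polyp" in t:
        return "surveillance"
    if any(k in t for k in ("anemia", "melena", "bleeding", "diarrhea")):
        return "diagnostic"
    return "screening"

# Machine-learning flavor: bag-of-words model trained on the annotated notes.
ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
ml_model.fit(notes, labels)

new_note = "colonoscopy for surveillance after polypectomy"
print("rule-based:", rule_based_indication(new_note))
print("ml model  :", ml_model.predict([new_note])[0])
```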
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5-21% and strand elimination performance by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David
2012-01-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5 to 21% and strand elimination performance by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. PMID:22854035
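The learning step described above (accept or reject candidate gap connections and strand deletions, evaluated with precision-recall curves) can be sketched generically. The code below uses scikit-learn on synthetic stand-in features; "bagged boosted decision trees" is read here as bagging over gradient-boosted tree base learners, which is one plausible interpretation rather than the authors' exact configuration, and the estimator keyword assumes a recent scikit-learn release.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for geometric/topological features of candidate gap connections.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One reading of "bagged boosted decision trees": bagging over boosted-tree base learners.
clf = BaggingClassifier(
    estimator=GradientBoostingClassifier(n_estimators=50, max_depth=2),
    n_estimators=10,
    random_state=0,
)
clf.fit(X_tr, y_tr)

precision, recall, _ = precision_recall_curve(y_te, clf.predict_proba(X_te)[:, 1])
# Equal-error-style summary: the operating point where precision and recall are closest.
idx = int(np.argmin(np.abs(precision - recall)))
print(f"precision ~ recall ~ {precision[idx]:.3f} at the equal-error operating point")
```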
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
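The simulation behind a bias detection curve can be sketched compactly: inject a fixed bias into a stream of results, run the moving-average procedure until its control limits are crossed, record how many results that took, and repeat to get a median per bias level. The sketch below is a hedged illustration with a simple truncated mean; the window, truncation limits, control limits and sodium-like result distribution are placeholders, not the settings optimized in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def results_to_detection(bias, n_sim=100, window=20,
                         trunc=(130.0, 150.0), limits=(138.0, 142.0)):
    """Median number of results until a truncated moving average exceeds its limits."""
    counts = []
    for _ in range(n_sim):
        buf, n = [], 0
        while True:
            n += 1
            x = rng.normal(140.0, 3.0) + bias      # sodium-like result with injected bias
            if trunc[0] <= x <= trunc[1]:          # truncation: extreme values are ignored
                buf.append(x)
                buf = buf[-window:]
            if len(buf) == window:
                ma = sum(buf) / window
                if ma < limits[0] or ma > limits[1]:
                    break                          # MA control limit crossed: bias detected
            if n > 5000:                           # guard against non-detection
                break
        counts.append(n)
    return float(np.median(counts))

# One point of a bias detection curve per injected bias level.
for bias in (1.0, 2.0, 4.0):
    print(f"bias {bias:+.1f} mmol/L -> median results to detection: "
          f"{results_to_detection(bias):.0f}")
```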
Performance comparison of optical interference cancellation system architectures.
Lu, Maddie; Chang, Matt; Deng, Yanhua; Prucnal, Paul R
2013-04-10
The performance of three optics-based interference cancellation systems is compared and contrasted with each other, and with traditional electronic techniques for interference cancellation. The comparison is based on a set of common performance metrics that we have developed for this purpose. It is shown that thorough evaluation of our optical approaches takes into account the traditional notions of depth of cancellation and dynamic range, along with notions of link loss and uniformity of cancellation. Our evaluation shows that our use of optical components affords performance that surpasses traditional electronic approaches, and that the optimal choice for an optical interference canceller requires taking into account the performance metrics discussed in this paper.
Shuttle on-orbit contamination and environmental effects
NASA Technical Reports Server (NTRS)
Leger, L. J.; Jacobs, S.; Ehlers, H. K. F.; Miller, E.
1985-01-01
Ensuring the compatibility of the space shuttle system with payloads and payload measurements is discussed. An extensive set of quantitative requirements and goals was developed and implemented by the space shuttle program management. The performance of the Shuttle system as measured by these requirements and goals was assessed partly through the use of the induced environment contamination monitor on Shuttle flights 2, 3, and 4. Contamination levels are low and generally within the requirements and goals established. Additional data from near-term payloads and already planned contamination measurements will complete the environment definition and allow for the development of contamination avoidance procedures as necessary for any payload.
Oregon Elks Children's Eye Clinic vision screening results for astigmatism.
Vaughan, Joannah; Dale, Talitha; Herrera, Daniel; Karr, Daniel
2018-04-19
In the Elks Preschool Vision Screening program, which uses the plusoptiX S12 to screen children 36-60 months of age, the most common reason for over-referral, using the 1.50 D referral criterion, was found to be astigmatism. The goal of this study was to compare the accuracy of the 2.25 D referral criterion for astigmatism to the 1.50 D referral criterion using screening data from 2013-2014. Vision screenings were conducted on Head Start children 36-72 months of age by Head Start teachers and Elks Preschool Vision Screening staff using the plusoptiX S12. Data on 4,194 vision screenings in 2014 and 4,077 in 2013 were analyzed. Area under the curve (AUC) and receiver operating characteristic (ROC) curve analysis were performed to determine the optimal referral criteria. A t test and scatterplot analysis were performed to compare how many children required treatment using the different criteria. The medical records of 136 (2.25 D) and 117 children (1.50 D) who were referred by the plusoptiX screening for potential astigmatism and received dilated eye examinations from their local eye doctors were reviewed retrospectively. Mean subject age was 4 years. Treatment for astigmatism was prescribed to 116 of 136 using the 2.25 D setting compared to 60 of 117 using the 1.50 D setting. In 2013 the program used the 1.50 D setting for astigmatism. With the astigmatism setting changed to 2.25 D, 85% of referrals required treatment, reducing false positives by 34%. Copyright © 2018. Published by Elsevier Inc.
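The reported referral outcomes can be checked with simple arithmetic from the counts in the abstract, reading the 34% reduction in false positives as an absolute drop in the proportion of referred children who did not need treatment (an interpretation, since the abstract does not state it explicitly).

```python
# Arithmetic check of the reported referral outcomes (counts taken from the abstract).
referred_225, treated_225 = 136, 116     # 2.25 D criterion
referred_150, treated_150 = 117, 60      # 1.50 D criterion

ppv_225 = treated_225 / referred_225     # proportion of referrals that needed treatment
ppv_150 = treated_150 / referred_150
fp_225 = 1 - ppv_225                     # proportion of referrals that were false positives
fp_150 = 1 - ppv_150

print(f"2.25 D: PPV {ppv_225:.0%}, false-positive referrals {fp_225:.0%}")
print(f"1.50 D: PPV {ppv_150:.0%}, false-positive referrals {fp_150:.0%}")
print(f"absolute reduction in false-positive referrals: {fp_150 - fp_225:.0%}")
```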
Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A
2005-01-01
Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. The standard CUSUM (cumulative sum) charts are used for this purpose. However, they are difficult to use in terms of identifying the point at which outcomes begin to be outside recommended limits. Objective: To assess the Bernoulli CUSUM chart that permits not only a 100% inspection rate, but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one-year mortality was set at 24%, the European Liver Transplant Registry average. A standard CUSUM was compared with the Bernoulli CUSUM: the control value mortality was therefore 24%, maximum accepted mortality 30%, and the average number of observations to signal was 500—that is, the likelihood of a false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215) then progressively ascended indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarm signals after patient 143 indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It only requires three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
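For readers unfamiliar with the chart, a Bernoulli CUSUM scores each binary outcome with a log-likelihood-ratio increment defined by the in-control rate p0 and the maximum accepted rate p1, and signals when the running (non-negative) sum crosses a threshold h chosen so that the in-control average number of observations to signal reaches the desired value (about 500 here). The abstract does not give its exact implementation, so the sketch below uses the standard update with a placeholder threshold; in practice h would be tuned by simulation or tables.

```python
import math

# Hedged sketch of a Bernoulli CUSUM for 1-year mortality after transplantation.
# p0 and p1 follow the abstract (24% expected, 30% maximum accepted); the alarm
# threshold h is a placeholder that would normally be tuned (e.g., by simulation)
# so that the in-control average number of observations to signal is about 500.

p0, p1 = 0.24, 0.30
w_death = math.log(p1 / p0)               # positive increment when a patient dies
w_alive = math.log((1 - p1) / (1 - p0))   # negative increment when a patient survives
h = 4.0                                   # placeholder alarm threshold

def bernoulli_cusum(outcomes, threshold=h):
    """Yield (patient index, running score, alarm flag) for a stream of 0/1 outcomes."""
    s = 0.0
    for i, death in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_death if death else w_alive))
        yield i, s, s >= threshold

# Toy usage: every fourth patient dies (25% observed mortality, close to the 24% target).
outcomes = [1 if i % 4 == 0 else 0 for i in range(1, 101)]
alarms = [i for i, s, alarm in bernoulli_cusum(outcomes) if alarm]
print("first alarm at patient", alarms[0] if alarms else "none (within limits)")
```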
Gurd, Brendon J; Patel, Jugal; Edgett, Brittany A; Scribbans, Trisha D; Quadrilatero, Joe; Fischer, Steven L
2018-05-28
Whole body sprint-interval training (WB-SIT) represents a mode of exercise training that is both time-efficient and does not require access to an exercise facility. The current study examined the feasibility of implementing a WB-SIT intervention in a workplace setting. A total of 747 employees from a large office building were invited to participate, with 31 individuals being enrolled in the study. Anthropometrics, aerobic fitness, core and upper body strength, and lower body mobility were assessed before and after a 12-week exercise intervention consisting of 2-4 training sessions per week. Each training session required participants to complete eight 20-second intervals (separated by 10 seconds of rest) of whole body exercise. The proportion of participation was 4.2% while the response rate was 35% (11/31 participants completed post-training testing). In responders, compliance to prescribed training was 83±17%, and significant (p < 0.05) improvements were observed for aerobic fitness, push-up performance and lower body mobility. These results demonstrate the efficacy of WB-SIT for improving fitness and mobility in an office setting, but highlight the difficulties in achieving high rates of participation and response in this setting.
Quality control for federal clean water act and safe drinking water act regulatory compliance.
Askew, Ed
2013-01-01
QC sample results are required in order to have confidence in the results from analytical tests. Some of the AOAC water methods include specific QC procedures, frequencies, and acceptance criteria. These are considered to be the minimum controls needed to perform the method successfully. Some regulatory programs, such as those in 40 CFR Part 136.7, require additional QC or have alternative acceptance limits. Essential QC measures include method calibration, reagent standardization, assessment of each analyst's capabilities, analysis of blind check samples, determination of the method's sensitivity (method detection level or quantification limit), and daily evaluation of bias, precision, and the presence of laboratory contamination or other analytical interference. The details of these procedures, their performance frequency, and expected ranges of results are set out in this manuscript. The specific regulatory requirements of 40 CFR Part 136.7 for the Clean Water Act, the laboratory certification requirements of 40 CFR Part 141 for the Safe Drinking Water Act, and the ISO 17025 accreditation requirements under The NELAC Institute are listed.
NASA Astrophysics Data System (ADS)
Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.
2012-09-01
We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference planewaves calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with planewaves calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.
Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra
2014-01-01
Cycling training induced by Functional Electrical Stimulation (FES) currently requires manual setting of several parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of the identification of the stimulation strategy as the angular ranges during which FES drove the motion, the comparison between the identified strategy and the physiological muscular activation strategy, and the setting of the pulse amplitude and duration of each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.
Contingency and similarity in response selection.
Prinz, Wolfgang
2018-05-09
This paper explores issues of task representation in choice reaction time tasks. How is it possible, and what does it take, to represent such a task in a way that enables a performer to do the task in line with the prescriptions entailed in the instructions? First, a framework for task representation is outlined which combines the implementation of task sets and their use for performance with different kinds of representational operations (pertaining to feature compounds for event codes and code assemblies for task sets, respectively). Then, in a second step, the framework is itself embedded in the bigger picture of the classical debate on the roles of contingency and similarity for the formation of associations. The final conclusion is that both principles are needed and that the operation of similarity at the level of task sets requires and presupposes the operation of contingency at the level of event codes. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
Interpreting the ASTM 'content standard for digital geospatial metadata'
Nebert, Douglas D.
1996-01-01
ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.
Visualization of unsteady computational fluid dynamics
NASA Astrophysics Data System (ADS)
Haimes, Robert
1994-11-01
A brief summary of the computer environment used for calculating three dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent advent based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.
Visualization of unsteady computational fluid dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1994-01-01
A brief summary of the computer environment used for calculating three dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent advent based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
Samuel V. Glass; Stanley D. Gatland II; Kohta Ueno; Christopher J. Schumacher
2017-01-01
ASHRAE Standard 160, Criteria for Moisture-Control Design Analysis in Buildings, was published in 2009. The standard sets criteria for moisture design loads, hygrothermal analysis methods, and satisfactory moisture performance of the building envelope. One of the evaluation criteria specifies conditions necessary to avoid mold growth. The current standard requires that...
Retrieving Essential Material at the End of Lectures Improves Performance on Statistics Exams
ERIC Educational Resources Information Center
Lyle, Keith B.; Crawford, Nicole A.
2011-01-01
At the end of each lecture in a statistics for psychology course, students answered a small set of questions that required them to retrieve information from the same day's lecture. These exercises constituted retrieval practice for lecture material subsequently tested on four exams throughout the course. This technique is called the PUREMEM…
Nickel cadmium cell designs negative to positive material ratio and precharge levels
NASA Technical Reports Server (NTRS)
Gross, S.
1977-01-01
A review is made of the factors affecting the choices of negative-to-positive materials ratio and negative precharge in nickel-cadmium cells. The effects of these variables on performance are given, and the different methods for setting precharge are evaluated. The effects of special operating requirements on the design are also discussed.
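The two design quantities the review above discusses can be shown with a small worked example. The split of excess negative capacity into precharge (charged) and overcharge protection (uncharged) follows common NiCd design practice; the numbers are illustrative assumptions, not values from the report.

```python
# Worked example of the two NiCd design quantities discussed in the review:
# the negative-to-positive capacity ratio and the negative precharge.
# Illustrative numbers only; not taken from the report.

positive_capacity_ah = 20.0          # usable positive electrode capacity
np_ratio = 1.6                       # negative-to-positive capacity ratio
precharge_fraction = 0.15            # fraction of negative capacity left charged
                                     # when the positive is fully discharged

negative_capacity_ah = np_ratio * positive_capacity_ah
precharge_ah = precharge_fraction * negative_capacity_ah
excess_negative_ah = negative_capacity_ah - positive_capacity_ah
overcharge_protection_ah = excess_negative_ah - precharge_ah

print(f"negative capacity:      {negative_capacity_ah:.1f} Ah")
print(f"precharge:              {precharge_ah:.1f} Ah")
print(f"overcharge protection:  {overcharge_protection_ah:.1f} Ah")
```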
Laboratory Design Notes, 1966
ERIC Educational Resources Information Center
1966-01-01
A collection of laboratory design notes to set forth minimum criteria required in the design of basic medical research laboratory buildings. Recommendations contained are primarily concerned with features of design which affect quality of performance and future flexibility of facility systems. Subjects of economy and safety are discussed where…
2009 Navy ManTech Project Book
2009-01-01
pieces which are welded together, filled with syntactic foam, and welded to the sail and hull structure. The ManTech project was successful in...cladding has demonstrated the required performance characteristics. The testing demonstrated manufacturability of optical fibers with enhanced hard...using Liquid Injection Molding Simulation (LIMS) and Polyworx software tools for infusion set-up optimization. Test articles fabricated are
Code of Federal Regulations, 2011 CFR
2011-04-01
... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES IN VITRO DIAGNOSTIC PRODUCTS FOR HUMAN USE Requirements for Manufacturers and Producers § 809.40... set forth in this section. (b) Sample testing shall be performed in a laboratory using screening tests...
Code of Federal Regulations, 2010 CFR
2010-04-01
... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES IN VITRO DIAGNOSTIC PRODUCTS FOR HUMAN USE Requirements for Manufacturers and Producers § 809.40... set forth in this section. (b) Sample testing shall be performed in a laboratory using screening tests...
Factors Responsible for Performance on the Day-Night Task: Response Set or Semantics?
ERIC Educational Resources Information Center
Simpson, Andrew; Riggs, Kevin J.
2005-01-01
In a recent study, Diamond, Kirkham, and Amso (2002) obtained evidence consistent with the claim that the day-night task requires inhibition because the picture and its corresponding conflicting response are semantically related. In their study, children responded more accurately in a dog-pig condition (see /day picture/ say "dog"; see /night…
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision. The goal is to obtain "expert meshes" independent of user skill and to run every case adaptively in production settings.
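The workflow this abstract outlines, refining the mesh until an output error estimate satisfies a prescribed bound with no user in the loop, can be summarized by a loop like the one below. The solver, error estimator, and refinement routines are toy stand-ins, not the authors' Cartesian-mesh implementation.

```python
# Skeleton of an output-based adaptive refinement loop of the kind the
# abstract describes: refine until the estimated error in an output
# functional meets a prescribed bound, with no user supervision.
# Solver, estimator, and refinement are toy stand-ins.

import random

def solve(mesh):
    """Stand-in flow solve."""
    return {"cells": mesh["cells"]}

def estimate_output_error(mesh, solution):
    """Stand-in adjoint-weighted error estimate: error shrinks as the mesh grows."""
    total_error = 1.0 / solution["cells"]
    indicators = [random.random() for _ in range(8)]    # per-region error indicators
    return total_error, indicators

def refine(mesh, solution, indicators):
    """Stand-in refinement: subdividing flagged regions roughly doubles the cell count."""
    return {"cells": mesh["cells"] * 2}

def adapt_to_tolerance(mesh, tolerance, max_cycles=10):
    for cycle in range(max_cycles):
        solution = solve(mesh)
        total_error, indicators = estimate_output_error(mesh, solution)
        print(f"cycle {cycle}: {mesh['cells']} cells, estimated error {total_error:.2e}")
        if total_error <= tolerance:           # prescribed error bound satisfied
            return mesh, solution
        mesh = refine(mesh, solution, indicators)
    return mesh, solution                       # best effort after max_cycles

if __name__ == "__main__":
    adapt_to_tolerance({"cells": 1_000}, tolerance=1e-5)
```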
Ex Ovo Model for Directly Visualizing Chick Embryo Development
ERIC Educational Resources Information Center
Dorrell, Michael I.; Marcacci, Michael; Bravo, Stephen; Kurz, Troy; Tremblay, Jacob; Rusing, Jack C.
2012-01-01
We describe a technique for removing and growing chick embryos in culture that utilizes relatively inexpensive materials and requires little space. It can be readily performed in class by university, high school, or junior high students, and teachers of any grade level should be able to set it up for their students. Students will be able to…
Competency Based Training Program for Department Chairpersons and Other Resource Personnel.
ERIC Educational Resources Information Center
Bingen, Frances N.; And Others
The Competency Based Training Program is a three-part, three-phase package. It contains: (1) a research document; (2) a set of 18 programmed units; and (3) cassettes to accompany two specific units. The program phases require that: (1) the participant and a training advisor jointly perform a needs assessment activity and select appropriate units…
Masking release for words in amplitude-modulated noise as a function of modulation rate and task
Buss, Emily; Whittle, Lisa N.; Grose, John H.; Hall, Joseph W.
2009-01-01
For normal-hearing listeners, masked speech recognition can improve with the introduction of masker amplitude modulation. The present experiments tested the hypothesis that this masking release is due in part to an interaction between the temporal distribution of cues necessary to perform the task and the probability of those cues temporally coinciding with masker modulation minima. Stimuli were monosyllabic words masked by speech-shaped noise, and masker modulation was introduced via multiplication with a raised sinusoid of 2.5–40 Hz. Tasks included detection, three-alternative forced-choice identification, and open-set identification. Overall, there was more masking release associated with the closed-set than the open-set tasks. The best rate of modulation also differed as a function of task; whereas low modulation rates were associated with best performance for the detection and three-alternative identification tasks, performance improved with modulation rate in the open-set task. This task-by-rate interaction was also observed when amplitude-modulated speech was presented in a steady masker, and for low- and high-pass filtered speech presented in modulated noise. These results were interpreted as showing that the optimal rate of amplitude modulation depends on the temporal distribution of speech cues and the information required to perform a particular task. PMID:19603883
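The masker-modulation step this abstract describes can be sketched in a few lines: multiply a noise masker by a raised sinusoid so the envelope dips toward zero at the modulation minima. White noise stands in for the speech-shaped noise used in the study, and the (1 + sin)/2 modulator is one plausible reading of "raised sinusoid"; both are assumptions, not the authors' exact stimuli.

```python
# Sketch of a raised-sinusoid amplitude-modulated noise masker. White noise
# stands in for speech-shaped noise, and the (1 + sin)/2 modulator is an
# assumed reading of "raised sinusoid"; not the authors' exact stimuli.

import numpy as np

def modulated_masker(duration_s=1.0, fs=44100, mod_rate_hz=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    noise = rng.standard_normal(t.size)                       # stand-in for speech-shaped noise
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * mod_rate_hz * t))  # non-negative envelope
    return modulator * noise

if __name__ == "__main__":
    for rate in (2.5, 10.0, 40.0):                            # rates spanning the study's range
        masker = modulated_masker(mod_rate_hz=rate)
        print(f"{rate:>5.1f} Hz masker RMS: {masker.std():.3f}")
```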