Sample records for event execution reliability

  1. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
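
    The upper/lower bounding idea this abstract describes can be sketched compactly. The following is a minimal Python illustration, not ASSURE itself: it assumes a simple discrete state space with caller-supplied transition probabilities (ASSURE's grammar-driven semi-Markov machinery is far richer), and it bounds the failure probability by tracking the probability mass abandoned by pruning.

    ```python
    from collections import deque

    def failure_bounds(initial, transitions, is_failure, prune=1e-12):
        """Explore a state space breadth-first, accumulating the probability
        mass that reaches failure states.  Paths whose mass falls below
        `prune` are abandoned; their mass widens the gap between the lower
        and upper bound on system failure probability.  States with no
        outgoing transitions are treated as safe and absorbing, and cycles
        are assumed to shed mass so exploration terminates.

        transitions(state) -> list of (next_state, probability), sum <= 1
        is_failure(state)  -> True for absorbing failure states
        """
        lower = 0.0       # mass known to reach failure
        unresolved = 0.0  # pruned mass that *might* reach failure
        queue = deque([(initial, 1.0)])
        while queue:
            state, mass = queue.popleft()
            if is_failure(state):
                lower += mass
                continue
            for nxt, p in transitions(state):
                m = mass * p
                if m < prune:
                    unresolved += m
                else:
                    queue.append((nxt, m))
        return lower, lower + unresolved

    # Hypothetical degradation chain for a triple-redundant component.
    table = {
        "3ok": [("2ok", 3e-4), ("failed", 1e-7)],
        "2ok": [("1ok", 3e-4), ("failed", 1e-4)],
        "1ok": [("failed", 3e-4)],
        "failed": [],
    }
    lo_b, up_b = failure_bounds("3ok", lambda s: table[s],
                                lambda s: s == "failed")
    print(f"P(failure) in [{lo_b:.3e}, {up_b:.3e}]")
    ```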

  2. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing the mission objectives of a fairly autonomous interplanetary mobile spacecraft while maximizing reliability. The aims are to provide an extensible, reliable system that is conservative in its use of on-board resources, gets full value from subsystem autonomy, and avoids the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR, and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation, and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested that is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept, termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions is introduced.

  3. A Comparison of Independent Event-Related Desynchronization Responses in Motor-Related Brain Areas to Movement Execution, Movement Imagery, and Movement Observation.

    PubMed

    Duann, Jeng-Ren; Chiou, Jin-Chern

    2016-01-01

    Electroencephalographic (EEG) event-related desynchronization (ERD) induced by movement imagery or by observing biological movements performed by someone else has recently been used extensively for brain-computer interface-based applications, such as stroke rehabilitation training and motor skill learning. However, the ERD responses induced by movement imagery and observation might not be as reliable as those induced by movement execution. Given that studies on the reliability of the EEG ERD responses induced by these activities are still lacking, we conducted an EEG experiment with movement imagery, movement observation, and movement execution, each performed multiple times in a pseudorandomized order within the same experimental runs. Independent component analysis (ICA) was then applied to the EEG data to find the common motor-related EEG source activity shared by the three motor tasks. Finally, conditional EEG ERD responses associated with the three movement conditions were computed and compared. Among the three motor conditions, the EEG ERD responses induced by motor execution showed the strongest and longest-lasting alpha-power suppression. The ERD responses of movement imagery and movement observation only partially resembled the ERD pattern of the movement execution condition, with slightly better detectability for the ERD responses associated with movement imagery and faster ERD responses for movement observation. This may indicate different levels of involvement of the same motor-related brain circuits during the different movement conditions. In addition, because the conditional EEG ERD responses obtained from the ICA preprocessing came with minimal contamination from unrelated and/or artifactual noisy components, this result can serve as a reference for devising a brain-computer interface using the EEG ERD features of movement imagery or observation.

  4. Discrete event command and control for networked teams with multiple missions

    NASA Astrophysics Data System (ADS)

    Lewis, Frank L.; Hudas, Greg R.; Pang, Chee Khiang; Middleton, Matthew B.; McMurrough, Christopher

    2009-05-01

    During mission execution in military applications, the TRADOC Pamphlet 525-66 Battle Command and Battle Space Awareness capabilities prescribe expectations that networked teams will perform reliably under changing mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield conditions. A mathematically justified networked computing environment, called the Discrete Event Control (DEC) framework, is provided. DEC provides the logical connectivity among all team participants, including mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are developed and demonstrated in a simulation study and in an implementation on a distributed wireless sensor network. The results show that the tasks of multiple missions are correctly sequenced in real time and that shared resources are suitably assigned to competing tasks under dynamically changing conditions, without conflicts or bottlenecks.

  5. A common neural code for similar conscious experiences in different individuals

    PubMed Central

    Naci, Lorina; Cusack, Rhodri; Anello, Mimma; Owen, Adrian M.

    2014-01-01

    The interpretation of human consciousness from brain activity, without recourse to speech or action, is one of the most provoking and challenging frontiers of modern neuroscience. We asked whether there is a common neural code that underpins similar conscious experiences, which could be used to decode these experiences in the absence of behavior. To this end, we used richly evocative stimulation (an engaging movie) portraying real-world events to elicit a similar conscious experience in different people. Common neural correlates of conscious experience were quantified and related to measurable, quantitative and qualitative, executive components of the movie through two additional behavioral investigations. The movie’s executive demands drove synchronized brain activity across healthy participants’ frontal and parietal cortices in regions known to support executive function. Moreover, the timing of activity in these regions was predicted by participants’ highly similar qualitative experience of the movie’s moment-to-moment executive demands, suggesting that synchronization of activity across participants underpinned their similar experience. Thus we demonstrate, for the first time to our knowledge, that a neural index based on executive function reliably predicted every healthy individual’s similar conscious experience in response to real-world events unfolding over time. This approach provided strong evidence for the conscious experience of a brain-injured patient, who had remained entirely behaviorally nonresponsive for 16 y. The patient’s executive engagement and moment-to-moment perception of the movie content were highly similar to that of every healthy participant. These findings shed light on the common basis of human consciousness and enable the interpretation of conscious experience in the absence of behavior. PMID:25225384

  6. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. The program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver based on an incomplete-LU (ILU) preconditioned conjugate gradient squared (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations, such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
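
    For readers who want to experiment with the solver strategy this abstract credits (ILU-preconditioned CGS), a minimal sketch using SciPy's stock routines follows. The random diagonally dominant matrix is a hypothetical stand-in for SIERRA's coupled Poisson/continuity systems, and the routine names assume a reasonably recent SciPy.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Small diagonally dominant sparse system standing in for the coupled
    # device equations (hypothetical; not derived from any real device).
    n = 500
    rng = np.random.default_rng(0)
    A = sp.random(n, n, density=0.01, random_state=rng, format="csc")
    A = A + sp.eye(n, format="csc") * (abs(A).sum(axis=1).max() + 1.0)
    b = rng.standard_normal(n)

    # Incomplete-LU factorization used as a preconditioner M ~ A^-1.
    ilu = spla.spilu(A, drop_tol=1e-5)
    M = spla.LinearOperator(A.shape, ilu.solve)

    x, info = spla.cgs(A, b, M=M)   # info == 0 signals convergence
    print("converged" if info == 0 else f"cgs returned {info}")
    ```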

  7. Questionnaire-based assessment of executive functioning: Psychometrics.

    PubMed

    Castellanos, Irina; Kronenberger, William G; Pisoni, David B

    2018-01-01

    The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.

  8. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
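
    A toy evaluator for the five gate types makes the calculation concrete. This sketch assumes statistically independent basic events and exact gate arithmetic; FTC's actual algorithm additionally derives rigorous error bounds (see the later FTC records). The gate semantics here, including reading XOR as "exactly one input occurs", are common conventions rather than FTC's documented definitions.

    ```python
    from itertools import combinations
    from math import prod

    def gate_prob(kind, probs, m=None):
        """Probability of a gate's output event, given failure
        probabilities of its (assumed independent) inputs."""
        if kind == "AND":
            return prod(probs)
        if kind == "OR":
            return 1.0 - prod(1.0 - p for p in probs)
        if kind == "XOR":        # exactly one input event occurs
            return sum(p * prod(1.0 - q for j, q in enumerate(probs) if j != i)
                       for i, p in enumerate(probs))
        if kind == "INVERT":
            (p,) = probs
            return 1.0 - p
        if kind == "M_OF_N":     # at least m of the n inputs occur
            n = len(probs)
            return sum(prod(probs[i] for i in on) *
                       prod(1.0 - probs[i] for i in range(n) if i not in on)
                       for k in range(m, n + 1)
                       for on in combinations(range(n), k))
        raise ValueError(kind)

    # Top event = OR(AND(a, b), 2-of-3(c, d, e)) with hypothetical rates.
    a, b, c, d, e = 1e-3, 2e-3, 5e-4, 5e-4, 5e-4
    top = gate_prob("OR", [gate_prob("AND", [a, b]),
                           gate_prob("M_OF_N", [c, d, e], m=2)])
    print(f"top-event probability = {top:.3e}")
    ```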

  9. FTC - THE FAULT-TREE COMPILER (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault-tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.

  10. FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree, such as a component failure rate or a specific event probability, by allowing the user to vary one failure rate or failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault-tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle; please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.

  11. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method, and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of cores providing a highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution once one has been detected by the selective pairing facility.

  12. Causal simulation and sensor planning in predictive monitoring

    NASA Technical Reports Server (NTRS)

    Doyle, Richard J.

    1989-01-01

    Two issues are addressed which arise in the task of detecting anomalous behavior in complex systems with numerous sensor channels: how to adjust alarm thresholds dynamically, within the changing operating context of the system, and how to utilize sensors selectively, so that nominal operation can be verified reliably without processing a prohibitive amount of sensor data. The approach involves simulation of a causal model of the system, which provides information on expected sensor values, and on dependencies between predicted events, useful in assessing the relative importance of events so that sensor resources can be allocated effectively. The potential applicability of this work to the execution monitoring of robot task plans is briefly discussed.

  13. Eye-witness memory and suggestibility in children with Asperger syndrome.

    PubMed

    McCrory, Eamon; Henry, Lucy A; Happé, Francesca

    2007-05-01

    Individuals with autism spectrum disorders (ASD) present with a particular profile of memory deficits, executive dysfunction and impaired social interaction that may raise concerns about their recall and reliability in forensic and legal contexts. Extant studies of memory shed limited light on this issue as they involved either laboratory-based tasks or protocols that varied between participants. The current study used a live classroom event to investigate eye-witness recall and suggestibility in children with Asperger syndrome (AS group; N = 24) and typically developing children (TD group; N = 27). All participants were aged between 11 and 14 years and were interviewed using a structured protocol. Two measures of executive functioning were also administered. The AS group were found to be no more suggestible and no less accurate than their peers. However, free recall elicited less information, including gist, in the AS group. TD, but not AS, participants tended to focus on the socially salient aspects of the scene in their free recall. Both general and specific questioning elicited similar numbers of new details in both groups. Significant correlations were found between memory recall and executive functioning performance in the AS group only. The present study indicates that children with AS can act as reliable witnesses but they may be more reliant on questioning to facilitate recall. Our findings also provide evidence for poor gist memory. It is speculated that such differences stem from weak central coherence and lead to a reliance on generic cognitive processes, such as executive functions, during recall. Future studies are required to investigate possible differences in compliance, rates of forgetting and false memory.

  14. Integrated Hardware and Software for No-Loss Computing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    When an algorithm is distributed across multiple threads executing on many distinct processors, the loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When the implementation is massively hardware-distributed, the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints at which the current state of some or all of the execution is saved; in the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from there. A serious problem that arises when one distributes a problem across multiple threads and physical processors is that the likelihood of the algorithm failing increases, through no fault of the scientist, as a result of hardware faults coupled with operating-system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in the atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing the reliability of manned spacecraft.

  15. The role of test-retest reliability in measuring individual and group differences in executive functioning.

    PubMed

    Paap, Kenneth R; Sawi, Oliver

    2016-12-01

    Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual-differences-in-RT model recently developed by Miller and Ulrich was used to evaluate this trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. Reliability measures must also take into account possible day effects, which classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual-differences-in-RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity, which is partly caused by deficiencies in test-retest reliability.
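
    The reliability trade-off this abstract describes is easy to reproduce in simulation. The sketch below uses hypothetical numbers and is not the Miller-Ulrich model: each subject has a stable base RT plus a small, stable interference effect, and each session adds measurement noise. Single-condition means retest reliably; difference scores do not, because subtracting conditions removes the large stable component and doubles the noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    true_base = rng.normal(500, 50, n)     # stable individual mean RT (ms)
    true_effect = rng.normal(40, 8, n)     # stable individual interference effect

    def observe(noise=20):
        """One session: congruent and incongruent mean RTs with
        day-specific measurement noise (hypothetical magnitudes)."""
        congruent = true_base + rng.normal(0, noise, n)
        incongruent = true_base + true_effect + rng.normal(0, noise, n)
        return congruent, incongruent

    c1, i1 = observe()   # day 1
    c2, i2 = observe()   # day 2

    r_single = np.corrcoef(i1, i2)[0, 1]          # single mean RT
    r_diff = np.corrcoef(i1 - c1, i2 - c2)[0, 1]  # interference score
    print(f"test-retest r, single RT:  {r_single:.2f}")   # high
    print(f"test-retest r, difference: {r_diff:.2f}")     # much lower
    ```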

  16. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program, is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  17. Reliability techniques for computer executive programs

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Computer techniques for increasing the stability and reliability of executive and supervisory systems were studied. Program segmentation characteristics are discussed along with a validation system which is designed to retain the natural top down outlook in coding. An analysis of redundancy techniques and roll back procedures is included.

  18. NPTool: Towards Scalability and Reliability of Business Process Management

    NASA Astrophysics Data System (ADS)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents the NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data-flow management.

  19. Event dependence in U.S. executions

    PubMed Central

    Baumgartner, Frank R.; Box-Steffensmeier, Janet M.

    2018-01-01

    Since 1976, the United States has seen over 1,400 judicial executions, and these have been highly concentrated in only a few states and counties. The number of executions across counties appears to fit a stretched distribution. Such distributions are typically reflective of self-reinforcing processes in which the probability of observing an event increases with each previous event. To examine these processes, we employ a two-pronged empirical strategy. First, we utilize bootstrapped Kolmogorov-Smirnov tests to determine whether the pattern of executions reflects a stretched distribution, and confirm that it does. Second, we test for event dependence using the conditional frailty model. Our tests estimate the monthly hazard of an execution in a given county, accounting for the number of previous executions, homicides, poverty, and population demographics. Controlling for other factors, we find that the number of prior executions in a county increases the probability of the next execution and accelerates its timing. Once a jurisdiction goes down a given path, the path becomes self-reinforcing, causing counties to separate into those never executing (the vast majority) and those which use the punishment frequently. This finding is of great legal and normative concern and, ultimately, may not be consistent with the equal protection clause of the U.S. Constitution. PMID:29293583
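
    The first prong, a bootstrapped Kolmogorov-Smirnov test, can be sketched as follows. The data are synthetic and the log-normal is only a stand-in for a "stretched" distribution; the paper's actual data and candidate distributions are not reproduced here. The parametric bootstrap corrects for the fact that fitting parameters to the sample biases the plain KS p-value.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Synthetic "executions per county" counts (hypothetical).
    counts = rng.lognormal(mean=0.5, sigma=1.2, size=300)

    def ks_fit(sample):
        """Fit a log-normal and return the KS statistic and parameters."""
        shape, loc, scale = stats.lognorm.fit(sample, floc=0)
        d = stats.kstest(sample, "lognorm", args=(shape, loc, scale)).statistic
        return d, (shape, loc, scale)

    d_obs, params = ks_fit(counts)

    # Parametric bootstrap: how often does refitting on data simulated
    # from the fitted model yield a KS statistic at least this large?
    boot = []
    for _ in range(500):
        sim = stats.lognorm.rvs(*params, size=counts.size, random_state=rng)
        boot.append(ks_fit(sim)[0])
    p_value = np.mean(np.array(boot) >= d_obs)
    print(f"KS D = {d_obs:.3f}, bootstrap p = {p_value:.3f}")
    ```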

  20. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2017-07-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times, one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425) and the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  21. Ratings of Everyday Executive Functioning (REEF): A parent-report measure of preschoolers' executive functioning skills.

    PubMed

    Nilsen, Elizabeth S; Huyder, Vanessa; McAuley, Tara; Liebermann, Dana

    2017-01-01

    Executive functioning (EF) facilitates the development of academic, cognitive, and social-emotional skills, and deficits in EF are implicated in a broad range of child psychopathologies. Although EF has clear implications for early development, the few questionnaires that assess EF in preschoolers tend to ask parents for global judgments of executive dysfunction and thus do not cover the full range of EF within the preschool age group. Here we present a new measure of preschoolers' EF, the Ratings of Everyday Executive Functioning (REEF), that capitalizes on parents' observations of their preschoolers' (i.e., 3- to 5-year-olds) behavior in specific, everyday contexts. Over four studies, the items comprising the REEF were refined and the measure's reliability and validity were evaluated. Factor analysis of the REEF yielded one factor, with items showing strong internal reliability. More importantly, children's scores on the REEF related to both laboratory measures of EF and another parent-report EF questionnaire. Moreover, reflecting divergent validity, the REEF was more strongly related to measures of EF than to measures of affective styles. The REEF also captured differences in children's executive skills across the preschool years, and norms at 6-month intervals are reported. In summary, the REEF is a new parent-report measure that provides researchers with an efficient, valid, and reliable means of assessing preschoolers' executive functioning.

  22. An Empirically Keyed Scale for Measuring Managerial Attitudes toward Women Executives.

    ERIC Educational Resources Information Center

    Dubno, Peter; And Others

    1979-01-01

    A scale (Managerial Attitudes toward Women Executives Scale -- MATWES) provides reliability and validity measures regarding managerial attitudes toward women executives. It employs a projective test for item generation and uses a panel of women executives as Q-sorters to select items. The Scale and its value in minimizing researcher bias in its…

  23. WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, S; Kessler, M; Litzenberg, D

    2015-06-15

    Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event-driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in-house developed subscription-publication service, EventNet, was added to the Aria OIS to serve as a message broker for critical events occurring in the OIS and in software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented, and the resulting event is generated and passed back to EventNet. Users can subscribe to those events and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, and Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for the Plan Revision QA and Plan 2nd Check agents. The agents pulled the plan data, executed the prescribed QA, stored the results, and updated EventNet for publication. The Winston-Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of the radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event-driven framework has inverted the data chase in our radiotherapy QA process. Rather than having dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Disclosures: Mr. Keranen is an employee of Varian Medical Systems. Dr. Moran's institution receives research support for her effort on a linear accelerator QA project from Varian Medical Systems; other quality projects involving her effort are funded by Blue Cross Blue Shield of Michigan, the Breast Cancer Research Foundation, and the NIH.
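
    The EventNet pattern, a broker plus event-triggered agents with per-subscriber filters, can be illustrated in a few lines of Python. All names, event shapes, and the QA routine below are invented for illustration; EventNet itself is in-house software layered on the Aria OIS.

    ```python
    from collections import defaultdict

    class Broker:
        """Toy subscription-publication service in the spirit of EventNet."""
        def __init__(self):
            self._subs = defaultdict(list)

        def subscribe(self, topic, handler, where=lambda e: True):
            self._subs[topic].append((where, handler))

        def publish(self, topic, event):
            for where, handler in self._subs[topic]:
                if where(event):
                    handler(event)

    broker = Broker()

    def run_second_check(plan_id):
        return False  # stand-in for the real dose re-calculation

    def second_check_agent(event):
        """Runs automatically on plan approval; publishes its own result."""
        passed = run_second_check(event["plan_id"])
        broker.publish("qa.plan_2nd_check",
                       {"plan_id": event["plan_id"], "passed": passed})

    broker.subscribe("ois.plan_approved", second_check_agent)
    # Notify the physicist only on failures (a custom filter).
    broker.subscribe("qa.plan_2nd_check",
                     lambda e: print(f"ALERT: plan {e['plan_id']} failed 2nd check"),
                     where=lambda e: not e["passed"])

    broker.publish("ois.plan_approved", {"plan_id": "PT-1042"})
    ```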

  24. Absolute and relative reliability of acute effects of aerobic exercise on executive function in seniors.

    PubMed

    Donath, Lars; Ludyga, Sebastian; Hammes, Daniel; Rossmeissl, Anja; Andergassen, Nadin; Zahner, Lukas; Faude, Oliver

    2017-10-25

    Aging is accompanied by a decline of executive function. Aerobic exercise training induces moderate improvements in cognitive domains (i.e., attention, processing, executive function, memory) in seniors. Most conclusive data are obtained from studies of dementia or cognitive impairment. Confident detection of exercise training effects requires adequate between-day reliability and low day-to-day variability obtained from acute studies. These absolute and relative reliability measures have not yet been examined for a single aerobic training session in seniors. Twenty-two healthy and physically active seniors (age: 69 ± 3 y, BMI: 24.8 ± 2.2, VO2peak: 32 ± 6 mL/kg bodyweight) were enrolled in this randomized controlled cross-over study. A repeated between-day comparison [i.e., day 1 (habituation) vs. day 2, and day 2 vs. day 3] of executive function testing (Eriksen flanker test, Stroop color test, digit span, five-point test) before and after aerobic cycling exercise at 70% of the heart rate reserve [0.7 × (HRmax − HRrest)] was conducted. Reliability measures were calculated for pre, post, and change scores. Large between-day differences between day 1 and day 2 were found for reaction times (flanker and Stroop color testing) and completed figures (five-point test) at pre and post testing (0.002 < p < 0.05, 0.16 < ηp² < 0.38). These differences notably declined when comparing day 2 and day 3. Absolute between-day variability (CoV) dropped from 10 to 5% when comparing day 2 vs. day 3 instead of day 1 vs. day 2. ICC ranges also changed from day 1 vs. day 2 (0.65 < ICC < 0.87) to day 2 vs. day 3 (0.40 < ICC < 0.93). Interestingly, reliability measures for pre-post change scores were low (0.02 < ICC < 0.71), and these did not improve when comparing day 2 with day 3. During inhibition tests, reaction times showed excellent reliability compared with the poor to fair reliability of accuracy. Notable habituation to the whole testing procedure should be considered, as it increased the reliability of the different executive function tests. Change scores of executive function after acute aerobic exercise cannot be detected reliably, and large intra- and inter-individual variability of responses to acute aerobic exercise in seniors can be presumed.

  25. 76 FR 71011 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... Reliability Technical Conference. Docket No. AD12-1-000 North American Electric Docket No. RC11-6-000... Chief Executive Officer, North American Electric Reliability Corporation (NERC) Kevin Burke, Chairman... and Reliability, American Public Power Association (APPA); NERC Standards Committee Chairman Deborah...

  26. Development of Internet-Based Tasks for the Executive Function Performance Test.

    PubMed

    Rand, Debbie; Lee Ben-Haim, Keren; Malka, Rachel; Portnoy, Sigal

    The Executive Function Performance Test (EFPT) is a reliable and valid performance-based tool to assess executive functions (EFs). This study's objective was to develop and verify two Internet-based tasks for the EFPT. A cross-sectional study assessed the alternate-form reliability of the Internet-based bill-paying and telephone-use tasks in healthy adults and people with subacute stroke (Study 1). It also sought to establish the tasks' criterion validity for assessing EF deficits by correlating performance with that on the Trail Making Test in five groups: healthy young adults, healthy older adults, people with subacute stroke, people with chronic stroke, and young adults with attention deficit hyperactivity disorder (Study 2). The alternate-form reliability and initial construct validity of the Internet-based bill-paying task were verified. Criterion validity was established for both tasks. The Internet-based tasks are comparable to the original EFPT tasks and can be used for assessment of EF deficits.

  27. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices; this network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous, event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test, and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.

  28. Effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings.

    PubMed

    Martinho, Margarida Suzel Lopes; da Costa Santos, Cristina Maria Nogueira; Silva Carvalho, João Luís Mendonça; Bernardes, João Francisco Montenegro Andrade Lima

    2018-02-01

    Inter-observer agreement and reliability in hysteroscopic image assessment remain uncertain, and the factors that may influence them have been studied only in relation to the experience of hysteroscopists. We aim to assess the effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings. Ninety hysteroscopies were video-recorded and randomized into a group without clinical information (Group 1) and a group with clinical information (Group 2). The videos were independently analyzed by three hysteroscopists with regard to lesion location, dimension, and type, as well as the decision to perform a biopsy. One of the hysteroscopists had executed all the exams before. Proportions of agreement (PA) and kappa statistics (κ) with 95% confidence intervals (95% CI) were used. In Group 2, there was a higher proportion of a normal diagnosis (p < 0.001) and a lower proportion of biopsies recommended (p = 0.027). Observer agreement and reliability were better in Group 2, with the PA and κ ranging, respectively, from 0.73 (95% CI 0.62, 0.83) and 0.44 (95% CI 0.26, 0.63), for image quality, to 0.94 (95% CI 0.88, 0.99) and 0.85 (95% CI 0.65, 0.95), for the decision to perform a biopsy. Execution of the exams before the analysis of the video-recordings did not significantly affect the results. With clinical information, agreement and reliability in the overall analysis of hysteroscopic video-recordings may reach almost perfect results, and this was not significantly affected by the execution of the exams before the analysis. However, there is still uncertainty in the analysis of specific endometrial cavity abnormalities.

  29. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean ± SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth either self-selected or controlled to 80° of knee flexion using a goniometer. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean ± 95% CI: 0.2 ± 0.07), moderate random errors (mean ± 95% CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean ± 95% CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
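
    The three reliability quantities used in this record (effect size for systematic error, coefficient of variation for random error, and ICC for test-retest correlation) can be computed for a two-session design as sketched below. The ICC formula follows Shrout and Fleiss's ICC(2,1), which may differ from the variant the authors actually used, and the effect-size denominator is an approximation.

    ```python
    import numpy as np

    def test_retest_metrics(day1, day2):
        """Systematic error (effect size), random error (CV%), and
        ICC(2,1) per Shrout & Fleiss (1979) for paired sessions."""
        x = np.column_stack([day1, day2]).astype(float)
        n, k = x.shape
        d = (x[:, 1].mean() - x[:, 0].mean()) / x.std(ddof=1)  # approximate pooled SD
        cv = 100 * (np.abs(x[:, 1] - x[:, 0]) / x.mean(axis=1)).mean()
        grand = x.mean()
        ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        ss_err = ((x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
        ms_err = ss_err / ((n - 1) * (k - 1))
        icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                    + k * (ms_cols - ms_err) / n)
        return d, cv, icc

    # Hypothetical peak-power values (W/kg) for four participants.
    day1 = np.array([30.1, 28.4, 33.0, 25.9])
    day2 = np.array([30.8, 27.9, 32.2, 26.5])
    d, cv, icc = test_retest_metrics(day1, day2)
    print(f"effect size {d:.2f}, CV {cv:.1f}%, ICC(2,1) {icc:.2f}")
    ```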

  30. Aerobic Fitness and Cognitive Development: Event-Related Brain Potential and Task Performance Indices of Executive Control in Preadolescent Children

    ERIC Educational Resources Information Center

    Hillman, Charles H.; Buck, Sarah M.; Themanson, Jason R.; Pontifex, Matthew B.; Castelli, Darla M.

    2009-01-01

    The relationship between aerobic fitness and executive control was assessed in 38 higher- and lower-fit children (M[subscript age] = 9.4 years), grouped according to their performance on a field test of aerobic capacity. Participants performed a flanker task requiring variable amounts of executive control while event-related brain potential…

  31. Analysis of Alerting System Failures in Commercial Aviation Accidents

    NASA Technical Reports Server (NTRS)

    Mumaw, Randall J.

    2017-01-01

    The role of an alerting system is to make the system operator (e.g., pilot) aware of an impending hazard or unsafe state so the hazard can be avoided or managed successfully. A review of 46 commercial aviation accidents (between 1998 and 2014) revealed that, in the vast majority of events, either the hazard was not alerted or relevant hazard alerting occurred but failed to aid the flight crew sufficiently. For this set of events, alerting system failures were placed in one of five phases: Detection, Understanding, Action Selection, Prioritization, and Execution. This study also reviewed the evolution of alerting system schemes in commercial aviation, which revealed naive assumptions about pilot reliability in monitoring flight path parameters; specifically, pilot monitoring was assumed to be more effective than it actually is. Examples are provided of the types of alerting system failures that have occurred, and recommendations are provided for alerting system improvements.

  32. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include: i) rich descriptions of sensors as services, based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model-interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  33. Leveraging the BPEL Event Model to Support QoS-aware Process Execution

    NASA Astrophysics Data System (ADS)

    Zaid, Farid; Berbner, Rainer; Steinmetz, Ralf

    Business processes executed as compositions of distributed Web Services are susceptible to different fault types. The Web Services Business Process Execution Language (BPEL) is widely used to execute such processes. While BPEL provides fault-handling mechanisms for functional faults, such as invalid message types, it still lacks a flexible native mechanism for handling non-functional exceptions associated with violations of the QoS levels typically specified in a governing Service Level Agreement (SLA). In this paper, we present an approach that complements BPEL's fault handling, in which expected QoS levels and the necessary recovery actions are specified declaratively in the form of Event-Condition-Action (ECA) rules. Our main contribution is leveraging BPEL's standard event model, which we use as an event space for the created ECA rules. We validate our approach with an extension to an open-source BPEL engine.
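
    The shape of such an ECA rule set is easy to sketch. The event types, fields, and actions below are invented for illustration; a real deployment would subscribe to the BPEL engine's event bus rather than call a dispatch function directly.

    ```python
    # Each rule: (event type, condition over the event, recovery action).
    rules = [
        ("activity_completed",
         lambda e: e["elapsed_s"] > e["sla_max_s"],            # QoS violated?
         lambda e: print(f"re-bind {e['partner']} to backup endpoint")),
        ("invoke_faulted",
         lambda e: e["retries"] < 3,
         lambda e: print(f"retry {e['partner']} (attempt {e['retries'] + 1})")),
    ]

    def on_engine_event(event):
        """Called for every event the process engine emits (hypothetical hook)."""
        for etype, condition, action in rules:
            if event["type"] == etype and condition(event):
                action(event)

    # A completion event whose duration exceeds its SLA budget.
    on_engine_event({"type": "activity_completed", "partner": "CreditCheck",
                     "elapsed_s": 4.2, "sla_max_s": 2.0})
    ```

    Keeping the rules as data, separate from process logic, mirrors the paper's point: QoS handling can be added or changed declaratively without re-editing the BPEL process itself.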

  34. Sense and reliability. A conversation with celebrated psychologist Karl E. Weick. Interview by Diane L. Coutu.

    PubMed

    Weick, Karl E

    2003-04-01

    Most of us see the organizations we operate in--our schools or companies, for instance--as monolithic and predictable, subjecting us to deadening routines and demanding dehumanizing conformity. But companies are more unpredictable and more alive than we imagine, according to Karl Weick, a psychology professor at the University of Michigan and an expert on organizational behavior. Weick says executives can learn a lot about managing the unexpected from organizations that can't afford surprises in the workplace--nuclear plants, firefighting units, or emergency rooms, for instance. In this conversation with HBR senior editor Diane Coutu, Weick examines the characteristics of these high-reliability organizations (HROs) and suggests ways that other organizations can implement their practices and philosophies. The key difference between high-reliability organizations and other companies is the mindfulness with which people in most HROs react to even very weak signs that some kind of change or danger is approaching. For instance, nuclear-plant workers Weick has studied immediately readjust dials and system commands when an automated system doesn't respond as expected. Weick contrasts this with Ford's inability to pick up on weak signs in the 1970s that there were lethal problems with the design of the Pinto gas tank. HROs are fixated on failure. They eschew plans and blueprints, looking instead for the details that might be missing. And they refuse to simplify reality, Weick says. Indeed, by cultivating broad work experiences and enlarging their repertoires, generalist executives can avoid getting paralyzed by "cosmology episodes"--events that make people feel as though the universe is no longer a rational, orderly system.

  35. Temporal Precision of Neuronal Information in a Rapid Perceptual Judgment

    PubMed Central

    Ghose, Geoffrey M.; Harrison, Ian T.

    2009-01-01

    In many situations, such as pedestrians crossing a busy street or prey evading predators, rapid decisions based on limited perceptual information are critical for survival. The brevity of these perceptual judgments constrains how neuronal signals are integrated or pooled over time because the underlying sequence of processes, from sensation to perceptual evaluation to motor planning and execution, all occur within several hundred milliseconds. Because most previous physiological studies of these processes have relied on tasks requiring considerably longer temporal integration, the neuronal basis of such rapid decisions remains largely unexplored. In this study, we examine the temporal precision of neuronal activity associated with a rapid perceptual judgment. We find that the activity of individual neurons over tens of milliseconds can reliably convey information about sensory events and was well correlated with the animals' judgments. There was a strong correlation between sensory reliability and the correlation with behavioral choice, suggesting that rapid decisions were preferentially based on the most reliable sensory signals. We also find that a simple model in which the responses of a small number of individual neurons (<5) are summed can completely explain behavioral performance. These results suggest that neuronal circuits are sufficiently precise to allow for cognitive decisions to be based on small numbers of action potentials from highly reliable neurons. PMID:19109454

  36. Temporal precision of neuronal information in a rapid perceptual judgment.

    PubMed

    Ghose, Geoffrey M; Harrison, Ian T

    2009-03-01

    In many situations, such as pedestrians crossing a busy street or prey evading predators, rapid decisions based on limited perceptual information are critical for survival. The brevity of these perceptual judgments constrains how neuronal signals are integrated or pooled over time because the underlying sequence of processes, from sensation to perceptual evaluation to motor planning and execution, all occur within several hundred milliseconds. Because most previous physiological studies of these processes have relied on tasks requiring considerably longer temporal integration, the neuronal basis of such rapid decisions remains largely unexplored. In this study, we examine the temporal precision of neuronal activity associated with a rapid perceptual judgment. We find that the activity of individual neurons over tens of milliseconds can reliably convey information about sensory events and was well correlated with the animals' judgments. There was a strong correlation between sensory reliability and the correlation with behavioral choice, suggesting that rapid decisions were preferentially based on the most reliable sensory signals. We also find that a simple model in which the responses of a small number of individual neurons (<5) are summed can completely explain behavioral performance. These results suggest that neuronal circuits are sufficiently precise to allow for cognitive decisions to be based on small numbers of action potentials from highly reliable neurons.

  37. Joint Exercise Program: DOD Needs to Take Steps to Improve the Quality of Funding Data

    DTIC Science & Technology

    2017-02-01

    … technology systems—the Joint Training Information Management System (JTIMS) and the Execution Management System—to manage the execution of the Joint Exercise Program, but does not have assurance that funding execution data in the Execution Management System are reliable. JTIMS is the system of record for the Joint Exercise Program that combatant commanders use to plan and manage their joint training exercises. GAO observed significant variation …

  38. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event-filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system and thereby reduce the intrusiveness of the monitoring process. We are developing an event-filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications); this architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event-filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event-filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event-filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event-filtering mechanisms and outline how our architecture improves key aspects of event filtering.
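
    One way to picture the filtering idea: composable predicates evaluated close to the event source, so that only matching events cross the network to the monitor. The sketch below is a generic illustration with invented event fields, not the paper's actual filter language.

    ```python
    class Filter:
        """Composable event predicate; AND/OR composition lets a monitor
        push one combined subscription filter to each event source."""
        def __init__(self, pred):
            self.pred = pred
        def __call__(self, event):
            return self.pred(event)
        def __and__(self, other):
            return Filter(lambda e: self(e) and other(e))
        def __or__(self, other):
            return Filter(lambda e: self(e) or other(e))

    severe = Filter(lambda e: e["severity"] >= 3)
    from_net = Filter(lambda e: e["component"] == "network")
    slow = Filter(lambda e: e.get("latency_ms", 0) > 250)

    wanted = severe & (from_net | slow)   # pushed to the event source

    stream = [
        {"component": "network", "severity": 4},
        {"component": "disk", "severity": 1},
        {"component": "app", "severity": 3, "latency_ms": 400},
    ]
    forwarded = [e for e in stream if wanted(e)]
    print(f"forwarded {len(forwarded)} of {len(stream)} events")  # 2 of 3
    ```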

  19. Design and implementation of the GLIF3 guideline execution engine.

    PubMed

    Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Ogunyemi, Omolola; Zeng, Qing; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H

    2004-10-01

We have developed the GLIF3 Guideline Execution Engine (GLEE) as a tool for executing guidelines encoded in the GLIF3 format. In addition to serving as an interface to the GLIF3 guideline representation model to support the specified functions, GLEE provides defined interfaces to electronic medical records (EMRs) and other clinical applications to facilitate its integration with the clinical information system at a local institution. The execution model of GLEE takes the "system suggests, user controls" approach. A tracing system is used to record an individual patient's state when a guideline is applied to that patient. GLEE can also support an event-driven execution model once it is linked to the clinical event monitor in a local environment. Evaluation has shown that GLEE can be used effectively for proper execution of guidelines encoded in the GLIF3 format. For each guideline in the evaluation, GLEE's execution duplicated that of the reference systems that implemented the same guideline using different approaches. The execution flexibility and generality provided by GLEE, and its integration with a local environment, need to be further evaluated in clinical settings. Integration of GLEE with a specific event-monitoring and order-entry environment is the next step of our work to demonstrate its use for clinical decision support. Potential uses of GLEE also include quality assurance, guideline development, and medical education.

  20. Biomedical engineers and participation in judicial executions: capital punishment as a technical problem.

    PubMed

    Doyle, John

    2007-01-01

    This paper discusses the topic of judicial execution from the perspective of the intersection of the technological issues and the professional ethics issues. Although physicians are generally ethically forbidden from any involvement in the judicial execution process, this does not appear to be the case for engineering professionals. This creates an interesting but controversial opportunity for the engineering community (especially biomedical engineers) to improve the humaneness and reliability of the judicial execution process.

  1. Systematic behavioural observation of executive performance after brain injury.

    PubMed

    Lewis, Mark W; Babbage, Duncan R; Leathem, Janet M

    2017-01-01

To develop an ecologically valid measure of executive functioning (i.e., Planning and Organization, Executive Memory, Initiation, Cognitive Shifting, Impulsivity, Sustained and Directed Attention, Error Detection, Error Correction and Time Management) during a functional chocolate brownie cooking task. In Study 1, the inter-rater reliability of a novel behavioural observation assessment method was assessed with 10 people with traumatic brain injury (TBI). In Study 2, 27 people with TBI and 16 healthy controls completed the functional task along with other measures of executive functioning to assess validity. Intraclass correlation coefficients for six of the nine aspects of executive functioning ranged from .54 to 1.00. Percentage agreements for the remaining aspects ranged from 70% to 90%. Moderate correlations, some significant and some not, were found between the functional cooking task and standard neuropsychological measures. The healthy control group performed better than the TBI group in six areas (d = 0.56 to 1.23). In this initial trial of a novel assessment method, adequate inter-rater reliability was found. The measure was associated with standard neuropsychological measures, and our healthy control group performed better than the TBI group. The measure appears to be an ecologically valid measure of executive functioning.

  2. 22 CFR 213.28 - Execution of releases.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and execute a release on behalf of the United States. In the event a mutual release is not executed... all claims and causes of action against USAID and its officials related to the transaction giving rise...

  3. 22 CFR 213.28 - Execution of releases.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and execute a release on behalf of the United States. In the event a mutual release is not executed... all claims and causes of action against USAID and its officials related to the transaction giving rise...

  4. A Differential Deficit in Time- versus Event-based Prospective Memory in Parkinson's Disease

    PubMed Central

    Raskin, Sarah A.; Woods, Steven Paul; Poquette, Amelia J.; McTaggart, April B.; Sethna, Jim; Williams, Rebecca C.; Tröster, Alexander I.

    2010-01-01

    Objective The aim of the current study was to clarify the nature and extent of impairment in time- versus event-based prospective memory in Parkinson's disease (PD). Prospective memory is thought to involve cognitive processes that are mediated by prefrontal systems and are executive in nature. Given that individuals with PD frequently show executive dysfunction, it is important to determine whether these individuals may have deficits in prospective memory that could impact daily functions, such as taking medications. Although it has been reported that individuals with PD evidence impairment in prospective memory, it is still unclear whether they show a greater deficit for time- versus event-based cues. Method Fifty-four individuals with PD and 34 demographically similar healthy adults were administered a standardized measure of prospective memory that allows for a direct comparison of time-based and event-based cues. In addition, participants were administered a series of standardized measures of retrospective memory and executive functions. Results Individuals with PD demonstrated impaired prospective memory performance compared to the healthy adults, with a greater impairment demonstrated for the time-based tasks. Time-based prospective memory performance was moderately correlated with measures of executive functioning, but only the Stroop Neuropsychological Screening Test emerged as a unique predictor in a linear regression. Conclusions Findings are interpreted within the context of McDaniel and Einstein's (2000) multi-process theory to suggest that individuals with PD experience particular difficulty executing a future intention when the cue to execute the prescribed intention requires higher levels of executive control. PMID:21090895

  5. Bioterror events: preemptive strategies for healthcare executives.

    PubMed

    Zinkovich, Lisa; Malvey, Donna; Hamby, Eileen; Fottler, Myron

    2005-01-01

    Today's healthcare executives face challenges that their predecessors have never known: bioterror events. To prepare their organizations to cope with new and emerging strategic threats of bioterrorism, these executives must consider preemptive strategies. The authors present courses of action to assist executives' internal, external, and cross-sectional organizational preparedness. For example, stakeholder groups, internal resources, and competencies that combine and align efforts efficiently are identified. Twelve preemptive strategies are provided to guide healthcare executives in meeting these formidable and unprecedented challenges. The reputation of the healthcare organization (HCO) is at risk if a bioterror event is not properly handled, resulting in severe disadvantages for future operations. Justifiably, healthcare executives are contemplating the value of prioritizing bioterror preparedness, taking into account the immediate realities of decreasing reimbursement, increasing numbers of uninsured patients, and staffing shortages. Resources must be focused on the most valid concerns and must maximize the return on investment. Healthcare organizations can reap the benefits of a win-win approach by optimizing available resources, planning, and training. Bioterror preparedness will transcend the boundaries of bioterrorism and prepare for myriad mass healthcare incidents such as the looming potential for an avian (bird) influenza pandemic.

  6. Acute stress affects prospective memory functions via associative memory processes.

    PubMed

    Szőllősi, Ágnes; Pajkossy, Péter; Demeter, Gyula; Kéri, Szabolcs; Racsmány, Mihály

    2018-01-01

    Recent findings suggest that acute stress can improve the execution of delayed intentions (prospective memory, PM). However, it is unclear whether this improvement can be explained by altered executive control processes or by altered associative memory functioning. To investigate this issue, we used physical-psychosocial stressors to induce acute stress in laboratory settings. Then participants completed event- and time-based PM tasks requiring the different contribution of control processes and a control task (letter fluency) frequently used to measure executive functions. According to our results, acute stress had no impact on ongoing task performance, time-based PM, and verbal fluency, whereas it enhanced event-based PM as measured by response speed for the prospective cues. Our findings indicate that, here, acute stress did not affect executive control processes. We suggest that stress affected event-based PM via associative memory processes. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. An Overview of the Runtime Verification Tool Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
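
    To make the observer idea concrete, here is a toy runtime monitor in the spirit of JPAX's event-stream checking: it consumes an execution trace and flags a safety violation. The property, event encoding, and function names are invented for illustration; JPAX itself evaluates temporal-logic properties via Maude or generated automata.

```python
# Toy runtime monitor over an emitted event stream: checks the safety
# property "a lock is never released unless it was acquired", a stand-in
# for the temporal-logic properties a tool like JPAX would evaluate.
def monitor(trace):
    held = set()                      # locks currently held
    for step, (action, lock) in enumerate(trace):
        if action == "acquire":
            held.add(lock)
        elif action == "release":
            if lock not in held:      # property violated at this event
                return f"violation at step {step}: release({lock}) without acquire"
            held.remove(lock)
    return "trace satisfies property"

events = [("acquire", "L1"), ("release", "L1"), ("release", "L2")]
print(monitor(events))   # -> violation at step 2: release(L2) without acquire
```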

  8. Partnerships With Aviation: Promoting a Culture of Safety in Health Care.

    PubMed

    Skinner, Lori; Tripp, Terrance R; Scouler, David; Pechacek, Judith M

    2015-01-01

    According to the Institute of Medicine (IOM, 1999, p. 1), "Medical errors can be defined as the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim." The current health care culture is disjointed, as evidenced by a lack of consistent reporting standards for all providers; provider licensing pays little attention to errors, and there are no financial incentives to improve safety (IOM, 1999). Many errors in health care are preventable. "Near misses" and adverse events that do occur can offer insight on how to improve practice and prevent future events. The aim of this article is to better understand underreporting of errors in health care, to present a model of change that increases voluntary error reporting, and to discuss the role nurse executives play in creating a culture of safety. This article explores how high reliability organizations such as aviation improve safety through enhanced error reporting, culture change, and teamwork.

  9. The Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime libIGCM (library for IPSL Global Climate Modeling group) has recently been enhanced to support hitherto impossible realtime use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queue Protocol) based event-driven asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to databases; launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, its inherent ease of scalability, its adaptiveness in supervising simulations, and a web portal receiving simulation notifications in realtime.

  10. A Case Study in Design Thinking Applied Through Aviation Mission Support Tactical Advancements for the Next Generation (TANG)

    DTIC Science & Technology

    2017-12-01

This is an examination of the research, execution, and follow-on developments supporting the Design Thinking event explored through case study methods. Additionally, the lenses of... In total there have been two Naval Postgraduate School (NPS) case study theses on U.S. Navy innovation events as well as other works examining the

  11. Test-retest reliability of jump execution variables using mechanography: A comparison of jump protocols

    USDA-ARS?s Scientific Manuscript database

    Mechanography during the vertical jump test allows for evaluation of force-time variables reflecting jump execution, which may enhance screening for functional deficits that reduce physical performance and determining mechanistic causes underlying performance changes. However, utility of jump mechan...

  12. Comparison of Children With and Without ADHD on a New Pictorial Self-Assessment of Executive Functions.

    PubMed

    Bar-Ilan, Ruthie Traub; Cohen, Noa; Maeir, Adina

    We examined the Pictorial Interview of Children's Metacognition and Executive Functions' (PIC-ME's) reliability and validity, targeting children's appraisal of their executive function (EF) in daily life. One hundred children with attention deficit hyperactivity disorder (ADHD) and 44 typically developing children (ages 5-10 yr) completed the PIC-ME. Parents completed the PIC-ME and Behavior Rating Inventory of Executive Function (BRIEF). Cronbach's α for the child PIC-ME was .914. A high correlation was found between the parent PIC-ME total and the BRIEF (r = .724). Comparisons between groups revealed significant differences on the parent PIC-ME (p < .0001) but none on the child PIC-ME. Children with ADHD identified a median of eight EF challenges they wanted to set as treatment goals. Results support the PIC-ME's initial reliability and validity among children with ADHD. Children were able to identify several EF challenges and engage in goal setting. Copyright © 2018 by the American Occupational Therapy Association, Inc.

  13. On the Information Content of Program Traces

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

Program traces are used for analysis of program performance, memory utilization, and communications, as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
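
    The entropy limit the abstract refers to is the standard Shannon bound: no lossless coding can average fewer bits per trace record than the entropy of the record distribution. A minimal sketch, assuming a trace is simply a sequence of event labels:

```python
# Shannon entropy of a trace gives the lower bound, in bits per record,
# achievable by any lossless trace coding scheme.
from collections import Counter
from math import log2

def trace_entropy_bits(trace):
    counts = Counter(trace)
    n = len(trace)
    return -sum((c / n) * log2(c / n) for c in counts.values())

trace = ["send", "recv", "send", "recv", "compute", "send", "send", "recv"]
print(f"entropy bound: {trace_entropy_bits(trace):.2f} bits/record "
      f"vs naive fixed-width: {log2(len(set(trace))):.2f}")
```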

  14. Suggestibility under Pressure: Theory of Mind, Executive Function, and Suggestibility in Preschoolers

    ERIC Educational Resources Information Center

    Karpinski, Aryn C.; Scullin, Matthew H.

    2009-01-01

    Eighty preschoolers, ages 3 to 5 years old, completed a 4-phase study in which they experienced a live event and received a pressured, suggestive interview about the event a week later. Children were also administered batteries of theory of mind and executive function tasks, as well as the Video Suggestibility Scale for Children (VSSC), which…

  15. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
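
    A toy illustration of the scheduler-synthesis idea, under heavy simplification: if the reach probabilities of subtrees are known (the paper computes them via symbolic execution), a scheduler maximizing the probability of the target event picks the best branch at each nondeterministic node. The tree encoding below is invented for illustration.

```python
# Bottom-up computation of the maximum probability of reaching a target
# event: probabilistic nodes average over weighted branches, while
# nondeterministic nodes are resolved by a scheduler taking the best branch.
def max_reach(node):
    kind, children = node["kind"], node.get("children", [])
    if kind == "target":
        return 1.0
    if kind == "leaf":
        return 0.0
    if kind == "prob":       # probabilistic choice: weighted average
        return sum(p * max_reach(c) for p, c in children)
    if kind == "nondet":     # scheduler choice: take the best branch
        return max(max_reach(c) for c in children)

tree = {"kind": "nondet", "children": [
    {"kind": "prob", "children": [(0.5, {"kind": "target"}),
                                  (0.5, {"kind": "leaf"})]},
    {"kind": "prob", "children": [(0.2, {"kind": "target"}),
                                  (0.8, {"kind": "leaf"})]},
]}
print(max_reach(tree))   # -> 0.5 (the scheduler picks the first branch)
```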

  16. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  17. Access to Presidential Materials.

    ERIC Educational Resources Information Center

    Tyler, John Edward

    The Supreme Court's decision regarding executive privilege in the case of the United States v. Richard Nixon focused on specifics and left the greater issues of executive privilege untouched. This report summarizes the events leading up to Nixon's confrontation with the Supreme Court and examines the future of executive privilege. Questions raised…

  18. Assessment of Alternative Interfaces for Manual Commanding of Spacecraft Systems: Compatibility with Flexible Allocation Policies

    NASA Technical Reports Server (NTRS)

    Billman, Dorrit Owen; Schreckenghost, Debra; Miri, Pardis

    2014-01-01

Astronauts will be responsible for executing a much larger body of procedures as human exploration moves further from Earth and Mission Control. Efficient, reliable methods for executing these procedures, including manual, automated, and mixed execution, will be important. Our interface integrates step-by-step instruction with the means for execution. The research reported here compared manual execution using the new system to a system analogous to the manual-only system currently in use on the International Space Station, to assess whether user performance in manual operations would be as good or better with the new system as with the legacy system. The system used also allows flexible automated execution. The system and our data lay the foundation for integrating automated execution into the flow of procedures designed for humans. In our formative study, we found that the speed and accuracy of manual procedure execution were better with the new, integrated interface than with the legacy design.

  19. A modular telerobotic task execution system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Tso, Kam S.; Hayati, Samad; Lee, Thomas S.

    1990-01-01

    A telerobot task execution system is proposed to provide a general parametrizable task execution capability. The system includes communication with the calling system, e.g., a task planning system, and single- and dual-arm sensor-based task execution with monitoring and reflexing. A specific task is described by specifying the parameters to various available task execution modules including trajectory generation, compliance control, teleoperation, monitoring, and sensor fusion. Reflex action is achieved by finding the corresponding reflex action in a reflex table when an execution event has been detected with a monitor.
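
    The reflex-table mechanism is essentially a lookup from detected execution events to recovery actions. A minimal sketch, with event and action names invented for illustration:

```python
# Sketch of the reflex-table idea: when a monitor detects an execution
# event, the corresponding reflex action is looked up and triggered.
REFLEX_TABLE = {
    "excess_force": "halt_arm",
    "contact_lost": "retract",
    "joint_limit":  "freeze_and_report",
}

def on_monitor_event(event: str) -> str:
    # Unknown events fall back to a safe default reflex.
    return REFLEX_TABLE.get(event, "safe_stop")

print(on_monitor_event("excess_force"))   # -> halt_arm
```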

20. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS), which executes in redundant computers, provides automatic and fly-by-wire control of critical shuttle systems. Charts show the number of space shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The system's reliability is also compared to its predicted reliability.

  1. Care 3 model overview and user's guide, first revision

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Petersen, P. L.

    1985-01-01

A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.

  2. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
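
    The starting-condition mechanism can be pictured as a small scheduler that fires a task once a boolean expression over the states of other tasks becomes true. The sketch below is an illustrative assumption of how such conditions might be expressed, not the IPG service's actual syntax.

```python
# Sketch of starting-condition evaluation: each task runs once a boolean
# expression over the *states* of other tasks becomes true.
SUCCEEDED, FAILED, PENDING = "succeeded", "failed", "pending"

tasks = {
    "stage_data": {"condition": lambda s: True, "state": PENDING},
    "run_app":    {"condition": lambda s: s["stage_data"] == SUCCEEDED,
                   "state": PENDING},
    "cleanup":    {"condition": lambda s: s["run_app"] in (SUCCEEDED, FAILED),
                   "state": PENDING},
}

def ready(name, tasks):
    states = {k: v["state"] for k, v in tasks.items()}
    return tasks[name]["state"] == PENDING and tasks[name]["condition"](states)

# A scheduler loop would repeatedly pick ready tasks; independent tasks
# whose conditions are already true may execute in parallel.
print([t for t in tasks if ready(t, tasks)])   # -> ['stage_data']
```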

  3. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  4. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
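
    A minimal sketch of the claimed wait/awaken behavior using a condition variable; the class and method names are ours, not from the patent, and a real PAMI advance function would poll hardware communications resources rather than a Python list.

```python
# Advance function finds no actionable events, parks its thread, and is
# awakened when a communications event arrives for its context.
import threading

class Context:
    def __init__(self):
        self.events = []
        self.cv = threading.Condition()

    def post_event(self, ev):        # called by a communications resource
        with self.cv:
            self.events.append(ev)
            self.cv.notify()         # awaken the waiting advance thread

    def advance(self):
        with self.cv:
            while not self.events:   # no actionable events pending
                self.cv.wait()       # place thread of execution in wait state
            ev = self.events.pop(0)
        print(f"processing {ev}")

ctx = Context()
t = threading.Thread(target=ctx.advance)
t.start()
ctx.post_event("recv-complete")
t.join()
```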

  5. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  6. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
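
    The bank-size effect can be illustrated with a deliberately simplified utilization model (not the paper's model): each iteration the particle bank splits randomly over event types, each type is processed in vector passes of fixed width, and partially filled passes waste lanes. Larger banks amortize the partial passes.

```python
# Illustrative-only model of vector utilization in event-based Monte
# Carlo transport: efficiency = lanes doing useful work / lanes issued.
import random

def efficiency(bank_size, width, n_types=4, iters=200, seed=1):
    rng = random.Random(seed)
    used = capacity = 0
    for _ in range(iters):
        counts = [0] * n_types
        for _ in range(bank_size):          # each particle's next event type
            counts[rng.randrange(n_types)] += 1
        for c in counts:
            passes = -(-c // width)         # ceil(c / width)
            used += c
            capacity += passes * width
    return used / capacity

width = 8
for ratio in (2, 5, 10, 20, 40):
    eff = efficiency(ratio * width, width)
    print(f"bank = {ratio:>2} x width -> vector efficiency = {eff:.3f}")
```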

  7. Development and preliminary reliability of a multitasking assessment for executive functioning after concussion.

    PubMed

    Smith, Laurel B; Radomski, Mary Vining; Davidson, Leslie Freeman; Finkelstein, Marsha; Weightman, Margaret M; McCulloch, Karen L; Scherer, Matthew R

    2014-01-01

    OBJECTIVES. Executive functioning deficits may result from concussion. The Charge of Quarters (CQ) Duty Task is a multitask assessment designed to assess executive functioning in servicemembers after concussion. In this article, we discuss the rationale and process used in the development of the CQ Duty Task and present pilot data from the preliminary evaluation of interrater reliability (IRR). METHOD. Three evaluators observed as 12 healthy participants performed the CQ Duty Task and measured performance using various metrics. Intraclass correlation coefficient (ICC) quantified IRR. RESULTS. The ICC for task completion was .94. ICCs for other assessment metrics were variable. CONCLUSION. Preliminary IRR data for the CQ Duty Task are encouraging, but further investigation is needed to improve IRR in some domains. Lessons learned in the development of the CQ Duty Task could benefit future test development efforts with populations other than the military. Copyright © 2014 by the American Occupational Therapy Association, Inc.

  8. Development and Preliminary Reliability of a Multitasking Assessment for Executive Functioning After Concussion

    PubMed Central

    Radomski, Mary Vining; Davidson, Leslie Freeman; Finkelstein, Marsha; Weightman, Margaret M.; McCulloch, Karen L.; Scherer, Matthew R.

    2014-01-01

    OBJECTIVES. Executive functioning deficits may result from concussion. The Charge of Quarters (CQ) Duty Task is a multitask assessment designed to assess executive functioning in servicemembers after concussion. In this article, we discuss the rationale and process used in the development of the CQ Duty Task and present pilot data from the preliminary evaluation of interrater reliability (IRR). METHOD. Three evaluators observed as 12 healthy participants performed the CQ Duty Task and measured performance using various metrics. Intraclass correlation coefficient (ICC) quantified IRR. RESULTS. The ICC for task completion was .94. ICCs for other assessment metrics were variable. CONCLUSION. Preliminary IRR data for the CQ Duty Task are encouraging, but further investigation is needed to improve IRR in some domains. Lessons learned in the development of the CQ Duty Task could benefit future test development efforts with populations other than the military. PMID:25005507

  9. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when other tasks fail (a frequent occurrence in a large distributed system) or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing with local scheduling systems.

  10. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

Executable architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  11. Do strategic processes contribute to the specificity of future simulation in depression?

    PubMed

    Addis, Donna Rose; Hach, Sylvia; Tippett, Lynette J

    2016-06-01

    The tendency to generate overgeneral past or future events is characteristic of individuals with a history of depression. Although much research has investigated the contribution of rumination and avoidance to the reduced specificity of past events, comparatively little research has examined (1) whether the specificity of future events is differentially reduced in depression and (2) the role of executive functions in this phenomenon. Our study aimed to redress this imbalance. Participants with either current or past experience of depressive symptoms ('depressive group'; N = 24) and matched controls ('control group'; N = 24) completed tests of avoidance, rumination, and executive functions. A modified Autobiographical Memory Test was administered to assess the specificity of past and future events. The depressive group were more ruminative and avoidant than controls, but did not exhibit deficits in executive function. Although overall the depressive group generated significantly fewer specific events than controls, this reduction was driven by a significant group difference in future event specificity. Strategic retrieval processes were correlated with both past and future specificity, and predictive of the future specificity, whereas avoidance and rumination were not. Our findings demonstrate that future simulation appears to be particularly vulnerable to disruption in individuals with current or past experience of depressive symptoms, consistent with the notion that future simulation is more cognitively demanding than autobiographical memory retrieval. Moreover, our findings suggest that even subtle changes in executive functions such as strategic processes may impact the ability to imagine specific future events. Future simulation may be particularly vulnerable to executive dysfunction in individuals with current/previous depressive symptoms, with evidence of a differential reduction in the specificity of future events. Strategic retrieval abilities were associated with the degree of future event specificity whereas levels of rumination and avoidance were not. Given that the ability to generate specific simulations of the future is associated with enhanced psychological wellbeing, problem solving and coping behaviours, understanding how to increase the specificity of future simulations in depression is an important direction for future research and clinical practice. Interventions focusing on improving the ability to engage strategic processes may be a fruitful avenue for increasing the ability to imagine specific future events in depression. The autobiographical event tasks have somewhat limited ecological validity as they do not account for the many social and environmental cues present in everyday life; the development of more clinically-relevant tasks may be of benefit to this area of study. © 2016 The British Psychological Society.

  12. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    ERIC Educational Resources Information Center

    Howard, Steven J.; Melhuish, Edward

    2017-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years…

  13. Perceptions of Role Related Stress in Senior Educational Executives and Its Relationship to Their Health.

    ERIC Educational Resources Information Center

    Bergin, Mel; Solman, Robert

    1988-01-01

    This study was conducted to determine prevalence of self-reported role related stress in senior educational executives based on their personal characteristics, and to examine sources of the perceived stress and evidence of ill health or other negative coping processes. Four reliable stress factors were identified: teacher assessment, time…

  14. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.

  15. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols

    USDA-ARS?s Scientific Manuscript database

    Mechanography during the vertical jump may enhance screening and determining mechanistic causes for functional deficits that reduce physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the tes...

  16. Intergenerational Transmission of Neuropsychological Executive Functioning

    PubMed Central

    Jester, Jennifer M.; Nigg, Joel T.; Puttler, Leon I.; Long, Jeffrey C.; Fitzgerald, Hiram E.; Zucker, Robert A.

    2009-01-01

    Relationships between parent and child executive functioning were examined, controlling for the critical potential confound of IQ, in a family study involving 434 children (130 girls, 304 boys) and 376 parents from 204 community recruited families at high risk for the development of substance use disorder. Structural equation modeling found evidence of separate executive functioning and intelligence (IQ) latent variables. Mother’s and father’s executive functioning were associated with child’s executive functioning (beta = 0.34 for father-child, 0.51 for mother-child), independently of parental IQ, which as expected was associated with child’s IQ (beta = 0.52 for father-child, 0.54 for mother-child). Familial correlations also showed a significant relationship of executive functioning between parents and offspring. These findings clarify that key elements of the executive functioning construct are reliably differentiable from IQ, and are transmitted in families. This work supports the utility of the construct of executive function in further study of the mechanisms and etiology of externalizing psychopathologies. PMID:19243871

  17. A high performance sensorimotor beta rhythm-based brain computer interface associated with human natural motor behavior

    NASA Astrophysics Data System (ADS)

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Floeter, Mary Kay; Hattori, Noriaki; Hallett, Mark

    2008-03-01

We explore the reliability of a high-performance brain-computer interface (BCI) that uses non-invasive EEG signals associated with human natural motor behavior and does not require extensive training. We propose a new BCI method, where users perform either sustaining or stopping a motor task with time locking to a predefined time window. Nine healthy volunteers, one stroke survivor with right-sided hemiparesis and one patient with amyotrophic lateral sclerosis (ALS) participated in this study. Subjects did not receive BCI training before participating in this study. We investigated tasks of both physical movement and motor imagery. The surface Laplacian derivation was used for enhancing EEG spatial resolution. A model-free threshold setting method was used for the classification of motor intentions. The performance of the proposed BCI was validated by an online sequential binary-cursor-control game for two-dimensional cursor movement. Event-related desynchronization and synchronization were observed when subjects sustained or stopped either motor execution or motor imagery. Feature analysis showed that EEG beta band activity over the sensorimotor area provided the largest discrimination. With simple model-free classification of beta band EEG activity from a single electrode (with surface Laplacian derivation), the online classification accuracies of the EEG activity with motor execution/motor imagery were: >90%/~80% for six healthy volunteers, >80%/~80% for the stroke patient and ~90%/~80% for the ALS patient. The EEG activities of the other three healthy volunteers were not classifiable. The sensorimotor beta rhythm of EEG associated with human natural motor behavior can be used for a reliable and high performance BCI for both healthy subjects and patients with neurological disorders. Significance: The proposed new non-invasive BCI method highlights a practical BCI for clinical applications, where the user does not require extensive training.
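
    As a rough illustration of model-free threshold classification of beta-band power, here is a sketch assuming an ERD-style rule: sustained movement or imagery suppresses beta power, so a trial whose power falls below a rest-derived threshold is classified as intent. The filter settings and threshold rule are illustrative assumptions, not the authors' pipeline.

```python
# Threshold classification of beta-band EEG power (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt

def beta_power(eeg, fs):
    b, a = butter(4, [16, 24], btype="bandpass", fs=fs)   # beta band
    return float(np.mean(filtfilt(b, a, eeg) ** 2))

def classify(trial, rest_trials, fs):
    """ERD: movement/imagery *suppresses* beta power, so power below a
    rest-derived threshold is read as motor intent."""
    rest = [beta_power(r, fs) for r in rest_trials]
    threshold = np.mean(rest) - 2 * np.std(rest)          # illustrative rule
    return "intend" if beta_power(trial, fs) < threshold else "rest"

fs = 256
rng = np.random.default_rng(0)
rest_trials = [rng.standard_normal(fs * 2) for _ in range(10)]
trial = 0.5 * rng.standard_normal(fs * 2)   # synthetic, attenuated activity
print(classify(trial, rest_trials, fs))     # -> intend
```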

  18. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Ning, Chao-lie; Li, Bing

    2017-03-01

    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
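
    Chloride-ingress models of this kind typically build on the error-function solution of Fick's second law; the probabilistic framework then treats the governing parameters as random variables. A deterministic sketch of the underlying limit-state evaluation, with illustrative parameter values:

```python
# Error-function solution of Fick's second law for chloride ingress:
#   C(x, t) = C_s * (1 - erf(x / (2 * sqrt(D * t))))
from math import erf, sqrt

def chloride(x_m, t_s, C_s, D):
    return C_s * (1 - erf(x_m / (2 * sqrt(D * t_s))))

C_s = 0.6       # surface chloride concentration (% binder mass), illustrative
D = 1e-12       # chloride diffusion coefficient (m^2/s), illustrative
cover = 0.05    # concrete cover depth (m)
t = 50 * 365 * 24 * 3600        # 50 years in seconds
C_crit = 0.05   # critical content for depassivation, illustrative
c = chloride(cover, t, C_s, D)
print(f"C(cover, 50y) = {c:.4f}; limit state g = C_crit - C = {C_crit - c:+.4f}")
```

    In the reliability analysis, g < 0 marks durability failure; the first-order reliability method then evaluates the probability of that event over the distributions of the governing parameters.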

  19. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
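
    The Bayesian-estimation step can be sketched independently of the symbolic machinery: sampled program paths either hit or miss the target event, and a Beta posterior summarizes the reach probability. The counts below are invented, and the informed-sampling pruning and partial exact analysis are omitted.

```python
# Bayesian estimation of the probability of reaching a target event from
# Monte Carlo path samples, using a Beta(1, 1) (uniform) prior.
from scipy.stats import beta as beta_dist

hits, samples = 37, 1000                 # illustrative sampled path outcomes
posterior = beta_dist(1 + hits, 1 + (samples - hits))
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"P(target) ~= {posterior.mean():.4f}, "
      f"95% credible interval [{lo:.4f}, {hi:.4f}]")
```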

  20. Emergency Manuals Improved Novice Physician Performance During Simulated ICU Emergencies

    PubMed Central

    Wang, Jacob; Stiegler, Marjorie P.; Nguyen, Dung; Rebel, Annette; Isaak, Robert S.

    2017-01-01

    Background Emergency manuals, which are safety essentials in non-medical high-reliability organizations (e.g., aviation), have recently gained acceptance in critical medical environments. Of the existing emergency manuals in anesthesiology, most are geared towards intraoperative settings. Additionally, most evidence supporting their efficacy focuses on the study of physicians with at least some meaningful experience as a physician. Our aim was to evaluate whether an emergency manual would improve the performance of novice physicians (post-graduate year [PGY] 1 or first year resident) in managing a critical event in the intensive care unit (ICU). Methods PGY1 interns (n=41) were assessed on the management of a simulated critical event (unstable bradycardia) in the ICU. Participants underwent a group allocation process to either a control group (n=18) or an intervention group (emergency manual provided, n=23). The number of successfully executed treatment and diagnostic interventions completed was evaluated over a ten minute (600 seconds) simulation for each participant. Results The participants using the emergency manual averaged 9.9/12 (83%) interventions, compared to an average of 7.1/12 (59%) interventions (p < 0.01) in the control group. Conclusions The use of an emergency manual was associated with a significant improvement in critical event management by individual novice physicians in a simulated ICU patient (23% average increase). PMID:29600255

  1. Emergency Manuals Improved Novice Physician Performance During Simulated ICU Emergencies.

    PubMed

    Kazior, Michael R; Wang, Jacob; Stiegler, Marjorie P; Nguyen, Dung; Rebel, Annette; Isaak, Robert S

    2017-01-01

    Emergency manuals, which are safety essentials in non-medical high-reliability organizations (e.g., aviation), have recently gained acceptance in critical medical environments. Of the existing emergency manuals in anesthesiology, most are geared towards intraoperative settings. Additionally, most evidence supporting their efficacy focuses on the study of physicians with at least some meaningful experience as a physician. Our aim was to evaluate whether an emergency manual would improve the performance of novice physicians (post-graduate year [PGY] 1 or first year resident) in managing a critical event in the intensive care unit (ICU). PGY1 interns (n=41) were assessed on the management of a simulated critical event (unstable bradycardia) in the ICU. Participants underwent a group allocation process to either a control group (n=18) or an intervention group (emergency manual provided, n=23). The number of successfully executed treatment and diagnostic interventions completed was evaluated over a ten minute (600 seconds) simulation for each participant. The participants using the emergency manual averaged 9.9/12 (83%) interventions, compared to an average of 7.1/12 (59%) interventions (p < 0.01) in the control group. The use of an emergency manual was associated with a significant improvement in critical event management by individual novice physicians in a simulated ICU patient (23% average increase).

  2. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
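
    A toy flavor of the pattern-recognition step: detecting the complex event "create(x) followed by delete(x)" over a stream of model-change events. Viatra's actual DSL and engine are far richer; the encoding here is invented for illustration.

```python
# Toy complex-event-processing matcher over a stream of model changes.
def match_create_then_delete(stream):
    pending = set()                 # element ids with an open 'create'
    for op, elem in stream:
        if op == "create":
            pending.add(elem)
        elif op == "delete" and elem in pending:
            pending.discard(elem)
            yield ("create-then-delete", elem)   # complex event recognized

changes = [("create", "n1"), ("create", "n2"), ("delete", "n1")]
for complex_event in match_create_then_delete(changes):
    print(complex_event)            # -> ('create-then-delete', 'n1')
```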

  3. Patient adherence to prescribed antimicrobial drug dosing regimens.

    PubMed

    Vrijens, Bernard; Urquhart, John

    2005-05-01

This article reviews current knowledge about the clinical impact of patients' variable adherence to prescribed anti-infective drug dosing regimens, with the aim of renewing interest in and exploration of this important but largely neglected area of therapeutics. Central to the estimation of a patient's adherence to a prescribed drug regimen is a reliably compiled drug dosing history. Electronic monitoring methods have emerged as the virtual 'gold standard' for compiling drug dosing histories in ambulatory patients. Reliably compiled drug dosing histories are consistently downwardly skewed, with varying degrees of under-dosing. In particular, the consideration of time intervals between protease inhibitor doses has revealed that ambulatory patients' variable execution of prescribed dosing regimens is a leading source of variance in viral response. Such analyses reveal the need for a new discipline, called pharmionics, which is the study of how ambulatory patients use prescription drugs. Properly analysed, reliable data on the time-course of patients' actual intake of prescription drugs can eliminate a major source of unallocated variance in drug responses, including the non-response that occurs and is easily misinterpreted when a patient's complete non-execution of a prescribed drug regimen goes unrecognized clinically. As such, reliable compilation of ambulatory patients' drug dosing histories promises to be a key step in reducing unallocated variance in drug response and in improving the informational yield of clinical trials. It is also the basis for sound, measurement-guided steps taken to improve a patient's execution of a prescribed dosing regimen.

  4. Validating independent ratings of executive functioning following acquired brain injury using Rasch analysis.

    PubMed

    Simblett, Sara K; Badham, Rachel; Greening, Kate; Adlam, Anna; Ring, Howard; Bateman, Andrew

    2012-01-01

    Assessment of everyday problems with executive functioning following acquired brain injury (ABI) is greatly valued by neurorehabilitation services. Reliance on self-report measures alone is problematic within this client group who may experience difficulties with awareness and memory. The construct validity and reliability of independent ratings (i.e., ratings provided by a carer/relative) on the Dysexecutive Questionnaire (DEX-I) was explored in this study. Consistent with the results recently reported on the self-rated version of the DEX (DEX-S; Simblett & Bateman, 2011 ), Rasch analysis completed on 271 responses to the DEX-I revealed that the scale did not fit the Rasch model and did not meet the assumption of unidimensionality, that is, a single underlying construct could not be found for the DEX-I that would allow development of an interval-level measure as a whole. Subscales, based on theoretical conceptualisations of executive functioning (Stuss, 2007 ) previously suggested for the DEX-S, were able to demonstrate fit to the Rasch model and unidimensionality. Reliability of independent responses to these subscales in comparison to self-reported ratings is discussed. These results contribute to a greater understanding of how assessment of executive functioning can be improved.

  5. Morning nutrition and executive function processes in preadolescents: modulation of frontal event-related theta, beta and gamma EEG oscillations during a go/no-go task

    USDA-ARS?s Scientific Manuscript database

    Executive functions (i.e., goal-directed behavior such as inhibition and flexibility of action) have been linked to frontal brain regions and to covariations in oscillatory brain activity, e.g., theta and gamma activity. We studied the effects of morning nutritional status on executive function rel...

  6. Recovering from execution errors in SIPE

    NASA Technical Reports Server (NTRS)

    Wilkins, D. E.

    1987-01-01

    In real-world domains (e.g., a mobile robot environment), things do not always proceed as planned, so it is important to develop better execution-monitoring techniques and replanning capabilities. These capabilities in the SIPE planning system are described. The motivation behind SIPE is to place enough limitations on the representation so that planning can be done efficiently, while retaining sufficient power to still be useful. This work assumes that new information given to the execution monitor is in the form of predicates, thus avoiding the difficult problem of how to generate these predicates from information provided by sensors. The replanning module presented here takes advantage of the rich structure of SIPE plans and is intimately connected with the planner, which can be called as a subroutine. This allows the use of SIPE's capabilities to determine efficiently how unexpected events affect the plan being executed and, in many cases, to retain most of the original plan by making changes in it to avoid problems caused by these unexpected events. SIPE is also capable of shortening the original plan when serendipitous events occur. A general set of replanning actions is presented along with a general replanning capability that has been implemented by using these actions.

  7. Autonomy Architectures for a Constellation of Spacecraft

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2000-01-01

    Until the past few years, missions typically involved fairly large, expensive spacecraft. Such missions have primarily favored using older proven technologies over more recently developed ones, and humans controlled spacecraft by manually generating detailed command sequences with low-level tools and then transmitting the sequences for subsequent execution on a spacecraft controller. This approach toward controlling a spacecraft has worked spectacularly on previous missions, but it has limitations deriving from communications restrictions: scheduling time to communicate with a particular spacecraft involves competing with other projects due to the limited number of Deep Space Network antennae. This implies that a spacecraft can spend a long time just waiting whenever a command sequence fails. This is one reason why the New Millennium program has an objective to migrate parts of mission control tasks onboard a spacecraft to reduce wait time by making spacecraft more robust. The migrated software is called a "remote agent" and has four components: a mission manager to generate the high-level goals, a planner/scheduler to turn goals into activities while reasoning about future expected situations, an executive/diagnostics engine to initiate and maintain activities while interpreting sensed events by reasoning about past and present situations, and a conventional real-time subsystem to interface with the spacecraft to implement an activity's primitive actions. In addition to needing remote planning and execution for isolated spacecraft, a trend toward multiple-spacecraft missions points to the need for remote distributed planning and execution. The past few years have seen missions with growing numbers of probes. Pathfinder has its rover (Sojourner), Cassini has its lander (Huygens), and the New Millennium Deep Space 3 (DS3) proposal involves a constellation of 3 spacecraft for interferometric mapping. This trend is expected to continue to progressively larger fleets. For example, one mission proposed to succeed DS3 would have 18 spacecraft flying in formation in order to detect Earth-sized planets orbiting other stars. A proposed magnetospheric constellation would involve 5 to 500 spacecraft in Earth orbit to measure global phenomena within the magnetosphere. This work describes and compares three autonomy architectures for a system that continuously plans to control a fleet of spacecraft using collective mission goals instead of goals or command sequences for each spacecraft. A fleet of self-commanding spacecraft would autonomously coordinate itself to satisfy high-level science and engineering goals in a changing, partially understood environment, making feasible the operation of tens or even a hundred spacecraft (such as for interferometry or plasma physics missions). The easiest way to adapt autonomous spacecraft research to controlling constellations involves treating the constellation as a single spacecraft. Here one spacecraft directly controls the others as if they were connected. The controlling "master" spacecraft performs all autonomy reasoning, and the slaves only have real-time subsystems to execute the master's commands and transmit local telemetry/observations. The executive/diagnostics module starts actions, and the master's real-time subsystem controls the action either locally or remotely through a slave. While the master/slave approach benefits from conceptual simplicity, it relies on an assumption that the master spacecraft's executive can continuously monitor the slaves' real-time subsystems, and this requires high-bandwidth, highly reliable communications. Since unintended results occur fairly rarely, one way to relax the bandwidth requirements involves monitoring only unexpected events in each spacecraft. Unfortunately, this sacrifices the ability to monitor for unexpected events between spacecraft and leads to a host of coordination problems among the slaves. Also, failures in the communications system can result in losing slaves. The other two architectures improve robustness while reducing communications by progressively distributing more of the other three remote agent components across the constellation. In a teamwork architecture, all spacecraft have executives and real-time subsystems; only the leader has the planner/scheduler and mission manager. Finally, distributing all remote agent components leads to a peer-to-peer approach toward constellation control.

  8. Impact of High-Reliability Education on Adverse Event Reporting by Registered Nurses.

    PubMed

    McFarland, Diane M; Doucette, Jeffrey N

    Adverse event reporting is one strategy to identify risks and improve patient safety, but, historically, adverse events have been underreported by registered nurses (RNs) because of fear of retribution and blame. An educational program on high reliability was provided to examine whether education would impact RNs' willingness to report adverse events. Although the findings were not statistically significant, they demonstrated a positive impact on adverse event reporting and support the need to create a culture of high reliability.

  9. Educational Management Organizations as High Reliability Organizations: A Study of Victory's Philadelphia High School Reform Work

    ERIC Educational Resources Information Center

    Thomas, David E.

    2013-01-01

    This executive position paper proposes recommendations for designing reform models between public and private sectors dedicated to improving school reform work in low performing urban high schools. It reviews scholarly research about for-profit educational management organizations, high reliability organizations, American high school reform, and…

  10. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity value. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
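
    The skyline step can be pictured as a Pareto filter over QoS vectors. A minimal sketch (assuming QoS values normalized so that smaller is better in every dimension; the attribute names and numbers are made up):

        def dominates(a, b):
            # a dominates b: no worse in every dimension, strictly better in one
            return all(x <= y for x, y in zip(a, b)) and \
                   any(x < y for x, y in zip(a, b))

        def skyline(services):
            # services: name -> (response_time, cost, 1 - availability)
            return {n: q for n, q in services.items()
                    if not any(dominates(q2, q)
                               for n2, q2 in services.items() if n2 != n)}

        candidates = {"s1": (0.2, 0.5, 0.01), "s2": (0.3, 0.4, 0.02),
                      "s3": (0.4, 0.6, 0.02)}       # s3 is dominated by s2
        print(skyline(candidates))                  # only s1 and s2 survive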

  11. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity value. We evaluate our approach experimentally using both real and synthetically generated datasets.

  12. Self-reported quality of life measure is reliable and valid in adult patients suffering from schizophrenia with executive impairment.

    PubMed

    Baumstarck, Karine; Boyer, Laurent; Boucekine, Mohamed; Aghababian, Valérie; Parola, Nathalie; Lançon, Christophe; Auquier, Pascal

    2013-06-01

    Impaired executive functions are among the most widely observed deficits in patients suffering from schizophrenia. The use of self-reported outcomes for evaluating treatment and managing care of these patients has been questioned. The aim of this study was to provide new evidence about the suitability of self-reported outcomes for use in this specific population by exploring the internal structure, reliability and external validity of a specific quality of life (QoL) instrument, the Schizophrenia Quality of Life questionnaire (SQoL18). Design: cross-sectional study. Inclusion criteria: age over 18 years and a diagnosis of schizophrenia according to the DSM-IV criteria. Data collected: sociodemographic data (age, gender, and education level); clinical data (duration of illness, Positive and Negative Syndrome Scale, Calgary Depression Scale for Schizophrenia); QoL (SQoL18); and executive performance (Stroop test, lexical and verbal fluency, and trail-making test). Non-impaired and impaired populations were defined for each of the three tests. For the six groups, psychometric properties were compared to those reported from the reference population assessed in the validation study. One hundred and thirteen consecutive patients were enrolled. The factor analysis performed in the impaired groups showed that the questionnaire structure adequately matched the initial structure of the SQoL18. The unidimensionality of the dimensions was preserved, and the internal/external validity indices were close to those of the non-impaired groups and the reference population. Our study suggests that executive dysfunction did not compromise the reliability or validity of this self-reported disease-specific QoL questionnaire.

  13. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.

  14. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    NASA Astrophysics Data System (ADS)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (e.g., floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of these phenomena. In these forecasts, the probability of the event, over some lead time, is estimated based on model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Given this requirement for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (the Brier score, the reliability diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and an unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and an unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided by the use of the Poisson-Binomial distribution.
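
    A sketch of the exact test the abstract advocates: under a reliable forecast, the number of events that actually occur among n independent forecast cases follows a Poisson-Binomial distribution with the issued probabilities as parameters (the probabilities and observed count below are invented):

        def poisson_binomial_pmf(probs):
            # Dynamic programming: fold each case into the count distribution.
            dist = [1.0]
            for p in probs:
                nxt = [0.0] * (len(dist) + 1)
                for k, mass in enumerate(dist):
                    nxt[k] += mass * (1.0 - p)      # event does not occur
                    nxt[k + 1] += mass * p          # event occurs
                dist = nxt
            return dist                             # dist[k] = P(k occurrences)

        forecasts = [0.1, 0.4, 0.4, 0.7, 0.9]       # issued event probabilities
        pmf = poisson_binomial_pmf(forecasts)
        observed = 1                                # events that actually happened
        # Exact p-value: total mass of counts no more likely than the observed one.
        p_value = sum(mass for mass in pmf if mass <= pmf[observed])
        print(round(p_value, 4))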

  15. Survey of critical failure events in on-chip interconnect by fault tree analysis

    NASA Astrophysics Data System (ADS)

    Yokogawa, Shinji; Kunii, Kyousuke

    2018-07-01

    In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.

  16. Strategic benefits of master facility plans.

    PubMed

    Shannon, K

    1996-02-01

    In recent years, many healthcare executives have stopped developing master facility plans due to some basic misconceptions about them, namely that master facility plans are too rigid or require major capital commitment. By getting past these misconceptions, healthcare executives can help their organizations develop and implement master facility plans that serve as flexible, reliable blueprints in guiding the organizations toward achieving their strategic, operational, and financial goals.

  17. Cygnus Performance in Subcritical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Corrow, M. Hansen, D. Henderson, S. Lutz, C. Mitton, et al.

    2008-02-01

    The Cygnus Dual Beam Radiographic Facility consists of two identical radiographic sources with the following specifications: 4-rad dose at 1 m, 1-mm spot size, 50-ns pulse length, and 2.25-MeV endpoint energy. The facility is located in an underground tunnel complex at the Nevada Test Site. Here SubCritical Experiments (SCEs) are performed to study the dynamic properties of plutonium. The Cygnus sources were developed as a primary diagnostic for these tests. Since SCEs are single-shot, high-value events, reliability and reproducibility are key issues. Enhanced reliability involves minimization of failure modes through design, inspection, and testing. Many unique hardware and operational features were incorporated into Cygnus to ensure reliability. Enhanced reproducibility involves normalization of shot-to-shot output, also through design, inspection, and testing. The first SCE to utilize Cygnus, Armando, was executed on May 25, 2004. A year later, in April-May 2005, calibrations using a plutonium step wedge were performed. The results from this series were used for more precise interpretation of the Armando data. In the period February-May 2007, Cygnus was fielded on Thermos, a series of small-sample plutonium shots using a one-dimensional geometry. Pulsed power research generally dictates frequent changes in hardware configuration. Conversely, SCE applications have typically required constant machine settings. Therefore, while operating during the past four years, we have accumulated a large database for evaluation of machine performance under highly consistent operating conditions. Through analysis of this database, Cygnus reliability and reproducibility on Armando, Step Wedge, and Thermos are presented.

  18. Addressing Unison and Uniqueness of Reliability and Safety for Better Integration

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Safie, Fayssal

    2015-01-01

    For a long time, both in theory and in practice, safety and reliability have not been clearly differentiated, which has led to confusion, inefficiency, and sometimes counterproductive practices in executing each of these two disciplines. It is imperative to address the uniqueness and the unison of these two disciplines to help both become more effective and to promote a better integration of the two for enhancing safety and reliability in our products as an overall objective. There are two purposes of this paper. First, it will investigate the uniqueness and unison of each discipline and discuss the interrelationship between the two for awareness and clarification. Second, after clearly understanding the unique roles and interrelationship between the two in a product design and development life cycle, we offer suggestions to enhance the disciplines with distinguished and focused roles, to better integrate the two, and to improve the unique sets of skills and tools of reliability and safety processes. From the uniqueness aspect, the paper identifies and discusses the respective uniqueness of reliability and safety in their roles, accountability, nature of requirements, technical scopes, detailed technical approaches, and analysis boundaries. It is misleading to equate unreliable with unsafe, since a safety hazard may or may not be related to the component, sub-system, or system functions, which are primarily what reliability addresses. Similarly, failing-to-function may or may not lead to hazard events. Examples will be given in the paper from aerospace, defense, and consumer products to illustrate the uniqueness and differences between reliability and safety. From the unison aspect, the paper discusses what the commonalities between reliability and safety are, and how these two disciplines are linked, integrated, and supplement each other to accomplish the customer requirements and product goals. In addition to understanding the uniqueness in reliability and safety, a better understanding of unison and commonalities will further help in understanding the interaction between reliability and safety. This paper discusses the unison and uniqueness of reliability and safety. It presents some suggestions for better integration of the two disciplines in terms of technical approaches, tools, techniques, and skills to enhance the role of reliability and safety in supporting a product design and development life cycle. The paper also discusses eliminating the redundant effort and minimizing the overlap of reliability and safety analyses for an efficient implementation of the two disciplines.

  19. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
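
    The constant-execution-time case can be illustrated with a toy version of such a model (our construction for illustration, not the authors' model): treat vector efficiency as the fraction of vector lanes holding live particles while the bank drains, assuming a fixed survival fraction per event-iteration.

        import math

        def vector_efficiency(bank_size, vector_width, survival=0.95):
            # Each event-iteration sweeps the surviving bank in
            # ceil(n / width) vector operations; idle lanes waste capacity.
            lanes_used = sweeps = 0
            n = bank_size
            while n > 0:
                sweeps += math.ceil(n / vector_width)
                lanes_used += n
                n = int(n * survival)               # bank shrinks as histories end
            return lanes_used / (sweeps * vector_width)

        for bank in (8 * 64, 20 * 64, 100 * 64):    # multiples of the vector width
            print(bank, round(vector_efficiency(bank, vector_width=64), 3))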

  20. Monitoring of the infrastructure and services used to handle and automatically produce Alignment and Calibration conditions at CMS

    NASA Astrophysics Data System (ADS)

    Sipos, Roland; Govi, Giacomo; Franzoni, Giovanni; Di Guida, Salvatore; Pfeiffer, Andreas

    2017-10-01

    The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This infrastructure is composed of several services, which take on various data management tasks required for the consumption of the non-event data (also known as condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and the reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system has been implemented based on Nagios, the open-source industry standard for monitoring and alerting services, and it monitors the database back end, the hosting nodes, and key heartbeat functionalities for all the services involved. This paper describes the design, implementation, and operational experience with the monitoring system developed and deployed at CMS in 2016.

  1. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
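
    For gates over statistically independent basic events, the top-event probability follows from simple product rules, as in this sketch (the tree and probabilities are made up; M OF N is treated as "at least M of N inputs fail" and EXCLUSIVE OR as two-input):

        from itertools import combinations
        from math import prod

        def top_probability(node):
            kind = node[0]
            if kind == "BASIC":
                return node[1]                      # ("BASIC", failure probability)
            children = node[2] if kind == "MOFN" else node[1]
            ps = [top_probability(c) for c in children]
            if kind == "AND":
                return prod(ps)
            if kind == "OR":
                return 1.0 - prod(1.0 - p for p in ps)
            if kind == "XOR":                       # exactly one of two inputs
                a, b = ps
                return a * (1.0 - b) + b * (1.0 - a)
            if kind == "INVERT":
                return 1.0 - ps[0]
            if kind == "MOFN":                      # at least m of n inputs
                m, n = node[1], len(ps)
                return sum(prod(ps[i] if i in idx else 1.0 - ps[i]
                                for i in range(n))
                           for k in range(m, n + 1)
                           for idx in combinations(range(n), k))
            raise ValueError(kind)

        tree = ("OR", [("AND", [("BASIC", 1e-3), ("BASIC", 2e-3)]),
                       ("MOFN", 2, [("BASIC", 1e-2)] * 3)])
        print(f"{top_probability(tree):.3e}")       # about 3.0e-04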

  2. Developing a new national approach to surveillance for ventilator-associated events: executive summary.

    PubMed

    Magill, Shelley S; Klompas, Michael; Balk, Robert; Burns, Suzanne M; Deutschman, Clifford S; Diekema, Daniel; Fridkin, Scott; Greene, Linda; Guh, Alice; Gutterman, David; Hammer, Beth; Henderson, David; Hess, Dean R; Hill, Nicholas S; Horan, Teresa; Kollef, Marin; Levy, Mitchell; Septimus, Edward; VanAntwerpen, Carole; Wright, Don; Lipsett, Pamela

    2013-11-01

    In September 2011, the Centers for Disease Control and Prevention (CDC) convened a Ventilator-Associated Pneumonia (VAP) Surveillance Definition Working Group to organize a formal process for leaders and experts of key stakeholder organizations to discuss the challenges of VAP surveillance definitions and to propose new approaches to VAP surveillance in adult patients (Table 1). The charges to the Working Group were to (1) critically review a draft, streamlined VAP surveillance definition developed for use in adult patients; (2) suggest modifications to enhance the reliability and credibility of the surveillance definition within the critical care and infection prevention communities; and (3) propose a final adult surveillance definition algorithm to be implemented in the CDC's National Healthcare Safety Network (NHSN), taking into consideration the potential future use of the definition algorithm in public reporting, interfacility comparisons, and pay-for-reporting and pay-for-performance programs.

  3. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.

  4. Technology for Space Station Evolution. Executive summary and overview

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Office of Aeronautics and Space Technology (OAST) conducted a workshop on technology for space station evolution 16-19 Jan. 1990. The purpose of this workshop was to collect and clarify Space Station Freedom technology requirements for evolution and to describe technologies that can potentially fill those requirements. These proceedings are organized into an Executive Summary and Overview and five volumes containing the technology discipline presentations. The Executive Summary and Overview contains an executive summary for the workshop, the technology discipline summary packages, and the keynote address. The executive summary provides a synopsis of the events and results of the workshop and the technology discipline summary packages.

  5. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), of a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.

  6. Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of both Worlds

    NASA Technical Reports Server (NTRS)

    Schmidt, Melisa; Yan, Jerry C.

    1997-01-01

    Many performance monitoring tools are currently available to the supercomputing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated halfway through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size against data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and formulae to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are indeed small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.

  7. Constructing Space-Time Views from Fixed Size Statistical Data: Getting the Best of Both Worlds

    NASA Technical Reports Server (NTRS)

    Schmidt, Melisa; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1996-01-01

    Many performance monitoring tools are currently available to the supercomputing community. The performance data gathered and analyzed by these tools fall under two categories: statistics and event traces. Statistical data is much more compact but lacks the probative power event traces offer. Event traces, on the other hand, can easily fill up the entire file system during execution, such that the instrumented execution may have to be terminated halfway through. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. The user can trade off tracing overhead and trace data size against data quality incrementally. In other words, the user will be able to limit the amount of trace collected and, at the same time, carry out some of the analysis event traces offer using space-time views for the entire execution. Two basic ideas are employed: the use of averages to replace recording data for each instance, and "formulae" to represent sequences associated with communication and control flow. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected vs. event traces. We found that the trace files thus obtained are indeed small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture 100% of all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at run-time to learn longer sequences.

  8. Effects of Mild Cognitive Impairment on the Event-Related Brain Potential Components Elicited in Executive Control Tasks.

    PubMed

    Zurrón, Montserrat; Lindín, Mónica; Cespón, Jesús; Cid-Fernández, Susana; Galdo-Álvarez, Santiago; Ramos-Goicoa, Marta; Díaz, Fernando

    2018-01-01

    We summarize here the findings of several studies in which we analyzed the event-related brain potentials (ERPs) elicited in participants with mild cognitive impairment (MCI) and in healthy controls during performance of executive tasks. The objective of these studies was to investigate the neural functioning associated with executive processes in MCI. With this aim, we recorded the brain electrical activity generated in response to stimuli in three executive control tasks (Stroop, Simon, and Go/NoGo) adapted for use with the ERP technique. We found that the latencies of the ERP components associated with the evaluation and categorization of the stimuli were longer in participants with amnestic MCI than in the paired controls, particularly those with multiple-domain amnestic MCI, and that the allocation of neural resources for attending to the stimuli was weaker in participants with amnestic MCI. The MCI participants also showed deficient functioning of the response selection and preparation processes demanded by each task.

  9. Effects of Mild Cognitive Impairment on the Event-Related Brain Potential Components Elicited in Executive Control Tasks

    PubMed Central

    Zurrón, Montserrat; Lindín, Mónica; Cespón, Jesús; Cid-Fernández, Susana; Galdo-Álvarez, Santiago; Ramos-Goicoa, Marta; Díaz, Fernando

    2018-01-01

    We summarize here the findings of several studies in which we analyzed the event-related brain potentials (ERPs) elicited in participants with mild cognitive impairment (MCI) and in healthy controls during performance of executive tasks. The objective of these studies was to investigate the neural functioning associated with executive processes in MCI. With this aim, we recorded the brain electrical activity generated in response to stimuli in three executive control tasks (Stroop, Simon, and Go/NoGo) adapted for use with the ERP technique. We found that the latencies of the ERP components associated with the evaluation and categorization of the stimuli were longer in participants with amnestic MCI than in the paired controls, particularly those with multiple-domain amnestic MCI, and that the allocation of neural resources for attending to the stimuli was weaker in participants with amnestic MCI. The MCI participants also showed deficient functioning of the response selection and preparation processes demanded by each task.

  10. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.

  11. Income, neural executive processes, and preschool children's executive control.

    PubMed

    Ruberry, Erika J; Lengua, Liliana J; Crocker, Leanna Harris; Bruce, Jacqueline; Upshaw, Michaela B; Sommerville, Jessica A

    2017-02-01

    This study aimed to specify the neural mechanisms underlying the link between low household income and diminished executive control in the preschool period. Specifically, we examined whether individual differences in the neural processes associated with executive attention and inhibitory control accounted for income differences observed in performance on a neuropsychological battery of executive control tasks. The study utilized a sample of preschool-aged children (N = 118) whose families represented the full range of income, with 32% of families at/near poverty, 32% lower income, and 36% middle to upper income. Children completed a neuropsychological battery of executive control tasks and then completed two computerized executive control tasks while EEG data were collected. We predicted that differences in the event-related potential (ERP) correlates of executive attention and inhibitory control would account for income differences observed on the executive control battery. Income and ERP measures were related to performance on the executive control battery. However, income was unrelated to ERP measures. The findings suggest that income differences observed in executive control during the preschool period might relate to processes other than executive attention and inhibitory control.

  12. Lethal Injection for Execution: Chemical Asphyxiation?

    PubMed Central

    Zimmers, Teresa A; Sheldon, Jonathan; Lubarsky, David A; López-Muñoz, Francisco; Waterman, Linda; Weisman, Richard; Koniaris, Leonidas G

    2007-01-01

    Background: Lethal injection for execution was conceived as a comparatively humane alternative to electrocution or cyanide gas. The current protocols are based on one improvised by a medical examiner and an anesthesiologist in Oklahoma and are practiced on an ad hoc basis at the discretion of prison personnel. Each drug used, the ultrashort-acting barbiturate thiopental, the neuromuscular blocker pancuronium bromide, and the electrolyte potassium chloride, was expected to be lethal alone, while the combination was intended to produce anesthesia then death due to respiratory and cardiac arrest. We sought to determine whether the current drug regimen results in death in the manner intended. Methods and Findings: We analyzed data from two US states that release information on executions, North Carolina and California, as well as the published clinical, laboratory, and veterinary animal experience. Execution outcomes from North Carolina and California together with interspecies dosage scaling of thiopental effects suggest that in the current practice of lethal injection, thiopental might not be fatal and might be insufficient to induce surgical anesthesia for the duration of the execution. Furthermore, evidence from North Carolina, California, and Virginia indicates that potassium chloride in lethal injection does not reliably induce cardiac arrest. Conclusions: We were able to analyze only a limited number of executions. However, our findings suggest that current lethal injection protocols may not reliably effect death through the mechanisms intended, indicating a failure of design and implementation. If thiopental and potassium chloride fail to cause anesthesia and cardiac arrest, potentially aware inmates could die through pancuronium-induced asphyxiation. Thus the conventional view of lethal injection leading to an invariably peaceful and painless death is questionable. PMID:17455994

  13. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
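
    The backward dynamic-programming idea can be sketched as follows (our illustration of the technique under common finite-trace conventions, where X f is false at the last position and U, F, G bottom out there; this is not the paper's generated code):

        def collect(f, out):
            # Children-first list of distinct subformulas.
            if f in out:
                return
            if f[0] != "ap":
                for child in f[1:]:
                    collect(child, out)
            out.append(f)

        def holds(formula, trace):
            subs = []
            collect(formula, subs)
            nxt = {}                                  # values at position i + 1
            for i in range(len(trace) - 1, -1, -1):
                now, last = {}, i == len(trace) - 1
                for f in subs:
                    op = f[0]
                    if op == "ap":    now[f] = f[1] in trace[i]
                    elif op == "not": now[f] = not now[f[1]]
                    elif op == "and": now[f] = now[f[1]] and now[f[2]]
                    elif op == "or":  now[f] = now[f[1]] or now[f[2]]
                    elif op == "X":   now[f] = (not last) and nxt[f[1]]
                    elif op == "F":   now[f] = now[f[1]] or ((not last) and nxt[f])
                    elif op == "G":   now[f] = now[f[1]] and (last or nxt[f])
                    elif op == "U":   now[f] = now[f[2]] or \
                                               (now[f[1]] and (not last) and nxt[f])
                nxt = now                             # constant memory in trace length
            return nxt[formula]

        # G(request -> F grant): every request is eventually granted.
        spec = ("G", ("or", ("not", ("ap", "request")), ("F", ("ap", "grant"))))
        print(holds(spec, [{"request"}, set(), {"grant"}, set()]))   # True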

  14. openECA Platform and Analytics Alpha Test Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  15. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.

  16. openECA Platform and Analytics Beta Demonstration Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  17. Fast Transformation of Temporal Plans for Efficient Execution

    NASA Technical Reports Server (NTRS)

    Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul

    1998-01-01

    Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process, and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
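
    The execution-time propagation that dispatchability guarantees is cheap: fixing an event's time only tightens the time windows of its immediate neighbors. A small sketch (the plan, bounds, and names are invented):

        edges = {("start", "heat"): (2.0, 5.0),   # heat begins 2-5 s after start
                 ("heat", "stir"): (1.0, 3.0)}    # stir begins 1-3 s after heat

        windows = {"start": [0.0, 0.0], "heat": [2.0, 5.0], "stir": [3.0, 8.0]}

        def execute(event, time):
            lo, hi = windows[event]
            assert lo <= time <= hi, "execution time outside current window"
            windows[event] = [time, time]
            for (a, b), (dlo, dhi) in edges.items():   # immediate neighbors only
                if a == event:
                    windows[b][0] = max(windows[b][0], time + dlo)
                    windows[b][1] = min(windows[b][1], time + dhi)
                elif b == event:
                    windows[a][0] = max(windows[a][0], time - dhi)
                    windows[a][1] = min(windows[a][1], time - dlo)

        execute("start", 0.0)
        execute("heat", 4.0)                      # any time within [2, 5] is valid
        print(windows["stir"])                    # tightened to [5.0, 7.0]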

  18. Investigating Executive Working Memory and Phonological Short-Term Memory in Relation to Fluency and Self-Repair Behavior in L2 Speech.

    PubMed

    Georgiadou, Effrosyni; Roehr-Brackin, Karen

    2017-08-01

    This paper reports the findings of a study investigating the relationship of executive working memory (WM) and phonological short-term memory (PSTM) to fluency and self-repair behavior during an unrehearsed oral task performed by second language (L2) speakers of English at two levels of proficiency, elementary and lower intermediate. Correlational analyses revealed a negative relationship between executive WM and number of pauses in the lower intermediate L2 speakers. However, no reliable association was found in our sample between executive WM or PSTM and self-repair behavior in terms of either frequency or type of self-repair. Taken together, our findings suggest that while executive WM may enhance performance at the conceptualization and formulation stages of the speech production process, self-repair behavior in L2 speakers may depend on factors other than working memory.

  19. Machine on Trial

    DTIC Science & Technology

    2012-06-01

    executed a concerted effort to employ reliability standards and testing from the design phase through fielding. Reliability programs remain standard...performed flight test engineer duties on several developmental flight test programs and served as Chief Engineer for a flight test squadron. Major...Quant is an acquisition professional with over 250 flight test hours in various aircraft, including the F-16, Airborne Laser, and HH-60. She holds a

  20. Repeated Measurement of the Components of Attention with Young Children Using the Attention Network Test: Stability, Isolability, Robustness, and Reliability

    ERIC Educational Resources Information Center

    Ishigami, Yoko; Klein, Raymond M.

    2015-01-01

    The current study examined the robustness, stability, reliability, and isolability of the attention network scores (alerting, orienting, and executive control) when young children experienced repeated administrations of the child version of the Attention Network Test (ANT; Rueda et al., 2004). Ten test sessions of the ANT were administered to 12…

  1. Coordinated Specialty Care Fact Sheet and Checklist

    MedlinePlus

  2. Beyond reliability to profitability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, T.H.; Mitchell, J.S.

    1996-07-01

    Reliability concerns have controlled much of power generation design and operations. As the industry emerges from a strictly regulated environment, profitability is becoming a much more important concept for today's power generation executives. This paper discusses the conceptual advance: view power plant maintenance as a profit center, go beyond reliability, and embrace profitability. Profit Centered Maintenance begins with the premise that financial considerations, namely profitability, drive most aspects of modern process and manufacturing operations. Profit Centered Maintenance is a continuous process of reliability and administrative improvement and optimization. For power generation executives with troublesome maintenance programs, Profit Centered Maintenance can be the blueprint to increased profitability. It requires the culture change to make decisions based on value, to reengineer the administration of maintenance, and to enable the people performing and administering maintenance to make the most of available maintenance information technology. The key steps are to optimize the physical function of maintenance and to resolve recurring maintenance problems so that the need for maintenance can be reduced. Profit Centered Maintenance is more than just an attitude; it is a path to profitability, whether that means increased profits or increased market share.

  3. Tips for executing exceptional conferences, meetings, and workshops

    Treesearch

    Diane L. Haase; R. Kasten Dumroese; Richard Zabel

    2017-01-01

    The three of us, combined, have organized or attended more than 500 events, including meetings, conferences, workshops, and symposia, around the world. After participating in so many events, we concluded that a guide for hosting a successful event is greatly needed. Too often, an event is negatively affected by preventable issues, such as poor planning, a terrible...

  4. Report to the President: Realizing the Full Potential of Government-Held Spectrum to Spur Economic Growth

    DTIC Science & Technology

    2012-07-01

    managing the use of the Radio Frequency (RF) spectrum to ensure reliable emergency, civil, and government communications. At that time, when the rules of...or equipment and/or radio frequencies to provide electronic communication services under standard conditions (a class license) or authorizing the...Cognitive Radio Networks." IEEE Communications Magazine (2008). Circular A-11: Preparation, Submission, and Execution of the Budget. Executive Office

  5. Understanding Interrater Reliability and Validity of Risk Assessment Tools Used to Predict Adverse Clinical Events.

    PubMed

    Siedlecki, Sandra L; Albert, Nancy M

    This article describes how to assess the interrater reliability and validity of risk assessment tools, using easy-to-follow formulas, and provides calculations that demonstrate the principles discussed. Clinical nurse specialists should be able to identify risk assessment tools that provide high-quality interrater reliability and the highest validity for predicting true events of importance to clinical settings. Making best-practice recommendations for assessment tool use is critical to high-quality patient care and safe practices that impact patient outcomes and nursing resources. Optimal risk assessment tool selection requires knowledge about interrater reliability and tool validity. The clinical nurse specialist will understand the reliability and validity issues associated with risk assessment tools and be able to evaluate tools using basic calculations. Risk assessment tools are developed to objectively predict quality and safety events and ultimately reduce the risk of event occurrence through preventive interventions. To ensure high-quality tool use, clinical nurse specialists must critically assess tool properties. The better the tool's ability to predict adverse events, the more likely it is that event risk is mitigated. Interrater reliability and validity assessment is a relatively easy skill to master and will result in better decisions when selecting or making recommendations for risk assessment tool use.
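
    One such easy-to-follow calculation is Cohen's kappa for two raters applying a yes/no risk assessment tool (the 2x2 counts below are made up for illustration):

        #                  rater B: at risk   rater B: not at risk
        # rater A: at risk         40                  10
        # rater A: not at risk      5                  45
        a, b, c, d = 40, 10, 5, 45
        n = a + b + c + d

        p_observed = (a + d) / n                  # raw agreement
        p_yes = ((a + b) / n) * ((a + c) / n)     # chance both say "at risk"
        p_no = ((c + d) / n) * ((b + d) / n)      # chance both say "not at risk"
        p_expected = p_yes + p_no                 # agreement expected by chance

        kappa = (p_observed - p_expected) / (1 - p_expected)
        print(p_observed, round(kappa, 2))        # 0.85 raw agreement, kappa 0.7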

  6. Multi-INT Complex Event Processing using Approximate, Incremental Graph Pattern Search

    DTIC Science & Technology

    2012-06-01

    Describes approximate, incremental graph pattern search for multi-INT complex event processing, with performance compared against SPARQL queries. [Figure: "Initial Performance Comparisons" (09/18/11): total execution time for 10 executions each of 5 random pattern searches in synthetic data sets of 1,000 to 100,000 RDF triples, plotting time (secs) for the graph pattern algorithm vs. SPARQL queries.]

  7. Behavioral and Electrophysiological Differences in Executive Control between Monolingual and Bilingual Children

    ERIC Educational Resources Information Center

    Barac, Raluca; Moreno, Sylvain; Bialystok, Ellen

    2016-01-01

    This study examined executive control in sixty-two 5-year-old children who were monolingual or bilingual using behavioral and event-related potentials (ERPs) measures. All children performed equivalently on simple response inhibition (gift delay), but bilingual children outperformed monolinguals on interference suppression and complex response…

  8. Monitoring Java Programs with Java PathExplorer

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present recent work on the development of Java PathExplorer (JPAX), a tool for monitoring the execution of Java programs. JPAX can be used during program testing to gain increased information about program executions, and can potentially also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program's bytecode, which will then emit events to an observer during its execution. The observer checks the events against user-provided high-level requirement specifications, for example temporal logic formulae, and against lower-level error detection procedures, for example concurrency-related algorithms such as deadlock and data race detection. High-level requirement specifications, together with their underlying logics, are defined in the Maude rewriting logic, and can then either be checked directly using the Maude rewriting engine or be first translated to efficient data structures and then checked in Java.

  9. Integrated System for Autonomous Science

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Sherwood, Robert; Tran, Daniel; Cichy, Benjamin; Davies, Ashley; Castano, Rebecca; Rabideau, Gregg; Frye, Stuart; Trout, Bruce; Shulman, Seth; et al.

    2006-01-01

    The New Millennium Program Space Technology 6 Project Autonomous Sciencecraft software implements an integrated system for autonomous planning and execution of scientific, engineering, and spacecraft-coordination actions. A prior version of this software was reported in "The TechSat 21 Autonomous Sciencecraft Experiment" (NPO-30784), NASA Tech Briefs, Vol. 28, No. 3 (March 2004), page 33. This software is now in continuous use aboard the Earth Orbiter 1 (EO-1) spacecraft mission and is being adapted for use in the Mars Odyssey and Mars Exploration Rovers missions. This software enables EO-1 to detect and respond to such events of scientific interest as volcanic activity, flooding, and freezing and thawing of water. It uses classification algorithms to analyze imagery onboard to detect changes, including events of scientific interest. Detection of such events triggers acquisition of follow-up imagery. The mission-planning component of the software develops a response plan that accounts for visibility of targets and operational constraints. The plan is then executed under control by a task-execution component of the software that is capable of responding to anomalies.

  10. The test-retest reliability of the latent construct of executive function depends on whether tasks are represented as formative or reflective indicators.

    PubMed

    Willoughby, Michael T; Kuhn, Laura J; Blair, Clancy B; Samek, Anya; List, John A

    2017-10-01

    This study investigates the test-retest reliability of a battery of executive function (EF) tasks, with a specific interest in testing whether the method used to create a battery-wide score affects the apparent test-retest reliability of children's performance. A total of 188 4-year-olds completed a battery of computerized EF tasks twice across a period of approximately two weeks. Two different approaches were used to create a score indexing children's overall performance on the battery: (1) the mean score of all completed tasks, and (2) a factor score estimate from a confirmatory factor analysis (CFA). Pearson and intra-class correlations were used to investigate the test-retest reliability of individual EF tasks, as well as of the overall battery score. Consistent with previous studies, the test-retest reliability of individual tasks was modest (rs ≈ .60). The test-retest reliability of the overall battery scores differed depending on the scoring approach (r_mean = .72; r_factor_score = .99). It is concluded that children's performance on individual EF tasks exhibits modest test-retest reliability. This underscores the importance of administering multiple tasks and aggregating performance across them in order to improve precision of measurement. However, the specific aggregation strategy has a large impact on the apparent test-retest reliability of the overall score. These results replicate our earlier findings and provide additional cautionary evidence against the routine use of factor-analytic approaches for representing individual performance across a battery of EF tasks.
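
    The contrast between single-task and aggregate reliability is easy to reproduce. The sketch below uses synthetic data, not the study's (and the factor-score approach would instead fit a CFA model at each session); it computes test-retest Pearson correlations for one task and for the battery mean:

        import numpy as np

        rng = np.random.default_rng(0)
        n_children, n_tasks = 188, 6
        ability = rng.normal(size=n_children)                # latent EF level
        # two sessions: same ability, independent task-level noise
        t1 = ability[:, None] + rng.normal(scale=1.0, size=(n_children, n_tasks))
        t2 = ability[:, None] + rng.normal(scale=1.0, size=(n_children, n_tasks))

        r_single = np.corrcoef(t1[:, 0], t2[:, 0])[0, 1]              # one task
        r_mean = np.corrcoef(t1.mean(axis=1), t2.mean(axis=1))[0, 1]  # battery mean
        print(f"single task r = {r_single:.2f}, battery mean r = {r_mean:.2f}")

    Averaging six noisy tasks raises the correlation between sessions because task-specific error cancels, which is the aggregation effect the abstract describes.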

  11. Distributed Processor/Memory Architectures Design Program

    DTIC Science & Technology

    1975-02-01

    Snippet (table-of-contents residue; recoverable figure titles only): Global Message Input Event Scheduling Flow; GEX/LEX Scheduling Philosophy; Executive Control Hierarchy; Scheduler Subroutine Interrelationships; Task Scheduler Message Scanner.

  12. Shared cognitive processes underlying past and future thinking: the impact of imagery and concurrent task demands on event specificity.

    PubMed

    Anderson, Rachel J; Dewhurst, Stephen A; Nash, Robert A

    2012-03-01

    Recent literature has argued that whereas remembering the past and imagining the future make use of shared cognitive substrates, simulating future events places heavier demands on executive resources. These propositions were explored in 3 experiments comparing the impact of imagery and concurrent task demands on the speed and accuracy of past event retrieval and future event simulation. Results provide support for the suggestion that both past and future episodes can be constructed through 2 mechanisms: a noneffortful "direct" pathway and a controlled, effortful "generative" pathway. However, limited evidence emerged for the suggestion that simulating future episodes, compared with retrieving past ones, places heavier demands on executive resources; only under certain conditions did simulation emerge as a more error-prone and lengthier process. The findings are discussed in terms of how retrieval and simulation make use of the same cognitive substrates in subtly different ways. 2012 APA, all rights reserved

  13. Component processes underlying future thinking.

    PubMed

    D'Argembeau, Arnaud; Ortoleva, Claudia; Jumentier, Sabrina; Van der Linden, Martial

    2010-09-01

    This study sought to investigate the component processes underlying the ability to imagine future events, using an individual-differences approach. Participants completed several tasks assessing different aspects of future thinking (i.e., fluency, specificity, amount of episodic details, phenomenology) and were also assessed with tasks and questionnaires measuring various component processes that have been hypothesized to support future thinking (i.e., executive processes, visual-spatial processing, relational memory processing, self-consciousness, and time perspective). The main results showed that executive processes were correlated with various measures of future thinking, whereas visual-spatial processing abilities and time perspective were specifically related to the number of sensory descriptions reported when specific future events were imagined. Furthermore, individual differences in self-consciousness predicted the subjective feeling of experiencing the imagined future events. These results suggest that future thinking involves a collection of processes that are related to different facets of future-event representation.

  14. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  15. When memory is not enough: Electrophysiological evidence for goal-dependent use of working memory representations in guiding visual attention

    PubMed Central

    Carlisle, Nancy B.; Woodman, Geoffrey F.

    2014-01-01

    Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items, these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796

  16. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
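
    The abstract names two ingredients, a Mahalanobis distance over correlated parameters and a probability ratio test, which can be sketched as follows (all distributions, thresholds, and operating points here are assumptions for illustration, not the patent's values):

        import math
        import numpy as np

        rng = np.random.default_rng(1)
        # Learn the correlation structure of the process from training data.
        cov = np.array([[1.0, 0.6, 0.2],
                        [0.6, 2.0, 0.3],
                        [0.2, 0.3, 0.5]])
        train = rng.multivariate_normal(np.zeros(3), cov, size=500)
        mu = train.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(train, rowvar=False))

        def mahalanobis(x):
            d = x - mu
            return math.sqrt(float(d @ S_inv @ d))

        # Wald SPRT on the distance statistic: H0 "normal" vs H1 "degraded",
        # both modeled (an assumption) as normals with a shifted mean.
        alpha, beta = 0.01, 0.01                       # false/missed alarm rates
        A, B = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
        m0, m1, sigma = 1.6, 3.0, 0.8                  # assumed operating points
        llr = 0.0
        for x in rng.multivariate_normal(np.zeros(3) + 1.2, cov, size=50):
            s = mahalanobis(x)
            llr += ((s - m0) ** 2 - (s - m1) ** 2) / (2 * sigma ** 2)
            if llr >= A:
                print("alarm: process departed from learned behavior")
                break
            if llr <= B:
                llr = 0.0                              # accept H0, keep monitoring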

  17. 78 FR 27165 - Approval and Promulgation of Implementation Plans; Utah; Revisions to Utah Rule R307-107; General...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... grant the Utah executive secretary exclusive authority to decide whether excess emissions constituted a... requires breakdown incident reports to include the cause and nature of the event, estimated quantity of... appeared to give the executive secretary exclusive authority to determine whether excess emissions...

  18. "Between the Heavens and the Earth": Narrating the Execution of Moses Paul

    ERIC Educational Resources Information Center

    Salyer, Matt

    2012-01-01

    The 1772 execution of the Mohegan sailor Moses Paul served as the occasion for Samson Occom's popular "Sermon," reprinted in numerous editions. Recent work by Ava Chamberlain seeks to recover Paul's version of events from contemporary court records. This article argues that Paul's "firsthand" account of the case and autobiographical narrative…

  19. 12 CFR 1500.2 - What are the limitations on managing or operating a portfolio company held as a merchant banking...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... routine management or operation—(i) Executive officer interlocks at the portfolio company. A financial... other than executive officers. (2) Presumptions of routine management or operation. A financial holding... paragraph (e) of this section. (2) Covenants or other provisions regarding extraordinary events. A financial...

  20. Executable Architecture Modeling and Simulation Based on fUML

    DTIC Science & Technology

    2014-06-01

    SoS behaviors. Wang et al.[9] use SysML sequence diagram to model the behaviors and translate the models into Colored Petri Nets (CPN). Staines T.S...Renzhong and Dagli C H. An executable system architecture approach to discrete events system modeling using SysML in conjunction with colored Petri

  1. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
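
    The core idea, run the real code but simulate the machine, can be shown in a few lines (a toy single-rank illustration, not LAPSE itself; the latency and bandwidth figures are invented target-machine parameters):

        import time

        LATENCY, PER_BYTE = 50e-6, 1e-9       # assumed target network parameters

        def simulate(trace):
            vclock = 0.0
            for op, arg in trace:
                if op == "compute":
                    t0 = time.perf_counter()
                    arg()                      # directly execute application code
                    vclock += time.perf_counter() - t0
                elif op == "send":             # communication is modeled, not run
                    vclock += LATENCY + arg * PER_BYTE
            return vclock

        work = lambda: sum(i * i for i in range(100_000))
        trace = [("compute", work), ("send", 8_192), ("compute", work)]
        print(f"predicted time on target machine: {simulate(trace) * 1e3:.2f} ms")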

  2. 5 CFR 2502.33 - Procedure in the event of an adverse ruling.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Procedure in the event of an adverse ruling. 2502.33 Section 2502.33 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF... Other Authorities § 2502.33 Procedure in the event of an adverse ruling. If the court or other authority...

  3. 5 CFR 2502.33 - Procedure in the event of an adverse ruling.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Procedure in the event of an adverse ruling. 2502.33 Section 2502.33 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF... Other Authorities § 2502.33 Procedure in the event of an adverse ruling. If the court or other authority...

  4. Sub-Second Parallel State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA), and discusses the benefits of fast computational speed for power system applications. The test data, provided by BPA, are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, more than 10 times faster than today's commercial tools. This improved computational performance can increase the reliability value of state estimation in several ways: (1) the shorter the time required to execute state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, increasing the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so its robustness can be enhanced by repeating the execution with adaptive adjustments, such as removing bad data and/or adjusting initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs, further improving operators' actions and automated controls that mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate as a variety of smart grid devices are introduced. A parallel state estimation implementation will perform better than traditional, sequential state estimation by utilizing the power of high-performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
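
    At its core, each state estimation solve is a weighted least-squares problem. The sketch below shows the linear version of that computation on an invented two-state example (production SE is nonlinear, iterative, and vastly larger, which is what motivates the parallel solvers):

        import numpy as np

        # z = H x + e, with measurement weights W = inverse error variances.
        H = np.array([[1.0,  0.0],
                      [0.0,  1.0],
                      [1.0, -1.0]])
        W = np.diag([1 / 0.01, 1 / 0.01, 1 / 0.04])
        x_true = np.array([1.02, 0.97])
        z = H @ x_true + np.random.default_rng(2).normal(0.0, [0.1, 0.1, 0.2])

        # Normal equations: (H^T W H) x_hat = H^T W z
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print("estimated state:", x_hat)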

  5. Assessing Reliability of Medical Record Reviews for the Detection of Hospital Adverse Events.

    PubMed

    Ock, Minsu; Lee, Sang-il; Jo, Min-Woo; Lee, Jin Yong; Kim, Seon-Ha

    2015-09-01

    The purpose of this study was to assess the inter-rater and intra-rater reliability of medical record review for the detection of hospital adverse events. We conducted a two-stage retrospective review of the medical records of a random sample of 96 patients from one acute-care general hospital. The first stage was an explicit patient record review by two nurses to detect the presence of 41 screening criteria (SC). The second stage was an implicit structured review by two physicians to identify the occurrence of adverse events among the cases positive on the SC. The inter-rater reliability of the two nurses and that of the two physicians were assessed, and intra-rater reliability was evaluated using a test-retest method approximately two weeks later. In 84.2% of the patient medical records, the nurses agreed as to the necessity for the second-stage review (kappa, 0.68; 95% confidence interval [CI], 0.54 to 0.83). In 93.0% of the patient medical records screened by nurses, the physicians agreed about the absence or presence of adverse events (kappa, 0.71; 95% CI, 0.44 to 0.97). For intra-rater reliability, the kappa indices of the two nurses were 0.54 (95% CI, 0.31 to 0.77) and 0.67 (95% CI, 0.47 to 0.87), whereas those of the two physicians were 0.87 (95% CI, 0.62 to 1.00) and 0.37 (95% CI, -0.16 to 0.89). In this study, the medical record review for detecting adverse events showed an intermediate to good level of inter-rater and intra-rater reliability. A well-organized training program for reviewers and clearly defined SC are required to obtain more reliable results in hospital adverse event studies.
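
    For reference, the kappa statistic used throughout this abstract corrects observed agreement for chance agreement; a minimal computation from a 2x2 agreement table (illustrative counts, not the study's data) looks like:

        import numpy as np

        table = np.array([[60,  8],    # rater A yes/no (rows) x rater B (cols)
                          [ 7, 21]])
        n = table.sum()
        p_observed = np.trace(table) / n
        p_chance = (table.sum(axis=0) * table.sum(axis=1)).sum() / n ** 2
        kappa = (p_observed - p_chance) / (1 - p_chance)
        print(f"kappa = {kappa:.2f}")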

  6. Executive and arousal vigilance decrement in the context of the attentional networks: The ANTI-Vea task.

    PubMed

    Luna, Fernando Gabriel; Marino, Julián; Roca, Javier; Lupiáñez, Juan

    2018-05-20

    Vigilance is generally understood as the ability to detect infrequent critical events over long time periods. In tasks like the Sustained Attention to Response Task (SART), participants tend to detect fewer events across time, a phenomenon known as the "vigilance decrement". However, vigilance might also involve sustaining a tonic arousal level: in the Psychomotor Vigilance Test (PVT), the vigilance decrement appears as an increment across time in both the mean and the variability of reaction time. The present study aimed to develop a single task, the Attentional Networks Test for Interactions and Vigilance - executive and arousal components (ANTI-Vea), to simultaneously assess both components of vigilance (i.e., executive vigilance as in the SART, and arousal vigilance as in the PVT), while measuring the classic attentional functions (phasic alertness, orienting, and executive control). In Experiment #1, the executive vigilance decrement was found as an increment in response bias. In Experiment #2, this result was replicated, and the arousal vigilance decrement was simultaneously observed as an increment in reaction time. The ANTI-Vea solves some issues observed with the executive vigilance measure in the previous ANTI-V task (e.g., a low hit rate and no vigilance decrement). Furthermore, the new ANTI-Vea task assesses both components of vigilance together with the other typical attentional functions. The new attentional networks test developed here may be useful for providing a better understanding of the human attentional system. The roles of sensitivity and response bias in the executive vigilance decrement are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Managing travel for planned special events handbook : executive summary

    DOT National Transportation Integrated Search

    2007-06-01

    This report was written to communicate new and proven institutional and high-level operational techniques and strategies for achieving a coordinated, proactive approach to managing travel for all planned special events in a region in addition to faci...

  8. μπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete-event style of execution and due to the scalability and efficiency of the underlying parallel discrete-event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  9. Analyzing System on A Chip Single Event Upset Responses using Single Event Upset Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments, with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field-programmable gate array devices (FPGAs).

  10. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
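
    The flavor of the G-theory computation can be conveyed with a simple persons x trials design (synthetic data; the ERA Toolbox handles unbalanced designs and multiple error sources, which this sketch does not): estimate variance components from ANOVA mean squares, then form the reliability of a trial-averaged score.

        import numpy as np

        rng = np.random.default_rng(3)
        n_p, n_t = 40, 30                                   # persons, trials
        person = rng.normal(0.0, 2.0, size=(n_p, 1))        # true-score sd = 2
        x = person + rng.normal(0.0, 3.0, size=(n_p, n_t))  # error sd = 3

        ms_person = n_t * x.mean(axis=1).var(ddof=1)        # person mean square
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + x.mean()
        ms_resid = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))
        var_person = (ms_person - ms_resid) / n_t           # variance component
        reliability = var_person / (var_person + ms_resid / n_t)
        print(f"dependability of the {n_t}-trial average: {reliability:.2f}")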

  11. Patient Safety Leadership WalkRounds.

    PubMed

    Frankel, Allan; Graydon-Baker, Erin; Neppl, Camilla; Simmonds, Terri; Gustafson, Michael; Gandhi, Tejal K

    2003-01-01

    In the WalkRounds concept, a core group, which includes the senior executives and/or vice presidents, conducts weekly visits to different areas of the hospital. The group, joined by one or two nurses in the area and other available staff, asks specific questions about adverse events or near misses and about the factors or systems issues that led to these events. ANALYSIS OF EVENTS: Events in the WalkRounds are entered into a database and classified according to the contributing factors. The data are aggregated by contributing factors and priority scores to highlight the root issues, and the priority scores are used to determine QI pilots and make best use of limited resources. Executives are surveyed quarterly about actions they have taken as a direct result of WalkRounds and about what they have learned from the rounds. As of September 2002, 47 Patient Safety Leadership WalkRounds had visited a total of 48 different areas of the hospital, yielding 432 individual comments. The WalkRounds require not only knowledgeable and invested senior leadership but also a well-organized support structure. Quality and safety personnel are needed to collect data and maintain a database of confidential information, evaluate the data from a systems approach, and delineate systems-based actions to improve care delivery. Comments from frontline clinicians and executives suggested that WalkRounds helps educate leadership and frontline staff in patient safety concepts and will lead to cultural changes, as manifested in more open discussion of adverse events and an improved rate of safety-based changes.

  12. Meta-cognitive processes in executive control development: The case of reactive and proactive control

    PubMed Central

    Chevalier, Nicolas; Martis, Shaina Bailey; Curran, Tim; Munakata, Yuko

    2015-01-01

    Young children engage cognitive control reactively in response to events, rather than proactively preparing for events. Such limitations in executive control have been explained in terms of fundamental constraints on children’s cognitive capacities. Alternatively, young children might be capable of proactive control but differ from older children in their meta-cognitive decisions regarding when to engage proactive control. We examined these possibilities in three conditions of a task-switching paradigm, varying in whether task cues were available before or after target onset. Reaction times, ERPs, and pupil dilation showed that 5-year-olds did engage in advance preparation, a critical aspect of proactive control, but only when reactive control was made more difficult, whereas 10-year-olds engaged proactive control whenever possible. These findings highlight meta-cognitive processes in children’s cognitive control, an understudied aspect of executive control development. PMID:25603026

  13. Parallel discrete-event simulation of FCFS stochastic queueing networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
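
    For an FCFS server, the appointment idea reduces to a simple guarantee about the earliest possible departure. A sketch of how that lookahead might be computed (function and variable names invented):

        def appointment(now, busy_until, min_service):
            """Earliest virtual time at which this FCFS server could emit a
            departure; neighbors may safely simulate up to this time."""
            if busy_until > now:           # job in service departs no earlier
                return busy_until
            return now + min_service       # idle: any arrival still needs service

        # A server idle at t=10 whose smallest service time is 2.5 promises
        # no departures before t=12.5, letting downstream logical processes
        # advance without risk of receiving an earlier event.
        print(appointment(now=10.0, busy_until=0.0, min_service=2.5))   # 12.5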

  14. Bringing memory fMRI to the clinic: comparison of seven memory fMRI protocols in temporal lobe epilepsy.

    PubMed

    Towgood, Karren; Barker, Gareth J; Caceres, Alejandro; Crum, William R; Elwes, Robert D C; Costafreda, Sergi G; Mehta, Mitul A; Morris, Robin G; von Oertzen, Tim J; Richardson, Mark P

    2015-04-01

    fMRI is increasingly implemented in the clinic to assess memory function. There are multiple approaches to memory fMRI, but limited data on advantages and reliability of different methods. Here, we compared effect size, activation lateralisation, and between-sessions reliability of seven memory fMRI protocols: Hometown Walking (block design), Scene encoding (block design and event-related design), Picture encoding (block and event-related), and Word encoding (block and event-related). All protocols were performed on three occasions in 16 patients with temporal lobe epilepsy (TLE). Group T-maps showed activity bilaterally in medial temporal lobe for all protocols. Using ANOVA, there was an interaction between hemisphere and seizure-onset lateralisation (P = 0.009) and between hemisphere, protocol and seizure-onset lateralisation (P = 0.002), showing that the distribution of memory-related activity between left and right temporal lobes differed between protocols and between patients with left-onset and right-onset seizures. Using voxelwise intraclass Correlation Coefficient, between-sessions reliability was best for Hometown and Scenes (block and event). The between-sessions spatial overlap of activated voxels was also greatest for Hometown and Scenes. Lateralisation of activity between hemispheres was most reliable for Scenes (block and event) and Words (event). Using receiver operating characteristic analysis to explore the ability of each fMRI protocol to classify patients as left-onset or right-onset TLE, only the Words (event) protocol achieved a significantly above-chance classification of patients at all three sessions. We conclude that Words (event) protocol shows the best combination of between-sessions reliability of the distribution of activity between hemispheres and reliable ability to distinguish between left-onset and right-onset patients. © 2015 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  15. GOES-R GS Product Generation Infrastructure Operations

    NASA Astrophysics Data System (ADS)

    Blanton, M.; Gundy, J.

    2012-12-01

    GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products, with higher data density, than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms may be added and existing ones removed or updated, and the GOES-R GS cannot go down while this happens. To meet these processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The primary component of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures an algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Because algorithms require product data from other algorithms, scalable and reliable messaging is necessary; the SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that lets algorithm services know when data are available and get the data they need to begin processing when they need it. Together, the SBA and the DDF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
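
    A schematic rendering of the three SBA roles described above (class and method names are invented; the real system is distributed grid middleware, not a few classes): the Strategy decides when enough data has arrived, the Dispatcher pushes data in, and the Executive hosts the algorithm as a service.

        class Strategy:
            """Decides when the algorithm can execute with the available data."""
            def __init__(self, needed):
                self.needed, self.seen = set(needed), {}

            def offer(self, key, value):
                if key in self.needed:
                    self.seen[key] = value
                return self.needed <= self.seen.keys()   # ready to run?

        class Executive:
            """Hosts an algorithm as a service; the Dispatcher calls on_data."""
            def __init__(self, algorithm, strategy):
                self.algorithm, self.strategy = algorithm, strategy

            def on_data(self, key, value):
                if self.strategy.offer(key, value):
                    inputs = dict(self.strategy.seen)
                    self.strategy.seen.clear()           # ready for next granule
                    return self.algorithm(**inputs)

        cloud_mask = Executive(lambda vis, ir: ("mask", vis, ir),
                               Strategy({"vis", "ir"}))
        cloud_mask.on_data("vis", "band2-granule")
        print(cloud_mask.on_data("ir", "band14-granule"))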

  16. Objectivity, Reliability, and Validity of the Bent-Knee Push-Up for College-Age Women

    ERIC Educational Resources Information Center

    Wood, Heather M.; Baumgartner, Ted A.

    2004-01-01

    The revised push-up test has been found to have good validity but it produces many zero scores for women. Maybe there should be an alternative to the revised push-up test for college-age women. The purpose of this study was to determine the objectivity, reliability, and validity for the bent-knee push-up test (executed on hands and knees) for…

  17. Reducing acquisition risk through integrated systems of systems engineering

    NASA Astrophysics Data System (ADS)

    Gross, Andrew; Hobson, Brian; Bouwens, Christina

    2016-05-01

    In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.

  18. Electronic device for endosurgical skills training (EDEST): study of reliability.

    PubMed

    Pagador, J B; Uson, J; Sánchez, M A; Moyano, J L; Moreno, J; Bustos, P; Mateos, J; Sánchez-Margallo, F M

    2011-05-01

    Minimally invasive surgery procedures are commonly used in many surgical practices, but surgeons need specific training models and devices due to their difficulty and complexity. In this paper, an innovative electronic device for endosurgical skills training (EDEST) is presented, together with a study of its reliability. Different electronic components were used to compose this new training device. The EDEST focuses on two basic laparoscopic tasks: triangulation and coordination manoeuvres. Configuration and statistical software was developed to complement the functionality of the device, and a calibration method was used to ensure its proper operation. A total of 35 subjects (8 experts and 27 novices) were used to check the reliability of the system using MTBF analysis. Configuration values for the triangulation and coordination exercises were set to a 0.5 s limit threshold and an 800-11,000 lux range of light intensity, respectively. Zero errors in 1,050 executions (0%) were obtained for triangulation and 21 errors in 5,670 executions (0.37%) for coordination, giving an MTBF of 2.97 h. The results show that the reliability of the EDEST device is acceptable when used under the previously defined light conditions. These results, along with previous work, suggest that the EDEST device can help surgeons during the first stages of training.
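
    The reported figures are easy to sanity-check (the total operating time below is inferred from the reported MTBF and failure count, not stated in the paper):

        trig_errors, trig_runs = 0, 1050
        coord_errors, coord_runs = 21, 5670
        print(f"triangulation error rate: {trig_errors / trig_runs:.2%}")    # 0.00%
        print(f"coordination error rate:  {coord_errors / coord_runs:.2%}")  # 0.37%

        # MTBF = observed operating time / number of failures, so 21 failures
        # at an MTBF of 2.97 h imply roughly 62 h of monitored operation.
        mtbf_h, failures = 2.97, 21
        print(f"implied operating time: {mtbf_h * failures:.1f} h")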

  19. Validation of the Behaviour Rating Inventory of Executive Function - Adult Version (BRIEF-A) in the obese with and without binge eating disorder.

    PubMed

    Rouel, Melissa; Raman, Jayanthi; Hay, Phillipa; Smith, Evelyn

    2016-12-01

    Obesity and binge eating disorder (BED) are both associated with deficiencies in executive function. The Behaviour Rating Inventory of Executive Function - Adult Version (BRIEF-A) is a self-report measure that assesses executive function. This study aimed to examine the psychometric properties of the BRIEF-A in an obese population, with and without BED, and to explore the differences on the BRIEF-A in the obese, with and without BED, compared to normative sample. 98 obese participants (70 BED) completed the BRIEF-A, DASS-21 and several performance-based measures of executive function. 30 participants completed a repeat assessment two months later. There was evidence of good internal consistency and test-retest reliability, however evidence for construct and convergent validity was mixed. Additionally, it was found that obese individuals report significantly more executive function difficulties on the BRIEF-A than the normative sample. Further, obese with BED report more executive function difficulties than those without. This study shows some evidence of sound psychometric properties of the BRIEF-A in an obese sample, however more research is required to understand the nature of executive function being measured. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which often are concurrent and manipulate complex data structures must be extremely reliable. We present a novel framework based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
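
    A toy rendition of the path-exploration half of symbolic execution (far simpler than the framework above: one integer input, no heap, and brute-force search over a small domain standing in for the decision procedure): every combination of branch outcomes yields a path condition, and a satisfying input becomes a test case.

        def explore(branch_preds):
            """Enumerate path conditions: one (predicate, outcome) list per path."""
            paths = [[]]
            for pred in branch_preds:
                paths = [pc + [(pred, taken)]
                         for pc in paths for taken in (True, False)]
            return paths

        def solve(path_condition, domain=range(-10, 11)):
            """Stand-in for a decision procedure: search a small domain."""
            for x in domain:
                if all(pred(x) == taken for pred, taken in path_condition):
                    return x
            return None    # unsatisfiable over the search domain

        branches = [lambda x: x > 0, lambda x: x % 2 == 0]
        for pc in explore(branches):
            outcomes = [taken for _, taken in pc]
            print(f"branch outcomes {outcomes} -> test input x = {solve(pc)}")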

  1. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.

    1996-12-17

    A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.

  2. Reliable multicast protocol specifications protocol operations

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd; Whetten, Brian

    1995-01-01

    This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.
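
    State tables of this kind translate naturally into a dispatch structure keyed by (state, event kind). A sketch using RMP's four event kinds (the state and handler names are invented for illustration, not taken from the specification):

        def ignore(ctx):
            return ctx["state"]

        def deliver(ctx):
            return "NORMAL"

        def reform(ctx):
            return "REFORMATION"

        # (state, event kind) -> handler; RMP events are arriving packets,
        # expired alarms, user events, or exceptional conditions.
        TABLE = {
            ("NORMAL", "packet"):      deliver,
            ("NORMAL", "alarm"):       reform,    # e.g. missed acknowledgements
            ("NORMAL", "user"):        deliver,
            ("NORMAL", "exception"):   reform,
            ("REFORMATION", "packet"): ignore,
        }

        def step(state, event_kind):
            handler = TABLE.get((state, event_kind), ignore)
            return handler({"state": state})

        print(step("NORMAL", "alarm"))   # -> REFORMATION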

  3. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a potential fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted: typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than the usual static probability-derived end states.
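
    The time-dependent end states the authors describe can be approximated by direct Monte Carlo over a small semi-Markov model (all rates, branch probabilities, and states below are invented for illustration):

        import random

        def run(mission_time=1000.0):
            t, state = 0.0, "nominal"
            while t < mission_time:
                if state == "nominal":
                    t += random.expovariate(1 / 400.0)   # sojourn until anomaly
                    state = "anomaly"
                else:                                    # anomaly: branch point
                    t += random.expovariate(1 / 20.0)    # sojourn while responding
                    if t >= mission_time:
                        return "survived"
                    if random.random() < 0.9:
                        state = "nominal"                # recovered
                    else:
                        return "failed"                  # event-tree end state
            return "survived"

        random.seed(4)
        trials = 20_000
        p_fail = sum(run() == "failed" for _ in range(trials)) / trials
        print(f"P(failure before mission end) ~ {p_fail:.3f}")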

  4. 5 CFR 2502.32 - Procedure in the event of a demand for disclosure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Procedure in the event of a demand for disclosure. 2502.32 Section 2502.32 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF... Other Authorities § 2502.32 Procedure in the event of a demand for disclosure. (a) Whenever a demand is...

  5. 5 CFR 2502.32 - Procedure in the event of a demand for disclosure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Procedure in the event of a demand for disclosure. 2502.32 Section 2502.32 Administrative Personnel OFFICE OF ADMINISTRATION, EXECUTIVE OFFICE OF... Other Authorities § 2502.32 Procedure in the event of a demand for disclosure. (a) Whenever a demand is...

  6. Program For Simulation Of Trajectories And Events

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1992-01-01

    Universal Simulation Executive (USE) program accelerates and eases the generation of application programs for numerical simulation of continuous trajectories interrupted by or containing discrete events. Developed for simulation of multiple spacecraft trajectories, with such events as one spacecraft crossing the equator, two spacecraft meeting or parting, or the firing of a rocket engine. USE has also been used to simulate the operation of a chemical batch-processing factory. Written in Ada.

  7. 12 CFR 225.171 - What are the limitations on managing or operating a portfolio company held as a merchant banking...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...?—(1) Examples of routine management or operation—(i) Executive officer interlocks at the portfolio... hiring officers or employees other than executive officers. (2) Presumptions of routine management or... paragraph (e) of this section. (2) Covenants or other provisions regarding extraordinary events. A financial...

  8. [Level of reading skills as a measure of cognitive reserve in elderly adults].

    PubMed

    Soto-Añari, Marcio; Flores-Valdivia, Gilda; Fernández-Guinea, Sara

    2013-01-16

    Cognitive reserve modulates between neurodegenerative processes and the clinical manifestations of cognitive impairment and dementia. This construct is associated with the capacity to optimise the execution of tasks by recruiting neuronal networks and with the use of alternative cognitive strategies that would be mediated by formal educational processes. To analyse the level of reading skills as a measure of cognitive reserve and as a reliable predictor of performance in tests for evaluating different cognitive domains. The sample consisted of 87 healthy subjects who were asked to complete the Word Naming test as an indicator of the level of reading skills; this allowed us to divide the sample into subjects with a low and a high level of reading ability. A broad neuropsychological battery was then applied. The subjects with a low level of reading skills displayed lower general cognitive performance, reduced processing speed and cognitive deficits. Furthermore, the level of reading skills is a better predictor of performance in executive functions and general cognitive performance than the variables age, years of schooling and education. The level of reading skills has shown itself to be a good measure of cognitive reserve and a reliable predictor of executive and cognitive functioning in ageing.

  9. Identification of Deep Earthquakes

    DTIC Science & Technology

    2010-09-01

    discriminants that will reliably separate small, crustal earthquakes (magnitudes less than about 4 and depths less than about 40 to 50 km) from small...characteristics on discrimination plots designed to separate nuclear explosions from crustal earthquakes. Thus, reliably flagging these small, deep events is...Further, reliably identifying subcrustal earthquakes will allow us to eliminate deep events (previously misidentified as crustal earthquakes) from

  10. A Six-Month Cognitive-Motor and Aerobic Exercise Program Improves Executive Function in Persons with an Objective Cognitive Impairment: A Pilot Investigation Using the Antisaccade Task.

    PubMed

    Heath, Matthew; Weiler, Jeffrey; Gregory, Michael A; Gill, Dawn P; Petrella, Robert J

    2016-10-04

    Persons with an objective cognitive impairment (OCI) are at increased risk for progression to Alzheimer's disease and related dementias. The present pilot project sought to examine whether participation in a long-term exercise program involving cognitive-motor (CM) dual-task gait training and aerobic exercise training improves executive function in persons with an OCI. To accomplish our objective, individuals with an OCI (n = 12) as determined by a Montreal Cognitive Assessment (MoCA) score of less than 26 and older adults (n = 11) deemed to be cognitively healthy (i.e., control group: MoCA score ≥26) completed a six-month moderate-to-high intensity (65-85% maximum heart rate) treadmill-based CM and aerobic exercise training program wherein pre- and post-intervention executive control was examined via the antisaccade task. Notably, antisaccades require a goal-directed eye-movement mirror-symmetrical to a target and represent an ideal tool for the study of executive deficits because of its hands- and language-free nature. As well, the cortical networks mediating antisaccades represent regions associated with neuropathology in cognitive decline and dementia (e.g., dorsolateral prefrontal cortex). Results showed that antisaccade reaction times for the OCI group reliably decreased by 30 ms from pre- to post-intervention, whereas the control group did not produce a reliable pre- to post-intervention change in reaction time (i.e., 6 ms). Thus, we propose that in persons with OCI long-term CM and aerobic training improves the efficiency and effectiveness of the executive mechanisms mediating high-level oculomotor control.

  11. Alfred P. Southwick, MDS, DDS: dental practitioner, educator and originator of electrical executions.

    PubMed

    Christen, A G; Christen, J A

    2000-11-01

    The search for a modern, humane method of criminal execution was triggered by a freak accident which occurred in Buffalo, New York in 1881. Dr. Alfred P. Southwick (a former steam-boat engineer, noted dentist and dental educator) happened to witness an intoxicated man die after he inadvertently touched a live generator terminal. Southwick's initial reaction was shock. Later, as he pondered this tragic event, he concluded that electrocution was, at least, a quick and seemingly painless way to depart from this earth. As his thoughts turned to common methods of capital punishment, Alfred concluded that death by electrocution could become a more humane alternative, as compared with the more grisly methods (e.g., hanging, beheading by guillotine, garroting, suffocation and flaying). Working through the governor of New York and the state legislature, Southwick originated and successfully promoted the passage of laws which mandated electrical executions in New York and in approximately 20 other states. During 1888-1889, Southwick served on the state's three-person Electrical Death Commission, a group who reported that electrical execution was superior to all other methods. On January 1, 1889, the world's first electrical execution law went into effect. On August 6, 1890, William Francis Kemmler, who had murdered his mistress, was the first person to die in the electric chair. However, this public event became an amateurish spectacle: the initial surge of current did not cause Kemmler's immediate death and a second jolt was needed. Those who witnessed this bungled execution were stunned. Graphic and detailed criticism from both the press and the general public ran high. However, Dr. Southwick vigorously continued to support and finally achieve his goal--to humanize capital punishment through the legal use of electrical execution.

  12. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different from the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, Jonathan L.; Tuffner, Francis K.; Hadley, Mark D.

    This document contains the Integrated Assessment Plan (IAP) for the Phase 2 Operational Demonstration (OD) of the Smart Power Infrastructure Demonstration for Energy Reliability (SPIDERS) Joint Capability Technology Demonstration (JCTD) project. SPIDERS will be conducted over a three-year period, with Phase 2 being conducted at Fort Carson, Colorado. This document includes the Operational Demonstration Execution Plan (ODEP) and the Operational Assessment Execution Plan (OAEP), as approved by the Operational Manager (OM) and the Integrated Management Team (IMT). The ODEP describes the process by which the OD is conducted, and the OAEP describes the process by which the data collected from the OD are processed. The execution of the OD in accordance with the ODEP, and the subsequent execution of the OAEP, will generate the necessary data for the Quick Look Report (QLR) and the Utility Assessment Report (UAR). These reports will assess the ability of the SPIDERS JCTD to meet the four critical requirements listed in the Implementation Directive (ID).

  14. An expert system executive for automated assembly of large space truss structures

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1993-01-01

    Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.

  15. Monitoring robot actions for error detection and recovery

    NASA Technical Reports Server (NTRS)

    Gini, M.; Smith, R.

    1987-01-01

    Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe a preliminary experiment with a system that they designed and constructed.

  16. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  17. Automatic generation of efficient orderings of events for scheduling applications

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1994-01-01

In scheduling a set of tasks, it is often not known with certainty how long a given event will take. We call this duration uncertainty. Duration uncertainty is a primary obstacle to the successful completion of a schedule. If the duration of one task is longer than expected, the remaining tasks are delayed. The delay may result in the abandonment of the schedule itself, a phenomenon known as schedule breakage. One response to schedule breakage is on-line, dynamic rescheduling. A more recent alternative is called proactive rescheduling. This method uses statistical data about the durations of events in order to anticipate the locations in the schedule where breakage is likely prior to the execution of the schedule. It generates alternative schedules at such sensitive points, which can then be applied by the scheduler at execution time, without the delay incurred by dynamic rescheduling. This paper proposes a technique for making proactive error management more effective. The technique is based on applying a similarity-based method of clustering to the problem of identifying similar events in a set of events.
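
    The clustering step can be made concrete with a small sketch: group events whose duration statistics look alike, and treat high-variability groups as the points where contingency schedules are worth generating. The similarity measure, thresholds, and event names below are invented for illustration and are not taken from the paper:

        # Group events by similarity of their duration statistics, then flag
        # high-variability groups as candidate breakage points for which
        # proactive alternative schedules could be generated.
        from statistics import mean, stdev

        def duration_profile(samples):
            m = mean(samples)
            return m, stdev(samples) / m  # (mean, coefficient of variation)

        def cluster_events(events, cv_tol=0.05):
            """events: dict name -> list of observed durations. Two events are
            'similar' here if their coefficients of variation differ by less
            than cv_tol (a deliberately simple similarity measure)."""
            clusters = []
            for name, samples in events.items():
                _, cv = duration_profile(samples)
                for c in clusters:
                    if abs(c["cv"] - cv) < cv_tol:
                        c["members"].append(name)
                        break
                else:
                    clusters.append({"cv": cv, "members": [name]})
            return clusters

        events = {
            "fuel_load": [30, 31, 29, 30],  # stable duration
            "checkout":  [45, 60, 38, 70],  # volatile -> plan alternatives here
            "data_dump": [20, 27, 15, 31],  # volatile
        }
        for c in cluster_events(events):
            print(c["members"], "volatile" if c["cv"] > 0.15 else "stable")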

  18. Robust, Radiation Tolerant Command and Data Handling and Power System Electronics for SmallSats

    NASA Technical Reports Server (NTRS)

    Nguyen, Hanson Cao; Fraction, James

    2018-01-01

    In today's budgetary environment, there is significant interest within the National Aeronautics and Space Administration (NASA) to enable small robotic science missions that can be executed faster and cheaper than previous larger missions. To help achieve this, focus has shifted from using exclusively radiation-tolerant or radiation-hardened parts to using more commercial-off-the-shelf (COTS) components for NASA small satellite missions that can last at least one year in orbit. However, there are some portions of a spacecraft's avionics, such as the Command and Data Handling (C&DH) subsystem and the Power System Electronics (PSE) that need to have a higher level of reliability that goes beyond what is attainable with currently available COTS parts. While there are a number of COTS components that can withstand a total ionizing dose (TID) of tens or hundreds of kilorads, there is still a great deal of concern about tolerance to and mitigation of single-event effects (SEE).

  19. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    PubMed

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCIs skills. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Surveillance data management system

    NASA Astrophysics Data System (ADS)

    Teague, Ralph

    2002-10-01

On October 8, 2001, an Executive Order was signed creating the White House Office of Homeland Security. With its formation comes focused attention on setting goals and priorities for homeland security. Analysis, preparation, and implementation of strategies will hinge not only on how information is collected and analyzed but, more important, on how it is coordinated and shared. Military installations and facilities, public safety agencies, airports, federal and local offices, public utilities, harbors, transportation, and other critical areas must work either independently or as a team to ensure the safety of our citizens and visitors. In this new era of increased security, the key to interoperation is continuous information exchange: events must be rapidly identified, reported, and responded to by the appropriate agencies. For instance, when a threat has been detected, security officers must be immediately alerted and must have access to the type of threat, its location, movement, heading, and size, as well as the type of support required, in order to respond accordingly. This requires instant communications and teamwork, with reliable and flexible technology.

  1. Motor resonance facilitates movement execution: an ERP and kinematic study

    PubMed Central

    Ménoret, Mathilde; Curie, Aurore; des Portes, Vincent; Nazir, Tatjana A.; Paulignan, Yves

    2013-01-01

    Action observation, simulation and execution share neural mechanisms that allow for a common motor representation. It is known that when these overlapping mechanisms are simultaneously activated by action observation and execution, motor performance is influenced by observation and vice versa. To understand the neural dynamics underlying this influence and to measure how variations in brain activity impact the precise kinematics of motor behavior, we coupled kinematics and electrophysiological recordings of participants while they performed and observed congruent or non-congruent actions or during action execution alone. We found that movement velocities and the trajectory deviations of the executed actions increased during the observation of congruent actions compared to the observation of non-congruent actions or action execution alone. This facilitation was also discernible in the motor-related potentials of the participants; the motor-related potentials were transiently more negative in the congruent condition around the onset of the executed movement, which occurred 300 ms after the onset of the observed movement. This facilitation seemed to depend not only on spatial congruency but also on the optimal temporal relationship of the observation and execution events. PMID:24133437

  2. Neural Correlates of Action Observation and Execution in 14-Month-Old Infants: An Event-Related EEG Desynchronization Study

    ERIC Educational Resources Information Center

    Marshall, Peter J.; Young, Thomas; Meltzoff, Andrew N.

    2011-01-01

    There is increasing interest in neurobiological methods for investigating the shared representation of action perception and production in early development. We explored the extent and regional specificity of EEG desynchronization in the infant alpha frequency range (6-9 Hz) during action observation and execution in 14-month-old infants.…

  3. The Impact of Religiously Affiliated Universities and Courses in Ethics and Religious Studies on Students' Attitude toward Business Ethics

    ERIC Educational Resources Information Center

    Comegys, Charles

    2010-01-01

Unfortunate unethical events are continuing in the business arena, and now more than ever these judgmental shortcomings focus attention on the ethics of business executives. Thus, colleges and universities must continue to address business ethics as they prepare and train the next generation of executives. Educational institutions should be…

  4. Empirical Evaluation of Conservative and Optimistic Discrete Event Execution on Cloud and VM Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations gives rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveals interesting insights into the new VM-PDES dynamics that come into play and also leads to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
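
    The counter-intuitive cost result lends itself to a small worked example. The prices and runtimes below are invented placeholders, not figures from the study:

        # Illustrative cost/runtime trade-off for hosting a PDES run.
        # All hourly prices and runtimes are made-up placeholders.
        configs = {
            #              $ per hour   runtime (hours)
            "high_end_vm": (4.00,       2.0),
            "mid_vm":      (1.20,       3.5),
            "low_end_vm":  (0.40,       5.0),
        }
        for name, (price, hours) in configs.items():
            print(f"{name:12s} runtime={hours:4.1f} h  cost=${price * hours:5.2f}")
        # The cheapest total cost here is the low-end VM ($2.00), even though
        # the high-end VM finishes 2.5x faster -- mirroring the finding that
        # the fastest configuration need not be the most economical one.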

  5. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to dramatically increase the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes exceeding several hundreds of millions of individuals in the largest cases are successfully exercised to verify model scalability.
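
    The rollback mechanism described above pairs each forward event with a reverse handler that exactly undoes it. The toy below is a generic sketch of reverse computation, not code from the epidemic model:

        # Toy reverse computation: each event's forward handler has a reverse
        # handler that undoes it, so an optimistic simulator can roll back
        # past a straggler without checkpointing the full state.
        state = {"infected": 10}

        def infect(state, k):          # forward event
            state["infected"] += k

        def reverse_infect(state, k):  # exact inverse of the forward event
            state["infected"] -= k

        processed = []                 # (event_time, arg) log for rollback

        def process(event_time, k):
            infect(state, k)
            processed.append((event_time, k))

        def rollback(to_time):
            """Undo, in reverse order, every event later than to_time."""
            while processed and processed[-1][0] > to_time:
                _, k = processed.pop()
                reverse_infect(state, k)

        process(1, 3); process(5, 2); process(9, 4)   # speculative execution
        rollback(4)                                   # straggler at t=4 arrives
        print(state["infected"])                      # back to 13 (10 + 3)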

  6. The GOES-R Product Generation Architecture - Post CDR Update

    NASA Astrophysics Data System (ADS)

    Dittberner, G.; Kalluri, S.; Weiner, A.

    2012-12-01

The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
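
    A schematic rendering of the Executive/Dispatcher/Strategy split described above may help; the class responsibilities follow the abstract, but all method names and the toy algorithm are assumptions made for illustration:

        # Sketch of the Service Based Architecture roles: Strategy decides when
        # enough data is available, Dispatcher feeds data to the algorithm, and
        # Executive wraps the algorithm as a managed, event-driven service.
        class Strategy:
            def __init__(self, required_inputs):
                self.required = set(required_inputs)
            def ready(self, available):
                return self.required.issubset(available)

        class Dispatcher:
            def __init__(self):
                self.inbox = {}
            def deliver(self, name, data):
                self.inbox[name] = data

        class Executive:
            def __init__(self, algorithm, strategy, dispatcher):
                self.algorithm, self.strategy, self.dispatcher = algorithm, strategy, dispatcher
            def on_event(self):
                if self.strategy.ready(self.dispatcher.inbox.keys()):
                    return self.algorithm(self.dispatcher.inbox)
                return None  # keep waiting for more instrument data

        cloud_mask = Executive(
            algorithm=lambda inputs: f"cloud mask from {sorted(inputs)}",
            strategy=Strategy({"band_2", "band_14"}),
            dispatcher=Dispatcher(),
        )
        cloud_mask.dispatcher.deliver("band_2", "...")
        print(cloud_mask.on_event())      # None -- band_14 has not arrived yet
        cloud_mask.dispatcher.deliver("band_14", "...")
        print(cloud_mask.on_event())      # algorithm executes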

  7. The GOES-R Product Generation Architecture

    NASA Astrophysics Data System (ADS)

    Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.

    2011-12-01

The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.

  8. A Scientific Data Provenance API for Distributed Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raju, Bibi; Elsethagen, Todd O.; Stephan, Eric G.

Data provenance has been an active area of research as a means to standardize how the origin of data, process event history, and what or who was responsible for influencing results is explained. There are two approaches to capture provenance information. The first approach is to collect observed evidence produced by an executing application using log files, event listeners, and temporary files that are used by the application or application developer. The provenance translated from these observations is an interpretation of the provided evidence. The second approach is called disclosed because the application provides a firsthand account of the provenance based on the anticipated questions on data flow, process flow, and responsible agents. Most observed provenance collection systems collect a large amount of provenance information during an application run or workflow execution. The common trend in capturing provenance is to collect all possible information, then attempt to find relevant information, which is not efficient. Existing disclosed provenance system APIs do not work well in distributed environments and have trouble finding where to fit the individual pieces of provenance information. This work focuses on determining more reliable solutions for provenance capture. As part of the Integrated End-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project, an API was developed, called Producer API (PAPI), which can disclose application-targeted provenance and is designed to work in distributed environments by means of unique object identification methods. The provenance disclosure approach used adds additional metadata to the provenance information to uniquely identify the pieces and connect them together. PAPI uses a common provenance model to support this provenance integration across disclosure sources. The API also provides the flexibility to let the user decide what to do with the collected provenance. The collected provenance can be sent to a triple store using REST services or it can be logged to a file.
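
    In use, a disclosed-provenance API of this kind might look like the following; the class and method names are hypothetical stand-ins, since the abstract does not spell out PAPI's actual interface:

        # Hypothetical disclosed-provenance sketch: the application itself
        # reports data flow, process flow, and responsible agents, tagging each
        # record with a unique ID so pieces from distributed tasks connect.
        import json
        import uuid

        class ProvenanceRecorder:
            def __init__(self):
                self.records = []
            def disclose(self, activity, used, generated, agent):
                rec = {
                    "id": str(uuid.uuid4()),  # unique ID links pieces across nodes
                    "activity": activity,
                    "used": used,
                    "generated": generated,
                    "agent": agent,
                }
                self.records.append(rec)
                return rec["id"]
            def dump(self, path):
                # a real system could instead push to a triple store via REST
                with open(path, "w") as f:
                    json.dump(self.records, f, indent=2)

        prov = ProvenanceRecorder()
        prov.disclose("preprocess", used=["raw.dat"],
                      generated=["clean.dat"], agent="worker-03")
        prov.disclose("simulate", used=["clean.dat"],
                      generated=["result.dat"], agent="worker-17")
        prov.dump("provenance.json")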

  9. Regional frontal gray matter volume associated with executive function capacity as a risk factor for vehicle crashes in normal aging adults.

    PubMed

    Sakai, Hiroyuki; Takahara, Miwa; Honjo, Naomi F; Doi, Shun'ichi; Sadato, Norihiro; Uchiyama, Yuji

    2012-01-01

    Although low executive functioning is a risk factor for vehicle crashes among elderly drivers, the neural basis of individual differences in this cognitive ability remains largely unknown. Here we aimed to examine regional frontal gray matter volume associated with executive functioning in normal aging individuals, using voxel-based morphometry (VBM). To this end, 39 community-dwelling elderly volunteers who drove a car on a daily basis participated in structural magnetic resonance imaging, and completed two questionnaires concerning executive functioning and risky driving tendencies in daily living. Consequently, we found that participants with low executive function capacity were prone to risky driving. Furthermore, VBM analysis revealed that lower executive function capacity was associated with smaller gray matter volume in the supplementary motor area (SMA). Thus, the current data suggest that SMA volume is a reliable predictor of individual differences in executive function capacity as a risk factor for vehicle crashes among elderly persons. The implication of our results is that regional frontal gray matter volume might underlie the variation in driving tendencies among elderly drivers. Therefore, detailed driving behavior assessments might be able to detect early neurodegenerative changes in the frontal lobe in normal aging adults.

  10. A new approach to power quality and electricity reliability monitoring-case study illustrations of the capabilities of the I-GridTM system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divan, Deepak; Brumsickle, William; Eto, Joseph

    2003-04-01

This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: Linking measured power quality events to process interruption and downtime; Demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and Benchmarking power quality and reliability on a firm and regional basis.

  11. Executive functioning impairment in women treated with chemotherapy for breast cancer: a systematic review.

    PubMed

    Yao, Christie; Bernstein, Lori J; Rich, Jill B

    2017-11-01

    Women with breast cancer have reported adverse cognitive effects following chemotherapy. Evidence is mixed on whether executive functioning is particularly impaired in women treated with chemotherapy, in part due to the wide range of tasks used to measure executive processes. We performed a systematic review of the published literature to evaluate whether some subcomponents of executive functioning are more vulnerable to impairment than others among breast cancer survivors who had been treated with chemotherapy. Studies published as of April 2017 were identified using three electronic databases (MEDLINE, PsycINFO, and Web of Science) and a manual search of relevant reference lists. The methodological quality of included studies was assessed using a checklist of predefined criteria. Of 1280 identified articles, a total of 41 were included for review. Study findings were categorized into three primary subdomains of executive functioning: inhibition, shifting, and updating. Although there was heterogeneity in the neuropsychological measures used to assess executive functioning, tests could be grouped into the subcomponents they assessed. Inhibition appears relatively spared from the effects of chemotherapy, whereas impairments in shifting and updating are more commonly found following chemotherapy. Examination of subcomponents of executive functioning is recommended to better characterize the nature of executive dysfunction in women treated with chemotherapy. Future studies should include executive functioning tasks of varying complexity, use of multiple tasks to increase reliability, and alternative indices to capture performance, such as within-person variability.

  12. Development, Reliability, and Equivalence of an Alternate Form for the CQ Duty Performance-Based Measure

    DTIC Science & Technology

    2017-10-01

distinguishes between known groups of healthy control soldiers and those with traumatic brain injury. As such, the CQDT shows promise in helping to inform...healthy controls and SM with mild TBI. If we succeed in developing an equivalent alternate form, the CQD may be used to both identify executive

  13. The research and practice of spacecraft software engineering

    NASA Astrophysics Data System (ADS)

    Chen, Chengxin; Wang, Jinghua; Xu, Xiaoguang

    2017-06-01

In order to ensure the safety and reliability of spacecraft software products, it is necessary to execute engineering management. Firstly, the paper introduces the problems of unsystematic planning, unclear classification management, and discontinuous improvement mechanisms in domestic and foreign spacecraft software engineering management. Then, it proposes a solution for software engineering management based on a system-integrated ideology, from the perspective of the spacecraft system. Finally, an application to a spacecraft is given as an example. The research provides a reference for executing spacecraft software engineering management and improving software product quality.

  14. Robustness of reliable change indices to variability in Parkinson's disease with mild cognitive impairment.

    PubMed

    Turner, T H; Renfroe, J B; Elm, J; Duppstadt-Delambo, A; Hinson, V K

    2016-01-01

Ability to identify change is crucial for measuring response to interventions and tracking disease progression. Beyond psychometrics, investigations of Parkinson's disease with mild cognitive impairment (PD-MCI) must consider fluctuating medication, motor, and mental status. One solution is to employ 90% reliable change indices (RCIs) from test manuals to account for measurement error and practice effects. The current study examined robustness of 90% RCIs for 19 commonly used executive function tests in 14 PD-MCI subjects assigned to the placebo arm of a 10-week randomized controlled trial of atomoxetine in PD-MCI. Using 90% RCIs, the typical participant showed spurious improvement on one measure and spurious decline on another. Reliability estimates from healthy adult standardization samples and PD-MCI were similar. In contrast to healthy adult samples, practice effects were minimal in this PD-MCI group. Separate 90% RCIs based on the PD-MCI sample did not further reduce the error rate. In the present study, application of 90% RCIs based on healthy adults in standardization samples effectively reduced misidentification of change in a sample of PD-MCI. Our findings support continued application of 90% RCIs when using executive function tests to assess change in neurological populations with fluctuating status.
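
    For readers unfamiliar with the machinery, a conventional practice-adjusted reliable change index can be computed as below. The formula follows the standard Jacobson-Truax form with a practice-effect adjustment; the numbers are invented, not taken from the trial:

        # Practice-adjusted 90% reliable change index (RCI), standard form.
        # Illustrative numbers only; not data from the atomoxetine trial.
        import math

        def rci_90(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
            sem = sd_baseline * math.sqrt(1.0 - r_xx)  # standard error of measurement
            se_diff = math.sqrt(2.0 * sem ** 2)        # SE of the difference score
            z = (x2 - x1 - practice_effect) / se_diff
            return z, abs(z) > 1.645                   # 90% two-tailed cutoff

        # Retest score up 6 points on a test with SD 10, reliability .80,
        # and an expected practice gain of 2 points:
        z, changed = rci_90(x1=50, x2=56, sd_baseline=10, r_xx=0.80, practice_effect=2)
        print(f"z = {z:.2f}, reliable change: {changed}")  # z = 0.63, False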

  15. Event specific qualitative and quantitative polymerase chain reaction detection of genetically modified MON863 maize based on the 5'-transgene integration sequence.

    PubMed

    Yang, Litao; Xu, Songci; Pan, Aihu; Yin, Changsong; Zhang, Kewei; Wang, Zhenying; Zhou, Zhigang; Zhang, Dabing

    2005-11-30

Because of the genetically modified organism (GMO) labeling policies issued in many countries and areas, polymerase chain reaction (PCR) methods (screening, gene-specific, construct-specific, and event-specific) were developed to support the execution of GMO labeling policies and have become a mainstay of GMO detection. The event-specific PCR detection method is the primary trend in GMO detection because of its high specificity, which is based on the flanking sequence of the exogenous integrant. The genetically modified maize MON863 contains a Cry3Bb1 coding sequence that produces a protein with enhanced insecticidal activity against the coleopteran pest, corn rootworm. In this study, the 5'-integration junction sequence between the host plant DNA and the integrated gene construct of the genetically modified maize MON863 was revealed by means of thermal asymmetric interlaced PCR, and specific PCR primers and a TaqMan probe were designed based upon the revealed 5'-integration junction sequence; conventional qualitative PCR and quantitative TaqMan real-time PCR detection methods employing these primers and probes were successfully developed. In the conventional qualitative PCR assay, the limit of detection (LOD) was 0.1% for MON863 in 100 ng of maize genomic DNA per reaction. In the quantitative TaqMan real-time PCR assay, the LOD and the limit of quantification were eight and 80 haploid genome copies, respectively. In addition, three mixed maize samples with known MON863 contents were detected using the established real-time PCR systems, and the results indicated that the established event-specific real-time PCR detection systems were reliable, sensitive, and accurate.

  16. Deep Long-period Seismicity Beneath the Executive Committee Range, Marie Byrd Land, Antarctica, Studied Using Subspace Detection

    NASA Astrophysics Data System (ADS)

    Aster, R. C.; McMahon, N. D.; Myers, E. K.; Lough, A. C.

    2015-12-01

    Lough et al. (2014) first detected deep sub-icecap magmatic events beneath the Executive Committee Range volcanoes of Marie Byrd Land. Here, we extend the identification and analysis of these events in space and time utilizing subspace detection. Subspace detectors provide a highly effective methodology for studying events within seismic swarms that have similar moment tensor and Green's function characteristics and are particularly effective for identifying low signal-to-noise events. Marie Byrd Land (MBL) is an extremely remote continental region that is nearly completely covered by the West Antarctic Ice Sheet (WAIS). The southern extent of Marie Byrd Land lies within the West Antarctic Rift System (WARS), which includes the volcanic Executive Committee Range (ECR). The ECR shows north-to-south progression of volcanism across the WARS during the Holocene. In 2013, the POLENET/ANET seismic data identified two swarms of seismic activity in 2010 and 2011. These events have been interpreted as deep, long-period (DLP) earthquakes based on depth (25-40 km) and low frequency content. The DLP events in MBL lie beneath an inferred sub-WAIS volcanic edifice imaged with ice penetrating radar and have been interpreted as a present location of magmatic intrusion. The magmatic swarm activity in MBL provides a promising target for advanced subspace detection and temporal, spatial, and event size analysis of an extensive deep long period earthquake swarm using a remote seismographic network. We utilized a catalog of 1,370 traditionally identified DLP events to construct subspace detectors for the six nearest stations and analyzed two years of data spanning 2010-2011. Association of these detections into events resulted in an approximate ten-fold increase in number of locatable earthquakes. In addition to the two previously identified swarms during early 2010 and early 2011, we find sustained activity throughout the two years of study that includes several previously unidentified periods of heightened activity. Correlation with large global earthquakes suggests that the DLP activity is not sensitive to remote teleseismic triggering.
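
    A bare-bones version of subspace detection: build an orthonormal basis from aligned template waveforms via the SVD, then slide it over continuous data and measure the fraction of window energy the subspace captures. This is the generic technique in miniature, on synthetic data, not the authors' code:

        # Minimal subspace detector sketch (generic method, illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        def build_subspace(templates, dim):
            """templates: (n_events, n_samples) aligned waveforms.
            Returns an orthonormal basis (n_samples, dim) from the SVD."""
            X = templates - templates.mean(axis=1, keepdims=True)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return vt[:dim].T

        def detect(trace, basis, threshold=0.7):
            n = basis.shape[0]
            hits = []
            for i in range(len(trace) - n + 1):
                w = trace[i:i + n]
                energy = float(w @ w)
                if energy == 0.0:
                    continue
                proj = basis.T @ w
                stat = float(proj @ proj) / energy  # energy fraction in subspace
                if stat > threshold:
                    hits.append((i, round(stat, 3)))
            return hits

        # Synthetic demo: bury a scaled template in noise and recover it.
        template = np.sin(np.linspace(0, 6 * np.pi, 120)) * np.hanning(120)
        basis = build_subspace(np.vstack([template, 0.8 * template]), dim=1)
        trace = 0.05 * rng.standard_normal(2000)
        trace[700:820] += 0.5 * template
        print(detect(trace, basis)[:3])  # detections cluster near sample 700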

  17. 29 CFR 18.803 - Hearsay exceptions; availability of declarant immaterial.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., or immediately thereafter. (2) Excited utterance. A statement relating to a startling event or condition made while the declarant was under the stress of excitement caused by the event or condition. (3... remembered or believed unless it relates to the execution, revocation, identification, or terms of declarant...

  18. The effect of early life stress on the cognitive phenotype of children with an extra X chromosome (47,XXY/47,XXX).

    PubMed

    van Rijn, Sophie; Barneveld, Petra; Descheemaeker, Mie-Jef; Giltay, Jacques; Swaab, Hanna

    2018-02-01

    Studies on gene-environment interactions suggest that some individuals may be more susceptible to life adversities than others due to their genetic profile. This study assesses whether or not children with an extra X chromosome are more vulnerable to the negative impact of early life stress on cognitive functioning than typically-developing children. A total of 50 children with an extra X chromosome and 103 non-clinical controls aged 9 to 18 years participated in the study. Cognitive functioning in domains of language, social cognition and executive functioning were assessed. Early life stress was measured with the Questionnaire of Life Events. High levels of early life stress were found to be associated with compromised executive functioning in the areas of mental flexibility and inhibitory control, irrespective of group membership. In contrast, the children with an extra X chromosome were found to be disproportionally vulnerable to deficits in social cognition on top of executive dysfunction, as compared to typically-developing children. Within the extra X group the number of negative life events is significantly correlated with more problems in inhibition, mental flexibility and social cognition. It is concluded that children with an extra X chromosome are vulnerable to adverse life events, with social cognition being particularly impacted in addition to the negative effects on executive functioning. The findings that developmental outcome is codependent on early environmental factors in genetically vulnerable children also underscores opportunities for training and support to positively influence the course of development.

  19. Reliability of risk-adjusted outcomes for profiling hospital surgical quality.

    PubMed

    Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B

    2014-05-01

    Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
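
    The reliability quantity used in studies like this is typically the ratio of true between-hospital variance to total observed variance, which makes the caseload and event-rate dependence explicit. In the sketch below, the caseloads and event rates echo the abstract, but the between-hospital variance is an invented placeholder:

        # Reliability = signal / (signal + noise): between-hospital variance
        # over between-hospital variance plus within-hospital sampling noise.
        # between_hosp_var is invented to mirror the qualitative pattern.
        def outcome_reliability(event_rate, caseload, between_hosp_var):
            sampling_var = event_rate * (1 - event_rate) / caseload  # binomial noise
            return between_hosp_var / (between_hosp_var + sampling_var)

        # AAA repair: few cases, modest event rate -> low reliability
        print(round(outcome_reliability(0.183, 25, 0.002), 2))
        # Colon resection: more cases, higher event rate -> higher reliability
        print(round(outcome_reliability(0.268, 114, 0.002), 2))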

  20. Test-retest reliability of the trauma and life events self-report inventory.

    PubMed

    Hovens, J E; Bramsen, I; van der Ploeg, H M; Reuling, I E

    2000-12-01

    Three groups of first-year male and female medical students (total N = 90) completed the Trauma and Life Events Self-report Inventory twice. Test-retest reliability for the three different time periods was .82, .89, and .75, respectively.

  1. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
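
    For context, the Standard 1366 segmentation is the "2.5 beta" method: fit a log-normal distribution to daily SAIDI history and flag days above exp(alpha + 2.5*beta). The sketch below implements that textbook form on toy data, not the authors' evaluation code:

        # IEEE 1366 "2.5 beta" major event day (MED) identification, textbook
        # form. The standard prescribes five years of daily SAIDI history;
        # the toy data here is invented and zeros are simply skipped.
        import math

        def med_threshold(daily_saidi):
            logs = [math.log(v) for v in daily_saidi if v > 0]
            n = len(logs)
            alpha = sum(logs) / n
            beta = math.sqrt(sum((x - alpha) ** 2 for x in logs) / (n - 1))
            return math.exp(alpha + 2.5 * beta)

        history = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 9.5, 1.2, 0.95, 1.05]
        t_med = med_threshold(history)
        print(f"T_MED = {t_med:.2f}")
        print("major event days:", [v for v in history if v > t_med])  # [9.5]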

  2. Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs

    NASA Astrophysics Data System (ADS)

    Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian

    2011-02-01

As technology scales, more and more memory cells can be placed in a die, so the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Multibit error correction codes (MECCs) are an effective approach to mitigating MBUs in memories. To evaluate the robustness of protected memories, reliability models have been widely studied; instead of irradiation experiments, such models can be used to quickly evaluate the reliability of memories early in the design. To build an accurate model, several situations should be considered. Firstly, when MBUs are present in memories, the errors induced by several events may overlap each other, which is more frequent than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models that consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and the event angle are considered in the model, which yields a more precise calculation of mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
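
    As a simple grounding of the modeling problem, the probability that a SEC-DED-protected word accumulates an uncorrectable error pattern between scrubs can be written down directly. The sketch uses that standard combinatorial form with invented parameters and deliberately ignores the adjacency and angle effects that the paper's model adds:

        # Probability a SEC-DED word is uncorrectable (>= 2 bit errors) after
        # accumulating independent errors over one scrub interval. Invented
        # parameters; the paper's model adds MBU adjacency and event angle.
        def p_word_uncorrectable(p_bit, word_bits=72):  # 64 data + 8 check bits
            p0 = (1 - p_bit) ** word_bits                             # no errors
            p1 = word_bits * p_bit * (1 - p_bit) ** (word_bits - 1)   # one error
            return 1 - p0 - p1

        for p_bit in (1e-9, 1e-7, 1e-5):
            print(f"p_bit={p_bit:.0e}  "
                  f"P(uncorrectable word)={p_word_uncorrectable(p_bit):.3e}")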

  3. Promise-based management: the essence of execution.

    PubMed

    Sull, Donald N; Spinosa, Charles

    2007-04-01

Critical initiatives stall for a variety of reasons: employee disengagement, a lack of coordination between functions, complex organizational structures that obscure accountability, and so on. To overcome such obstacles, managers must fundamentally rethink how work gets done. Most of the challenges stem from broken or poorly crafted commitments. That's because every company is, at its heart, a dynamic network of promises made between employees and colleagues, customers, outsourcing partners, or other stakeholders. Executives can overcome many problems in the short term and foster productive, reliable workforces over the long term by practicing what the authors call "promise-based management," which involves cultivating and coordinating commitments in a systematic way. Good promises share five qualities: They are public, active, voluntary, explicit, and mission based. To develop and execute an effective promise, the "provider" and the "customer" in the deal should go through three phases of conversation. The first, achieving a meeting of minds, entails exploring the fundamental questions of coordinated effort: What do you mean? Do you understand what I mean? What should I do? What will you do? Who else should we talk to? In the next phase, making it happen, the provider executes on the promise. In the final phase, closing the loop, the customer publicly declares that the provider has either delivered the goods or failed to do so. Leaders must weave and manage their webs of promises with great care, encouraging iterative conversation and making sure commitments are fulfilled reliably. If they do, they can enhance coordination and cooperation among colleagues, build the organizational agility required to seize new business opportunities, and tap employees' entrepreneurial energies.

  4. System on chip module configured for event-driven architecture

    DOEpatents

    Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.

    2017-10-17

A system on chip (SoC) module is described herein, wherein the SoC module comprises a processor subsystem and a hardware logic subsystem. The processor subsystem and hardware logic subsystem are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors; both conform to an event-driven architecture, such that the software actors and the hardware actors each receive and generate event messages.
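
    A toy rendering of the event-driven actor pattern the patent describes, with actors exchanging event messages through a queue; the "hardware actor" is merely simulated in software here, and all names are illustrative:

        # Event-driven actors exchanging event messages via a message queue.
        from collections import deque

        class Actor:
            def __init__(self, name, handler, bus):
                self.name, self.handler, self.bus = name, handler, bus
            def receive(self, event):
                for out in self.handler(event) or []:
                    self.bus.publish(out)

        class EventBus:
            def __init__(self):
                self.queue, self.subscribers = deque(), {}
            def subscribe(self, topic, actor):
                self.subscribers.setdefault(topic, []).append(actor)
            def publish(self, event):
                self.queue.append(event)
            def run(self):
                while self.queue:
                    event = self.queue.popleft()
                    for actor in self.subscribers.get(event["topic"], []):
                        actor.receive(event)

        bus = EventBus()
        # simulated "hardware" actor: raises an alarm when a sample is too high
        sensor = Actor("adc", lambda e: [{"topic": "alarm", "value": e["value"]}]
                       if e["value"] > 100 else None, bus)
        logger = Actor("logger", lambda e: print("ALARM:", e["value"]), bus)
        bus.subscribe("sample", sensor)
        bus.subscribe("alarm", logger)
        bus.publish({"topic": "sample", "value": 130})
        bus.run()   # -> ALARM: 130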

  5. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    PubMed

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.

  6. Neural correlates of tactile perception during pre-, peri-, and post-movement.

    PubMed

    Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte

    2016-05-01

    Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.

  7. Urban Maglev Technology Development Program : Colorado Maglev Project : part 1 : executive summary of final report

    DOT National Transportation Integrated Search

    2004-06-01

    The overall objective of the urban maglev transit technology development program is to develop magnetic levitation technology that is a cost effective, reliable, and environmentally sound transit option for urban mass transportation in the United Sta...

  8. 48 CFR 41.202 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and/or distribution service, quality assurance, system reliability, system operation and maintenance... CONTRACTING ACQUISITION OF UTILITY SERVICES Acquiring Utility Services 41.202 Procedures. (a) Prior to executing a utility service contract, the contracting officer shall comply with parts 6 and 7 and 41.201 (d...

  9. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... talks on HPC Reliability, Diffusion on Complex Networks, and Reversible Software Execution Systems Report from Applied Math Workshop on Mathematics for the Analysis, Simulation, and Optimization of Complex Systems Report from ASCR-BES Workshop on Data Challenges from Next Generation Facilities Public...

  10. Discrete event simulation tool for analysis of qualitative models of continuous processing systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)

    1990-01-01

    An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
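
    The event-queue mechanics underlying a tool of this kind are compact. Below is a generic discrete event simulation core (not the patented tool itself), with a continuous act modeled discretely as an invocation, an effect, and a time delay:

        # Generic discrete event simulation core: a time-ordered event queue,
        # handlers that may schedule further events, and a loop that drains it.
        import heapq

        class Simulator:
            def __init__(self):
                self.now, self.queue, self.seq = 0.0, [], 0
            def schedule(self, delay, handler, *args):
                self.seq += 1  # tie-breaker keeps equal-time events ordered
                heapq.heappush(self.queue, (self.now + delay, self.seq, handler, args))
            def run(self, until=float("inf")):
                while self.queue and self.queue[0][0] <= until:
                    self.now, _, handler, args = heapq.heappop(self.queue)
                    handler(self, *args)

        def valve_opens(sim, tank):
            print(f"t={sim.now:4.1f}  {tank}: valve opens")
            sim.schedule(5.0, valve_closes, tank)  # continuous act as a delay

        def valve_closes(sim, tank):
            print(f"t={sim.now:4.1f}  {tank}: valve closes")

        sim = Simulator()
        sim.schedule(2.0, valve_opens, "tank-A")
        sim.run(until=20.0)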

  11. ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures are tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
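
    The Monte Carlo core of a RAM simulator like this can be suggested in a few lines: draw exponential failure and repair times for a block and accumulate uptime over the mission. This is a generic sketch with invented rates, not ETARA's APL2 code:

        # Monte Carlo availability of a single repairable block over a mission:
        # alternate exponential failure and repair draws, accumulate uptime.
        import random

        def simulate_availability(mtbf, mttr, mission_hours, trials=10_000, seed=1):
            rng = random.Random(seed)
            total_up = 0.0
            for _ in range(trials):
                t = up = 0.0
                while t < mission_hours:
                    ttf = rng.expovariate(1.0 / mtbf)       # time to failure
                    up += min(ttf, mission_hours - t)
                    t += ttf + rng.expovariate(1.0 / mttr)  # plus repair time
                total_up += up
            return total_up / (trials * mission_hours)

        # Block with 1,000 h MTBF and 10 h MTTR over a one-year mission;
        # the result approaches the steady-state value MTBF/(MTBF+MTTR) ~ 0.99.
        print(f"simulated availability ~= {simulate_availability(1000, 10, 8760):.4f}")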

  12. Reliability and validity of neurobehavioral function on the Psychology Experimental Building Language test battery in young adults

    PubMed Central

    Mueller, Shane T.; Geerken, Alexander R.; Dixon, Kyle L.; Kroliczak, Gregory; Olsen, Reid H.J.; Miller, Jeremy K.

    2015-01-01

Background. The Psychology Experiment Building Language (PEBL) software consists of over one-hundred computerized tests based on classic and novel cognitive neuropsychology and behavioral neurology measures. Although the PEBL tests are becoming more widely utilized, there is currently very limited information about the psychometric properties of these measures. Methods. Study I examined inter-relationships among nine PEBL tests including indices of motor-function (Pursuit Rotor and Dexterity), attention (Test of Attentional Vigilance and Time-Wall), working memory (Digit Span Forward), and executive-function (PEBL Trail Making Test, Berg/Wisconsin Card Sorting Test, Iowa Gambling Test, and Mental Rotation) in a normative sample (N = 189, ages 18–22). Study II evaluated test–retest reliability with a two-week inter-test interval between administrations in a separate sample (N = 79, ages 18–22). Results. Moderate intra-test, but low inter-test, correlations were observed and ceiling/floor effects were uncommon. Sex differences were identified on the Pursuit Rotor (Cohen’s d = 0.89) and Mental Rotation (d = 0.31) tests. The correlation between the test and retest was high for tests of motor learning (Pursuit Rotor time on target r = .86) and attention (Test of Attentional Vigilance response time r = .79), intermediate for memory (digit span r = .63) but lower for the executive function indices (Wisconsin/Berg Card Sorting Test perseverative errors = .45, Tower of London moves = .15). Significant practice effects were identified on several indices of executive function. Conclusions. These results are broadly supportive of the reliability and validity of individual PEBL tests in this sample. These findings indicate that the freely downloadable, open-source PEBL battery (http://pebl.sourceforge.net) is a versatile research tool to study individual differences in neurocognitive performance. PMID:26713233

  13. Reliability and validity of neurobehavioral function on the Psychology Experimental Building Language test battery in young adults.

    PubMed

    Piper, Brian J; Mueller, Shane T; Geerken, Alexander R; Dixon, Kyle L; Kroliczak, Gregory; Olsen, Reid H J; Miller, Jeremy K

    2015-01-01

Background. The Psychology Experiment Building Language (PEBL) software consists of over one-hundred computerized tests based on classic and novel cognitive neuropsychology and behavioral neurology measures. Although the PEBL tests are becoming more widely utilized, there is currently very limited information about the psychometric properties of these measures. Methods. Study I examined inter-relationships among nine PEBL tests including indices of motor-function (Pursuit Rotor and Dexterity), attention (Test of Attentional Vigilance and Time-Wall), working memory (Digit Span Forward), and executive-function (PEBL Trail Making Test, Berg/Wisconsin Card Sorting Test, Iowa Gambling Test, and Mental Rotation) in a normative sample (N = 189, ages 18-22). Study II evaluated test-retest reliability with a two-week inter-test interval between administrations in a separate sample (N = 79, ages 18-22). Results. Moderate intra-test, but low inter-test, correlations were observed and ceiling/floor effects were uncommon. Sex differences were identified on the Pursuit Rotor (Cohen's d = 0.89) and Mental Rotation (d = 0.31) tests. The correlation between the test and retest was high for tests of motor learning (Pursuit Rotor time on target r = .86) and attention (Test of Attentional Vigilance response time r = .79), intermediate for memory (digit span r = .63) but lower for the executive function indices (Wisconsin/Berg Card Sorting Test perseverative errors = .45, Tower of London moves = .15). Significant practice effects were identified on several indices of executive function. Conclusions. These results are broadly supportive of the reliability and validity of individual PEBL tests in this sample. These findings indicate that the freely downloadable, open-source PEBL battery (http://pebl.sourceforge.net) is a versatile research tool to study individual differences in neurocognitive performance.

  14. Teleradiology in Southeast Iran: Evaluating the Views of Senior Executives and Radiologists.

    PubMed

    Sadoughi, Farahnaz; Erfannia, Leila; Sancholi, Mahboobe; Salmani, Fatemeh; Sarsarshahi, Aida

Teleradiology is considered one of the important forms of telemedicine. Positive views among the users and providers of these services play an important role in its successful implementation. The aim of this study was to investigate the views of radiologists working in the radiology departments of teaching hospitals of the Zahedan University of Medical Sciences regarding teleradiology, to evaluate the feasibility of implementing teleradiology in these hospitals from the viewpoint of chief executive officers, and to compare the two sets of views. The current cross-sectional research was performed in 2014 at Zahedan teaching hospitals. The views of 13 chief executive officers on the feasibility of implementing teleradiology and of 26 radiologists on the teleradiology process were evaluated by means of two valid and reliable questionnaires. The results of the research revealed that most of the radiologists had knowledge of and positive opinions about teleradiology. Conversely, the chief executive officers considered implementation of these processes infeasible in the studied hospitals. Addressing issues such as data security, controlling or restricting access to patients' clinical information during the teleradiology process, providing legal protection for the participating radiologists, and constituting executive teams in the organization along with financial support, thereby securing the backing of the chief executive officers as the main sponsors of teleradiology implementation in the teaching hospitals, are all guidelines for improving the successful implementation of teleradiology.

  15. Verb bias and verb-specific competition effects on sentence production

    PubMed Central

    Thothathiri, Malathi; Evans, Daniel G.; Poudel, Sonali

    2017-01-01

    How do speakers choose between structural options for expressing a given meaning? Overall preference for some structures over others as well as prior statistical association between specific verbs and sentence structures (“verb bias”) are known to broadly influence language use. However, the effects of prior statistical experience on the planning and execution of utterances and the mechanisms that facilitate structural choice for verbs with different biases have not been fully explored. In this study, we manipulated verb bias for English double-object (DO) and prepositional-object (PO) dative structures: some verbs appeared solely in the DO structure (DO-only), others solely in PO (PO-only) and yet others equally in both (Equi). Structural choices during subsequent free-choice sentence production revealed the expected dispreference for DO overall but critically also a reliable linear trend in DO production that was consistent with verb bias (DO-only > Equi > PO-only). Going beyond the general verb bias effect, three results suggested that Equi verbs, which were associated equally with the two structures, engendered verb-specific competition and required additional resources for choosing the dispreferred DO structure. First, DO production with Equi verbs but not the other verbs correlated with participants’ inhibition ability. Second, utterance duration prior to the choice of a DO structure showed a quadratic trend (DO-only < Equi > PO-only) with the longest durations for Equi verbs. Third, eye movements consistent with reimagining the event also showed a quadratic trend (DO-only < Equi > PO-only) prior to choosing DO, suggesting that participants used such recall particularly for Equi verbs. Together, these analyses of structural choices, utterance durations, eye movements and individual differences in executive functions shed light on the effects of verb bias and verb-specific competition on sentence production and the role of different executive functions in choosing between sentence structures. PMID:28672009

  16. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
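
    The abstract does not spell out the transformation itself; a minimal sketch of one plausible reading (all numbers hypothetical), in which an exponential reliability model is re-parameterized from time to particle fluence via the device SEU cross section:

    ```python
    import math

    sigma = 1e-8        # hypothetical SEU cross section (cm^2 per device)
    flux = 5.0          # hypothetical particle flux (particles / cm^2 / s)
    mission_s = 3.15e7  # mission duration: about one year in seconds

    # Time domain: R(t) = exp(-lambda * t), with lambda = sigma * flux.
    lam = sigma * flux
    r_time = math.exp(-lam * mission_s)

    # Fluence domain: the same model with fluence Phi = flux * t as the
    # independent variable, R(Phi) = exp(-sigma * Phi).
    phi = flux * mission_s
    r_fluence = math.exp(-sigma * phi)

    print(r_time, r_fluence)  # identical by construction
    ```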

  17. Multigroup confirmatory factor analysis and structural invariance with age of the Behavior Rating Inventory of Executive Function (BRIEF)--French version.

    PubMed

    Fournet, Nathalie; Roulin, Jean-Luc; Monnier, Catherine; Atzeni, Thierry; Cosnefroy, Olivier; Le Gall, Didier; Roy, Arnaud

    2015-01-01

    The parent and teacher forms of the French version of the Behavioral Rating Inventory of Executive Function (BRIEF) were used to evaluate executive function in everyday life in a large sample of healthy children (N = 951) aged between 5 and 18. Several psychometric methods were applied, with a view to providing clinicians with tools for score interpretation. The parent and teacher forms of the BRIEF were acceptably reliable. Demographic variables (such as age and gender) were found to influence the BRIEF scores. Confirmatory factor analysis was then used to test five competing models of the BRIEF's latent structure. Two of these models (a three-factor model and a two-factor model, both based on a nine-scale structure) had a good fit. However, structural invariance with age was only obtained with the two-factor model. The French version of the BRIEF provides a useful measure of everyday executive function and can be recommended for use in clinical research and practice.

  18. Modeling Criterion Shifts and Target Checking in Prospective Memory Monitoring

    ERIC Educational Resources Information Center

    Horn, Sebastian S.; Bayen, Ute J.

    2015-01-01

    Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the…

  19. Volunteer Motivations at a National Special Olympics Event

    ERIC Educational Resources Information Center

    Khoo, Selina; Engelhorn, Rich

    2011-01-01

    Understanding the motivations for people to volunteer with the management and execution of major sporting events is important for the recruitment and retention of the volunteers. This research investigated volunteer motivations at the first National Special Olympics held in Ames, Iowa, USA in July 2006. A total of 289 participants completed the 28…

  20. Temporal and Resource Reasoning for Planning, Scheduling and Execution in Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Hunsberger, Luke; Tsamardinos, Ioannis

    2005-01-01

    This viewgraph tutorial reviews methods for planning and scheduling events, using several examples of scheduling events for the successful and timely completion of the overall plan. Using constraint-based models, it covers planning with time, time representations in problem solving, and resource reasoning.
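
    The slides themselves are not reproduced here; as one concrete illustration of constraint-based time representation, a minimal Simple Temporal Network consistency check via all-pairs shortest paths (a standard technique, not necessarily the one used in the tutorial):

    ```python
    import itertools

    INF = float("inf")

    def stn_consistent(n, constraints):
        """constraints maps (i, j) to an upper bound on t_j - t_i; the STN
        is consistent iff its distance graph has no negative cycle."""
        d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
        for (i, j), ub in constraints.items():
            d[i][j] = min(d[i][j], ub)
        for k, i, j in itertools.product(range(n), repeat=3):  # Floyd-Warshall
            if d[i][k] + d[k][j] < d[i][j]:
                d[i][j] = d[i][k] + d[k][j]
        return all(d[i][i] >= 0 for i in range(n))

    # Event 1 occurs 10-20 time units after event 0; event 2 at most 5 after
    # event 1 but at least 30 after event 0 -- jointly inconsistent.
    c = {(0, 1): 20, (1, 0): -10, (1, 2): 5, (2, 0): -30}
    print(stn_consistent(3, c))  # False
    ```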

  1. An Advanced Simulation Framework for Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Li, P. P.; Tyrrell, R. Yeung D.; Adhami, N.; Li, T.; Henry, H.

    1994-01-01

    Discrete-event simulation (DEVS) users have long faced a three-way trade-off among execution time, model fidelity, and the number of objects simulated. Because of the limits of computer processing power, the analyst is often forced to settle for less than the desired performance in one or more of these areas.
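
    A minimal sketch of the event-loop core whose cost drives that trade-off (illustrative only, not the framework described in the record):

    ```python
    import heapq

    def simulate(initial_events, horizon):
        """Run a discrete-event simulation until the clock passes `horizon`.
        Each event is (time, seq, handler); handlers may schedule new events."""
        seq, queue = 0, []

        def schedule(t, handler):
            nonlocal seq
            heapq.heappush(queue, (t, seq, handler))
            seq += 1

        for t, h in initial_events:
            schedule(t, h)
        while queue:
            now, _, handler = heapq.heappop(queue)
            if now > horizon:
                break
            handler(now, schedule)

    # A self-rescheduling "tick": execution time grows with the number of
    # simulated objects and the fidelity (work) of each handler.
    def tick(now, schedule):
        print(f"tick at {now:.1f}")
        schedule(now + 1.0, tick)

    simulate([(0.0, tick)], horizon=3.0)
    ```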

  2. TraceContract

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Barringer, Howard

    2012-01-01

    TraceContract is an API (Application Programming Interface) for trace analysis. A trace is a sequence of events, and can, for example, be generated by a running program instrumented appropriately to generate events. An event can be any data object. An example of a trace is a log file containing events that a programmer has found important to record during a program execution. TraceContract takes as input such a trace together with a specification formulated using the API and reports on any violations of the specification, potentially calling code (reactions) to be executed when violations are detected. The software is developed as an internal DSL (Domain Specific Language) in the Scala programming language. Scala is a relatively new programming language that is specifically convenient for defining such internal DSLs due to a number of language characteristics, including Scala's elegant combination of object-oriented and functional programming, a succinct notation, and an advanced type system. The DSL offers a novel combination of data-parameterized state machines and temporal logic. As an extension of Scala, it is a very expressive and convenient log file analysis framework.
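
    TraceContract's actual API is a Scala DSL and is not reproduced here; a minimal Python sketch of the underlying idea, a data-parameterized monitor checking a temporal property over a log of events (the event names are hypothetical):

    ```python
    # Property: every ("open", resource) event must eventually be followed
    # by a ("close", resource) event for the same resource.
    def check_open_close(trace):
        open_resources, violations = set(), []
        for kind, res in trace:
            if kind == "open":
                open_resources.add(res)
            elif kind == "close":
                if res not in open_resources:
                    violations.append(f"close without open: {res}")
                open_resources.discard(res)
        violations += [f"never closed: {r}" for r in sorted(open_resources)]
        return violations

    log = [("open", "A"), ("open", "B"), ("close", "A"), ("close", "C")]
    print(check_open_close(log))  # ['close without open: C', 'never closed: B']
    ```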

  3. Determining minimum staffing levels during snowstorms using an integrated simulation, regression, and reliability model.

    PubMed

    Kunkel, Amber; McLay, Laura A

    2013-03-01

    Emergency medical services (EMS) provide life-saving care and hospital transport to patients with severe trauma or medical conditions. Severe weather events, such as snow events, may lead to adverse patient outcomes by increasing call volumes and service times. Adequate staffing levels during such weather events are critical for ensuring that patients receive timely care. To determine staffing levels that depend on weather, we propose a model that uses a discrete event simulation of a reliability model to identify minimum staffing levels that provide timely patient care, with regression used to provide the input parameters. The system is said to be reliable if there is a high degree of confidence that ambulances can immediately respond to a given proportion of patients (e.g., 99 %). Four weather scenarios capture varying levels of snow falling and snow on the ground. An innovative feature of our approach is that we evaluate the mitigating effects of different extrinsic response policies and intrinsic system adaptation. The models use data from Hanover County, Virginia to quantify how snow reduces EMS system reliability and necessitates increasing staffing levels. The model and its analysis can assist in EMS preparedness by providing a methodology to adjust staffing levels during weather events. A key observation is that when it is snowing, intrinsic system adaptation has similar effects on system reliability as one additional ambulance.
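
    A minimal sketch of the reliability criterion, not the authors' model: simulate random call arrivals and service times under a weather scenario, then find the smallest fleet for which the target fraction of calls finds an ambulance immediately available (all rates hypothetical):

    ```python
    import heapq, random

    def fraction_served(n_ambulances, call_rate, mean_service, n_calls, seed=1):
        rng = random.Random(seed)
        free_at = [0.0] * n_ambulances   # earliest time each unit is free
        heapq.heapify(free_at)
        t, served = 0.0, 0
        for _ in range(n_calls):
            t += rng.expovariate(call_rate)        # next call arrives
            soonest = heapq.heappop(free_at)
            if soonest <= t:                       # a unit is idle: respond
                served += 1
                heapq.heappush(free_at, t + rng.expovariate(1 / mean_service))
            else:                                  # no immediate response
                heapq.heappush(free_at, soonest)
        return served / n_calls

    # Snow scenario: elevated call rate and longer service times (illustrative).
    for n in range(1, 10):
        if fraction_served(n, call_rate=0.5, mean_service=4.0, n_calls=20000) >= 0.99:
            print("minimum staffing:", n)
            break
    ```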

  4. Modulation of Higher-Order Olfaction Components on Executive Functions in Humans.

    PubMed

    Fagundo, Ana B; Jiménez-Murcia, Susana; Giner-Bartolomé, Cristina; Islam, Mohammed Anisul; de la Torre, Rafael; Pastor, Antoni; Casanueva, Felipe F; Crujeiras, Ana B; Granero, Roser; Baños, Rosa; Botella, Cristina; Fernández-Real, Jose M; Frühbeck, Gema; Gómez-Ambrosi, Javier; Menchón, José M; Tinahones, Francisco J; Fernández-Aranda, Fernando

    2015-01-01

    The prefrontal (PFC) and orbitofrontal cortex (OFC) appear to be associated with both executive functions and olfaction. However, there is little data relating olfactory processing and executive functions in humans. The present study aimed at exploring the role of olfaction in executive functioning, making a distinction between primary and more cognitive aspects of olfaction. Three executive tasks of similar difficulty were used: one to assess hot executive functions (Iowa Gambling Task, IGT), and two as measures of cold executive functioning (Stroop Colour and Word Test, SCWT, and Wisconsin Card Sorting Test, WCST). Sixty-two healthy participants were included: 31 with normosmia and 31 with hyposmia. Olfactory abilities were assessed using the "Sniffin' Sticks" test, and olfactory threshold, odour discrimination, and odour identification measures were obtained. All participants were female, aged between 18 and 60. Results showed that participants with hyposmia displayed worse performance in decision making (IGT; Cohen's d = 0.91) and cognitive flexibility (WCST; Cohen's d between 0.54 and 0.68) compared to those with normosmia. Multiple regression adjusted for the covariates participants' age and education level showed a positive association between odour identification and the cognitive inhibition response (SCWT interference; Beta = 0.29; p = .034). Odour discrimination capacity was not a predictor of cognitive executive performance. Our results suggest that both hot and cold executive functions are associated with higher-order olfactory functioning in humans. These results robustly support the hypothesis that olfaction and executive measures have a common neural substrate in the PFC and OFC, and suggest that olfaction might be a reliable cognitive marker in psychiatric and neurologic disorders.

  5. Night-to-night arousal variability and interscorer reliability of arousal measurements.

    PubMed

    Loredo, J S; Clausen, J L; Ancoli-Israel, S; Dimsdale, J E

    1999-11-01

    Measurement of arousals from sleep is clinically important; however, their definition is not well standardized, and little data exist on reliability. The purpose of this study was to determine factors that affect arousal scoring reliability and night-to-night arousal variability. Night-to-night arousal variability and interscorer reliability were assessed in 20 subjects with and without obstructive sleep apnea undergoing attended polysomnography during two consecutive nights. Five definitions of arousal were studied, assessing duration of electroencephalographic (EEG) frequency changes, increases in electromyographic (EMG) activity and leg movement, and association with respiratory events, as well as the American Sleep Disorders Association (ASDA) definition of arousals. Interscorer reliability varied with the definition of arousal and ranged from an intraclass correlation (ICC) of 0.19 to 0.92. Arousals that included increases in EMG activity or leg movement had the greatest reliability, especially when associated with respiratory events (ICC 0.76 to 0.92). The ASDA arousal definition had high interscorer reliability (ICC 0.84). Reliability was lowest for arousals consisting of EEG changes lasting <3 seconds (ICC 0.19 to 0.37). The within-subjects night-to-night arousal variability was low for all arousal definitions. In a heterogeneous population, interscorer arousal reliability is enhanced by increases in EMG activity, leg movements, and respiratory events, and decreased by short-duration EEG arousals. The night-to-night variability of the arousal index was low for all definitions.
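
    A minimal sketch of the intraclass correlation used for interscorer agreement above (two-way random effects, single measures, ICC(2,1); the scores are illustrative, not the study's data):

    ```python
    import numpy as np

    def icc_2_1(x):
        """x: n_subjects x k_scorers matrix of arousal indices."""
        n, k = x.shape
        grand = x.mean()
        ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        ss_err = ((x - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    scores = np.array([[12., 14.], [30., 33.], [7., 6.], [22., 25.], [15., 15.]])
    print(round(icc_2_1(scores), 2))
    ```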

  6. On-time reliability impacts of advanced traveler information services (ATIS) : Washington, DC case study, executive summary

    DOT National Transportation Integrated Search

    1999-05-01

    This report documents the development and testing of a Surveillance and Delay Advisory System (SDAS) for application in congested rural areas. SDAS included several techniques that could be used on rural highways to give travelers advance information...

  7. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    PubMed Central

    Howard, Steven J.; Melhuish, Edward

    2016-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children’s emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children’s abilities to enable research and educational applications. PMID:28503022

  8. Analyzing Test-As-You-Fly Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.

  9. Characterization of System on a Chip (SoC) Single Event Upset (SEU) Responses Using SEU Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.

  10. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
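
    BEE's own tooling is not shown in the record; a minimal sketch of the general pattern it describes, launching one containerized workflow step with Docker and appending provenance metadata about how it was launched (the image name, command, and log file are hypothetical):

    ```python
    import datetime, json, subprocess

    def run_step(image, command, log="provenance.json"):
        """Run one workflow step in a container and record its launch."""
        record = {
            "image": image,
            "command": command,
            "started": datetime.datetime.utcnow().isoformat(),
        }
        result = subprocess.run(
            ["docker", "run", "--rm", image, *command],
            capture_output=True, text=True,
        )
        record["returncode"] = result.returncode
        record["finished"] = datetime.datetime.utcnow().isoformat()
        with open(log, "a") as f:
            f.write(json.dumps(record) + "\n")
        return result.stdout

    # Hypothetical step: a simulation image with a post-processing entry point.
    print(run_step("example/sim:1.0", ["post-process", "--input", "run42"]))
    ```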

  11. Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool

    NASA Astrophysics Data System (ADS)

    Ahsan, Muhammad

    The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. Detailed investigation of how performance varies with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling a quantum circuit onto a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify the system components that crucially determine the overall performance. Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while success probability is uniformly determined by the fidelity of physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct a meaningful arithmetic circuit such as a quantum adder, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become the major performance-limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit that comprises the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with problem size describes the effectiveness of the chosen design. By matching the resource investment to the pace of advancement in hardware technology, we find optimal designs for different types of quantum adders. Finally, we show that the 2,048-bit Shor's algorithm can be reliably executed within a resource budget of 1.5 million qubits.
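
    A minimal back-of-envelope sketch of the scaling claim that success probability is uniformly determined by operation fidelity (all numbers hypothetical):

    ```python
    # Success probability of a circuit with n_ops physical operations,
    # each succeeding independently with fidelity f.
    def circuit_success(f, n_ops):
        return f ** n_ops

    # Per-operation fidelity needed for a target overall success probability.
    def required_fidelity(p_target, n_ops):
        return p_target ** (1.0 / n_ops)

    print(circuit_success(0.9999, 50_000))   # ~0.007: far too many faulty ops
    print(required_fidelity(0.9, 50_000))    # ~0.999998 fidelity per operation
    ```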

  12. A mechanism for efficient debugging of parallel programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, B.P.; Choi, J.D.

    1988-01-01

    This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the co-operating processes.

  13. Validation of the PedsQL Epilepsy Module: A pediatric epilepsy-specific health-related quality of life measure.

    PubMed

    Modi, Avani C; Junger, Katherine F; Mara, Constance A; Kellermann, Tanja; Barrett, Lauren; Wagner, Janelle; Mucci, Grace A; Bailey, Laurie; Almane, Dace; Guilfoyle, Shanna M; Urso, Lauryn; Hater, Brooke; Hustzi, Heather; Smith, Gigi; Herrmann, Bruce; Perry, M Scott; Zupanc, Mary; Varni, James W

    2017-11-01

    To validate a brief and reliable epilepsy-specific, health-related quality of life (HRQOL) measure in children with various seizure types, treatments, and demographic characteristics. This national validation study was conducted across five epilepsy centers in the United States. Youth 5-18 years and caregivers of youth 2-18 years diagnosed with epilepsy completed the PedsQL Epilepsy Module and additional questionnaires to establish the reliability and validity of the epilepsy-specific HRQOL instrument. Demographic and medical data were collected through chart reviews. Factor analysis was conducted, and internal consistency (Cronbach's alpha), test-retest reliability, and construct validity were assessed. Questionnaires were analyzed from 430 children with epilepsy (M age = 9.9 years; range 2-18 years; 46% female; 62% white, non-Hispanic; 76% monotherapy; 54% active seizures) and their caregivers. The final PedsQL Epilepsy Module is a 29-item measure with five subscales (i.e., Impact, Cognitive, Sleep, Executive Functioning, and Mood/Behavior) with parallel child and caregiver reports. Internal consistency coefficients ranged from 0.70-0.94. Construct validity and convergence were demonstrated in several ways, including strong relationships with seizure outcomes, antiepileptic drug (AED) side effects, and well-established measures of executive, cognitive, and emotional/behavioral functioning. The PedsQL Epilepsy Module is a reliable measure of HRQOL with strong evidence of its validity across the epilepsy spectrum in both clinical and research settings. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
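
    A minimal sketch of the internal-consistency statistic reported above (Cronbach's alpha; the item scores are illustrative, not PedsQL data):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: n_respondents x k_items score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    scores = [[3, 4, 3, 5], [2, 2, 3, 2], [4, 5, 5, 4], [1, 2, 1, 2], [3, 3, 4, 4]]
    print(round(cronbach_alpha(scores), 2))
    ```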

  14. Persian version of frontal assessment battery: Correlations with formal measures of executive functioning and providing normative data for Persian population.

    PubMed

    Asaadi, Sina; Ashrafi, Farzad; Omidbeigi, Mahmoud; Nasiri, Zahra; Pakdaman, Hossein; Amini-Harandi, Ali

    2016-01-05

    Cognitive impairment in patients with Parkinson's disease (PD) mainly involves executive function (EF). The frontal assessment battery (FAB) is an efficient tool for the assessment of EFs. The aims of this study were to determine the validity and reliability of the psychometric properties of the Persian version of the FAB, assess its correlation with formal measures of EFs, and provide normative data for the Persian version of the FAB in patients with PD. The study recruited 149 healthy participants and 49 patients with idiopathic PD. In PD patients, FAB results were compared with their performance on EF tests. Reliability analysis involved test-retest reliability and internal consistency, whereas validity analysis involved a convergent validity approach. FAB scores were compared between normal controls and PD patients matched for age, education, and Mini-Mental State Examination (MMSE) score. In PD patients, FAB scores were significantly decreased compared to normal controls, and correlated with the Stroop test and the Wisconsin Card Sorting Test (WCST). In healthy subjects, FAB scores varied according to age, education, and MMSE. In the FAB subtest analysis, the performance of PD patients was worse than that of the healthy participants on similarities, fluency tasks, and Luria's motor series. The Persian version of the FAB can be used as a reliable scale for the assessment of frontal lobe functions in Iranian patients with PD. Furthermore, the normative data provided for the Persian version of this test improve the accuracy and confidence of the clinical application of the FAB.

  15. Long-term prospective memory impairment following mild traumatic brain injury with loss of consciousness: findings from the Canadian Longitudinal Study on Aging.

    PubMed

    Bedard, Marc; Taler, Vanessa; Steffener, Jason

    2017-12-18

    We aimed to examine the extent to which loss of consciousness (LOC) following mild traumatic brain injury (mTBI) may be associated with impairments in time- and event-based prospective memory (PM). PM is thought to involve executive processes and to be subserved by prefrontal regions. Neuroimaging research suggests alterations to these areas of the brain several years after mTBI, particularly if LOC was experienced. However, it remains unclear whether impairments in time- or event-based functioning persist more than a year after mTBI, and what the link with duration of LOC may be. Analyses were run on data from the Canadian Longitudinal Study on Aging, a nationwide study on health and aging involving individuals aged 45-85. The present sample comprised 1937 participants who had experienced mTBI more than 12 months prior (1146 of whom reported spending less than 1 min unconscious and 791 of whom had LOC between 1 and 20 min) and 13,525 cognitively healthy adults. Participants were administered the Miami Prospective Memory Test and tests of retrospective memory and executive functioning. Both mTBI groups were impaired in time-based PM relative to people with no history of TBI. Time- and event-based impairments were predicted by older age, and by executive dysfunction among those who spent more time unconscious. Those with mTBI with LOC may experience impairments in PM, particularly in conditions of high demand on executive processes (time-based PM). Implications for interventions aimed at ameliorating PM among those who have experienced mTBI are discussed.

  16. 2015 NREL Photovoltaic Reliability Workshops | Photovoltaic Research | NREL

    Science.gov Websites

    The 2015 NREL Photovoltaic Reliability Workshop was held February 24-27, 2015, in Golden, Colorado. Presentations from this event will be available for download as soon as possible.

  17. Night-to-night variability of muscle tone, movements, and vocalizations in patients with REM sleep behavior disorder.

    PubMed

    Cygan, Fanny; Oudiette, Delphine; Leclair-Visonneau, Laurène; Leu-Semenescu, Smaranda; Arnulf, Isabelle

    2010-12-15

    The video-polysomnographic criteria of REM sleep behavior disorder (RBD) have not been well described. We evaluated the between-night reproducibility of phasic and tonic enhanced muscle activity during REM sleep as well as the associated behaviors and vocalizations of the patients. Fifteen patients with clinical RBD underwent two consecutive video-polysomnographies. The amount of excessive phasic and tonic chin muscle activity during REM sleep was measured in 15 patients in 3-sec mini-epochs. The time spent with motor (minor, major, complex, and scenic) or vocal (sounds, mumblings, and comprehensible speeches) events was measured in 7 patients during REM sleep. There was a good between-night agreement for tonic (Spearman rho = 0.55, p = 0.03; Kendall tau = 0.48, p = 0.01) but not for phasic (rho = 0.47, p = 0.1; tau = 0.31, p = 0.1) excessive chin muscle activity. On the video and audio recordings, the minor RBD behaviors tended to occur more frequently during the second night than the first, whereas the patients spoke longer during the first than the second night. The excessive tonic activity during REM sleep is a reliable marker of RBD. It could represent the extent of dysfunction in the permissive atonia systems. In contrast, the more variable phasic activity and motor/vocal events could be more dependent on dream content (executive systems).

  18. Time-Based and Event-Based Prospective Memory in Autism Spectrum Disorder: The Roles of Executive Function and Theory of Mind, and Time-Estimation

    ERIC Educational Resources Information Center

    Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher

    2013-01-01

    Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21…

  19. Load power device and system for real-time execution of hierarchical load identification algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yi; Madane, Mayura Arun; Zambare, Prachi Suresh

    A load power device includes a power input; at least one power output for at least one load; and a plurality of sensors structured to sense voltage and current at the at least one power output. A processor is structured to provide real-time execution of: (a) a plurality of load identification algorithms, and (b) event detection and operating mode detection for the at least one load.
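
    The patent does not disclose its algorithms; a minimal sketch of one common approach to the event detection it mentions, flagging step changes in real power computed from the sensed voltage and current (the window and threshold are hypothetical):

    ```python
    def detect_power_events(voltage, current, window=3, threshold=50.0):
        """Return sample indices where mean real power steps by > threshold W."""
        power = [v * i for v, i in zip(voltage, current)]
        events = []
        for n in range(window, len(power) - window):
            before = sum(power[n - window:n]) / window
            after = sum(power[n:n + window]) / window
            if abs(after - before) > threshold:
                events.append(n)  # consecutive hits would be merged in practice
        return events

    # A roughly 100 W load switching on at sample 10 (illustrative values).
    v = [120.0] * 20
    i = [0.5] * 10 + [1.33] * 10
    print(detect_power_events(v, i))
    ```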

  20. Intelligent Tutoring Methods for Optimizing Learning Outcomes with Embedded Training

    DTIC Science & Technology

    2009-01-01

    used to stimulate learning activities, from practice events with real-time coaching, to exercises with after action review. Particularly with free-play virtual...variations of correct performance can be prohibitive in free-play scenarios, and so for such conditions this has led to a state-based approach for...tiered logic that evaluates team member R for proper execution during free-play execution. In the first tier, the evaluation must know when it

  1. 12 CFR 985.8 - General duties of the OF board of directors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... chief executive officer for the OF and shall direct the implementation of the OF board of directors... market conditions, and the Banks' role as government-sponsored enterprises; (ii) Maintaining reliable... financial statements; (8) Select, evaluate, determine the compensation of, and, where appropriate, replace...

  2. IDENTIFICATION AND INTERPRETATION OF DEVELOPMENTAL NEUROTOXICITY EFFECTS: A REPORT FROM THE ILSI RESEARCH FOUNDATION/RISK SCIENCE INSTITUTE EXPERT WORKING GROUP ON NEURODEVELOPMENTAL ENDPOINTS

    EPA Science Inventory

    The reliable detection, measurement, and interpretation of treatment-related developmental neurotoxicity (DNT) effects depend on appropriate study design and execution, using scientifically established methodologies, with appropriate controls to minimize confounding factors. App...

  3. 78 FR 30733 - Modernizing Federal Infrastructure Review and Permitting Regulations, Policies, and Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-22

    ... Heads of Executive Departments and Agencies Reliable, safe, and resilient infrastructure is the backbone... and agencies (agencies) have achieved better outcomes for communities and the environment and realized... major infrastructure projects by half, while also improving outcomes for communities and the environment...

  4. Are there reliable changes in memory and executive functions after cognitive behavioural therapy in patients with obsessive-compulsive disorder?

    PubMed

    Vandborg, Sanne Kjær; Hartmann, Tue Borst; Bennedsen, Birgit Egedal; Pedersen, Anders Degn; Thomsen, Per Hove

    2015-01-01

    Patients with obsessive-compulsive disorder (OCD) have impaired memory and executive functions, but it is unclear whether these functions improve after cognitive behavioural therapy (CBT) of OCD symptoms. The primary aim of this study was to investigate whether memory and executive functions change after CBT in patients with OCD. We assessed 39 patients with OCD before and after CBT with neuropsychological tests of memory and executive functions. To correct for practice effects, 39 healthy controls (HCs) were assessed at two parallel time intervals with the neuropsychological tests. There were no changes in memory and executive functions after CBT in patients with OCD when results were corrected for practice effects. Patients performed worse on a test of visuospatial memory and organisational skills (Rey complex figure test [RCFT]) compared to HCs both before and after CBT (ps = .002-.036). The finding of persistent poor RCFT performances indicates that patients with OCD have impaired visuospatial memory and organisational skills that may be trait-related rather than state-dependent. These impairments may need to be considered in treatment. Our findings underline the importance of correcting for practice effects when investigating changes in cognitive functions.

  5. Pharyngeal pH alone is not reliable for the detection of pharyngeal reflux events: A study with oesophageal and pharyngeal pH-impedance monitoring

    PubMed Central

    Desjardin, Marie; Roman, Sabine; des Varannes, Stanislas Bruley; Gourcerol, Guillaume; Coffin, Benoit; Ropert, Alain; Mion, François

    2013-01-01

    Background Pharyngeal pH probes and pH-impedance catheters have been developed for the diagnosis of laryngo-pharyngeal reflux. Objective To determine the reliability of pharyngeal pH alone for the detection of pharyngeal reflux events. Methods 24-h pH-impedance recordings performed in 45 healthy subjects with a bifurcated probe for detection of pharyngeal and oesophageal reflux events were reviewed. Pharyngeal pH drops to below 4 and 5 were analysed for the simultaneous occurrence of pharyngeal reflux, gastro-oesophageal reflux, and swallows, according to impedance patterns. Results Only 7.0% of pharyngeal pH drops to below 5 identified with impedance corresponded to pharyngeal reflux, while 92.6% were related to swallows and 10.2 and 13.3% were associated with proximal and distal gastro-oesophageal reflux events, respectively. Of pharyngeal pH drops to below 4, 13.2% were related to pharyngeal reflux, 87.5% were related to swallows, and 18.1 and 21.5% were associated with proximal and distal gastro-oesophageal reflux events, respectively. Conclusions This study demonstrates that pharyngeal pH alone is not reliable for the detection of pharyngeal reflux and that adding distal oesophageal pH analysis is not helpful. The only reliable analysis should take into account impedance patterns demonstrating the presence of a pharyngeal reflux event preceded by distal and proximal reflux events within the oesophagus. PMID:24917995

  6. Self-regulation and the specificity of autobiographical memory in offenders.

    PubMed

    Neves, Daniela; Pinho, Maria S

    Certain clinical populations exhibit an Overgeneral Autobiographical Memory (OAM), characterized by difficulty remembering specific events. One study has observed OAM for positive events in a group of offenders. This study analyzed the stability of the valence effect in the OAM of offenders, the executive control impairments facilitating OAM in offenders, and the relationship of self-esteem and social desirability with AM specificity. The specificity (Autobiographical Memory Test) and emotional properties of the AMs of 59 prisoners (30 men, 29 women) and a control group (29 men, 30 women) were compared. Social desirability, depression symptoms, self-esteem, and executive functions (Mazes, Stroop, Verbal Fluency) were assessed. The offenders recalled fewer specific positive AMs than controls and, unlike the controls, did not perceive the emotional intensity of their negative AMs to decrease over time. The offenders' recall of specific negative AMs seemed to negatively influence their performance in the subsequent executive control tasks. Dysfunctional coping strategies in offenders were related to OAM, but social desirability and self-esteem were not. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. The origin of bursts and heavy tails in human dynamics.

    PubMed

    Barabási, Albert-László

    2005-05-12

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority-blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
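
    A minimal sketch of the decision-based queuing model described (priorities drawn at random, the highest-priority task executed with probability p): for p near 1 the recorded waiting times become heavy tailed, while p near 0 gives priority-blind, roughly uniform behaviour.

    ```python
    import random

    def waiting_times(p=0.99999, list_len=2, steps=200_000, seed=0):
        rng = random.Random(seed)
        tasks = [(rng.random(), 0) for _ in range(list_len)]  # (priority, age)
        waits = []
        for _ in range(steps):
            tasks = [(pr, age + 1) for pr, age in tasks]
            if rng.random() < p:   # execute the highest-priority task
                idx = max(range(list_len), key=lambda i: tasks[i][0])
            else:                  # priority-blind: execute a random task
                idx = rng.randrange(list_len)
            waits.append(tasks[idx][1])
            tasks[idx] = (rng.random(), 0)  # replace with a fresh task
        return waits

    w = waiting_times()
    print(max(w), sum(w) / len(w))  # a few very long waits dominate the mean
    ```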

  8. Reliability of cognitive tests of ELSA-Brasil, the brazilian longitudinal study of adult health

    PubMed Central

    Batista, Juliana Alves; Giatti, Luana; Barreto, Sandhi Maria; Galery, Ana Roscoe Papini; Passos, Valéria Maria de Azeredo

    2013-01-01

    Cognitive function evaluation entails the use of neuropsychological tests, applied exclusively or in sequence. The results of these tests may be influenced by factors related to the environment, the interviewer, or the interviewee. OBJECTIVES We examined the test-retest reliability of some tests of the Brazilian version of the Consortium to Establish a Registry for Alzheimer's Disease battery. METHODS The ELSA-Brasil is a multicentre study of civil servants (35-74 years of age) from public institutions across six Brazilian states. The same tests were applied, in a different order of appearance, by the same trained and certified interviewer, with an approximate 20-day interval, to 160 adults (51% men, mean age 52 years). The intraclass correlation coefficient (ICC) was used to assess the reliability of the measures, and a dispersion graph was used to examine the patterns of agreement between them. RESULTS We observed higher retest scores in all tests, as well as a shorter completion time for the Trail Making Test B. ICC values for each test were as follows: Word List Learning Test (0.56), Word Recall (0.50), Word Recognition (0.35), Phonemic Verbal Fluency Test (VFT, 0.61), Semantic VFT (0.53), and Trail B (0.91). The Bland-Altman plots showed better agreement for executive function (VFT and Trail B) than for the memory tests. CONCLUSIONS Better performance at retest may reflect a learning effect and suggests that retesting should use alternate forms or longer intervals. In this sample of adults with a high schooling level, reliability was only moderate for memory tests, whereas the measurement of executive function proved more reliable. PMID:29213860

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Nicely, Lucas D; Zhang, Haibin

    Modern large-scale networks require the ability to withstand arbitrary failures (i.e., Byzantine failures). Byzantine reliable broadcast algorithms can be used to reliably disseminate information in the presence of Byzantine failures. We design a novel Byzantine reliable broadcast protocol for loosely connected and synchronous networks. While previous such protocols all assume correct senders, our protocol is the first to handle Byzantine senders. To achieve this goal, we have developed new techniques for fault detection and fault tolerance. Our protocol is efficient, and under normal circumstances, no expensive public-key cryptographic operations are used. We implement and evaluate our protocol, demonstrating that our protocol has high throughput and is superior to the existing protocols in uncivil executions.
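
    The authors' protocol itself is not reproduced in the record; a minimal sketch of the classic Bracha-style thresholds that Byzantine reliable broadcast protocols build on (n = 3f + 1 nodes; echo, then ready, then deliver):

    ```python
    class BrachaNode:
        """Message-counting core of Bracha reliable broadcast for one value."""
        def __init__(self, n, f):
            self.n, self.f = n, f
            self.echoes, self.readies = set(), set()
            self.sent_ready = self.delivered = False

        def on_echo(self, sender):
            self.echoes.add(sender)
            # ceil((n + f + 1) / 2) echoes: a quorum vouches for the value.
            if len(self.echoes) >= (self.n + self.f) // 2 + 1:
                self.sent_ready = True   # broadcast READY (network omitted)

        def on_ready(self, sender):
            self.readies.add(sender)
            # f + 1 readies prove some correct node sent READY: amplify.
            if len(self.readies) >= self.f + 1:
                self.sent_ready = True
            # 2f + 1 readies: safe to deliver, exactly once.
            if len(self.readies) >= 2 * self.f + 1:
                self.delivered = True

    node = BrachaNode(n=4, f=1)
    for sender in ("p1", "p2", "p3"):
        node.on_ready(sender)
    print(node.delivered)  # True once 2f + 1 = 3 READY messages arrive
    ```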

  10. Predicting the Reliability of Ceramics Under Transient Loads and Temperatures With CARES/Life

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.

    2003-01-01

    A methodology is shown for predicting the time-dependent reliability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The methodology takes into account the changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. The code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
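
    A minimal sketch of the Weibull failure-probability form that underlies such ceramic reliability predictions (parameters hypothetical; this is the basic time-independent form, not CARES/Life's transient algorithm):

    ```python
    import math

    def weibull_failure_probability(stress, scale, modulus):
        """Two-parameter Weibull: P_f = 1 - exp(-(sigma / sigma_0)^m)."""
        return 1.0 - math.exp(-((stress / scale) ** modulus))

    # Hypothetical ceramic: characteristic strength 400 MPa, Weibull modulus 10.
    for sigma in (200.0, 300.0, 400.0):
        pf = weibull_failure_probability(sigma, scale=400.0, modulus=10.0)
        print(f"{sigma:.0f} MPa -> P_f = {pf:.4f}")
    ```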

  11. Why Rules Matter in Complex Event Processing...and Vice Versa

    NASA Astrophysics Data System (ADS)

    Vincent, Paul

    Many commercial and research CEP solutions are moving beyond simple stream query languages to more complete definitions of "process" and thence to "decisions" and "actions". As event processing capabilities increase, there is a growing realization that the humble "rule" is as relevant to the event cloud as it is to specific services. Less obvious is how much event processing has to offer the process and rule execution and management technologies. Does event processing change the way we should manage businesses, processes and services, together with their embedded (and hopefully managed) rulesets?

  12. History of Dropout-Prevention Events in AISD: Executive Summary.

    ERIC Educational Resources Information Center

    Frazer, Linda; And Others

    This report presents major drop-out prevention events in the Austin (Texas) Independent School District (AISD) since these efforts were initiated in 1982 by the Office of Research and Evaluation (ORE). The following are the major findings of the report: (1) the ORE has been researching and studying the dropout problem since 1982-83, and the effort…

  13. 76 FR 76382 - Executive-Led Business Development Mission to Kabul, Afghanistan; February 2012* Dates Are Withheld

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... security within and around the mission event.] Day One (weekend) Travel Day--Depart U.S. on evening flight. Day Two Travel Day--Participants arrive in transit city (tbd) and overnight in pre-arranged departure from transit city. Day Three Travel Day. Arrive in Kabul, Afghanistan (afternoon). Evening Event. Day...

  14. Event-Related Potentials Discriminate Familiar and Unusual Goal Outcomes in 5-Month-Olds and Adults

    ERIC Educational Resources Information Center

    Michel, Christine; Kaduk, Katharina; Ní Choisdealbha, Áine; Reid, Vincent M.

    2017-01-01

    Previous event-related potential (ERP) work has indicated that the neural processing of action sequences develops with age. Although adults and 9-month-olds use a semantic processing system, perceiving actions activates attentional processes in 7-month-olds. However, presenting a sequence of action context, action execution and action conclusion…

  15. Event-Driven Technology to Generate Relevant Collections of Near-Realtime Data

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Keiser, K.; Nair, U. S.; Beck, J. M.; Ebersole, S.

    2017-12-01

    Getting the right data when it is needed continues to be a challenge for researchers and decision makers. Event-Driven Data Delivery (ED3), funded by the NASA Applied Science program, is a technology that allows researchers and decision makers to pre-plan what data, information and processes they need to have collected or executed in response to future events. The Information Technology and Systems Center at the University of Alabama in Huntsville (UAH) has developed the ED3 framework in collaboration with atmospheric scientists at UAH, scientists at the Geological Survey of Alabama, and other federal, state and local stakeholders to meet data-preparedness needs for research, decisions and situational awareness. The ED3 framework exposes an API for adding loosely-coupled, distributed event handlers and data processes. This approach makes it easy to add new events and data processes, so the system can scale to support virtually any type of event or data process. Using ED3's underlying services, applications have been developed that monitor for alerts of registered event types and automatically trigger subscriptions that match new events, providing users with a living "album" of results that can continue to be curated as more information for an event becomes available. This capability improves the collection, creation and use of data and real-time processes (data access, model execution, product generation, sensor tasking, social media filtering, etc.) in response to disasters and other events by preparing in advance for the data and information needs of future events. This presentation will provide an update on ED3 developments and deployments, and further explain its applicability to utilizing near-realtime data in hazards research, response and situational awareness.
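
    A minimal sketch of the subscription pattern described above: pre-registered subscriptions are matched against incoming event alerts, and each match triggers data processes whose results accumulate in a living "album" (all names hypothetical, not the ED3 API):

    ```python
    from collections import defaultdict

    class EventDataRegistry:
        def __init__(self):
            self.subscriptions = defaultdict(list)  # event_type -> handlers
            self.albums = defaultdict(list)         # event_id -> results

        def subscribe(self, event_type, handler):
            """Pre-plan a data process to run when this event type occurs."""
            self.subscriptions[event_type].append(handler)

        def publish(self, event_id, event_type, **details):
            """An alert arrives: run every matching pre-planned process."""
            for handler in self.subscriptions[event_type]:
                self.albums[event_id].append(handler(**details))
            return self.albums[event_id]

    registry = EventDataRegistry()
    registry.subscribe("tornado", lambda lat, lon: f"fetch radar near {lat},{lon}")
    registry.subscribe("tornado", lambda lat, lon: f"filter social media at {lat},{lon}")
    print(registry.publish("ev-001", "tornado", lat=34.7, lon=-86.6))
    ```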

  16. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computer-intensive computer calculations. A computer program has been developed to implement the PFTA.
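
    A minimal sketch of estimating a fault-tree top-event probability by sampling bottom events, using plain Monte Carlo rather than the adaptive importance sampling of PFTA (tree structure and probabilities hypothetical):

    ```python
    import random

    # Bottom-event failure probabilities (hypothetical).
    p = {"pump": 0.01, "valve": 0.02, "sensor": 0.05, "backup": 0.10}

    def top_event(fails):
        # System fails if (pump AND backup) fail OR (valve AND sensor) fail.
        return (fails["pump"] and fails["backup"]) or \
               (fails["valve"] and fails["sensor"])

    def estimate(trials=1_000_000, seed=7):
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            fails = {e: rng.random() < q for e, q in p.items()}
            hits += top_event(fails)
        return hits / trials

    print(estimate())  # ~0.01*0.10 + 0.02*0.05, i.e. about 2.0e-3
    ```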

  17. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.

    PubMed

    Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime

    2017-10-01

    Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess whether a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.

  18. The reliability of manual reporting of clinical events in an anesthesia information management system (AIMS).

    PubMed

    Simpao, Allan F; Pruitt, Eric Y; Cook-Sather, Scott D; Gurnaney, Harshad G; Rehman, Mohamed A

    2012-12-01

    Manual incident reports significantly under-report adverse clinical events when compared with automated recordings of intraoperative data. Our goal was to determine the reliability of AIMS and CQI reports of adverse clinical events that had been witnessed and recorded by research assistants. The AIMS and CQI records of 995 patients aged 2-12 years were analyzed to determine if anesthesia providers had properly documented the emesis events that were observed and recorded by research assistants who were present in the operating room at the time of induction. Research assistants recorded eight cases of emesis during induction that were confirmed with the attending anesthesiologist at the time of induction. AIMS yielded a sensitivity of 38 % (95 % confidence interval [CI] 8.5-75.5 %), while the sensitivity of CQI reporting was 13 % (95 % CI 0.3-52.7 %). The low sensitivities of the AIMS and CQI reports suggest that user-reported AIMS and CQI data do not reliably include significant clinical events.

  19. American Society of Anesthesiologists

    MedlinePlus


  20. Rumination impairs the control of stimulus-induced retrieval of irrelevant information, but not attention, control, or response selection in general.

    PubMed

    Colzato, Lorenza S; Steenbergen, Laura; Hommel, Bernhard

    2018-01-23

    The aim of the study was to throw more light on the relationship between rumination and cognitive-control processes. Seventy-eight adults were assessed with respect to rumination tendencies by means of the LEIDS-r before performing a Stroop task, an event-file task assessing the automatic retrieval of irrelevant information, an attentional set-shifting task, and the Attentional Network Task, which provided scores for alerting, orienting, and executive control functioning. The size of the Stroop effect and irrelevant retrieval in the event-file task were positively correlated with the tendency to ruminate, while all other scores did not correlate with any rumination scale. Controlling for depressive tendencies eliminated the Stroop-related finding (an observation that may account for previous failures to replicate), but not the event-file finding. Taken altogether, our results suggest that rumination does not affect attention, executive control, or response selection in general, but rather selectively impairs the control of stimulus-induced retrieval of irrelevant information.

  1. Robotic guarded motion system and method

    DOEpatents

    Bruemmer, David J.

    2010-02-23

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for repeating, on each iteration through an event timing loop, the acts of defining an event horizon, detecting a range to obstacles around the robot, and testing for an event horizon intrusion. Defining the event horizon includes determining a distance from the robot that is proportional to a current velocity of the robot and testing for the event horizon intrusion includes determining if any range to the obstacles is within the event horizon. Finally, on each iteration through the event timing loop, the method includes reducing the current velocity of the robot in proportion to a loop period of the event timing loop if the event horizon intrusion occurs.
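
    A minimal sketch of the guarded-motion loop the patent describes: the event horizon scales with current velocity, and an intrusion reduces velocity in proportion to the loop period (all constants hypothetical):

    ```python
    HORIZON_GAIN = 1.5   # horizon distance per unit velocity (s)
    LOOP_PERIOD = 0.1    # event timing loop period (s)
    DECEL_GAIN = 2.0     # fractional velocity reduction per second of intrusion

    def guarded_motion_step(velocity, ranges):
        """One pass through the event timing loop; returns the new velocity."""
        event_horizon = HORIZON_GAIN * velocity   # grows with current speed
        if min(ranges) < event_horizon:           # event horizon intrusion
            velocity -= DECEL_GAIN * LOOP_PERIOD * velocity
        return max(velocity, 0.0)

    v = 1.0
    for ranges in ([5.0, 4.2], [2.0, 1.3], [1.2, 0.9], [0.8, 1.1]):
        v = guarded_motion_step(v, ranges)
        print(f"nearest obstacle {min(ranges)} m -> velocity {v:.2f} m/s")
    ```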

  2. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI-compliant C compiler is required to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
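
    SURE's value is that it bounds these probabilities algebraically; estimating them by direct simulation is impractical at ultra-reliability levels. The following toy sketch (Python, with purely hypothetical rates, and in no way SURE's algorithm) makes the model class concrete: a two-processor system with slow exponential fault arrivals and a fast recovery, where the death state is a second fault arriving before recovery completes.

```python
import random

# Toy semi-Markov reconfiguration model -- NOT SURE's bounding algorithm.
# All rates are hypothetical. Death state: a second fault arrives while
# recovery from the first fault is still in progress.
LAMBDA = 1e-4         # per-processor fault arrival rate, 1/hour (slow)
RECOVERY_MEAN = 1e-3  # mean recovery time, hours (fast)
MISSION_TIME = 10.0   # mission time, hours

def mission_fails(rng):
    first_fault = rng.expovariate(2 * LAMBDA)   # first of two processors
    if first_fault > MISSION_TIME:
        return False                            # no fault during the mission
    recovery = rng.expovariate(1 / RECOVERY_MEAN)
    second_fault = rng.expovariate(LAMBDA)      # remaining processor
    # Death state: the second fault pre-empts recovery, inside the mission.
    return second_fault < recovery and first_fault + second_fault <= MISSION_TIME

rng = random.Random(1)
trials = 1_000_000
failures = sum(mission_fails(rng) for _ in range(trials))
print(f"estimated P(death state) = {failures / trials:.1e}")
```

    With the rates above the true death-state probability is on the order of 1e-10, so a million random trials will almost always report zero failures; it is precisely this regime that makes SURE's algebraic upper and lower bounds, rather than simulation, the practical approach.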

  3. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions, and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, and EXP. There are a dozen major commands, such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The programs are: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR-13789) is written in PASCAL, C, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR-14921) is written in ANSI C and PASCAL. 
An ANSI-compliant C compiler is required to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  4. Enhancing healthcare process design with human factors engineering and reliability science, part 2: applying the knowledge to clinical documentation systems.

    PubMed

    Boston-Fleischhauer, Carol

    2008-02-01

    The demand to redesign healthcare processes that achieve efficient, effective, and safe results is never-ending. Part 1 of this 2-part series introduced human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare organizations. In part 2, the author applies this knowledge to one of the most common operational processes in healthcare: clinical documentation. Specific implementation strategies and anticipated results are discussed, along with organizational challenges and recommended executive responses.

  5. Common Cause Failures and Ultra Reliability

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.

  6. Interference due to shared features between action plans is influenced by working memory span.

    PubMed

    Fournier, Lisa R; Behmer, Lawrence P; Stubblefield, Alexandra M

    2014-12-01

    In this study, we examined the interactions between the action plans that we hold in memory and the actions that we carry out, asking whether the interference due to shared features between action plans is due to selection demands imposed on working memory. Individuals with low and high working memory spans learned arbitrary motor actions in response to two different visual events (A and B), presented in a serial order. They planned a response to the first event (A) and while maintaining this action plan in memory they then executed a speeded response to the second event (B). Afterward, they executed the action plan for the first event (A) maintained in memory. Speeded responses to the second event (B) were delayed when it shared an action feature (feature overlap) with the first event (A), relative to when it did not (no feature overlap). The size of the feature-overlap delay was greater for low-span than for high-span participants. This indicates that interference due to overlapping action plans is greater when fewer working memory resources are available, suggesting that this interference is due to selection demands imposed on working memory. Thus, working memory plays an important role in managing current and upcoming action plans, at least for newly learned tasks. Also, managing multiple action plans is compromised in individuals who have low versus high working memory spans.

  7. Meiosis.

    PubMed

    Hillers, Kenneth J; Jantsch, Verena; Martinez-Perez, Enrique; Yanowitz, Judith L

    2017-05-04

    Sexual reproduction requires the production of haploid gametes (sperm and egg) with only one copy of each chromosome; fertilization then restores the diploid chromosome content in the next generation. This reduction in genetic content is accomplished during a specialized cell division called meiosis, in which two rounds of chromosome segregation follow a single round of DNA replication. In preparation for the first meiotic division, homologous chromosomes pair and synapse, creating a context that promotes formation of crossover recombination events. These crossovers, in conjunction with sister chromatid cohesion, serve to connect the two homologs and facilitate their segregation to opposite poles during the first meiotic division. During the second meiotic division, which is similar to mitosis, sister chromatids separate; the resultant products are haploid cells that become gametes. In Caenorhabditis elegans (and most other eukaryotes) homologous pairing and recombination are required for proper chromosome inheritance during meiosis; accordingly, the events of meiosis are tightly coordinated to ensure the proper execution of these events. In this chapter, we review the seminal events of meiosis: pairing of homologous chromosomes, the changes in chromosome structure that chromosomes undergo during meiosis, the events of meiotic recombination, the differentiation of homologous chromosome pairs into structures optimized for proper chromosome segregation at Meiosis I, and the ultimate segregation of chromosomes during the meiotic divisions. We also review the regulatory processes that ensure the coordinated execution of these meiotic events during prophase I.

  8. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
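
    The core idea of the rollback mechanism lends itself to a compact illustration. Below is a minimal sketch (Python, with invented names, not the paper's code) of a reversible event handler: where the forward computation is information-preserving it is undone by inverse arithmetic, and where it destroys information (the clamp below) a small amount of state is saved incrementally.

```python
class Cell:
    """One spatial cell of a toy reaction-diffusion epidemic model."""
    def __init__(self, susceptible, infected):
        self.susceptible = susceptible
        self.infected = infected
        self._saved = []                 # small incremental-state-saving stack

    def infect_event(self, k):
        """Forward event: up to k susceptibles become infected."""
        k = min(k, self.susceptible)     # clamping destroys information...
        self._saved.append(k)            # ...so save just the clamped value
        self.susceptible -= k            # these updates are reversible
        self.infected += k               # by inverse arithmetic alone

    def reverse_infect_event(self):
        """Reverse event: exactly undo the most recent infect_event."""
        k = self._saved.pop()
        self.susceptible += k
        self.infected -= k

# An optimistic simulator would invoke the reverse handler on rollback:
cell = Cell(susceptible=100, infected=1)
cell.infect_event(5)
cell.reverse_infect_event()
assert (cell.susceptible, cell.infected) == (100, 1)
```

    Saving only the clamped value, rather than a full copy of the cell, is what keeps the state-saving overhead small enough for optimistic execution at the scales the paper reports.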

  9. sCO2 Power Cycles Summit Summary November 2017.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendez Cruz, Carmen Margarita; Rochau, Gary E.; Lance, Blake

    Over the past ten years, the Department of Energy (DOE) has helped to develop components and technologies for the supercritical carbon dioxide (sCO2) power cycle, which is capable of efficient operation at high temperatures. The DOE Offices of Fossil Energy, Nuclear Energy, and Energy Efficiency and Renewable Energy collaborated in the planning and execution of the sCO2 Power Cycle Summit conducted in Albuquerque, NM in November 2017. The summit brought together participants from government, national laboratories, research, and industry to engage in discussions regarding the future of sCO2 power cycle technology. This report summarizes the work involved in summit planning and execution, before, during, and after the event, including the coordination between three DOE offices and the technical content presented at the event.

  10. First experiences with the LHC BLM sanity checks

    NASA Astrophysics Data System (ADS)

    Emery, J.; Dehning, B.; Effinger, E.; Nordt, A.; Sapinski, M. G.; Zamantzas, C.

    2010-12-01

    Reliability concerns have driven the design of the Large Hadron Collider (LHC) Beam Loss Monitoring (BLM) system from the early stage of the studies up to the present commissioning and the latest development of diagnostic tools. To protect the system against non-conformities, new ways of automatic checking have been developed and implemented. These checks are regularly and systematically executed by the LHC operation team to ensure that the system status is "as good as new" after each test. The sanity checks are part of this strategy. They test the electrical part of the detectors (ionisation chamber or secondary emission detector), their cable connections to the front-end electronics, further connections to the back-end electronics, and their ability to request a beam abort. During the installation and in the early commissioning phase, these checks have also shown their ability to find non-conformities caused by unexpected failure scenarios. In everyday operation, a non-conformity discovered by this check inhibits any further injections into the LHC until the check confirms the absence of non-conformities.

  11. Digging Deeper: Crisis Management in the Coal Industry

    ERIC Educational Resources Information Center

    Miller, Barbara M.; Horsley, J. Suzanne

    2009-01-01

    This study explores crisis management/communication practices within the coal industry through the lens of high reliability organization (HRO) concepts and sensemaking theory. In-depth interviews with industry executives and an analysis of an emergency procedures manual were used to provide an exploratory examination of the status of crisis…

  12. Transaction-based building controls framework, Volume 2: Platform descriptive model and requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyol, Bora A.; Haack, Jereme N.; Carpenter, Brandon J.

    Transaction-based Building Controls (TBC) offer a control systems platform that provides an agent execution environment that meets the growing requirements for security, resource utilization, and reliability. This report outlines the requirements for a platform to meet these needs and describes an illustrative/exemplary implementation.

  13. 29 CFR 1607.4 - Information on impact.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Report EEO-1 series of reports. The user should adopt safeguards to insure that the records required by... reliable, evidence concerning the impact of the procedure over a longer period of time and/or evidence... may in design and execution be race, color, sex, or ethnic conscious, selection procedures under such...

  14. Eye-Witness Memory and Suggestibility in Children with Asperger Syndrome

    ERIC Educational Resources Information Center

    McCrory, Eamon; Henry, Lucy A.; Happe, Francesca

    2007-01-01

    Background: Individuals with autism spectrum disorders (ASD) present with a particular profile of memory deficits, executive dysfunction and impaired social interaction that may raise concerns about their recall and reliability in forensic and legal contexts. Extant studies of memory shed limited light on this issue as they involved either…

  15. 17 CFR 37.1400 - Core Principle 14-System safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... procedures, and automated systems, that: (1) Are reliable and secure; and (2) Have adequate scalable capacity... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Core Principle 14-System... SWAP EXECUTION FACILITIES System Safeguards § 37.1400 Core Principle 14—System safeguards. The swap...

  16. Executive summary: biomarkers of nutrition for development: building a consensus

    USDA-ARS?s Scientific Manuscript database

    The ability to develop evidence-based clinical guidance and effective programs and policies to achieve global health promotion and disease prevention goals depends on the availability of valid and reliable data. With specific regard to the role of food and nutrition in achieving those goals, relevan...

  17. How to quantify exposure to traumatic stress? Reliability and predictive validity of measures for cumulative trauma exposure in a post-conflict population.

    PubMed

    Wilker, Sarah; Pfeiffer, Anett; Kolassa, Stephan; Koslowski, Daniela; Elbert, Thomas; Kolassa, Iris-Tatjana

    2015-01-01

    While studies with survivors of single traumatic experiences highlight individual response variation following trauma, research from conflict regions shows that almost everyone develops posttraumatic stress disorder (PTSD) if trauma exposure reaches extreme levels. Therefore, evaluating the effects of cumulative trauma exposure is of utmost importance in studies investigating risk factors for PTSD. Yet, little research has been devoted to evaluate how this important environmental risk factor can be best quantified. We investigated the retest reliability and predictive validity of different trauma measures in a sample of 227 Ugandan rebel war survivors. Trauma exposure was modeled as the number of traumatic event types experienced or as a score considering traumatic event frequencies. In addition, we investigated whether age at trauma exposure can be reliably measured and improves PTSD risk prediction. All trauma measures showed good reliability. While prediction of lifetime PTSD was most accurate from the number of different traumatic event types experienced, inclusion of event frequencies slightly improved the prediction of current PTSD. As assessing the number of traumatic events experienced is the least stressful and time-consuming assessment and leads to the best prediction of lifetime PTSD, we recommend this measure for research on PTSD etiology.

  18. Fault recovery in the reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast (12, 5) medium, to other group members in a distributed environment, even in the case of reformations. A distributed application can use the various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  19. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast medium, to other group members in a distributed environment, even in the case of reformations. A distributed application can use the various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  20. Reliability of the Cooking Task in adults with acquired brain injury.

    PubMed

    Poncet, Frédérique; Swaine, Bonnie; Taillefer, Chantal; Lamoureux, Julie; Pradat-Diehl, Pascale; Chevignard, Mathilde

    2015-01-01

    Acquired brain injury (ABI) often leads to deficits in executive functioning (EF) responsible for severe and long-standing disabilities in daily life activities. The Cooking Task is an ecological and valid test of EF involving multi-tasking in a real environment. Given its complex scoring system, it is important to establish the tool's reliability. The objective of the study was to examine the reliability of the Cooking Task (internal consistency, inter-rater and test-retest reliability). A total of 160 patients with ABI (113 men, mean age 37 years, SD = 14.3) were tested using the Cooking Task. For test-retest reliability, patients were assessed by the same rater on two occasions (mean interval 11 days), while two raters independently and simultaneously observed and scored patients' performances to estimate inter-rater reliability. Internal consistency was high for the global scale (Cronbach α = .74). Inter-rater reliability (n = 66) for total errors was also high (ICC = .93); however, the test-retest reliability (n = 11) was poor (ICC = .36). In general, the Cooking Task appears to be a reliable tool. The low test-retest results were expected given the importance of EF in the performance of novel tasks.

  1. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit enforced by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the job's execution. In many modern interwares, jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the available time in a pilot is too short for the execution of any job, it must be released, even though it could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even for resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job whose length is adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate within the available CPU time (a sketch of this estimate is given below). This paper demonstrates that, using this simple but effective solution, LHCb manages to make more efficient use of the available resources, and that it can easily use new types of resources. One example is represented by resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-job pilots. A second example is represented by opportunistic resources with limited available time.
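
    The just-in-time decision reduces to one line of arithmetic over the three inputs named above. A minimal sketch follows (the function name, parameters, and safety margin are hypothetical stand-ins, not LHCbDIRAC code):

```python
import math

def events_to_produce(cpu_work_per_event, machine_power, seconds_left,
                      safety_fraction=0.8):
    """Estimate how many MC events fit in the remaining pilot time.

    cpu_work_per_event: normalized CPU work needed per event (e.g. HS06.s)
    machine_power:      normalized power of this worker node (e.g. HS06)
    seconds_left:       wall-clock seconds before the pilot/queue limit
    safety_fraction:    margin so the job finishes before being killed
    """
    usable_work = machine_power * seconds_left * safety_fraction
    return max(0, math.floor(usable_work / cpu_work_per_event))

# e.g. 500 HS06.s per event, a 10 HS06 slot, 6 hours left:
print(events_to_produce(500, 10, 6 * 3600))   # -> 345
```

    The safety margin reflects the point made in the abstract: it is better to simulate slightly fewer events than to have the batch system kill the job and lose them all.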

  2. Overview of Future of Probabilistic Methods and RMSL Technology and the Probabilistic Methods Education Initiative for the US Army at the SAE G-11 Meeting

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor, providing an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national and international standing, the mission of the G-11's Probabilistic Methods Committee is to enable and facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable, and reliable product development.

  3. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work we present the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an open source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and on other resources in general are then described. A peculiar SLURM feature we also verified is triggers on events, which are useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, and with controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 to check the health status of a worker node before the execution of each job (a sketch of such a check is given below). Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we present several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
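
    As an illustration of the pre-execution checks mentioned above: SLURM can run a site-provided executable before each job (the Prolog hook configured in slurm.conf), and a non-zero exit status keeps work off an unhealthy node. A minimal sketch of such a health check follows; the mount point and threshold are hypothetical, not taken from the paper.

```python
#!/usr/bin/env python3
# Minimal pre-execution health check of the kind described above (a sketch,
# not INFN's script). The batch system runs it before each job; exit code 0
# means the worker node looks healthy, non-zero means the job should not run.
import shutil
import sys

MIN_FREE_SCRATCH_GB = 20          # site-specific requirement (assumed)

def scratch_ok(path="/tmp"):
    """Check that the scratch area has enough free space for a job."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= MIN_FREE_SCRATCH_GB

if __name__ == "__main__":
    sys.exit(0 if scratch_ok() else 1)
```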

  4. Modulation of a Fronto-Parietal Network in Event-Based Prospective Memory: An rTMS Study

    ERIC Educational Resources Information Center

    Bisiacchi, P. S.; Cona, G.; Schiff, S.; Basso, D.

    2011-01-01

    Event-based prospective memory (PM) is a multi-component process that requires remembering the delayed execution of an intended action in response to a pre-specified PM cue, while being actively engaged in an ongoing task. Some neuroimaging studies have suggested that both prefrontal and parietal areas are involved in the maintenance and…

  5. Effects of catastrophic events on transportation system management and operations : executive summary of the August 2003 northeast blackout, Great Lakes and New York City regions

    DOT National Transportation Integrated Search

    2004-05-01

    On Thursday, August 14, 2003, a series of seemingly small events, happening in concert, produced the largest blackout in American history. Shortly after 2:00 p.m. on August 14, a brush fire caused a transmission line south of Columbus, Ohio, to go ou...

  6. Social-cognitive processes in preschoolers' selective trust: three cultures compared.

    PubMed

    Lucas, Amanda J; Lewis, Charlie; Pala, F Cansu; Wong, Katie; Berridge, Damon

    2013-03-01

    Research on preschoolers' selective learning has mostly been conducted in English-speaking countries. We compared the performance of Turkish preschoolers (who are exposed to a language with evidential markers), Chinese preschoolers (known to be advanced in executive skills), and English preschoolers on an extended selective trust task (N = 144). We also measured children's executive function skills and their ability to attribute false belief. Overall we found a Turkish (rather than a Chinese) advantage in selective trust, and a relationship between selective trust and false belief (rather than executive function). This is the first evidence that exposure to a language that obliges speakers to state the sources of their knowledge may sensitize preschoolers to informant reliability. It is also the first demonstration of an association between false belief and selective trust. Together these findings suggest that effective selective learning may progress alongside children's developing capacity to assess the knowledge of others.

  7. Development of the Executive Personal Finance Scale.

    PubMed

    Spinella, Marcello; Yang, Bijou; Lester, David

    2007-03-01

    There is accumulating evidence that prefrontal systems play an important role in management of personal finances, based on studies using clinical populations, functional neuroimaging, and both subjective and objective neuropsychological measures. This study developed the Executive Personal Finance Scale (EPFS) as a specific self-rating measure of executive aspects of personal money management. The resulting 20-item scale had good reliability and showed four factors: impulse control, organization, planning, and motivational drive. Validity was evidenced by correlations with income, credit card debt, and investments. The EPFS also showed logical correlations with compulsive buying and money attitudes. Second-order factor analysis of the EPFS and other scales revealed two higher-order factors of personal finance: cognitive (e.g., planning, organizing) and emotional (e.g., anxiety, impulse-spending, prestige). The EPFS shows good psychometric properties, is easy to use, and will make a convenient complement to other research methodologies exploring the neural basis of personal finance management.

  8. The item level psychometrics of the behaviour rating inventory of executive function-adult (BRIEF-A) in a TBI sample.

    PubMed

    Waid-Ebbs, J Kay; Wen, Pey-Shan; Heaton, Shelley C; Donovan, Neila J; Velozo, Craig

    2012-01-01

    To determine whether the psychometrics of the BRIEF-A are adequate for individuals diagnosed with TBI. A prospective observational study in which the BRIEF-A was collected as part of a larger study. Informant ratings of the 75-item BRIEF-A on 89 individuals diagnosed with TBI were examined to determine item-level psychometrics for each of the two BRIEF-A indexes: the Behaviour Rating Index (BRI) and the Metacognitive Index (MI). Patients were either outpatients or at least 1 year post-injury. Each index measured a latent trait, separating individuals into five to six ability levels, and demonstrated good reliability (0.94 and 0.96). Four items were identified that did not meet the infit criteria. The results provide support for the use of the BRIEF-A as a supplemental assessment of executive function in TBI populations. However, further validation is needed with other measures of executive function. Recommendations include use of the index scores over the Global Executive Composite score and use of the difficulty hierarchy for setting therapy goals.

  9. Working memory capacity and the top-down control of visual search: Exploring the boundaries of "executive attention".

    PubMed

    Kane, Michael J; Poole, Bradley J; Tuholski, Stephen W; Engle, Randall W

    2006-07-01

    The executive attention theory of working memory capacity (WMC) proposes that measures of WMC broadly predict higher order cognitive abilities because they tap important and general attention capabilities (R. W. Engle & M. J. Kane, 2004). Previous research demonstrated WMC-related differences in attention tasks that required restraint of habitual responses or constraint of conscious focus. To further specify the executive attention construct, the present experiments sought boundary conditions of the WMC-attention relation. Three experiments correlated individual differences in WMC, as measured by complex span tasks, and executive control of visual search. In feature-absence search, conjunction search, and spatial configuration search, WMC was unrelated to search slopes, although they were large and reliably measured. Even in a search task designed to require the volitional movement of attention (J. M. Wolfe, G. A. Alvarez, & T. S. Horowitz, 2000), WMC was irrelevant to performance. Thus, WMC is not associated with all demanding or controlled attention processes, which poses problems for some general theories of WMC. Copyright 2006 APA, all rights reserved.

  10. Spaceflight tracking and data network operational reliability assessment for Skylab

    NASA Technical Reports Server (NTRS)

    Seneca, V. I.; Mlynarczyk, R. H.

    1974-01-01

    Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes accomplished prior to the beginning of the Skylab missions. Appropriate adjustments were made to fit the data to the models. The availabilities are based on the failure events resulting in a station's inability to support a function or functions, and the MTBFs are based on all events, including 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.

  11. Measuring the Performance of Attention Networks with the Dalhousie Computerized Attention Battery (DalCAB): Methodology and Reliability in Healthy Adults.

    PubMed

    Jones, Stephanie A H; Butler, Beverly C; Kintzel, Franziska; Johnson, Anne; Klein, Raymond M; Eskes, Gail A

    2016-01-01

    Attention is an important, multifaceted cognitive domain that has been linked to three distinct, yet interacting, networks: alerting, orienting, and executive control. The measurement of attention and deficits of attention within these networks is critical to the assessment of many neurological and psychiatric conditions in both research and clinical settings. The Dalhousie Computerized Attention Battery (DalCAB) was created to assess attentional functions related to the three attention networks using a range of tasks including: simple reaction time, go/no-go, choice reaction time, dual task, flanker, item and location working memory, and visual search. The current study provides preliminary normative data, test-retest reliability (intraclass correlations), and practice effects in DalCAB performance 24 h after baseline for healthy young adults (n = 96, 18-31 years). Performance on the DalCAB tasks demonstrated Good to Very Good test-retest reliability for mean reaction time, while accuracy and difference measures (e.g., switch costs, interference effects, and working memory load effects) were most reliable for tasks that require more extensive cognitive processing (e.g., choice reaction time, flanker, dual task, and conjunction search). Practice effects were common and pronounced at the 24-h interval. In addition, performance related to specific within-task parameters of the DalCAB sub-tests provides preliminary support for future formal assessment of the convergent validity of our interpretation of the DalCAB as a potential clinical and research assessment tool for measuring aspects of attention related to the alerting, orienting, and executive control networks.

  12. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  13. Type A and hardiness.

    PubMed

    Kobasa, S C; Maddi, S R; Zola, M A

    1983-03-01

    The study examined the relationship between the Type A behavior pattern and personality hardiness and predicted an interaction between the two that would be influential for illness onset. Type A and hardiness were found to be conceptually different and empirically independent factors. Under highly stressful life events, male executives who were high in Type A and low in hardiness tended toward higher general illness scores than any other executives. Type A and hardiness emerge from this study as bases for extrinsic and intrinsic motivation, respectively.

  14. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluating small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The programs are: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
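
    The two solution methods named by the package can be sketched side by side. The following is textbook numerics under the usual scaling-and-squaring scheme, not the NASA code itself, and the stiff generator matrix at the end is hypothetical: a diagonal Padé approximant and a truncated, scaled Taylor series for the matrix exponential that dominates the solution of the model's differential equations.

```python
import numpy as np

def expm_pade_scaled(A, m=6):
    """Diagonal (m, m) Pade approximant of exp(A) with scaling and squaring."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(A, np.inf))))))
    A = A / 2**s                        # scale so the approximant is accurate
    N = np.eye(*A.shape)                # numerator polynomial in A
    D = np.eye(*A.shape)                # denominator polynomial in A
    term = np.eye(*A.shape)
    c = 1.0
    for k in range(1, m + 1):           # standard diagonal Pade coefficients
        c *= (m - k + 1) / ((2 * m - k + 1) * k)
        term = term @ A
        N += c * term
        D += c * (-1) ** k * term
    X = np.linalg.solve(D, N)
    for _ in range(s):                  # square back up: exp(A) = exp(A/2^s)^(2^s)
        X = X @ X
    return X

def expm_taylor_scaled(A, terms=20):
    """Truncated Taylor series of exp(A), also with scaling and squaring."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(A, np.inf))))))
    A = A / 2**s
    X = np.eye(*A.shape)
    term = np.eye(*A.shape)
    for k in range(1, terms + 1):
        term = term @ A / k
        X += term
    for _ in range(s):
        X = X @ X
    return X

# A tiny, hypothetical stiff Markov generator: faults at 1e-4/h, recovery
# at 1e3/h -- the rate disparity the abstract calls numerically stiff.
Q = np.array([[-1e-4, 1e-4], [1e3, -1e3]])
t = 10.0
print(np.allclose(expm_pade_scaled(Q * t), expm_taylor_scaled(Q * t)))  # True
```

    The contrast here is a rough analogue of the trade-off the abstract describes: the Padé route pays for a linear solve per evaluation, while the Taylor loop is nothing but cheap matrix multiplies.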

  15. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluating small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The programs are: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. PAWS/STEM was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The package is written in PASCAL, ANSI-compliant C, and FORTRAN 77. The standard distribution medium for the VMS version of PAWS/STEM (LAR-14165) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of PAWS/STEM (LAR-14920) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. PAWS/STEM was developed in 1989 and last updated in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.

  16. Reliable Decentralized Control of Fuzzy Discrete-Event Systems and a Test Algorithm.

    PubMed

    Liu, Fuchun; Dziong, Zbigniew

    2013-02-01

    A framework for decentralized control of fuzzy discrete-event systems (FDESs) has recently been presented to guarantee the achievement of a given specification under the joint control of all local fuzzy supervisors. As a continuation, this paper addresses the reliable decentralized control of FDESs in the face of possible failures of some local fuzzy supervisors. Roughly speaking, for an FDES equipped with n local fuzzy supervisors, a decentralized supervisor is called k-reliable (1 ≤ k ≤ n) provided that the control performance will not be degraded even when n - k local fuzzy supervisors fail. A necessary and sufficient condition for the existence of k-reliable decentralized supervisors of FDESs is proposed by introducing the notions of M̃uc-controllability and k-reliable coobservability of fuzzy language. In particular, a polynomial-time algorithm to test the k-reliable coobservability is developed by a constructive methodology, which indicates that the existence of k-reliable decentralized supervisors of FDESs can be checked with polynomial complexity.

  17. Integrating planning and reactive control

    NASA Technical Reports Server (NTRS)

    Wilkins, David E.; Myers, Karen L.

    1994-01-01

    Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event-driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOPs). These three classes of actions span multiple levels of abstraction.

  18. Integrating planning and reactive control

    NASA Astrophysics Data System (ADS)

    Wilkins, David E.; Myers, Karen L.

    1994-10-01

    Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event-driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOPs). These three classes of actions span multiple levels of abstraction.

  19. A workshop on developing risk assessment methods for medical use of radioactive material. Volume 1: Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tortorelli, J.P.

    1995-08-01

    A workshop was held at the Idaho National Engineering Laboratory, August 16-18, 1994, on the topic of risk assessment of medical devices that use radioactive isotopes. Its purpose was to review past efforts to develop a risk assessment methodology to evaluate these devices, and to develop a program plan and a scoping document for future methodology development. This report contains a summary of that workshop. Participants included experts in the fields of radiation oncology, medical physics, risk assessment, human-error analysis, and human factors. Staff from the US Nuclear Regulatory Commission (NRC) associated with the regulation of medical uses of radioactive materials and with research into risk-assessment methods participated in the workshop. The workshop participants concurred in NRC's intended use of risk assessment as an important technology in the development of regulations for the medical use of radioactive material and encouraged the NRC to proceed rapidly with a pilot study. Specific recommendations are included in the executive summary and the body of this report. An appendix contains the 8 papers presented at the conference: NRC proposed policy statement on the use of probabilistic risk assessment methods in nuclear regulatory activities; NRC proposed agency-wide implementation plan for probabilistic risk assessment; Risk evaluation of high dose rate remote afterloading brachytherapy at a large research/teaching institution; The pros and cons of using human reliability analysis techniques to analyze misadministration events; Review of medical misadministration event summaries and comparison of human error modeling; Preliminary examples of the development of error influences and effects diagrams to analyze medical misadministration events; Brachytherapy risk assessment program plan; and Principles of brachytherapy quality assurance.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hang Bae

    A reliability test was performed for the software of the Shutdown System (SDS) computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. Random test profiles were applied to the SDS computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically, and the random test profiles were generated using an analysis code. 11 refs., 1 fig.
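
    The strategy summarized here (random profiles checked against an oracle's predictions, driven by automated test software) is easy to picture in miniature. A generic sketch, with trivial placeholders standing in for the SDS software and its oracle:

```python
import random

def system_under_test(profile):
    """Trivial placeholder for the software being qualified."""
    return sorted(profile)

def oracle(profile):
    """Independently computed predicted result for the same profile."""
    return sorted(profile, reverse=True)[::-1]

# Automated driver: generate random test profiles and compare outputs.
rng = random.Random(0)
for _ in range(1000):
    profile = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
    assert system_under_test(profile) == oracle(profile), profile
print("1000 random test profiles matched the oracle")
```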

  1. SHEDS-Multimedia Model Version 3 (a) Technical Manual; (b) User Guide; and (c) Executable File to Launch SAS Program and Install Model

    EPA Science Inventory

    Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...

  2. Hemispheric Asymmetry in the Efficiency of Attentional Networks

    ERIC Educational Resources Information Center

    Asanowicz, Dariusz; Marzecova, Anna; Jaskowski, Piotr; Wolski, Piotr

    2012-01-01

    Despite the fact that hemispheric asymmetry of attention has been widely studied, a clear picture of this complex phenomenon is still lacking. The aim of the present study was to provide an efficient and reliable measurement of potential hemispheric asymmetries of three attentional networks, i.e. alerting, orienting and executive attention.…

  3. Attitude Scale towards Web-Based Examination System (MOODLE)--Validity and Reliability Study

    ERIC Educational Resources Information Center

    Bulent, Basaran; Murat, Yalman; Selahattin, Gonen

    2016-01-01

    Today, the spread of Internet use has accelerated the development of educational technologies and increased the quality of education by encouraging teachers' cooperation and participation. As a result, examinations executed via the Internet have become common, and a number of universities have started using distance education management systems.…

  4. The Impact of Caregiver Executive Skills on Reports of Patient Functioning

    ERIC Educational Resources Information Center

    Dassel, Kara Bottiggi; Schmitt, Frederick A.

    2008-01-01

    Purpose: The initial diagnosis and treatment of cognitive disorders such as mild cognitive impairment and Alzheimer's disease is highly dependent on caregiver reports of patient performance of activities of daily living (ADLs). However, these reports may not always be reliable. We investigated the cognitive skills of caregivers, specifically their…

  5. A 24-Week Multi-Modality Exercise Program Improves Executive Control in Older Adults with a Self-Reported Cognitive Complaint: Evidence from the Antisaccade Task.

    PubMed

    Heath, Matthew; Shellington, Erin; Titheridge, Sam; Gill, Dawn P; Petrella, Robert J

    2017-01-01

    Exercise programs involving aerobic and resistance training (i.e., multiple-modality) have shown promise in improving cognition and executive control in older adults at risk of, or experiencing, cognitive decline. It is, however, unclear whether cognitive training within a multiple-modality program elicits an additive benefit to executive/cognitive processes. This is an important question to resolve in order to identify optimal training programs that delay, or ameliorate, executive deficits in persons at risk for further cognitive decline. In the present study, individuals with a self-reported cognitive complaint (SCC) participated in a 24-week multiple-modality (i.e., the M2 group) exercise intervention program. In addition, a separate group of individuals with a SCC completed the same aerobic and resistance training as the M2 group but also completed a cognitive-based stepping task (i.e., multiple-modality, mind-motor intervention: M4 group). Notably, pre- and post-intervention executive control was examined via the antisaccade task (i.e., an eye movement mirror-symmetrical to a target). The antisaccade task is an ideal tool for the study of individuals with subtle executive deficits because of its hands- and language-free nature and because its neural mechanisms are linked to neuropathology in cognitive decline (i.e., prefrontal cortex). Results showed that M2 and M4 group antisaccade reaction times reliably decreased from pre- to post-intervention, and the magnitude of the decrease was consistent across groups. Thus, multiple-modality exercise training improved executive performance in persons with a SCC independent of mind-motor training. Accordingly, we propose that multiple-modality training provides a sufficient intervention to improve executive control in persons with a SCC.

  6. The implementation and use of ADA on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1985-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. A new linguistic construct, the colloquy, is introduced which solves the problems identified in an earlier proposal, the conversation. It was shown that the colloquy is at least as powerful not only as recovery blocks but as all the other language facilities proposed for situations requiring backward error recovery: deadlines, generalized exception handlers, traditional conversations, s-conversations, and exchanges. The major features that distinguish the colloquy are described. Sample programs that were written, but not executed, using the colloquy show that extensive backward error recovery can be included in these programs simply and elegantly. These ideas are being implemented in an experimental Ada test bed.

  7. An investigation into pilot and system response to critical in-flight events. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Rockwell, T. H.; Griffin, W. C.

    1981-01-01

    Critical in-flight events (CIFEs) that threaten the aircraft were studied. The scope of the CIFE was described and defined with emphasis on characterizing event development, detection, and assessment; pilot information requirements, sources, acquisition, and interpretation; pilot response options, decision processes, and decision implementation; and event outcome. Detailed scenarios were developed for use in simulators and in paper-and-pencil testing, both for developing relationships between pilot performance and background information and for an analysis of pilot reaction, decision, and feedback processes. Statistical relationships among pilot characteristics and observed responses to CIFEs were developed.

  8. The Murder of Leonora De Guzman: A Reappraisal.

    ERIC Educational Resources Information Center

    Taggie, Benjamin F.

    1982-01-01

    Discusses the historical significance of Leonora de Guzman--longtime mistress of Alfonso XI--in the political developments of fourteenth-century Castilian Spain. The events leading up to her execution are examined. (AM)

  9. Prospective memory functioning among ecstasy/polydrug users: evidence from the Cambridge Prospective Memory Test (CAMPROMPT).

    PubMed

    Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Bridges, Nikola

    2011-06-01

    Prospective memory (PM) deficits in recreational drug users have been documented in recent years. However, the assessment of PM has largely been restricted to self-reported measures that fail to capture the distinction between event-based and time-based PM. The aim of the present study is to address this limitation. Extending our previous research, we augmented the range of laboratory measures of PM by employing the CAMPROMPT test battery to investigate the impact of illicit drug use on prospective remembering in a sample of cannabis-only users, ecstasy/polydrug users, and non-users of illicit drugs, separating event-based and time-based PM performance. We also administered measures of executive function and retrospective memory in order to establish whether ecstasy/polydrug deficits in PM were mediated by group differences in these processes. Ecstasy/polydrug users performed significantly worse on both event-based and time-based prospective memory tasks in comparison to both the cannabis-only and non-user groups. Furthermore, it was found that across the whole sample, better retrospective memory and executive functioning were associated with superior PM performance. Nevertheless, this association did not mediate the drug-related effects that were observed. Consistent with our previous study, recreational use of cocaine was linked to PM deficits. PM deficits have again been found among ecstasy/polydrug users, which appear to be unrelated to group differences in executive function and retrospective memory. However, the possibility that these are attributable to cocaine use cannot be excluded.

  10. Assessment of higher level cognitive-communication functions in adolescents with ABI: Standardization of the student version of the functional assessment of verbal reasoning and executive strategies (S-FAVRES).

    PubMed

    MacDonald, Sheila

    2016-01-01

    Childhood acquired brain injuries can disrupt communication functions needed for success in school, work and social interaction. Cognitive-communication difficulties may not be apparent until adolescence, when academic, environmental and social-emotional demands increase. The Functional Assessment of Verbal Reasoning and Executive Strategies for Students (S-FAVRES) is a new activity-level measure of cognitive-communication skills in complex, contextual and integrative tasks that simulate real-world communication challenges. It was hypothesized that S-FAVRES performance would differentiate adolescents with and without acquired brain injury (ABI) on scores for Accuracy, Rationale, Reasoning Subskills and Time. The S-FAVRES was administered to 182 typically-developing (TD) adolescents and 57 adolescents with mild-to-severe ABI, all aged 12-19. Group differences, internal consistency, sensitivity, specificity, reliability and contributing factors to performance (age, gender, brain injury) were examined statistically. Those with ABI attained significantly lower Accuracy, Rationale and Reasoning Subskills scores than their TD peers. Time scores were not significantly different. Performance trends were consistent across tasks, administrations, gender and age groups. Inter-rater reliability for scoring was acceptable. The S-FAVRES provides a reliable, functional and quantifiable measure of subtle cognitive-communication difficulties in adolescents that can assist speech-language pathologists in planning treatment and integration to school and real-world communication.

  11. Effectiveness of different approaches to disseminating traveler information on travel time reliability.

    DOT National Transportation Integrated Search

    2014-01-01

    The second Strategic Highway Research Program (SHRP 2) Reliability program aims to improve trip time reliability by reducing the frequency and effects of events that cause travel times to fluctuate unpredictably. Congestion caused by unreliable, or n...

  12. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2017-11-05

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.
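    The abstract does not give EDEN's underlying formulas, so as a rough, hypothetical illustration of why reliability and power consumption must be evaluated together, the sketch below inflates per-operation energy by the expected number of retransmissions on unreliable links. The function and its numbers are invented for illustration and are not taken from the paper.

    ```python
    # Hypothetical sketch: an unreliable link inflates effective energy cost
    # via retransmissions. NOT the EDEN toolbox's model.

    def expected_energy(ops):
        """ops: list of (energy_per_attempt_mJ, delivery_probability) pairs."""
        total = 0.0
        for energy_mj, p in ops:
            if not 0.0 < p <= 1.0:
                raise ValueError("delivery probability must be in (0, 1]")
            total += energy_mj / p  # geometric retries: mean attempts = 1/p
        return total

    # Example: sensing (always succeeds), then two radio hops with 90% and 75% links.
    print(expected_energy([(0.5, 1.0), (2.0, 0.9), (2.0, 0.75)]))  # ~5.39 mJ
    ```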

  13. Linking CSF and cognition in Alzheimer's disease: Reanalysis of clinical data.

    PubMed

    Guhra, Michael; Thomas, Christine; Boedeker, Sebastian; Kreisel, Stefan; Driessen, Martin; Beblo, Thomas; Ohrmann, Patricia; Toepper, Max

    2016-01-01

    Memory and executive deficits are important cognitive markers of Alzheimer's disease (AD). Moreover, in the past decade, cerebrospinal fluid (CSF) biomarkers have been increasingly utilized in clinical practice. Both cognitive and CSF markers can be used to differentiate between AD patients and healthy seniors with high diagnostic accuracy. However, the extent to which performance on specific mnemonic or executive tasks enables reliable estimations of the concentrations of different CSF markers and their ratios remains unclear. To address the above issues, we examined the association between neuropsychological data and CSF biomarkers in 51 AD patients using hierarchical multiple regression analyses. In the first step of these analyses, age, education and sex were entered as predictors to control for possible confounding effects. In the second step, data from a neuropsychological test battery assessing episodic memory, semantic memory and executive functioning were included to determine whether these variables significantly increased (compared to step 1) the explained variance in Aβ42 concentration, p-tau concentration, t-tau concentration, Aβ42/t-tau ratio, and Aβ42/Aβ40 ratio. The different models explained 52% of the variance in Aβ42/t-tau ratio, 27% of the variance in Aβ42 concentration, and 28% of the variance in t-tau concentration. In particular, Aβ42/t-tau ratio was associated with verbal recognition and code shifting, with Aβ42 being related to verbal recognition and t-tau being related to code shifting. By contrast, the inclusion of neuropsychological data did not allow reliable estimations of Aβ42/Aβ40 ratio or p-tau concentration. Our results showed that strong associations exist between the cognitive key symptoms of AD and the concentrations and ratios of specific CSF markers. In addition, we revealed a specific combination of neuropsychological tests that may facilitate reliable estimations of CSF concentrations, thereby providing important diagnostic information for non-invasive early AD detection. Copyright © 2015 Elsevier Inc. All rights reserved.
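    A minimal sketch of the two-step hierarchical regression procedure described above, written with Python's statsmodels; the pandas DataFrame and the column names (abeta42_ttau_ratio, verbal_recognition, code_shifting, category_fluency) are hypothetical stand-ins for the study's variables, not its actual data.

    ```python
    # Two-step hierarchical regression: demographics first, then check how much
    # variance the neuropsychological predictors add (delta R^2).
    import statsmodels.formula.api as smf

    def r2_increment(df):
        # Step 1: demographic covariates only.
        step1 = smf.ols("abeta42_ttau_ratio ~ age + education + C(sex)",
                        data=df).fit()
        # Step 2: add the neuropsychological predictors.
        step2 = smf.ols(
            "abeta42_ttau_ratio ~ age + education + C(sex)"
            " + verbal_recognition + code_shifting + category_fluency",
            data=df,
        ).fit()
        return step1.rsquared, step2.rsquared - step1.rsquared

    # r2_demographics, r2_added_by_cognition = r2_increment(df)
    ```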

  14. A fast and reliable method for daily quality assurance in spot scanning proton therapy with a compact and inexpensive phantom.

    PubMed

    Bizzocchi, Nicola; Fracchiolla, Francesco; Schwarz, Marco; Algranati, Carlo

    2017-01-01

    In a radiotherapy center, daily quality assurance (QA) measurements are performed to ensure that the equipment can be safely used for patient treatment on that day. In a pencil beam scanning (PBS) proton therapy center, spot positioning, spot size, range, and dose output are usually verified every day before treatments. We designed, built, and tested a new, reliable, sensitive, and inexpensive phantom, coupled with an array of ionization chambers, for daily QA that reduces execution times while preserving the reliability of the test. The phantom is provided with 2 pairs of wedges to sample the Bragg peak at different depths, giving a transposition onto the transverse plane of the depth dose. Three "boxes" are used to check spot positioning and delivered dose. The box thickness helps spread the single spot so that a Gaussian profile can be fitted on a low-resolution detector. We tested whether our new QA solution could detect errors larger than our action levels: 1 mm in spot positioning, 2 mm in range, and 10% in spot size. Execution time was also investigated. Our method correctly detects 98% of spots that are actually within the spot-positioning tolerance and 99% of spots outside the 1 mm tolerance. All range variations greater than the threshold (2 mm) were correctly detected. The analysis performed over 1 month showed very good repeatability of spot characteristics. The time taken to perform the daily quality assurance is 20 minutes, half the execution time of the former multidevice procedure. This in-house-built phantom substitutes 2 very expensive detectors (a multilayer ionization chamber [MLIC] and a strip chamber), reducing the cost of the equipment by a factor of 5. We designed, built, and validated a phantom that allows for accurate, sensitive, fast, and inexpensive daily QA procedures in proton therapy with PBS. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
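    As a hedged illustration of the spot checks described above (fitting a Gaussian to the spread spot profile, then applying the 1 mm position and 10% size action levels), here is a small sketch; it is not the authors' software, and the function names and data layout are assumptions.

    ```python
    # Illustrative sketch: fit a Gaussian to a low-resolution lateral spot
    # profile and flag spots that exceed the stated action levels.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def check_spot(x_mm, signal, nominal_mu_mm, nominal_sigma_mm):
        # Seed the fit from the raw maximum and the nominal width.
        p0 = [signal.max(), x_mm[signal.argmax()], nominal_sigma_mm]
        (amp, mu, sigma), _ = curve_fit(gaussian, x_mm, signal, p0=p0)
        position_ok = abs(mu - nominal_mu_mm) <= 1.0                       # 1 mm
        size_ok = abs(sigma - nominal_sigma_mm) <= 0.1 * nominal_sigma_mm  # 10%
        return position_ok, size_ok
    ```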

  15. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  16. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    NASA Technical Reports Server (NTRS)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtably involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  17. DEFENSE MEDICAL SURVEILLANCE SYSTEM (DMSS)

    EPA Science Inventory

    AMSA operates the Defense Medical Surveillance System (DMSS), an executive information system whose database contains up-to-date and historical data on diseases and medical events (e.g., hospitalizations, ambulatory visits, reportable diseases, HIV tests, acute respiratory diseas...

  18. 41 CFR 60-30.30 - Final Administrative Order.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EXECUTIVE ORDER 11246 Post-Hearing Procedures § 60-30.30 Final Administrative Order. After expiration of the... appropriate, or any of the above. In any event, failure to comply with the Administrative order shall result...

  19. An Architecture for Autonomous Rovers on Future Planetary Missions

    NASA Astrophysics Data System (ADS)

    Ocon, J.; Avilés, M.; Graziano, M.

    2018-04-01

    This paper proposes an architecture for autonomous planetary rovers. This architecture combines a set of characteristics required in this type of system: a high level of abstraction, reactive event-based activity execution, and autonomous navigation.

  20. The hidden traps in decision making.

    PubMed

    Hammond, J S; Keeney, R L; Raiffa, H

    1998-01-01

    Bad decisions can often be traced back to the way the decisions were made--the alternatives were not clearly defined, the right information was not collected, the costs and benefits were not accurately weighed. But sometimes the fault lies not in the decision-making process but rather in the mind of the decision maker. The way the human brain works can sabotage the choices we make. John Hammond, Ralph Keeney, and Howard Raiffa examine eight psychological traps that are particularly likely to affect the way we make business decisions: The anchoring trap leads us to give disproportionate weight to the first information we receive. The status quo trap biases us toward maintaining the current situation--even when better alternatives exist. The sunk-cost trap inclines us to perpetuate the mistakes of the past. The confirming-evidence trap leads us to seek out information supporting an existing predilection and to discount opposing information. The framing trap occurs when we misstate a problem, undermining the entire decision-making process. The overconfidence trap makes us overestimate the accuracy of our forecasts. The prudence trap leads us to be overcautious when we make estimates about uncertain events. And the recallability trap leads us to give undue weight to recent, dramatic events. The best way to avoid all the traps is awareness--forewarned is forearmed. But executives can also take other simple steps to protect themselves and their organizations from the various kinds of mental lapses. The authors show how to take action to ensure that important business decisions are sound and reliable.

  1. Effectiveness comparison of partially executed t-way test suites generated by existing strategies

    NASA Astrophysics Data System (ADS)

    Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur

    2015-05-01

    Consuming 40 to 50 percent of the software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors, and strict time-to-market deadlines are among the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy based on t-way parameter interaction (called t-way testing) can be effective in reducing the number of test cases without affecting fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Test engineers therefore need to measure the effectiveness of a partially executed test suite in order to assess the risk they have to take. Motivated by this problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies, using a tuple-coverage method. With it, test engineers can predict the effectiveness of the testing process when only part of the original test cases is executed.
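    The tuple-coverage idea can be made concrete with a small sketch for t = 2 (pairwise) interactions: the effectiveness of an executed prefix is the fraction of all 2-way value tuples it covers. This is an illustrative simplification, not necessarily the exact metric used by the strategies compared in the paper.

    ```python
    # Tuple coverage for t = 2: which parameter-value pairs does a suite exercise?
    from itertools import combinations

    def covered_pairs(tests):
        """Each test is a tuple of parameter values; return the set of
        (param_i, value_i, param_j, value_j) interactions it exercises."""
        pairs = set()
        for test in tests:
            for (i, vi), (j, vj) in combinations(enumerate(test), 2):
                pairs.add((i, vi, j, vj))
        return pairs

    def partial_effectiveness(full_suite, executed_prefix):
        return len(covered_pairs(executed_prefix)) / len(covered_pairs(full_suite))

    # A pairwise-complete suite for 3 binary parameters; run only the first half.
    suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(partial_effectiveness(suite, suite[:2]))  # 0.5 of 2-way tuples covered
    ```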

  2. Executive dysfunction, brain aging, and political leadership.

    PubMed

    Fisher, Mark; Franklin, David L; Post, Jerrold M

    2014-01-01

    Decision-making is an essential component of executive function, and a critical skill of political leadership. Neuroanatomic localization studies have established the prefrontal cortex as the critical brain site for executive function. In addition to the prefrontal cortex, white matter tracts as well as subcortical brain structures are crucial for optimal executive function. Executive function shows a significant decline beginning at age 60, and this is associated with age-related atrophy of prefrontal cortex, cerebral white matter disease, and cerebral microbleeds. Notably, age-related decline in executive function appears to be a relatively selective cognitive deterioration, generally sparing language and memory function. While an individual may appear to be functioning normally with regard to relatively obvious cognitive functions such as language and memory, that same individual may lack the capacity to integrate these cognitive functions to achieve normal decision-making. From a historical perspective, global decline in cognitive function of political leaders has been alternatively described as a catastrophic event, a slowly progressive deterioration, or a relatively episodic phenomenon. Selective loss of executive function in political leaders is less appreciated, but increased utilization of highly sensitive brain imaging techniques will likely bring greater appreciation to this phenomenon. Former Israeli Prime Minister Ariel Sharon was an example of a political leader with a well-described neurodegenerative condition (cerebral amyloid angiopathy) that creates a neuropathological substrate for executive dysfunction. Based on the known neuroanatomical and neuropathological changes that occur with aging, we should probably assume that a significant proportion of political leaders over the age of 65 have impairment of executive function.

  3. Psychometric considerations in the measurement of event-related brain potentials: Guidelines for measurement and reporting.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
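    For contrast with the generalizability-theory approach the review recommends, here is a minimal sketch of a classical split-half internal-consistency estimate for single-trial ERP scores, the kind of estimate the review argues is less well suited to ERPs. The simulated data and the odd/even split are hypothetical choices, not the toolbox's method.

    ```python
    # Classical split-half reliability for ERP scores: average odd vs. even
    # trials per subject, correlate across subjects, Spearman-Brown correct.
    import numpy as np

    def split_half_reliability(trials):
        """trials: array of shape (n_subjects, n_trials) of single-trial scores."""
        odd = trials[:, 1::2].mean(axis=1)
        even = trials[:, 0::2].mean(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)  # Spearman-Brown correction to full length

    rng = np.random.default_rng(0)
    true_scores = rng.normal(size=(30, 1))
    data = true_scores + rng.normal(scale=2.0, size=(30, 64))  # noisy trials
    print(split_half_reliability(data))  # should exceed the 0.70/0.80 thresholds
    ```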

  4. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Grenander, Sven; Evensen, Ken

    2011-01-01

    - In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
    - The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
    - Applicable human reliability metrics for performing these atomic-level tasks are available.
    - The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
    - The PRA models are executed using data from human reliability data banks.
    - The Periodic Table is related to the PRA models via Fault Links.

  5. Method and apparatus for single-stepping coherence events in a multiprocessor system under software control

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2010-11-02

    An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine whether the snoop unit proceeds in a normal execution mode or in a single-step mode.

  6. National Forums '89. Citizens, Leaders Look at Our Democracy. A Report on the Conference (Washington, D.C., April 16-19, 1989).

    ERIC Educational Resources Information Center

    National Issues Forums, Dayton, OH.

    This publication presents reports from National Forums '89, the culminating event of the National Issues Forums (NIF) 1988-89 cycle. A brief overview of this event is followed by a summary of the session entitled Executive Branch Conference: Reports from the Forums, in which policymakers were briefed on the outcomes of each of the 1988-89 issues.…

  7. NSW Executive Enhancements

    DTIC Science & Technology

    1981-06-01

    independently on the same network. Given this reduction in scale, the projected impl… widely distributed, fully replicated, synchronized dat… design… Manager that "owns" other resources. This strategy requires minimum synchronization while providing advantages in reliability and robustness. … interactive tools on TENEX, transparent file motion and translation, and a primitive set of project management functions. This demonstration confirmed that…

  8. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault-tolerant designs generated for the error reporter and the global executive are examined. A description of the alternate routines, implementation requirements, and software validation is included.

  9. Young Children Detect and Avoid Logically Inconsistent Sources: The Importance of Communicative Context and Executive Function

    ERIC Educational Resources Information Center

    Doebel, Sabine; Rowell, Shaina F.; Koenig, Melissa A.

    2016-01-01

    The reported research tested the hypothesis that young children detect logical inconsistency in communicative contexts that support the evaluation of speakers' epistemic reliability. In two experiments (N = 194), 3- to 5-year-olds were presented with two speakers who expressed logically consistent or inconsistent claims. Three-year-olds failed to…

  10. Fourth Integrated Communications, Navigation, and Surveillance (ICNS) Conference and Workshop 2004: Conclusions and Recommendations

    NASA Technical Reports Server (NTRS)

    Phillips, Brent; Swanda, Ronald L.; Lewis, Michael S.; Kenagy, Randy; Donahue, George; Homans, Al; Kerczewski, Robert; Pozesky, Marty

    2004-01-01

    The NASA Glenn Research Center organized and hosted the Fourth Integrated Communications, Navigation, and Surveillance (ICNS) Technologies Conference and Workshop, which took place April 26-30, 2004 at the Hyatt Fair Lakes Hotel in Fairfax, Virginia. This fourth conference of the annual series followed the very successful first ICNS Conference (May 1-3, 2001 in Cleveland, Ohio), second ICNS conference (April 29-May 2, 2002 in Vienna, Virginia), and third ICNS conference (May 19-22, 2003 in Annapolis, Maryland). The purpose of the Fourth ICNS Conference was to assemble government, industry and academic communities performing research and development for advanced digital communications, surveillance and navigation systems and associated applications supporting the national and global air transportation systems to: 1) Understand current efforts and recent results in near- and far-term R&D and technology demonstration; 2) Identify integrated digital communications, navigation and surveillance R&D requirements necessary for a safe, secure and reliable, high-capacity, advanced air transportation system; 3) Provide a forum for fostering collaboration and coordination; and 4) Discuss critical issues and develop recommendations to achieve the future integrated CNS vision for national and global air transportation. The workshop attracted 316 attendees from government, industry and academia to address these purposes through technical presentations, breakout sessions, and individual and group discussions during the workshop and after-hours events, and included 16 international attendees. An Executive Committee consisting of representatives of several key segments of the aviation community concerned with CNS issues met on the day following the workshop to consider the primary outcomes and recommendations of the workshop. This report presents an overview of the conference, workshop breakout session results, and the findings of the Executive Committee.

  11. Effective Measurement of Reliability of Repairable USAF Systems

    DTIC Science & Technology

    2012-09-01

    Hansen presented a course, Concepts and Models for Repairable Systems Reliability, at the 2009 Centro de Investigacion en Mathematicas (CIMAT). The … recurrent event by calculating the mean quantity of recurrent events of the population of systems at risk at that point in time. The number of systems at risk is the number of systems that are operating and providing information. [9] Information can be obscured by data censoring and truncation. One…

  12. Non-stationarity in US droughts and implications for water resources planning and management

    NASA Astrophysics Data System (ADS)

    Apurv, T.; Cai, X.

    2017-12-01

    The concepts of return period and reliability are widely used in hydrology for quantifying the risk of extreme events. The conventional way of calculating return period and reliability requires the assumption of stationarity and independence of extreme events in successive years. These assumptions may not hold for droughts, since a single drought event can last for more than one year. Further, droughts are known to be influenced by multi-year to multi-decadal oscillations (e.g., the El Niño-Southern Oscillation (ENSO), the Atlantic Multidecadal Oscillation (AMO), and the Pacific Decadal Oscillation (PDO)), which means that the underlying distribution can change with time. In this study, we develop a non-stationary frequency analysis for relating meteorological droughts in the continental US (CONUS) to physical covariates. We calculate the return period and reliability of meteorological droughts in different parts of CONUS by considering the correlation and the non-stationarity in drought events. We then compare the return period and reliability calculated assuming non-stationarity with those calculated assuming stationarity. The difference between the two estimates is used to quantify the extent of non-stationarity in droughts in different parts of CONUS. We also use the non-stationary frequency analysis model to attribute the causes of non-stationarity. Finally, we will discuss the implications for water resources planning and management in the United States.
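    The stationary/non-stationary contrast at the heart of the study can be illustrated with a short sketch: under stationarity, n-year reliability is (1 - p)^n for a constant annual exceedance probability p, while under non-stationarity it becomes a product over time-varying probabilities p_t. The numbers below are hypothetical, not the study's estimates.

    ```python
    # Stationary vs. non-stationary reliability (probability of no exceedance).
    import numpy as np

    def reliability_stationary(p, n_years):
        # Constant annual exceedance probability p over n years.
        return (1 - p) ** n_years

    def reliability_nonstationary(p_t):
        # Annual exceedance probability varies with covariates (e.g., ENSO phase).
        return np.prod(1 - np.asarray(p_t))

    print(reliability_stationary(0.02, 30))                      # ~0.545
    p_t = 0.02 + 0.015 * np.sin(np.linspace(0, 3 * np.pi, 30))   # oscillating risk
    print(reliability_nonstationary(p_t))
    ```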

  13. 1996 Olympic and Paralympic Games event study : executive summary

    DOT National Transportation Integrated Search

    1997-05-01

    The Atlanta metropolitan region is the location of one of the most ambitious intelligent transportation system (ITS) deployments in the United States. The system links eight regional agencies and includes a transportation management center (TMC), six...

  14. Report: EPA Needs to Improve Continuity of Operations Planning

    EPA Pesticide Factsheets

    Report #10-P-0017, October 27, 2009. EPA has limited assurance that it can successfully maintain continuity of operations and execute its mission essential functions during a significant national event such as a pandemic influenza outbreak.

  15. 77 FR 70123 - Retrospective Review Under Executive Order 13579

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-23

    ... ongoing review of its regulations related to the recent events at the Fukushima Dai-ichi Nuclear Power Plant in Japan; and (5) the NRC's previous and ongoing efforts to update its regulations in a systematic...

  16. Trail Making Test: normative data for Turkish elderly population by age, sex and education.

    PubMed

    Cangoz, Banu; Karakoc, Ebru; Selekler, Kaynak

    2009-08-15

    The Trail Making Test (TMT) is a neuropsychological test with parts A and B that can precisely measure executive functions such as complex visual-motor conceptual screening, planning, organization, abstract thinking, and response inhibition. The main purpose of this study is to standardize the TMT for the Turkish adult and elderly population. The study consists of two main parts: a norm determination study and reliability/validity studies. The standardization study was carried out on 484 participants (238 female and 246 male). Participants at the age of 50 years and older were selected from a pool of people employed in or retired from governmental and/or private institutions. The research design involves mainly the following variables: age (7 subgroups), sex (2 subgroups) and education (3 subgroups). Age, sex and education have a significant influence on eight different kinds of TMT scores. Statistical analysis by ANOVA revealed a major effect of age (p<0.001) and education (p<0.001) on time spent on Part A or B, the time difference between Parts B and A, and the sum of Parts A and B. Similarly, the influence of sex (p<0.05) on time spent on Part A or B, and on the sum of Parts A and B, was shown to be significant. A Kruskal-Wallis test was performed, and chi-square (chi(2)) values revealed that correction scores for Parts A and B were influenced by age group (p<0.001). Test-retest reliability and inter-rater reliability coefficients for the time scores of Parts A and B were estimated as 0.78, 0.99 and 0.73, 0.93, respectively. This study provides normative data for a psychometric tool that reliably measures executive functions in the Turkish elderly population at the age of 50 and over.

  17. Runtime Verification of C Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2008-01-01

    We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by RCAT, a graphical state machine language recently developed at the Jet Propulsion Laboratory as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over them. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
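    A toy sketch of the underlying monitoring idea, i.e., checking a trace of abstract events against a state machine; the Python below does not reflect RMOR's actual textual input language or implementation.

    ```python
    # Trace monitoring against a state machine: abstract events drive
    # transitions; reaching the designated error state flags a violation.
    class Monitor:
        def __init__(self, transitions, start, error):
            self.transitions = transitions  # {(state, event): next_state}
            self.state = start
            self.error = error

        def step(self, event):
            # Unmatched (state, event) pairs leave the state unchanged.
            self.state = self.transitions.get((self.state, event), self.state)
            return self.state != self.error

    # Property: a resource must be opened before it is read.
    m = Monitor({("closed", "open"): "opened",
                 ("opened", "close"): "closed",
                 ("closed", "read"): "violation"},
                start="closed", error="violation")

    for ev in ["open", "read", "close", "read"]:  # abstract event trace
        if not m.step(ev):
            print("violation at event:", ev)
    ```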

  18. Uncertainty and Surprise: Ideas from the Open Discussion

    NASA Astrophysics Data System (ADS)

    Jordan, Michelle E.

    Approximately one hundred participants met for three days at a conference entitled "Uncertainty and Surprise: Questions on Working with the Unexpected and Unknowable." The conference participants were diverse, ranging from researchers in the natural and social sciences (business professors, physicists, ethnographers, nursing school deans) to practitioners and executives in public policy and management (business owners, health care managers, high-tech executives), all of whom had varying levels of experience and expertise in dealing with uncertainty and surprise. One group held the traditional, statistical view that uncertainty comes from variance and events that are described by a usually unimodal probability law. A second group was comfortable on the one hand with phase diagrams and the phase transitions that come from systems with multi-modal distributions, and on the other hand with deterministic chaos. A third group was comfortable with the emergent events from evolutionary processes that may not have any probability laws at all.

  19. Compositional Solution Space Quantification for Probabilistic Software Analysis

    NASA Technical Reports Server (NTRS)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
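    A minimal sketch of the quantification step under uniform input distributions: estimate the fraction of a bounded domain satisfying a path condition by sampling. The paper's actual contribution (interval constraint propagation to focus the sampling) is omitted here, and the example constraint is invented.

    ```python
    # Monte Carlo estimate of the probability that inputs drawn uniformly from
    # a bounded floating-point domain satisfy a path condition.
    import random

    def estimate_probability(condition, bounds, n_samples=100_000, seed=0):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            x = [rng.uniform(lo, hi) for lo, hi in bounds]
            hits += condition(*x)  # bool counts as 0/1
        return hits / n_samples

    # Target event: a nonlinear path condition over two floats in [-1, 1]^2.
    prob = estimate_probability(lambda x, y: x * x + y * y < 0.25,
                                bounds=[(-1.0, 1.0), (-1.0, 1.0)])
    print(prob)  # ~0.196 (circle area pi * 0.5**2 over domain area 4)
    ```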

  20. The reliability and validity of qualitative scores for the Controlled Oral Word Association Test.

    PubMed

    Ross, Thomas P; Calhoun, Emily; Cox, Tara; Wenner, Carolyn; Kono, Whitney; Pleasant, Morgan

    2007-05-01

    The reliability and validity of two qualitative scoring systems for the Controlled Oral Word Association Test [Benton, A. L., Hamsher, de S. K., & Sivan, A. B. (1983). Multilingual Aphasia Examination (2nd ed.). Iowa City, IA: AJA Associates] were examined in 108 healthy young adults. The scoring systems developed by Troyer et al. [Troyer, A. K., Moscovitch, M., & Winocur, G. (1997). Clustering and switching as two components of verbal fluency: Evidence from younger and older healthy adults. Neuropsychology, 11, 138-146] and by Abwender et al. [Abwender, D. A., Swan, J. G., Bowerman, J. T., & Connolly, S. W. (2001a). Qualitative analysis of verbal fluency output: Review and comparison of several scoring methods. Assessment, 8, 323-336] each demonstrated excellent interrater reliability (all indices at or above r(icc)=.9). Consistent with previous research [e.g., Ross, T. P. (2003). The reliability of cluster and switch scores for the COWAT. Archives of Clinical Neuropsychology, 18, 153-164], test-retest reliability coefficients (N=53; M interval 44.6 days) for the qualitative scores were modest to poor (r(icc)=.6 to .4 range). Correlations among COWAT scores, measures of executive functioning, verbal learning, working memory, and vocabulary were examined. The idea that qualitative scores represent distinct executive functions such as cognitive flexibility or strategy utilization was not supported. We offer the interpretation that COWAT performance may require the ability to retrieve words in a non-routine manner while suppressing habitual responses and associated processing interference, presumably due to a spread of activation across semantic or lexical networks. This interpretation, though speculative at present, implies that clustering and switching on the COWAT may not be entirely deliberate, but rather an artifact of a passive (i.e., state-dependent) process. Ideas for future research, most notably experimental studies using cognitive methods (e.g., priming), are discussed.

  1. Text messaging as a strategy to address the limits of audio-based communication during mass-gathering events with high ambient noise.

    PubMed

    Lund, Adam; Wong, Daniel; Lewis, Kerrie; Turris, Sheila A; Vaisler, Sean; Gutman, Samuel

    2013-02-01

    The provision of medical care in environments with high levels of ambient noise (HLAN), such as concerts or sporting events, presents unique communication challenges. Audio transmissions can be incomprehensible to the receivers. Text-based communications may be a valuable primary and/or secondary means of communication in this type of setting. To evaluate the usability of text-based communications in parallel with standard two-way radio communications during mass-gathering (MG) events in the context of HLAN. This Canadian study used outcome survey methods to evaluate the performance of communication devices during MG events. Ten standard commercially available handheld smart phones loaded with basic voice and data plans were assigned to health care providers (HCPs) for use as an adjunct to the medical team's typical radio-based communication. Common text messaging and chat platforms were trialed. Both efficacy and provider satisfaction were evaluated. During a 23-month period, the smart phones were deployed at 17 events with HLAN for a total of 40 event days or approximately 460 hours of active use. Survey responses from health care providers (177) and dispatchers (26) were analyzed. The response rate was unknown due to the method of recruitment. Of the 155 HCP responses to the question measuring difficulty of communication in environments with HLAN, 68.4% agreed that they "occasionally" or "frequently" found it difficult to clearly understand voice communications via two-way radio. Similarly, of the 23 dispatcher responses to the same item, 65.2% of the responses indicated that "occasionally" or "frequently" HLAN negatively affected the ability to communicate clearly with team members. Of the 168 HCP responses to the item assessing whether text-based communication improved the ability to understand and respond to calls when compared to radio alone, 86.3% "agreed" or "strongly agreed" that this was the case. The dispatcher responses (n = 21) to the same item also "agreed" or "strongly agreed" that this was the case 95.5% of the time. Conclusion: The use of smart phone technology for text-based communications is a practical and feasible tool for MG events and should be explored further. Multiple, reliable, discrete forms of communication technology are pivotal to executing effective on-site medical and disaster responses.

  2. A mediation skills model to manage disclosure of errors and adverse events to patients.

    PubMed

    Liebman, Carol B; Hyman, Chris Stern

    2004-01-01

    In 2002 Pennsylvania became the first state to impose on hospitals a statutory duty to notify patients in writing of a serious event. If the disclosure conversations are carefully planned, properly executed, and responsive to patients' needs, this new requirement creates possible benefits for both patient safety and litigation risk management. This paper describes a model for accomplishing these goals that encourages health care providers to communicate more effectively with patients following an adverse event or medical error, learn from mistakes, respond to the concerns of patients and families after an adverse event, and arrive at a fair and cost-effective resolution of valid claims.

  3. A Step Toward High Reliability: Implementation of a Daily Safety Brief in a Children's Hospital.

    PubMed

    Saysana, Michele; McCaskey, Marjorie; Cox, Elaine; Thompson, Rachel; Tuttle, Lora K; Haut, Paul R

    2017-09-01

    Health care is a high-risk industry. To improve communication about daily events and begin the journey toward a high reliability organization, the Riley Hospital for Children at Indiana University Health implemented a daily safety brief. Various departments in our children's hospital were asked to participate in a daily safety brief, reporting daily events and unexpected outcomes within their scope of responsibility. Participants were surveyed before and after implementation of the safety brief about communication and awareness of events in the hospital. The length of the brief and percentage of departments reporting unexpected outcomes were measured. The analysis of the presurvey and the postsurvey showed a statistically significant improvement in the questions related to the awareness of daily events as well as communication and relationships between departments. The monthly mean length of time for the brief was 15 minutes or less. Unexpected outcomes were reported by 50% of the departments for 8 months. A daily safety brief can be successfully implemented in a children's hospital. Communication between departments and awareness of daily events were improved. Implementation of a daily safety brief is a step toward becoming a high reliability organization.

  4. Extension of the Contingency Naming Test to adult assessment: psychometric analysis in a college student sample.

    PubMed

    Riddle, Tara; Suhr, Julie

    2012-01-01

    The Contingency Naming Test (CNT; Taylor, Albo, Phebus, Sachs, & Bierl, 1987) was initially designed to assess aspects of executive functioning, such as processing speed and response inhibition, in children. The measure has shown initial utility in identifying differences in executive function among child clinical groups; however, there is an absence of adequate psychometric data for use with adults. The current study extended the psychometric data upward for use with a college student sample and explored the measure's test-retest reliability and factor structure. Performance in the adult sample showed continued improvement above child norms, consistent with theories of executive function development. Exploratory factor analysis showed that the CNT is most closely related to measures of processing speed, as well as to elements of response inhibition within the later trials. Overall, results from the current study provide added support for the utility of the CNT as a measure of executive functioning in young adults. However, more research is needed to determine patterns of performance among adult clinical groups, as well as to better understand how performance patterns may change in a broader age range, including middle and older adulthood.

  5. Should this event be notified to the World Health Organization? Reliability of the international health regulations notification assessment process.

    PubMed

    Haustein, Thomas; Hollmeyer, Helge; Hardiman, Max; Harbarth, Stephan; Pittet, Didier

    2011-04-01

    To investigate the reliability of the public health event notification assessment process under the International Health Regulations (2005) (IHR). In 2009, 193 National IHR Focal Points (NFPs) were invited to use the decision instrument in Annex 2 of the IHR to determine whether 10 fictitious public health events should be notified to WHO. Each event's notifiability was assessed independently by an expert panel. The degree of consensus among NFPs and of concordance between NFPs and the expert panel was considered high when more than 70% agreed on a response. Overall, 74% of NFPs responded. The median degree of consensus among NFPs on notification decisions was 78%. It was high for the six events considered notifiable by the majority (median: 80%; range: 76-91) but low for the remaining four (median: 55%; range: 54-60). The degree of concordance between NFPs and the expert panel was high for the five events deemed notifiable by the panel (median: 82%; range: 76-91) but low (median: 51%; range: 42-60) for those not considered notifiable. The NFPs identified notifiable events with greater sensitivity than specificity (P < 0.001). When used by NFPs, the notification assessment process in Annex 2 of the IHR was sensitive in identifying public health events that were considered notifiable by an expert panel, but only moderately specific. The reliability of the assessments could be increased by expanding guidance on the use of the decision instrument and by including more specific criteria for assessing events and clearer definitions of terms.

  6. Psychometric assessment of the IBS-D Daily Symptom Diary and Symptom Event Log.

    PubMed

    Rosa, Kathleen; Delgado-Herrera, Leticia; Zeiher, Bernie; Banderas, Benjamin; Arbuckle, Rob; Spears, Glen; Hudgens, Stacie

    2016-12-01

    Diarrhea-predominant irritable bowel syndrome (IBS-D) can considerably impact patients' lives. Patient-reported symptoms are crucial in understanding the diagnosis and progression of IBS-D. This study psychometrically evaluates the newly developed IBS-D Daily Symptom Diary and Symptom Event Log (hereafter, "Event Log") according to US regulatory recommendations. A US-based observational field study was conducted to understand cross-sectional psychometric properties of the IBS-D Daily Symptom Diary and Event Log. Analyses included item descriptive statistics, item-to-item correlations, reliability, and construct validity. The IBS-D Daily Symptom Diary and Event Log had no items with excessive missing data. With the exception of two items ("frequency of gas" and "accidents"), moderate to high inter-item correlations were observed among all items of the IBS-D Daily Symptom Diary and Event Log (day 1 range 0.67-0.90). Item scores demonstrated reliability, with the exception of the "frequency of gas" and "accidents" items of the Diary and "incomplete evacuation" item of the Event Log. The pattern of correlations of the IBS-D Daily Symptom Diary and Event Log item scores with generic and disease-specific measures was as expected, moderate for similar constructs and low for dissimilar constructs, supporting construct validity. Known-groups methods showed statistically significant differences and monotonic trends in each of the IBS-D Daily Symptom Diary item scores among groups defined by patients' IBS-D severity ratings ("none"/"mild," "moderate," or "severe"/"very severe"), supporting construct validity. Initial psychometric results support the reliability and validity of the items of the IBS-D Daily Symptom Diary and Event Log.

  7. A reliable multicast for XTP

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicast services needed for current distributed applications on LANs fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport-layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as for true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after the data transfer.
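    The 'fire and forget' datagram category can be illustrated with ordinary UDP multicast, as in the hedged sketch below. XTP is a distinct transport protocol with its own packet formats, so this shows only the service semantics (no error, flow, or rate control), not XTP itself; the group address and payload are arbitrary.

    ```python
    # Unreliable multicast datagram: send once, with no delivery guarantee.
    import socket

    GROUP, PORT = "239.1.2.3", 5007  # arbitrary administratively scoped group

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on LAN
    sender.sendto(b"sensor reading: 42", (GROUP, PORT))  # fire and forget
    sender.close()
    ```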

  8. Program Instrumentation and Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2002-01-01

    Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This reflects a current trend toward analyzing real software systems instead of just their designs. It includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that instruments Java bytecode files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report the name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis, the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in different executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race does not need to occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
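    A toy sketch of the deadlock-potential idea mentioned above: record the order in which each thread nests lock acquisitions during a single trace, and report cycles in the resulting lock-order graph. This illustrates the principle that a potential can be detected even when no deadlock actually occurred; it is not JPaX's implementation.

    ```python
    # Lock-order graph from one trace; a cycle signals a potential deadlock.
    from collections import defaultdict

    def lock_order_cycles(trace):
        """trace: list of (thread, op, lock) with op in {'acquire', 'release'}."""
        held = defaultdict(list)   # thread -> stack of currently held locks
        edges = defaultdict(set)   # lock -> locks acquired while holding it
        for thread, op, lock in trace:
            if op == "acquire":
                for outer in held[thread]:
                    edges[outer].add(lock)
                held[thread].append(lock)
            else:
                held[thread].remove(lock)

        # Depth-first search for cycles in the lock-order graph.
        def reachable(src, dst, seen=frozenset()):
            if src == dst:
                return True
            return any(reachable(n, dst, seen | {src})
                       for n in edges.get(src, ()) if n not in seen)

        return [(a, b) for a in list(edges) for b in edges[a] if reachable(b, a)]

    # T1 nests A then B; T2 nests B then A: no deadlock occurs in this trace,
    # but the reversed lock order is a potential deadlock.
    trace = [("T1", "acquire", "A"), ("T1", "acquire", "B"),
             ("T1", "release", "B"), ("T1", "release", "A"),
             ("T2", "acquire", "B"), ("T2", "acquire", "A"),
             ("T2", "release", "A"), ("T2", "release", "B")]
    print(lock_order_cycles(trace))  # [('A', 'B'), ('B', 'A')]
    ```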

  9. Differentiating between Alzheimer's Disease and Vascular Cognitive Impairment: Is the "Memory Versus Executive Function" Contrast Still Relevant?

    PubMed

    Andriuta, Daniela; Roussel, Martine; Barbay, Mélanie; Despretz-Wannepain, Sandrine; Godefroy, Olivier

    2018-01-01

    The contrast between memory versus executive function impairments is commonly used to differentiate between neurocognitive disorders (NCDs) due to Alzheimer's disease (AD) and vascular cognitive impairment (VCI). We reconsidered this question because of the current use of AD biomarkers and the recent revision of the criteria for AD, VCI, and dysexecutive syndrome. Our aim was to establish and compare the neuropsychological profiles in AD (i.e., with positive CSF biomarkers) and in VCI. We included 62 patients with mild or major NCDs due to pure AD (with positive CSF biomarker assays), and 174 patients (from the GRECogVASC cohort) with pure VCI. The neuropsychological profiles were compared after stratification for disease severity (mild or major NCD). We defined a memory-executive function index (the mean z score for the third free recall and the delayed free recall in the Free and Cued Selective Reminding Test minus the mean z score for category fluency and the completion time in the Trail Making Test part B) and determined its diagnostic accuracy. Compared with VCI patients, patients with AD had significantly greater memory impairments (p = 0.001). Executive function was impaired to a similar extent in the two groups (p = 0.11). Behavioral executive disorders were more prominent in the AD group (p = 0.001). Although the two groups differed significantly with regard to the memory-executive function index (p < 0.001), the latter's diagnostic accuracy was only moderate (sensitivity: 63%, specificity: 87%). Although the contrast between memory and executive function impairments was supported at the group level, it does not reliably discriminate between AD and VCI at the individual level.
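    The index definition above translates directly into code. A minimal sketch, assuming the four scores are already z-standardized against appropriate norms and that the Trail Making Test time has been sign-flipped so that lower z means worse performance; the example values are invented.

    ```python
    # Memory-executive function index as defined in the abstract:
    # mean(memory z scores) - mean(executive z scores).
    def memory_executive_index(fcsrt_fr3, fcsrt_dfr, fluency, tmt_b_time):
        memory = (fcsrt_fr3 + fcsrt_dfr) / 2      # FCSRT third + delayed free recall
        executive = (fluency + tmt_b_time) / 2    # category fluency, TMT part B
        return memory - executive                 # more negative: AD-like profile

    # Example: impaired recall (-2 SD) with relatively spared executive scores.
    print(memory_executive_index(-2.0, -2.2, -0.5, -0.4))  # -1.65
    ```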

  10. The Single Event Effect Characteristics of the 486-DX4 Microprocessor

    NASA Technical Reports Server (NTRS)

    Kouba, Coy; Choi, Gwan

    1996-01-01

    This research describes the development of an experimental radiation testing environment to investigate the single event effect (SEE) susceptibility of the 486-DX4 microprocessor. SEEs are caused by radiation particles that disrupt the logic state of an operating semiconductor, and include single event upsets (SEU) and single event latchup (SEL). This work applies directly to digital devices used in spaceflight computer systems. The 486-DX4 is a powerful commercial microprocessor that is currently under consideration for use in several spaceflight systems. As part of its selection process, it must be rigorously tested to determine its overall reliability in the space environment, including its radiation susceptibility. The goal of this research is to experimentally test and characterize the single event effects of the 486-DX4 microprocessor using a cyclotron facility as the fault-injection source. The test philosophy focuses on "operational susceptibility": executing real software and monitoring for errors while the device is under irradiation. This research encompasses both experimental and analytical techniques, and yields a characterization of the 486-DX4's behavior for different operating modes. Additionally, the test methodology can accommodate a wide range of digital devices, such as microprocessors, microcontrollers, ASICs, and memory modules, for future testing. The goals were achieved by testing with three heavy-ion species to provide different linear energy transfer rates, and a total of six microprocessor parts from two different vendors were tested. A consistent set of error modes was identified, indicating the manner in which the errors were detected in the processor. The upset cross-section curves were calculated for each error mode, and the SEU threshold and saturation levels were identified for each processor. Results show a distinct difference in the upset rate for different configurations of the on-chip cache, and show that one vendor's parts are superior to the other's in terms of latchup susceptibility. Results from this testing were also used to provide a mean-time-between-failure estimate for the 486-DX4 operating in the radiation environment of the International Space Station.
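
    The upset cross-section curves mentioned above follow from the standard definition of cross section as upsets per unit particle fluence; the sketch below (ours, not the authors' analysis code) also reads off a crude threshold and saturation estimate from runs sorted by linear energy transfer (LET):

      def upset_cross_section(upsets, fluence_cm2):
          """Device cross section in cm^2: observed upsets per unit particle fluence."""
          return upsets / fluence_cm2

      def threshold_and_saturation(runs):
          """runs: list of (let_MeV_cm2_per_mg, upsets, fluence_cm2), sorted by LET.

          Returns a crude LET threshold (lowest LET producing any upsets) and a
          saturation cross section (the value at the highest LET tested).
          """
          sigma = [(let, upset_cross_section(n, f)) for let, n, f in runs]
          nonzero = [let for let, s in sigma if s > 0]
          threshold = min(nonzero) if nonzero else None
          saturation = sigma[-1][1]
          return threshold, saturation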

  11. Reasoning, learning, and creativity: frontal lobe function and human decision-making.

    PubMed

    Collins, Anne; Koechlin, Etienne

    2012-01-01

    The frontal lobes subserve decision-making and executive control--that is, the selection and coordination of goal-directed behaviors. Current models of frontal executive function, however, do not explain human decision-making in everyday environments featuring uncertain, changing, and especially open-ended situations. Here, we propose a computational model of human executive function that clarifies this issue. Using behavioral experiments, we show that unlike others, the proposed model predicts human decisions and their variations across individuals in naturalistic situations. The model reveals that for driving action, the human frontal function monitors up to three/four concurrent behavioral strategies and infers online their ability to predict action outcomes: whenever one appears more reliable than unreliable, this strategy is chosen to guide the selection and learning of actions that maximize rewards. Otherwise, a new behavioral strategy is tentatively formed, partly from those stored in long-term memory, then probed, and if competitive confirmed to subsequently drive action. Thus, the human executive function has a monitoring capacity limited to three or four behavioral strategies. This limitation is compensated by the binary structure of executive control that in ambiguous and unknown situations promotes the exploration and creation of new behavioral strategies. The results support a model of human frontal function that integrates reasoning, learning, and creative abilities in the service of decision-making and adaptive behavior.

  12. Reasoning, Learning, and Creativity: Frontal Lobe Function and Human Decision-Making

    PubMed Central

    Collins, Anne; Koechlin, Etienne

    2012-01-01

    The frontal lobes subserve decision-making and executive control—that is, the selection and coordination of goal-directed behaviors. Current models of frontal executive function, however, do not explain human decision-making in everyday environments featuring uncertain, changing, and especially open-ended situations. Here, we propose a computational model of human executive function that clarifies this issue. Using behavioral experiments, we show that unlike others, the proposed model predicts human decisions and their variations across individuals in naturalistic situations. The model reveals that for driving action, the human frontal function monitors up to three/four concurrent behavioral strategies and infers online their ability to predict action outcomes: whenever one appears more reliable than unreliable, this strategy is chosen to guide the selection and learning of actions that maximize rewards. Otherwise, a new behavioral strategy is tentatively formed, partly from those stored in long-term memory, then probed, and if competitive confirmed to subsequently drive action. Thus, the human executive function has a monitoring capacity limited to three or four behavioral strategies. This limitation is compensated by the binary structure of executive control that in ambiguous and unknown situations promotes the exploration and creation of new behavioral strategies. The results support a model of human frontal function that integrates reasoning, learning, and creative abilities in the service of decision-making and adaptive behavior. PMID:22479152

  13. Mission Data System Java Edition Version 7

    NASA Technical Reports Server (NTRS)

    Reinholtz, William K.; Wagner, David A.

    2013-01-01

    The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
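
    To make the goal-network idea above concrete, here is a toy sketch (ours, far simpler than the Mission Data System itself) in which a goal constrains one state variable over an explicit time interval, and a plan-time check rejects any two goals that demand different values of the same variable over overlapping intervals:

      from dataclasses import dataclass

      @dataclass
      class Goal:
          state_var: str     # e.g. "camera_power" (hypothetical name)
          required: str      # constrained value over the interval, e.g. "on"
          start: float       # interval start (s)
          end: float         # interval end (s)

      def overlaps(a, b):
          return a.start < b.end and b.start < a.end

      def verify_plan(goals):
          """Plan-time check: no two goals may demand different values of the
          same state variable over overlapping time intervals."""
          conflicts = []
          for i, g in enumerate(goals):
              for h in goals[i + 1:]:
                  if (g.state_var == h.state_var and overlaps(g, h)
                          and g.required != h.required):
                      conflicts.append((g, h))
          return conflicts   # empty list -> plan passes this check

    A real goal network also handles goal elaboration, resource constraints, and runtime re-planning; this check corresponds only to the plan-time verification step mentioned in the abstract.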

  14. Reliability Concerns for Flying SiC Power MOSFETs in Space

    NASA Technical Reports Server (NTRS)

    Galloway, K. F.; Witulski, A. F.; Schrimpf, R. D.; Sternberg, A. L.; Ball, D. R.; Javanainen, A.; Reed, R. A.; Sierawski, B. D.; Lauenstein, J.-M.

    2018-01-01

    SiC power MOSFETs are space-ready in terms of typical reliability measures. However, single event burnout (SEB) often occurs at voltages at or below 50% of the specified breakdown voltage. Data illustrating burnout in 1200 V devices are reviewed, and the space reliability of SiC MOSFETs is discussed.

  15. Psychometric characteristics of the BRIEF scale for the assessment of executive functions in Spanish clinical population.

    PubMed

    García Fernández, Trinidad; González-Pienda, Julio Antonio; Rodríguez Pérez, Celestino; Álvarez García, David; Álvarez Pérez, Luis

    2014-01-01

    The Behavior Rating Inventory of Executive Functions (BRIEF) scale, completed by families, is widely known in the assessment of executive functions in children and adolescents. However, its application has been limited to English-speaking populations. This study analyzes the preliminary results from its application in a Spanish clinical sample comprising 125 participants aged 5-18 years. The internal structure and reliability of the translated scale were analyzed, as well as its relationship with other behavioral measures, through correlations with the Assessment of Attention Deficit Hyperactivity Disorder Scale (EDAH). The results were compared with those from the original validation study. The data revealed the same internal structure, as well as acceptable internal consistency and significant correlations with the Attention Deficit and Hyperactivity components of the EDAH scale. This study provides preliminary evidence of the utility of the BRIEF scale in cultural contexts different from the original, particularly in the Spanish clinical population.

  16. 76 FR 76771 - Proposed Submission of Information Collections for OMB Review; Comment Request; Reportable Events...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-08

    ... and in the spirit of Executive Order 13563 on Improving Regulation and Regulatory Review, PBGC plans... indicate plan or employer financial problems. PBGC uses the information provided in determining what, if...

  17. Contact technique and concussions in the South African under-18 Coca-Cola Craven Week Rugby tournament.

    PubMed

    Hendricks, Sharief; O'connor, Sam; Lambert, Michael; Brown, James; Burger, Nicholas; Mc Fie, Sarah; Readhead, Clint; Viljoen, Wayne

    2015-01-01

    In rugby union, understanding the techniques and events leading to concussions is important because of the nature of the injury and the severity and potential long-term consequences, particularly in junior players. Proper contact technique is a prerequisite for successful participation in rugby and is a major factor associated with injury. However, the execution of proper contact technique and its relationship to injury has yet to be studied in matches. Therefore, the aim of this study was to compare contact techniques leading to concussion with a representative sample of similarly matched non-injury (NI) contact events. Injury surveillance was conducted at the 2011-2013 under-18 Craven Week Rugby tournaments. Video footage of 10 concussive events (5 tackle, 4 ruck and 1 aerial collision) and 83 NI events were identified (19 tackle, 61 ruck and 3 aerial collisions). Thereafter, each phase of play was analysed using standardised technical proficiency criteria. Overall score for ruck proficiency in concussive events was 5.67 (out of a total of 15) vs. 6.98 for NI events (n = 54) (effect size = 0.52, small). Overall average score for tackler proficiency was 7.25 (n = 4) and 6.67 (n = 15) for injury and NI tackles, respectively (out of 16) (effect size = 0.19, trivial). This is the first study to compare concussion injury contact technique to a player-matched sample of NI contact techniques. Certain individual technical criteria had an effect towards an NI outcome, and others had an effect towards a concussive event, highlighting that failure to execute certain techniques may substantially increase the opportunity for concussion.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin; Tuffner, Frank; Elizondo, Marcelo

    Regulated electricity utilities are required to provide safe and reliable service to their customers at a reasonable cost. To balance the objectives of reliable service and reasonable cost, utilities build and operate their systems for typical historical conditions. As a result, when abnormal events such as major storms or disasters occur, it is not uncommon to have extensive interruptions in service to the end-use customers. Because it is not cost effective to make the existing electrical infrastructure 100% reliable, society has come to expect disruptions during abnormal events. However, with the increasing number of abnormal weather events, the public is becoming less tolerant of these disruptions. One possible solution is to deploy microgrids as part of a coordinated resiliency plan to minimize the interruption of power to essential loads. This paper evaluates the feasibility of using microgrids as a resiliency resource, including their possible benefits and the associated technical challenges. A use-case of an operational microgrid is included.

  19. Interrater reliability of visually evaluated high frequency oscillations.

    PubMed

    Spring, Aaron M; Pittman, Daniel J; Aghakhani, Yahya; Jirsch, Jeffrey; Pillay, Neelan; Bello-Espinosa, Luis E; Josephson, Colin; Federico, Paolo

    2017-03-01

    High frequency oscillations (HFOs) and interictal epileptiform discharges (IEDs) have been shown to be markers of epileptogenic regions. However, there is currently no 'gold standard' for identifying HFOs. Accordingly, we aimed to formally characterize the interrater reliability of HFO markings to validate the current practices. A morphology detector was implemented to detect events (candidate HFOs, lower-threshold events, and distractors) from the intracranial EEG (iEEG) of ten patients. Six electroencephalographers visually evaluated these events for the presence of HFOs and IEDs. Interrater reliability was calculated using pairwise Cohen's Kappa (κ) and intraclass correlation coefficients (ICC). The HFO evaluation distributions were significantly different for most pairs of reviewers (p < 0.05; 11/15 pairs). Interrater reliability was poor for HFOs alone (mean κ = 0.403; ICC = 0.401) and for HFOs+IEDs (mean κ = 0.568; ICC = 0.570). The current practice of using two visual reviewers to identify HFOs is prone to bias arising from the poor agreement between reviewers, limiting the extrinsic validity of studies using these markers. The poor interrater reliability underlines the need for a framework to reconcile the important findings of existing studies. The present epoched design is an ideal candidate for the implementation of such a framework. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
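
    For reference, the pairwise Cohen's kappa used above corrects raw agreement for the agreement expected by chance; a minimal sketch for two reviewers' binary HFO judgments (illustrative only, not the authors' pipeline):

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa for two equal-length lists of binary labels (1 = HFO).

          Assumes chance agreement < 1 (i.e., raters are not both constant).
          """
          assert len(rater_a) == len(rater_b) and rater_a
          n = len(rater_a)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          p_a = sum(rater_a) / n   # rate of positive marks, rater A
          p_b = sum(rater_b) / n   # rate of positive marks, rater B
          expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # chance agreement
          return (observed - expected) / (1 - expected)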

  20. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1984-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. The primary activities are: (1) continued development and testing of our fault-tolerant Ada testbed; (2) consideration of desirable language changes to allow Ada to provide useful semantics for failure; (3) analysis of the inadequacies of existing software fault tolerance strategies.

  1. Exploiting replication in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, T. A.

    1989-01-01

    Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.

  2. Impaired prospective memory but intact episodic memory in intellectually average 7- to 9-year-olds born very preterm and/or very low birth weight.

    PubMed

    Ford, Ruth M; Griffiths, Sarah; Neulinger, Kerryn; Andrews, Glenda; Shum, David H K; Gray, Peter H

    2017-11-01

    Relatively little is known about episodic memory (EM: memory for personally-experienced events) and prospective memory (PM: memory for intended actions) in children born very preterm (VP) or with very low birth weight (VLBW). This study evaluates EM and PM in mainstream-schooled 7- to 9-year-olds born VP (≤ 32 weeks) and/or VLBW (< 1500 g) and in matched full-term comparison children (n = 35 and n = 37, respectively). Additionally, participants were assessed for verbal and non-verbal ability, executive function (EF), and theory of mind (ToM). The results show that the VP/VLBW children were outperformed by the full-term children on the memory tests overall, with a significant univariate group difference in PM. Moreover, within the VP/VLBW group, the measures of PM, verbal ability, and working memory all displayed reliable negative correlations with severity of neonatal illness. PM was found to be independent of EM and cognitive functioning, suggesting that this form of memory might constitute a domain of specific vulnerability for VP/VLBW children.

  3. Substance use disorders and Cluster B personality disorders: physiological, cognitive, and environmental correlates in a college sample.

    PubMed

    Taylor, Jeanette

    2005-01-01

    Substance use disorders (SUDs) and Cluster B personality disorders (PDs) are both marked by impulsivity and poor behavioral control and may result in part from shared neurobiological or executive cognitive functioning deficits. To examine the potential utility of such models in explaining variance in SUDs and PDs at the lower end of symptom expression and impairment, 123 (73 female) volunteer college students were administered 2 measures of executive cognitive functioning; a task assessing autonomic reactivity to aversive noise blasts; a life events and a peer substance use measure; and structured clinical interviews to assess symptoms of substance abuse/dependence and antisocial, borderline, histrionic, and narcissistic PDs. As expected, symptoms of SUDs and PDs were significantly positively correlated. Antisocial PD, alcohol and cannabis use disorder symptoms were significantly positively related to proportion of friends who use alcohol and drugs regularly and drug use among romantic partners. Number of negative life events was positively related to PD symptoms and to alcohol use disorder symptoms. Executive cognitive functioning was not related to SUD and PD symptoms in the expected direction. Findings suggest that, among higher functioning young adults, environmental factors may be particularly relevant to our understanding of SUDs and certain PDs.

  4. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    [Garbled OCR fragment; recoverable system categories: manned and unmanned spacecraft, batch systems, airborne avionics, event control, real-time closed-loop operations.] ... software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were...

  5. Performance Data Gathering and Representation from Fixed-Size Statistical Data

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The two commonly used performance data types in the supercomputing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power that event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance, and 'formulae' to represent sequences associated with communication and control flow. The user can incrementally trade off tracing overhead and trace data size against data quality. In other words, the user is able to limit the amount of trace data collected and, at the same time, carry out some of the analysis that event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.
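
    A minimal sketch of the two ideas just described (ours, with invented names): per-instance records are folded into running averages, and repeated control-flow events are stored as compact (event, repeat-count) 'formulae' rather than logged one by one:

      class StatTrace:
          def __init__(self):
              self.stats = {}      # event name -> (count, mean duration)
              self.sequence = []   # compressed control-flow sequence

          def record(self, name, duration):
              count, mean = self.stats.get(name, (0, 0.0))
              count += 1
              mean += (duration - mean) / count   # running average, O(1) space
              self.stats[name] = (count, mean)
              # Fold immediate repetitions into [name, repeat_count] entries.
              if self.sequence and self.sequence[-1][0] == name:
                  self.sequence[-1][1] += 1
              else:
                  self.sequence.append([name, 1])

    Under this scheme a loop that sends ten thousand identical messages collapses to a single sequence entry plus one (count, mean) pair, which is why the trace size stays bounded and predictable before execution.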

  6. Frequency of the first feature in action sequences influences feature binding.

    PubMed

    Mattson, Paul S; Fournier, Lisa R; Behmer, Lawrence P

    2012-10-01

    We investigated whether binding among perception and action feature codes is a preliminary step toward creating a more durable memory trace of an action event. If so, increasing the frequency of a particular event (e.g., a stimulus requiring a movement with the left or right hand in an up or down direction) should increase the strength and speed of feature binding for this event. The results from two experiments, using a partial-repetition paradigm, confirmed that feature binding increased in strength and/or occurred earlier for a high-frequency (e.g., left hand moving up) than for a low-frequency (e.g., right hand moving down) event. Moreover, increasing the frequency of the first-specified feature in the action sequence alone (e.g., "left" hand) increased the strength and/or speed of action feature binding (e.g., between the "left" hand and movement in an "up" or "down" direction). The latter finding suggests an update to the theory of event coding, as not all features in the action sequence equally determine binding strength. We conclude that action planning involves serial binding of features in the order of action feature execution (i.e., associations among features are not bidirectional but are directional), which can lead to a more durable memory trace. This is consistent with physiological evidence suggesting that serial order is preserved in an action plan executed from memory and that the first feature in the action sequence may be critical in preserving this serial order.

  7. Development and testing of an assessment instrument for the formative peer review of significant event analyses.

    PubMed

    McKay, J; Murphy, D J; Bowie, P; Schmuck, M-L; Lough, M; Eva, K W

    2007-04-01

    To establish the content validity and specific aspects of reliability for an assessment instrument designed to provide formative feedback to general practitioners (GPs) on the quality of their written analysis of a significant event. Content validity was quantified by application of a content validity index. Reliability testing involved a nested design, with 5 cells, each containing 4 assessors, rating 20 unique significant event analysis (SEA) reports (10 each from experienced GPs and GPs in training) using the assessment instrument. The variance attributable to each identified variable in the study was established by analysis of variance. Generalisability theory was then used to investigate the instrument's ability to discriminate among SEA reports. Content validity was demonstrated with at least 8 of 10 experts endorsing all 10 items of the assessment instrument. The overall G coefficient for the instrument was moderate to good (G>0.70), indicating that the instrument can provide consistent information on the standard achieved by the SEA report. There was moderate inter-rater reliability (G>0.60) when four raters were used to judge the quality of the SEA. This study provides the first steps towards validating an instrument that can provide educational feedback to GPs on their analysis of significant events. The key area identified to improve instrument reliability is variation among peer assessors in their assessment of SEA reports. Further validity and reliability testing should be carried out to provide GPs, their appraisers and contractual bodies with a validated feedback instrument on this aspect of the general practice quality agenda.

  8. Addressing Uniqueness and Unison of Reliability and Safety for a Better Integration

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Safie, Fayssal

    2016-01-01

    Over time, it has been observed that Safety and Reliability have not been clearly differentiated, which leads to confusion, inefficiency, and sometimes counterproductive practices in executing each of these two disciplines. It is imperative to address this situation to help the Reliability and Safety disciplines improve their effectiveness and efficiency. The paper poses an important question: "Safety and Reliability - are they unique or unisonous?" To answer it, the paper reviews several of the most commonly used analyses from each discipline, namely FMEA, reliability allocation and prediction, reliability design involvement, system safety hazard analysis, Fault Tree Analysis, and Probabilistic Risk Assessment. The paper points out the uniqueness and unison of Safety and Reliability in their respective roles, requirements, approaches, and tools, and presents some suggestions for enhancing and improving the individual disciplines, as well as promoting the integration of the two. The paper concludes that Safety and Reliability are unique but complement each other in many aspects, and need to be integrated. In particular, their individual roles need to be differentiated: Safety is to ensure and assure that the product meets safety requirements, goals, or desires, and Reliability is to ensure and assure maximum achievability of the intended design functions. With the integration of Safety and Reliability, personnel can be shared, tools and analyses have to be integrated, and skill sets can be possessed by the same person, with the purpose of providing the best value to a product development.

  9. 7 CFR 1755.522 - RUS general specification for digital, stored program controlled central office equipment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) shall be detected and the equivalent of a carrier group alarm shall be executed in 2.5 ±0.5 seconds.../federal_register/code_of_federal_regulations/ibr_locations.html. (b) Reliability. (1) Quality control and... designed such that the expected individual line downtime does not exceed 30 minutes per year. This is the...

  10. Mitigating earthquakes; the federal role

    USGS Publications Warehouse

    Press, F.

    1977-01-01

    With the rapid approach of a capability to make reliable earthquake forecasts, it is essential that the Federal Government play a strong, positive role in formulating and implementing plans to reduce earthquake hazards. Many steps are being taken in this direction, with the President looking to the Office of Science and Technology Policy (OSTP) in his Executive Office to provide leadership in establishing and coordinating Federal activities.

  11. Market capture by 30/20 GHz satellite systems. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Gamble, R. B.; Saporta, L.

    1981-01-01

    Demand for 30/20 GHz satellite systems over the next two decades is projected. Topics include a profile of the communications market, switched, dedicated, and packet transmission modes, deferred and real-time traffic, quality and reliability considerations, the capacity of competing transmission media, and scenarios for the growth and development of 30/20 GHz satellite communications.

  12. Market capture by 30/20 GHz satellite systems. Volume 1: Executive summary

    NASA Astrophysics Data System (ADS)

    Gamble, R. B.; Saporta, L.

    1981-04-01

    Demand for 30/20 GHz satellite systems over the next two decades is projected. Topics include a profile of the communications market, switched, dedicated, and packet transmission modes, deferred and real-time traffic, quality and reliability considerations, the capacity of competing transmission media, and scenarios for the growth and development of 30/20 GHz satellite communications.

  13. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling is a factor limiting the widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered to last a lifetime, sparing the user from concerns related to energy management. The problem, however, is the unpredictable and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is handled reliably without violating the assigned priority scheme.
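
    A toy version of such a scheduler (ours; the paper's environment parameters and transition conditions are richer) releases the highest-priority queued task only when the harvested energy budget covers its estimated cost:

      import heapq

      class EnergyAwareScheduler:
          def __init__(self, stored_energy=0.0):
              self.energy = stored_energy
              self.queue = []            # (priority, seq, name, cost); lower priority value = more urgent
              self.seq = 0

          def submit(self, name, priority, energy_cost):
              heapq.heappush(self.queue, (priority, self.seq, name, energy_cost))
              self.seq += 1

          def harvest(self, joules):
              self.energy += joules      # ambient source is unsteady: energy arrives in bursts

          def step(self):
              """One state-machine transition: RUN if the best task is affordable, else SLEEP."""
              if self.queue and self.queue[0][3] <= self.energy:
                  priority, _, name, cost = heapq.heappop(self.queue)
                  self.energy -= cost
                  return ("RUN", name)
              return ("SLEEP", None)     # wait for more harvested energy

    Because tasks wait in a priority queue rather than being dropped, a burst of harvested energy drains the queue in priority order, which is one simple way to avoid violating the priority scheme under an unpredictable supply.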

  14. Exploring dual commitment among physician executives in managed care.

    PubMed

    Hoff, T J

    2001-01-01

    The growth of a medical management specialty is a significant event associated with managed care. Physician executives are lauded for their potential in bridging the clinical and managerial realms. They also serve as a countervailing force to help the medical profession and patients maintain a strong voice in healthcare decision making at the strategic level. However, little is known about their work loyalties. These attitudes are important to explore because they speak to whose interests physician executives consider and represent in their everyday management roles. If physician executives are to maximize their effectiveness in the healthcare workplace, both physicians and organizations must view them as credible sources of authority. This study examines organizational and professional commitment among a national sample of physician executives employed in managed care settings. Data used for the analysis come from a national survey conducted through the American College of Physician Executives in 1996. The findings support the notion that physician executives can and do express simultaneous loyalty to organizational and professional interests. This dual commitment is related to other work attitudes that contribute to success in the management role. In addition, it appears that situational factors increase the chances for dual commitment. These factors derive from a favorable work environment that includes both organizational and professional socialization in the management role. The results of the study are useful in specifying the training and socialization needs of physicians who wish to do management work. They also provide a rationale for collaboration between healthcare organizations and rank-and-file physicians aimed at cultivating physician executives who are credible leaders within the healthcare system.

  15. Understanding Setting Events: What They Are and How to Identify Them

    ERIC Educational Resources Information Center

    Iovannone, Rose; Anderson, Cynthia; Scott, Terrance

    2017-01-01

    A functional behavior assessment is a process for identifying events in the environment that reliably precede (i.e., antecedents) and follow (i.e., consequences) problem behavior. This information is used to develop an intervention plan. There are two types of antecedents--triggers and setting events. Triggers are antecedent events that happen…

  16. Development and assessment of stressful life events subscales - A preliminary analysis.

    PubMed

    Buccheri, Teresa; Musaad, Salma; Bost, Kelly K; Fiese, Barbara H

    2018-01-15

    Stress affects people of all ages, genders, and cultures and is associated with physical and psychological complications. Stressful life events are an important research focus, and a psychometrically valid measure could provide useful clinical information. The purpose of the study was to develop a reliable and valid measurement of stressful life events and to assess its reliability and validity using established measures of social support, stress, depression, anxiety, and maternal and child health. The authors used an adaptation of the Social Readjustment Rating Scale (SRRS) to describe the prevalence of life events; they developed 4-factor stressful life events subscales and used the Medical Outcomes Social Support Scale, the Social Support Scale, the Depression, Anxiety and Stress Scale, and 14 general health items for validity analysis. Analyses were performed with descriptive statistics, Cronbach's alpha, Spearman's rho, the Chi-square test or Fisher's exact test, and the Wilcoxon 2-sample test. The 4-factor stressful life events subscales showed acceptable reliability. The resulting subscale scores were significantly associated with established measures of social support, depression, anxiety, stress, and caregiver health indicators. The study had a number of limitations in terms of design and recall bias. Despite these limitations, it provided valuable insight and suggested that further investigation is needed to determine the effectiveness of the measures in revealing a family's wellbeing and to develop and strengthen a more detailed analysis of the stressful life events/health association. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Mood effects on memory and executive control in a real-life situation.

    PubMed

    Lagner, Prune; Kliegel, Matthias; Phillips, Louise H; Ihle, Andreas; Hering, Alexandra; Ballhausen, Nicola; Schnitzspahn, Katharina M

    2015-01-01

    In the laboratory, studies have shown an inconsistent pattern of whether, and how, mood may affect cognitive functions indicating both mood-related enhancement as well as decline. Surprisingly, little is known about whether there are similar effects in everyday life. Hence, the present study aimed to investigate possible mood effects on memory and executive control in a real-life situation. Mood effects were examined in the context of winning in a sports competition. Sixty-one male handball players were tested with an extensive cognitive test battery (comprising memory and executive control) both after winning a match and after training as neutral baseline. Mood differed significantly between the two testing situations, while physiological arousal and motivation were comparable. Results showed lowered performance after the win compared with training in selected cognitive measures. Specifically, short-term and episodic memory performance was poorer following a win, whereas executive control performance was unaffected by condition. Differences in memory disappeared when emotional states after the match were entered as covariates into the initial analyses. Thus, findings suggest mood-related impairments in memory, but not in executive control processes after a positive real-life event.

  18. Superfund: evaluating the impact of executive order 12898.

    PubMed

    O'Neil, Sandra George

    2007-07-01

    The U.S. Environmental Protection Agency (EPA) addresses uncontrolled and abandoned hazardous waste sites throughout the country. Sites that are perceived to be a significant threat to both surrounding populations and the environment can be placed on the U.S. EPA Superfund list and qualify for federal cleanup funds. The equitability of the Superfund program has been questioned; the representation of minority and low-income populations in this cleanup program is lower than would be expected. Thus, minorities and low-income populations may not be benefiting proportionately from this environmental cleanup program. In 1994 President Clinton signed Executive Order 12898 requiring that the U.S. EPA and other federal agencies implement environmental justice policies. These policies were to specifically address the disproportionate environmental effects of federal programs and policies on minority and low-income populations. I use event history analysis to evaluate the impact of Executive Order 12898 on the equitability of the Superfund program. Findings suggest that despite environmental justice legislation, Superfund site listings in minority and poor areas are even less likely for sites discovered since the 1994 Executive Order. The results of this study indicate that Executive Order 12898 for environmental justice has not increased the equitability of the Superfund program.

  19. Autistic traits and cognitive performance in young people with mild intellectual impairment.

    PubMed

    Harris, Jonathan M; Best, Catherine S; Moffat, Vivien J; Spencer, Michael D; Philip, Ruth C M; Power, Michael J; Johnstone, Eve C

    2008-08-01

    Cognitive performance and the relationship between theory of mind (TOM), weak central coherence and executive function were investigated in a cohort of young people with additional learning needs. Participants were categorized by social communication questionnaire score into groups of 10 individuals within the autistic spectrum disorder (ASD) range, 14 within the pervasive developmental disorder range and 18 with few autistic traits. The ASD group were significantly poorer than the other groups on a test of cognitive flexibility. In the ASD group only, there was a strong relationship between executive performance and TOM which remained after controlling for IQ. Our findings suggest that the relationship between cognitive traits may more reliably distinguish autism than the presence of individual deficits alone.

  20. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as to the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
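
    One simple way to model the imperfect response described above is a first-order lag with a ramp-rate limit, so the unit's actual output always trails the operator's setpoint; this sketch is our assumption for illustration, not necessarily the paper's model:

      def generator_response(setpoint_mw, output_mw, dt_s,
                             tau_s=30.0, ramp_limit_mw_per_s=0.5):
          """One simulation step of an imperfect generator.

          The unit moves toward its setpoint with time constant tau_s, but can
          never change faster than its ramp limit, so it under-delivers during
          fast dispatch changes (parameter values here are placeholders).
          """
          desired_rate = (setpoint_mw - output_mw) / tau_s          # first-order lag
          rate = max(-ramp_limit_mw_per_s, min(ramp_limit_mw_per_s, desired_rate))
          return output_mw + rate * dt_s

    Replacing the perfect-following assumption with a model like this lets the same production-cost simulation expose reserve shortfalls and frequency-response deficiencies that would otherwise be invisible.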

  1. Reliability of a Qualitative Video Analysis for Running.

    PubMed

    Pipkin, Andrew; Kotecki, Kristy; Hetzel, Scott; Heiderscheit, Bryan

    2016-07-01

    Study Design Reliability study. Background Video analysis of running gait is frequently performed in orthopaedic and sports medicine practices to assess biomechanical factors that may contribute to injury. However, the reliability of a whole-body assessment has not been determined. Objective To determine the intrarater and interrater reliability of the qualitative assessment of specific running kinematics from a 2-dimensional video. Methods Running-gait analysis was performed on videos recorded from 15 individuals (8 male, 7 female) running at a self-selected pace (3.17 ± 0.40 m/s, 8:28 ± 1:04 min/mi) using a high-speed camera (120 frames per second). These videos were independently rated on 2 occasions by 3 experienced physical therapists using a standardized qualitative assessment. Fifteen sagittal and frontal plane kinematic variables were rated on a 3- or 5-point categorical scale at specific events of the gait cycle, including initial contact (n = 3) and midstance (n = 9), or across the full gait cycle (n = 3). The video frame number corresponding to each gait event was also recorded. Intrarater and interrater reliability values were calculated for gait-event detection (intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the individual kinematic variables (weighted kappa [κw]). Results Gait-event detection was highly reproducible within raters (ICC = 0.94-1.00; SEM, 0.3-1.0 frames) and between raters (ICC = 0.77-1.00; SEM, 0.4-1.9 frames). Eleven of the 15 kinematic variables demonstrated substantial (κw = 0.60-0.799) or excellent (κw>0.80) intrarater agreement, with the exception of foot-to-center-of-mass position (κw = 0.59), forefoot position (κw = 0.58), ankle dorsiflexion at midstance (κw = 0.49), and center-of-mass vertical excursion (κw = 0.36). Interrater agreement for the kinematic measures varied more widely (κw = 0.00-0.85), with 5 variables showing substantial or excellent reliability. Conclusion The qualitative assessment of specific kinematic measures during running can be reliably performed with the use of a high-speed video camera. Detection of specific gait events was highly reproducible, as were common kinematic variables such as rearfoot position, foot-strike pattern, tibial inclination angle, knee flexion angle, and forward trunk lean. Other variables should be used with caution. J Orthop Sports Phys Ther 2016;46(7):556-561. Epub 6 Jun 2016. doi:10.2519/jospt.2016.6280.

  2. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliability. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  3. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs made through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirements for reliability, a comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is therefore warranted.

  4. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm.

    PubMed

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-10-01

    Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable in the cases of up stair and down stair terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.
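
    The core of the approach can be approximated in a few lines (a sketch under our assumptions; the published algorithm adds time-frequency analysis and terrain-specific heuristics): differentiate the acceleration to obtain jerk, then keep peaks that clear an amplitude threshold and a minimum spacing:

      def jerk(accel, fs):
          """Numerical derivative of an acceleration signal sampled at fs Hz."""
          return [(accel[i + 1] - accel[i]) * fs for i in range(len(accel) - 1)]

      def detect_peaks(signal, threshold, min_gap):
          """Indices of local maxima above threshold, at least min_gap samples apart
          (a crude stand-in for the paper's peak heuristics)."""
          peaks, last = [], -min_gap
          for i in range(1, len(signal) - 1):
              if (signal[i] > threshold and signal[i] >= signal[i - 1]
                      and signal[i] >= signal[i + 1] and i - last >= min_gap):
                  peaks.append(i)
                  last = i
          return peaks

    In a real-time device the candidate peaks would then be assigned to toe off or heel strike using the gait parameters estimated from the time-frequency step, which is the part this sketch deliberately omits.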

  5. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm

    PubMed Central

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-01-01

    Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable in the cases of up stair and down stair terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses. PMID:27706086

  6. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments. PMID:24152933
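
    A compact sketch of the event rule described above (ours; a one-dimensional stand-in for the robots' full state): the filter predicts from the IMU every step but requests and fuses the global sensor only when the estimation error covariance exceeds a predefined limit:

      def event_based_kf_step(x, p, u, q, r, cov_limit, read_global_sensor):
          """One step of a scalar event-triggered Kalman filter.

          x, p : state estimate and its error covariance
          u, q : IMU-based motion increment and process noise
          r    : global-sensor measurement noise
          read_global_sensor : callable returning a measurement (used only on events)
          """
          x, p = x + u, p + q                   # predict from the IMU every step
          if p > cov_limit:                     # event: uncertainty too large
              z = read_global_sensor()          # fetch camera/GPS fix on demand
              k = p / (p + r)                   # Kalman gain
              x, p = x + k * (z - x), (1 - k) * p
          return x, p

    Raising cov_limit trades localization accuracy for bandwidth and computation, which is exactly the resource saving the framework targets on small platforms.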

  7. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information, when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with a SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from Hitecnic, placed according to a particle based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability is observed during a long walk test in both indoors and outdoors environments.

  8. The processing of actions and action-words in amyotrophic lateral sclerosis patients.

    PubMed

    Papeo, Liuba; Cecchetto, Cinzia; Mazzon, Giulia; Granello, Giulia; Cattaruzza, Tatiana; Verriello, Lorenzo; Eleopra, Roberto; Rumiati, Raffaella I

    2015-03-01

    Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease with prime consequences on the motor function and concomitant cognitive changes, most frequently in the domain of executive functions. Moreover, poorer performance with action-verbs versus object-nouns has been reported in ALS patients, raising the hypothesis that the motor dysfunction deteriorates the semantic representation of actions. Using action-verbs and manipulable-object nouns sharing semantic relationship with the same motor representations, the verb-noun difference was assessed in a group of 21 ALS-patients with severely impaired motor behavior, and compared with a normal sample's performance. ALS-group performed better on nouns than verbs, both in production (action and object naming) and comprehension (word-picture matching). This observation implies that the interpretation of the verb-noun difference in ALS cannot be accounted by the relatedness of verbs to motor representations, but has to consider the role of other semantic and/or morpho-phonological dimensions that distinctively define the two grammatical classes. Moreover, this difference in the ALS-group was not greater than the noun-verb difference in the normal sample. The mental representation of actions also involves an executive-control component to organize, in logical/temporal order, the individual motor events (or sub-goals) that form a purposeful action. We assessed this ability with action sequencing tasks, requiring participants to re-construct a purposeful action from the scrambled presentation of its constitutive motor events, shown in the form of photographs or short sentences. In those tasks, ALS-group's performance was significantly poorer than controls'. Thus, the executive dysfunction manifested in the sequencing deficit -but not the selective verb deficit- appears as a consistent feature of the cognitive profile associated with ALS. We suggest that ALS can offer a valuable model to study the relationship between (frontal) motor centers and the executive-control machinery housed in the frontal brain, and the implications of executive dysfunctions in tasks such as action processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Towards a user-friendly brain-computer interface: initial tests in ALS and PLS patients.

    PubMed

    Bai, Ou; Lin, Peter; Huang, Dandan; Fei, Ding-Yu; Floeter, Mary Kay

    2010-08-01

    Patients usually require long-term training for effective EEG-based brain-computer interface (BCI) control, due to the fatigue caused by the demands for focused attention during prolonged BCI operation. We intended to develop a user-friendly BCI requiring minimal training and a lower mental load. BCI performance was tested in three patients with amyotrophic lateral sclerosis (ALS) and three patients with primary lateral sclerosis (PLS) who had no previous BCI experience. All patients performed binary control of cursor movement. One ALS patient and one PLS patient performed four-directional cursor control in a two-dimensional domain under a BCI paradigm associated with human natural motor behavior, using motor execution and motor imagery. Subjects practiced for 5-10 min and then participated in a multi-session study of either binary control or four-directional control, including an online BCI game, over 1.5-2 h in a single visit. Event-related desynchronization and event-related synchronization in the beta band were observed in all patients during the production of voluntary movement, either by motor execution or motor imagery. Online binary control of cursor movement was achieved with an average accuracy of about 82.1 ± 8.2% with motor execution and about 80% with motor imagery, whereas offline accuracy after optimization was 91.4 ± 3.4% with motor execution and 83.3 ± 8.9% with motor imagery. In addition, four-directional cursor control was achieved with an accuracy of 50-60% with motor execution and motor imagery. Patients with ALS or PLS may achieve BCI control without extended training, and fatigue might be reduced during operation of a BCI associated with human natural motor behavior. The development of a user-friendly BCI will promote practical BCI applications in paralyzed patients. Copyright 2010 International Federation of Clinical Neurophysiology. All rights reserved.

  10. Reliable fusion of control and sensing in intelligent machines. Thesis

    NASA Technical Reports Server (NTRS)

    Mcinroy, John E.

    1991-01-01

    Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first-order second-moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete-time invariant control system are derived. A case study analyzing visual positioning in a robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.
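
    As a rough illustration of the first-order second-moment idea this thesis builds on, the sketch below (Python) computes a reliability index for a Gaussian performance margin; the limit-state variable, its moments, and the example numbers are illustrative assumptions, not the author's formulation.

        import math

        def fosm_reliability(mean_margin, std_margin):
            """First-order second-moment (FOSM) style reliability sketch.

            The margin g = specification - error is assumed Gaussian;
            failure occurs when g < 0.  The reliability index
            beta = E[g]/std[g] maps to a failure probability through
            the standard normal CDF, Phi(-beta).
            """
            beta = mean_margin / std_margin
            p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
            return beta, 1.0 - p_fail

        # Illustrative numbers only: a 2.0 mm positioning margin with
        # 0.7 mm of combined kinematic/measurement uncertainty.
        beta, reliability = fosm_reliability(2.0, 0.7)
        print(f"beta = {beta:.2f}, reliability = {reliability:.4f}")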

  11. What makes a family reliable?

    NASA Technical Reports Server (NTRS)

    Williams, James G.

    1992-01-01

    Asteroid families are clusters of asteroids in proper element space which are thought to be fragments from former collisions. Studies of families promise to improve understanding of large collision events, and a large event can open up the interior of a former parent body to view. While a variety of searches for families have found the same heavily populated families, and some searches have found the same lower-population families, there is much apparent disagreement among the lower-population families proposed by different investigations. Indicators of reliability, factors compromising reliability, an illustration of the influence of different data samples, and a discussion of how several investigations perceived families in the same region of proper element space are given.

  12. Supporting Beacon and Event-Driven Messages in Vehicular Platoons through Token-Based Strategies

    PubMed Central

    Uhlemann, Elisabeth

    2018-01-01

    Timely and reliable inter-vehicle communications is a critical requirement to support traffic safety applications, such as vehicle platooning. Furthermore, low-delay communications allow the platoon to react quickly to unexpected events. In this scope, having a predictable and highly effective medium access control (MAC) method is of utmost importance. However, the currently available IEEE 802.11p technology is unable to adequately address these challenges. In this paper, we propose a MAC method especially adapted to platoons, able to transmit beacons within the required time constraints, but with a higher reliability level than IEEE 802.11p, while concurrently enabling efficient dissemination of event-driven messages. The protocol circulates the token within the platoon not in a round-robin fashion, but based on beacon data age, i.e., the time that has passed since the previous collection of status information, thereby automatically offering repeated beacon transmission opportunities for increased reliability. In addition, we propose three different methods for supporting event-driven messages co-existing with beacons. Analysis and simulation results in single and multi-hop scenarios showed that, by providing non-competitive channel access and frequent retransmission opportunities, our protocol can offer beacon delivery within one beacon generation interval while fulfilling the requirements on low-delay dissemination of event-driven messages for traffic safety applications. PMID:29570676

  13. Supporting Beacon and Event-Driven Messages in Vehicular Platoons through Token-Based Strategies.

    PubMed

    Balador, Ali; Uhlemann, Elisabeth; Calafate, Carlos T; Cano, Juan-Carlos

    2018-03-23

    Timely and reliable inter-vehicle communications is a critical requirement to support traffic safety applications, such as vehicle platooning. Furthermore, low-delay communications allow the platoon to react quickly to unexpected events. In this scope, having a predictable and highly effective medium access control (MAC) method is of utmost importance. However, the currently available IEEE 802.11p technology is unable to adequately address these challenges. In this paper, we propose a MAC method especially adapted to platoons, able to transmit beacons within the required time constraints, but with a higher reliability level than IEEE 802.11p, while concurrently enabling efficient dissemination of event-driven messages. The protocol circulates the token within the platoon not in a round-robin fashion, but based on beacon data age, i.e., the time that has passed since the previous collection of status information, thereby automatically offering repeated beacon transmission opportunities for increased reliability. In addition, we propose three different methods for supporting event-driven messages co-existing with beacons. Analysis and simulation results in single and multi-hop scenarios showed that, by providing non-competitive channel access and frequent retransmission opportunities, our protocol can offer beacon delivery within one beacon generation interval while fulfilling the requirements on low-delay dissemination of event-driven messages for traffic safety applications.
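
    The core scheduling rule of the protocol described in the two records above can be sketched in a few lines: the token is granted to the platoon member whose status information is stalest, rather than rotating round-robin. The following Python fragment is a minimal reading of that rule; the vehicle identifiers and timing values are invented for illustration.

        def next_token_holder(last_beacon_time, now):
            """Grant the token to the vehicle with the largest beacon
            data age (time since its status was last collected), so
            members that missed transmissions are retried first."""
            return max(last_beacon_time,
                       key=lambda vid: now - last_beacon_time[vid])

        # Illustrative 4-vehicle platoon, times in ms.
        last_seen = {"v1": 40, "v2": 75, "v3": 90, "v4": 10}
        print(next_token_holder(last_seen, now=100))  # -> "v4" (age 90 ms)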

  14. Frontal P300 decrement and executive dysfunction in adolescents with conduct problems.

    PubMed

    Kim, M S; Kim, J J; Kwon, J S

    2001-01-01

    This study investigated the cognitive and cerebral function of adolescents with conduct problems using a neuropsychological battery (STIM) and event-related potentials (ERPs). Eighteen adolescents with conduct disorder and 18 age-matched normal subjects were included. Cognitive functions such as attention, memory, executive function and problem solving were evaluated using subtests of STIM. ERPs were measured using an auditory oddball paradigm. The conduct group showed a significantly lower hit rate on the Wisconsin Card Sorting Test (WCST) than the control group. In addition, the conduct group showed reduced P300 amplitude at Fz and Cz, and prolonged P300 latency at Fz, and there was a significant correlation between P300 amplitude and Stroop test performance. These results indicate that adolescents with conduct problems have impairments of executive function and inhibition, and that these impairments are associated with frontal dysfunction.

  15. Planning and Execution for an Autonomous Aerobot

    NASA Technical Reports Server (NTRS)

    Gaines, Daniel M.; Estlin, Tara A.; Schaffer, Steven R.; Chouinard, Caroline M.

    2010-01-01

    The Aerial Onboard Autonomous Science Investigation System (AerOASIS) provides autonomous planning and execution capabilities for aerial vehicles. The system is capable of generating high-quality operations plans that integrate observation requests from ground planning teams, as well as opportunistic science events detected onboard the vehicle, while respecting mission and resource constraints. AerOASIS allows an airborne planetary exploration vehicle to summarize and prioritize the most scientifically relevant data; identify and select high-value science sites for additional investigation; and dynamically plan, schedule, and monitor the various science activities being performed, even during extended communications blackout periods with Earth.

  16. Review of Findings for Human Performance Contribution to Risk in Operating Events

    DTIC Science & Technology

    2002-03-01

    and loss of DC power. Key to this event was failure to control setpoints on safety-related equipment and failure to maintain the load tap changer... Therefore, "to optimize task execution at the job site, it is important to align organizational processes and values." Effective team skills are an...reactor was blocked and the water level rapidly dropped to the automatic low-level scram setpoint. Human Performance Issues: Control rods were fully

  17. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    PubMed

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.

  18. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks

    PubMed Central

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
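
    For readers unfamiliar with the reservoir approach named in the titles above, the sketch below shows the bare mechanics of an echo state network: fixed random recurrent weights rescaled for the echo state property, with only a linear readout trained by ridge regression. It is a generic textbook sketch, not the iCub system's grammatical-construction model; all sizes and constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_reservoir(n_in, n_res, spectral_radius=0.9):
            """Fixed random input/recurrent weights; the recurrent matrix
            is rescaled so its spectral radius stays below 1."""
            w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
            w = rng.uniform(-0.5, 0.5, (n_res, n_res))
            w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
            return w_in, w

        def run_reservoir(w_in, w, inputs):
            """Collect tanh reservoir states for an input sequence."""
            x = np.zeros(w.shape[0])
            states = []
            for u in inputs:
                x = np.tanh(w_in @ u + w @ x)
                states.append(x.copy())
            return np.array(states)

        def train_readout(states, targets, ridge=1e-6):
            """Only the linear readout is trained (ridge regression)."""
            n = states.shape[1]
            return np.linalg.solve(states.T @ states + ridge * np.eye(n),
                                   states.T @ targets)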

  19. 77 FR 7619 - Submission of Information Collections for OMB Review; Comment Request; Reportable Events; Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ... negative. In response to the comments and in the spirit of Executive Order 13563 on Improving Regulation... provided in determining what, if any, action it needs to take. For example, PBGC might need to institute...

  20. Test-retest reliability of infant event related potentials evoked by faces.

    PubMed

    Munsters, N M; van Ravenswaaij, H; van den Boomen, C; Kemner, C

    2017-04-05

    Reliable measures are required to draw meaningful conclusions regarding developmental changes in longitudinal studies. Little is known, however, about the test-retest reliability of face-sensitive event related potentials (ERPs), a frequently used neural measure in infants. The aim of the current study is to investigate the test-retest reliability of ERPs typically evoked by faces in 9-10 month-old infants. The infants (N=31) were presented with neutral, fearful and happy faces that contained only the lower or higher spatial frequency information. They were tested twice within two weeks. The present results show that the test-retest reliability of the face-sensitive ERP components is moderate (P400 and Nc) to substantial (N290). However, there is low test-retest reliability for the effects of the specific experimental manipulations (i.e. emotion and spatial frequency) on the face-sensitive ERPs. To conclude, in infants the face-sensitive ERP components (i.e. N290, P400 and Nc) show adequate test-retest reliability, but not the effects of emotion and spatial frequency on these ERP components. We propose that further research focuses on investigating elements that might increase the test-retest reliability, as adequate test-retest reliability is necessary to draw meaningful conclusions on individual developmental trajectories of the face-sensitive ERPs in infants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Autonomous navigation system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-08

    A robot platform includes perceptors, locomotors, and a system controller, which executes instructions for autonomously navigating a robot. The instructions repeat, on each iteration through an event timing loop, the acts of defining an event horizon based on the robot's current velocity, detecting a range to obstacles around the robot, testing for an event horizon intrusion by determining if any range to the obstacles is within the event horizon, and adjusting rotational and translational velocity of the robot accordingly. If the event horizon intrusion occurs, rotational velocity is modified by a proportion of the current rotational velocity reduced by a proportion of the range to the nearest obstacle and translational velocity is modified by a proportion of the range to the nearest obstacle. If no event horizon intrusion occurs, translational velocity is set as a ratio of a speed factor relative to a maximum speed.
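
    The patent describes its velocity-adjustment rule only in prose, so the following Python sketch is one plausible reading of a single pass through the event timing loop; the event-horizon derivation and the proportionality constants k1-k3 are assumptions, not values from the patent.

        def navigation_step(v_trans, v_rot, ranges, v_max,
                            speed_factor=0.8, k1=0.5, k2=0.3, k3=0.4):
            """One iteration: define the horizon, test for intrusion,
            then adjust rotational and translational velocity."""
            event_horizon = 0.5 * v_trans      # assumed: grows with speed
            nearest = min(ranges)              # range to nearest obstacle
            if nearest < event_horizon:        # event-horizon intrusion
                v_rot = k1 * v_rot - k2 * nearest    # damp rotation
                v_trans = k3 * nearest               # slow with proximity
            else:                              # clear path ahead
                v_trans = speed_factor * v_max # cruise at fraction of max
            return v_trans, v_rot

        # Illustrative call: 1.2 m/s forward, obstacles at 0.4-2.0 m.
        print(navigation_step(1.2, 0.3, [0.4, 1.1, 2.0], v_max=1.5))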

  2. Mobile Functional Reach Test in People Who Suffer Stroke: A Pilot Study

    PubMed Central

    Merchán-Baeza, Jose Antonio; González-Sánchez, Manuel

    2015-01-01

    Background Postural instability is one of the major complications found in people who survive a stroke. Parameterizing the Functional Reach Test (FRT) could be useful in clinical practice and basic research, as this test is a clinically accepted tool (for its simplicity, reliability, economy, and portability) to measure the semistatic balance of a subject. Objective The aim of this study is to analyze the reliability of FRT parameterization using the inertial sensors within mobile phones (mobile sensors) for recording kinematic variables in patients who have suffered a stroke. Our hypothesis is that the sensors in mobile phones will be reliable instruments for the kinematic study of the FRT. Methods This is a cross-sectional study of 7 subjects over 65 years of age who suffered a stroke. During the execution of the FRT, the subjects carried two mobile phones: one placed in the lumbar region and the other on the trunk. After analyzing the data obtained in the kinematic registration by the mobile sensors, a number of direct and indirect variables were obtained. The variables extracted directly from the FRT through the mobile sensors were distance, maximum angular lumbosacral/thoracic displacement, time to maximum angular lumbosacral/thoracic displacement, time of return to the initial position, and total time. Using these data, we calculated the speed and acceleration of each. A descriptive analysis of all kinematic outcomes recorded by the two mobile sensors (trunk and lumbar) was developed, along with the average range achieved in the FRT. Reliability measures were calculated by analyzing the internal consistency of the measures, with 95% confidence intervals for each outcome variable. We calculated the reliability of the mobile sensors in the measurement of the kinematic variables during the execution of the FRT. Results The values in the FRT obtained in this study (2.49 cm, SD 13.15) are similar to those found in other studies with this population and with the same age range. Intrasubject reliability values observed in the use of mobile phones were all above 0.831, ranging from 0.831 (time B_C, trunk area) to 0.894 (displacement A_B, trunk area). Likewise, the observed intersubject values ranged from 0.835 (time B_C, trunk area) to 0.882 (displacement A_C, trunk area). On the other hand, the reliability of the FRT was 0.989 (0.981-0.996) and 0.978 (0.970-0.985), intrasubject and intersubject respectively. Conclusions We found that the sensors in mobile phones could be reliable tools for the parameterization of the Functional Reach Test in people who have had a stroke. PMID:28582239

  3. Reliability of a novel serious game using dual-task gait profiles to early characterize aMCI

    PubMed Central

    Tarnanas, Ioannis; Papagiannopoulos, Sotirios; Kazis, Dimitris; Wiederhold, Mark; Widerhold, Brenda; Tsolaki, Magda

    2015-01-01

    Background: As the population of older adults is growing, the interest in a simple way to detect and characterize amnestic mild cognitive impairment (aMCI), a prodromal stage of Alzheimer’s disease (AD), is becoming increasingly important. Serious game (SG)-based cognitive and motor performance profiles while performing everyday activities and dual-task walking (DTW) “motor signatures” are two very promising markers that can be detected in predementia states. We aim to compare the consistency, or conformity, of measurements made by a custom SG with DTW (NAV), an SG without DTW (DOT), neuropsychological measures and genotyping as markers for early detection of aMCI. Methods: The study population included three groups: early AD (n = 86), aMCI (n = 65), and healthy control subjects (n = 76), who completed the custom SG tasks in three separate sessions over a 3-month period. Outcome measures were neuropsychological data across-domain and within-domain intra-individual variability (IIV) and DOT and NAV latency-based and accuracy-based IIV. IIV reflects a transient, within-person change in behavioral performance, either during different cognitive domains (across-domain) or within the same domain (within-domain). Test–retest reliability of the DOT and NAV markers was assessed using an intraclass correlation (ICC) analysis. Results: Results indicated that performance data, such as the NAV latency-based and accuracy-based IIV, during the task displayed greater reliability across sessions compared to DOT. During the NAV task-engagement, the executive function, planning, and motor performance profiles exhibited moderate to good reliability (ICC = 0.6–0.8), while during DOT, executive function and spatial memory accuracy profiles exhibited fair to moderate reliability (ICC = 0.3–0.6). Additionally, reliability across tasks was more stable when three sessions were used in the ICC calculation relative to two sessions. Discussion: Our findings suggest that “motor signature” data during the NAV tasks were a more reliable marker for early diagnosis of aMCI than DOT. This result accentuates the importance of utilizing motor performance data as a metric for aMCI populations where memory decline is often the behavioral outcome of interest. In conclusion, custom SG with DTW performance data provide an ecological and reliable approach for cognitive assessment across multiple sessions and thus can be used as a useful tool for tracking longitudinal change in observational and interventional studies on aMCI. PMID:25954193
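
    The intraclass correlations reported above can be reproduced, in outline, with a short routine. The sketch below computes ICC(3,1), a common two-way consistency form for test-retest designs; the paper does not state which ICC variant was used, so both the form and the toy data are assumptions.

        import numpy as np

        def icc_3_1(scores):
            """Two-way mixed, consistency ICC(3,1) for test-retest data.

            scores: (n_subjects, k_sessions) array of one outcome measure.
            """
            n, k = scores.shape
            grand = scores.mean()
            ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
            ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
            ms_rows = ss_rows / (n - 1)
            ms_err = ss_err / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

        # Illustrative toy data: 6 participants, 3 sessions; a stable
        # subject-level trait plus session noise gives a high ICC.
        rng = np.random.default_rng(1)
        trait = rng.normal(300, 40, (6, 1))
        data = trait + rng.normal(0, 15, (6, 3))
        print(f"ICC(3,1) = {icc_3_1(data):.2f}")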

  4. Interrater reliability of the injury reporting of the injury surveillance system used in international athletics championships.

    PubMed

    Edouard, Pascal; Junge, Astrid; Kiss-Polauf, Marianna; Ramirez, Christophe; Sousa, Monica; Timpka, Toomas; Branco, Pedro

    2018-03-01

    The quality of epidemiological injury data depends on the reliability of reporting to an injury surveillance system. Ascertaining whether all physicians/physiotherapists report the same information for the same injury case is of major interest to determine data validity. The aim of this study was therefore to assess the reliability of data collection by analysing interrater reliability. Cross-sectional survey. During the 2016 European Athletics Advanced Athletics Medicine Course in Amsterdam, all national medical teams were asked to complete seven virtual case reports on a standardised injury report form using the same definitions and classifications of injuries as the international athletics championships injury surveillance protocol. The completeness of data and the Fleiss' kappa coefficients for the inter-rater reliability were calculated for: sex, age, event, circumstance, location, type, assumed cause and estimated time-loss. Forty-one team physicians and physiotherapists of national medical teams participated in the study (response rate 89.1%). Data completeness was 96.9%. The Fleiss' kappa coefficients were: almost perfect for sex (k=1), injury location (k=0.991), event (k=0.953), circumstance (k=0.942), and age (k=0.870), moderate for type (k=0.507), fair for assumed cause (k=0.394), and poor for estimated time-loss (k=0.155). The injury surveillance system used during international athletics championships provided reliable data for "sex", "location", "event", "circumstance", and "age". More caution should be taken for "assumed cause" and "type", and even more for "estimated time-loss". This injury surveillance system displays satisfactory data quality (reliable data and high data completeness), and thus can be recommended as a tool to collect epidemiological information on injuries during international athletics championships. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
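
    Fleiss' kappa, the agreement statistic used in this study, is straightforward to compute from a case-by-category count table. The sketch below is a generic implementation with invented counts (7 virtual cases, 41 raters, 3 categories), not the study's data.

        import numpy as np

        def fleiss_kappa(counts):
            """Fleiss' kappa for N cases rated by n raters.

            counts: (N_cases, k_categories) array; counts[i, j] is the
            number of raters assigning case i to category j.  Every
            row must sum to the same number of raters n.
            """
            n = counts.sum(axis=1)[0]                # raters per case
            p_j = counts.sum(axis=0) / counts.sum()  # category shares
            p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
            p_bar, p_e = p_i.mean(), (p_j ** 2).sum()
            return (p_bar - p_e) / (1 - p_e)

        # Illustrative counts: 7 cases, 41 raters, 3 categories.
        rng = np.random.default_rng(2)
        counts = rng.multinomial(41, [0.7, 0.2, 0.1], size=7)
        print(f"kappa = {fleiss_kappa(counts):.2f}")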

  5. Evaluation of a Simpler Tool to Assess Nontechnical Skills During Simulated Critical Events.

    PubMed

    Watkins, Scott C; Roberts, David A; Boulet, John R; McEvoy, Matthew D; Weinger, Matthew B

    2017-04-01

    Management of critical events requires teams to employ nontechnical skills (NTS), such as teamwork, communication, decision making, and vigilance. We sought to estimate the reliability and provide evidence for the validity of the ratings gathered using a new tool for assessing the NTS of anesthesia providers, the behaviorally anchored rating scale (BARS), and compare its scores with those of an established NTS tool, the Anaesthetists' Nontechnical Skills (ANTS) scale. Six previously trained raters (4 novices and 2 experts) reviewed and scored 18 recorded simulated pediatric crisis management scenarios using a modified ANTS and a BARS tool. Pearson correlation coefficients were calculated separately for the novice and expert raters, by scenario, and overall. The intrarater reliability of the ANTS total score was 0.73 (expert, 0.57; novice, 0.84); for the BARS tool, it was 0.80 (expert, 0.79; novice, 0.81). The average interrater reliability of BARS scores (0.58) was better than ANTS scores (0.37), and the interrater reliabilities of scores from novices (0.69 BARS and 0.52 ANTS) were better than those obtained from experts (0.47 BARS and 0.21 ANTS) for both scoring instruments. The Pearson correlation between the ANTS and BARS total scores was 0.74. Overall, reliability estimates were better for the BARS scores than the ANTS scores. For both measures, the intrarater and interrater reliability was better for novices compared with domain experts, suggesting that properly trained novices can reliably assess the NTS of anesthesia providers managing a simulated critical event. There was substantial correlation between the 2 scoring instruments, suggesting that the tools measured similar constructs. The BARS tool can be an alternative to the ANTS scale for the formative assessment of NTS of anesthesia providers.

  6. Investigating the effects of caffeine on executive functions using traditional Stroop and a new ecologically-valid virtual reality task, the Jansari assessment of Executive Functions (JEF(©)).

    PubMed

    Soar, K; Chapman, E; Lavan, N; Jansari, A S; Turner, J J D

    2016-10-01

    Caffeine has been shown to have effects on certain areas of cognition, but in executive functioning the research is limited and also inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF(©) (the Jansari Assessment of Executive Function) alongside the 'classic' Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Using a double-blind, counterbalanced within-participants procedure, 43 participants were administered either a caffeinated or decaffeinated coffee and completed the JEF(©) and Stroop tasks, as well as a subjective mood scale and blood pressure measurement pre- and post-condition, on two separate occasions a week apart. JEF(©) yields measures for eight separate aspects of executive functions, in addition to a total average score. Findings indicate that performance was significantly improved on planning, creative thinking, event-, time- and action-based prospective memory, as well as total JEF(©) score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, the study has demonstrated the ability of JEF(©) to detect the effects of caffeine across a number of executive functioning constructs, which were not shown in the Stroop task, suggesting that executive functioning improvements as a result of a 'typical' dose of caffeine may only be detected by the use of more real-world, ecologically valid tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. The development of control processes supporting source memory discrimination as revealed by event-related potentials.

    PubMed

    de Chastelaine, Marianne; Friedman, David; Cycowicz, Yael M

    2007-08-01

    Improvement in source memory performance throughout childhood is thought to be mediated by the development of executive control. As postretrieval control processes may be better time-locked to the recognition response than to the retrieval cue, the development of processes underlying source memory was investigated with both stimulus- and response-locked event-related potentials (ERPs). These were recorded in children, adolescents, and adults during a recognition memory exclusion task. Green- and red-outlined pictures were studied, but were tested in black outline. The test requirement was to endorse old items shown in one study color ("targets") and to reject new items along with old items shown in the alternative study color ("nontargets"). Source memory improved with age. All age groups retrieved target and nontarget memories as reflected by reliable parietal episodic memory (EM) effects, a stimulus-locked ERP correlate of recollection. Response-locked ERPs to targets and nontargets diverged in all groups prior to the response, although this occurred at an increasingly earlier time point with age. We suggest these findings reflect the implementation of attentional control mechanisms to enhance target memories and facilitate response selection with the greatest and least success, respectively, in adults and children. In adults only, response-locked ERPs revealed an early-onsetting parietal negativity for nontargets, but not for targets. This was suggested to reflect adults' ability to consistently inhibit prepotent target responses for nontargets. The findings support the notion that the development of source memory relies on the maturation of control processes that serve to enhance accurate selection of task-relevant memories.

  8. Measures of Reliability in Behavioral Observation: The Advantage of "Real Time" Data Acquisition.

    ERIC Educational Resources Information Center

    Hollenbeck, Albert R.; Slaby, Ronald G.

    Two observers who were using an electronic digital data acquisition system were spot checked for reliability at random times over a four month period. Between-and within-observer reliability was assessed for frequency, duration, and duration-per-event measures of four infant behaviors. The results confirmed the problem of observer drift--the…

  9. Solar power satellite system definition study. Volume 1, phase 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A systems definition study of the solar power satellite (SPS) system is presented. The technical feasibility of solar power satellites based on forecasts of technical capability in the various applicable technologies is assessed. The performance, cost, operational characteristics, reliability, and the suitability of SPSs as power generators for typical commercial electricity grids are discussed. The uncertainties inherent in the system characteristics forecasts are assessed.

  10. Youth Attitude Tracking Study II Wave 18 -- Fall 1987

    DTIC Science & Technology

    1988-08-01

    positive and negative aspects. One of the major goals of military advertising is to increase knowledge about the advantages and benefits of military...Educational Benefits Can Be Used... Incremental Effects of Cash Bonus on Propensity to Enlist in Guard/Reserve... YOUTH ATTITUDE TRACKING STUDY Fall 1987 EXECUTIVE SUMMARY Effective recruiting for the military requires reliable and timely recruit market data

  11. What Is the Right RFID for Your Process?

    DTIC Science & Technology

    2006-01-30

    Support Model for Valuing Proposed Improvements in Component Reliability. June 2005. NPS-PM-05-007 Dillard, John T., and Mark E. Nissen...Arlington, VA. 2005. Kang, Keebom, Ken Doerr, Uday Apte, and Michael Boudreau. “Decision Support Models for Valuing Improvements in Component...courses in the Executive and Full-time MBA programs. Areas of Uday’s research interests include managing service operations, supply chain

  12. CAUSE Resiliency (West Coast) Experiment Final Report

    DTIC Science & Technology

    2012-10-01

    implemented in BCeMap and can therefore consume alerting messages directly from MASAS. This would solve the issue with the update frequency and speed of the...in production for use by the Provincial Emergency Operations Centres and brings multiple static layers together with several dynamic data...executive order established the requirement for an "effective, reliable, integrated, flexible, and comprehensive system to alert and warn the

  13. Evaluating ACLS Algorithms for the International Space Station (ISS) - A Paradigm Revisited

    NASA Technical Reports Server (NTRS)

    Alexander, Dave; Brandt, Keith; Locke, James; Hurst, Victor, IV; Mack, Michael D.; Pettys, Marianne; Smart, Kieran

    2007-01-01

    The ISS may have communication gaps of up to 45 minutes during each orbit, and it is therefore imperative to have medical protocols, including an effective ACLS algorithm, that can be executed reliably and autonomously during flight. The aim of this project was to compare the effectiveness of the current ACLS algorithm with an improved algorithm having a new navigation format.

  14. Incident learning in pursuit of high reliability: implementing a comprehensive, low-threshold reporting program in a large, multisite radiation oncology department.

    PubMed

    Gabriel, Peter E; Volz, Edna; Bergendahl, Howard W; Burke, Sean V; Solberg, Timothy D; Maity, Amit; Hahn, Stephen M

    2015-04-01

    Incident learning programs have been recognized as cornerstones of safety and quality assurance in so-called high reliability organizations in industries such as aviation and nuclear power. High reliability organizations are distinguished by their drive to continuously identify and proactively address a broad spectrum of latent safety issues. Many radiation oncology institutions have reported on their experience in tracking and analyzing adverse events and near misses but few have incorporated the principles of high reliability into their programs. Most programs have focused on the reporting and retrospective analysis of a relatively small number of significant adverse events and near misses. To advance a large, multisite radiation oncology department toward high reliability, a comprehensive, cost-effective, electronic condition reporting program was launched to enable the identification of a broad spectrum of latent system failures, which would then be addressed through a continuous quality improvement process. A comprehensive program, including policies, work flows, and information system, was designed and implemented, with use of a low reporting threshold to focus on precursors to adverse events. In a 46-month period from March 2011 through December 2014, a total of 8,504 conditions (average, 185 per month, 1 per patient treated, 3.9 per 100 fractions [individual treatments]) were reported. Some 77.9% of clinical staff members reported at least 1 condition. Ninety-eight percent of conditions were classified in the lowest two of four severity levels, providing the opportunity to address conditions before they contribute to adverse events. Results after approximately four years show excellent employee engagement, a sustained rate of reporting, and a focus on low-level issues leading to proactive quality improvement interventions.

  15. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event in the diagram, and the deficiencies of the system were identified based on the developed model. Basic events include operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the developed FTA model in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
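
    The combination of minimal cut sets and Monte Carlo simulation mentioned above can be illustrated compactly: sample each basic event as a Bernoulli trial, and declare the top event whenever every member of at least one cut set occurs. The cut sets and probabilities below are invented for illustration and are not the Tehran plant's values.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical basic events and minimal cut sets for a top event
        # such as "effluent BOD limit violated".
        basic_p = {"operator_error": 0.02, "aeration_failure": 0.01,
                   "design_flaw": 0.005, "sensor_fault": 0.015}
        cut_sets = [("operator_error", "sensor_fault"),
                    ("aeration_failure",),
                    ("design_flaw", "operator_error")]

        def top_event_prob(n_trials=200_000):
            """Monte Carlo estimate: the top event occurs in a trial when
            every basic event of at least one minimal cut set occurs."""
            names = list(basic_p)
            p = np.array([basic_p[e] for e in names])
            occurs = rng.random((n_trials, len(names))) < p
            idx = {e: i for i, e in enumerate(names)}
            top = np.zeros(n_trials, dtype=bool)
            for cs in cut_sets:
                top |= occurs[:, [idx[e] for e in cs]].all(axis=1)
            return top.mean()

        print(f"P(top event) ~ {top_event_prob():.4f}")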

  16. Brazilian adaptation of the Hotel Task: A tool for the ecological assessment of executive functions.

    PubMed

    Cardoso, Caroline de Oliveira; Zimmermann, Nicolle; Paraná, Camila Borges; Gindri, Gigiane; de Pereira, Ana Paula Almeida; Fonseca, Rochele Paz

    2015-01-01

    Over recent years, neuropsychological research has been increasingly concerned with the need to develop more ecologically valid instruments for the assessment of executive functions. The Hotel Task is one of the most widely used ecological measures of executive functioning, and provides an assessment of planning, organization, self-monitoring and cognitive flexibility. The goal of this study was to adapt the Hotel Task for use in the Brazilian population. The sample comprised 27 participants (three translators, six expert judges, seven healthy adults, ten patients with traumatic brain injuries and one hotel manager). The adaptation process consisted of five steps, which were repeated until a satisfactory version of the task was produced. The steps were as follows: (1) Translation; (2) Development of new stimuli and brainstorming among the authors; (3) Analysis by expert judges; (4) Pilot studies; (5) Assessment by an expert in business administration and hotel management. The adapted version proved adequate and valid for the assessment of executive functions. However, further research must be conducted to obtain evidence of the reliability, as well as the construct and criterion validity, sensitivity and specificity, of the Hotel Task. Many neurological and/or psychiatric populations may benefit from the adapted task, since it may make significant contributions to the assessment of dysexecutive syndromes and their impact on patient functioning.

  17. The Influence of Executive Functioning on Facial and Subjective Pain Responses in Older Adults

    PubMed Central

    2016-01-01

    Cognitive decline is known to reduce reliability of subjective pain reports. Although facial expressions of pain are generally considered to be less affected by this decline, empirical support for this assumption is sparse. The present study therefore examined how cognitive functioning relates to facial expressions of pain and whether cognition acts as a moderator between nociceptive intensity and facial reactivity. Facial and subjective responses of 51 elderly participants to mechanical stimulation at three intensity levels (50 kPa, 200 kPa, and 400 kPa) were assessed. Moreover, participants completed a neuropsychological examination of executive functioning (planning, cognitive inhibition, and working memory), episodic memory, and psychomotor speed. The results showed that executive functioning has a unique relationship with facial reactivity at low pain intensity levels (200 kPa). Moreover, cognitive inhibition (but not other executive functions) moderated the effect of pressure intensity on facial pain expressions, suggesting that the relationship between pressure intensity and facial reactivity was less pronounced in participants with high levels of cognitive inhibition. A similar interaction effect was found for cognitive inhibition and subjective pain report. Consequently, caution is needed when interpreting facial (as well as subjective) pain responses in individuals with a high level of cognitive inhibition. PMID:27274618

  18. Cognitive functions in preschool children with specific language impairment.

    PubMed

    Reichenbach, Katrin; Bastian, Laura; Rohrbach, Saskia; Gross, Manfred; Sarrar, Lea

    2016-07-01

    A growing body of research has focused on executive functions in children with specific language impairment (SLI). However, results show limited convergence, particularly in preschool age. The current neuropsychological study compared performance of cognitive functions focused on executive components and working memory in preschool children with SLI to typically developing controls. Performance on the measures cognitive flexibility, inhibition, processing speed and phonological short-term memory was assessed. The monolingual, Caucasian study sample consisted of 30 children with SLI (mean age = 63.3 months, SD = 4.3 months) and 30 healthy controls (mean age = 62.2 months, SD = 3.7 months). Groups were matched for age and nonverbal IQ. Socioeconomic status of the participating families was included. Children with SLI had significantly poorer abilities of phonological short-term memory than matched controls. A tendency of poorer abilities in the SLI group was found for inhibition and processing speed. We confirmed phonological short-term memory to be a reliable marker of SLI in preschoolers. Our results do not give definite support for impaired executive function in SLI, possibly owing to limited sensitivity of test instruments in this age group. We argue for a standardization of executive function tests for research use. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. The puzzle box as a simple and efficient behavioral test for exploring impairments of general cognition and executive functions in mouse models of schizophrenia.

    PubMed

    Ben Abdallah, Nada M-B; Fuss, Johannes; Trusel, Massimo; Galsworthy, Michael J; Bobsin, Kristin; Colacicco, Giovanni; Deacon, Robert M J; Riva, Marco A; Kellendonk, Christoph; Sprengel, Rolf; Lipp, Hans-Peter; Gass, Peter

    2011-01-01

    Deficits in executive functions are key features of schizophrenia. Rodent behavioral paradigms used so far to find animal correlates of such deficits require extensive effort and time. The puzzle box is a problem-solving test in which mice are required to complete escape tasks of increasing difficulty within a limited amount of time. Previous data have indicated that it is a quick but highly reliable test of higher-order cognitive functioning. We evaluated the use of the puzzle box to explore executive functioning in five different mouse models of schizophrenia: mice with prefrontal cortex and hippocampus lesions, mice treated sub-chronically with the NMDA-receptor antagonist MK-801, mice constitutively lacking the GluA1 subunit of AMPA-receptors, and mice over-expressing dopamine D2 receptors in the striatum. All mice displayed altered executive functions in the puzzle box, although the nature and extent of the deficits varied between the different models. Deficits were strongest in hippocampus-lesioned and GluA1 knockout mice, while more subtle deficits but specific to problem solving were found in the medial prefrontal-lesioned mice, MK-801-treated mice, and in mice with striatal overexpression of D2 receptors. Data from this study demonstrate the utility of the puzzle box as an effective screening tool for executive functions in general and for schizophrenia mouse models in particular. Published by Elsevier Inc.

  20. Effective record length for the T-year event

    USGS Publications Warehouse

    Tasker, Gary D.

    1983-01-01

    The effect of serial dependence on the reliability of an estimate of the T-yr. event is of importance in hydrology because design decisions are based upon the estimate. In this paper the reliability of estimates of the T-yr. event from two common distributions is given as a function of number of observations and lag-one serial correlation coefficient for T = 2, 10, 20, 50, and 100 yr. A lag-one autoregressive model is assumed with either a normal or Pearson Type-III disturbance term. Results indicate that, if observations are serially correlated, the effective record length should be used to estimate the discharge associated with the expected exceedance probability. © 1983.
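
    A common first-order way to see how lag-one serial correlation erodes reliability is the AR(1) effective-sample-size formula, n_eff = n(1 - rho)/(1 + rho). The paper's tabulated results for T-yr events are distribution- and quantile-specific, so the sketch below illustrates only the general idea, not Tasker's exact values.

        def effective_record_length(n, rho1):
            """AR(1) effective sample size: n * (1 - rho1) / (1 + rho1).

            This classic variance-of-the-mean adjustment shows how
            lag-one serial correlation rho1 shrinks the information
            content of n observations; treat it as a first-order
            approximation for T-yr event estimation.
            """
            return n * (1.0 - rho1) / (1.0 + rho1)

        # 50 years of annual peaks with rho1 = 0.2 carry roughly the
        # information of 33 independent observations.
        print(f"{effective_record_length(50, 0.2):.1f}")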

  1. Should this event be notified to the World Health Organization? Reliability of the International Health Regulations notification assessment process

    PubMed Central

    Hollmeyer, Helge; Hardiman, Max; Harbarth, Stephan; Pittet, Didier

    2011-01-01

    Objective To investigate the reliability of the public health event notification assessment process under the International Health Regulations (2005) (IHR). Methods In 2009, 193 National IHR Focal Points (NFPs) were invited to use the decision instrument in Annex 2 of the IHR to determine whether 10 fictitious public health events should be notified to WHO. Each event’s notifiability was assessed independently by an expert panel. The degree of consensus among NFPs and of concordance between NFPs and the expert panel was considered high when more than 70% agreed on a response. Findings Overall, 74% of NFPs responded. The median degree of consensus among NFPs on notification decisions was 78%. It was high for the six events considered notifiable by the majority (median: 80%; range: 76–91) but low for the remaining four (median: 55%; range: 54–60). The degree of concordance between NFPs and the expert panel was high for the five events deemed notifiable by the panel (median: 82%; range: 76–91) but low (median: 51%; range: 42–60) for those not considered notifiable. The NFPs identified notifiable events with greater sensitivity than specificity (P < 0.001). Conclusion When used by NFPs, the notification assessment process in Annex 2 of the IHR was sensitive in identifying public health events that were considered notifiable by an expert panel, but only moderately specific. The reliability of the assessments could be increased by expanding guidance on the use of the decision instrument and by including more specific criteria for assessing events and clearer definitions of terms. PMID:21479094
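
    The sensitivity/specificity contrast reported in the findings reduces to a simple confusion-matrix computation over the 10 fictitious events. The sketch below uses an invented majority-vote pattern, not the study's raw responses.

        def sensitivity_specificity(decisions, panel):
            """decisions, panel: parallel lists of booleans
            (True = event judged notifiable)."""
            tp = sum(d and p for d, p in zip(decisions, panel))
            tn = sum(not d and not p for d, p in zip(decisions, panel))
            sens = tp / sum(panel)
            spec = tn / (len(panel) - sum(panel))
            return sens, spec

        # 5 of the 10 events were deemed notifiable by the panel; the
        # NFP majority pattern here is illustrative only.
        panel = [True] * 5 + [False] * 5
        nfp   = [True] * 5 + [True, False, False, False, False]
        print(sensitivity_specificity(nfp, panel))  # (1.0, 0.8)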

  2. 41 CFR 300-3.1 - What do the following terms mean?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., an independent establishment, the Government Accountability Office, or a wholly owned Government... Aviation Services (CAS)—Commercial aviation services (CAS) include, for the exclusive use of an executive.... Conference—A meeting, retreat, seminar, symposium or event that involves attendee travel. The term...

  3. Partnering Events | NCI Technology Transfer Center | TTC

    Cancer.gov

    Our team of technology transfer specialists has specialized training in invention reporting, patenting, patent strategy, executing technology transfer agreements and marketing. TTC is comprised of professionals with diverse legal, scientific, and business/marketing expertise. Most of our staff hold doctorate-level technical and/or legal training.

  4. Council Membership Directory 1969.

    ERIC Educational Resources Information Center

    Council of Organizations Serving the Deaf, Washington, DC.

    Information is provided on the purposes, goals, functions, membership, board of directors, calendar of events, publications, and names and addresses of the officers or executive committees of 19 national organizations serving the deaf. Organizations included are the Council of Organizations Serving the Deaf, Alexander Graham Bell Association for…

  5. Perspectives. 1983 Edition.

    ERIC Educational Resources Information Center

    Close Up Foundation, Arlington, VA.

    Designed to encourage informed and critical thinking on contemporary political issues and processes, the articles, case studies, and activities in this student handbook can be incorporated into secondary school social studies units on government or current events. Seven chapters cover the executive branch of government, Congress, the judiciary,…

  6. A Systems Approach to Scalable Transportation Network Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2006-01-01

    Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
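
    The contrast drawn above between time-stepped and discrete event models is easy to see in code: an event-driven loop advances the clock directly to the next scheduled event, so quiet periods cost nothing. The sketch below is a generic minimal event loop in Python, not SCATTER's implementation; the link-traversal handler and timings are invented.

        import heapq

        def simulate(initial_events, horizon):
            """Minimal discrete event loop over (time, vehicle, action)."""
            queue = list(initial_events)
            heapq.heapify(queue)
            while queue:
                t, vid, action = heapq.heappop(queue)
                if t > horizon:
                    break
                if action == "enter_link":
                    # Assumed fixed 30 s traversal; schedule the exit.
                    heapq.heappush(queue, (t + 30.0, vid, "exit_link"))
                print(f"t={t:6.1f} s  vehicle {vid}: {action}")

        simulate([(0.0, 1, "enter_link"), (5.0, 2, "enter_link")],
                 horizon=120.0)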

  7. Disrupted Executive Function and Aggression in Individuals With a History of Adverse Childhood Experiences: An Event-Related Potential Study.

    PubMed

    Xue, Jiao-Mei; Lin, Ping-Zhen; Sun, Ji-Wei; Cao, Feng-Lin

    2017-12-01

    Here, we explored the functional and neural mechanisms underlying aggression related to adverse childhood experiences. We assessed behavioral performance and event-related potentials during a go/no-go and N-back paradigm. The participants were 15 individuals with adverse childhood experiences and high aggression (ACE + HA), 13 individuals with high aggression (HA), and 14 individuals with low aggression and no adverse childhood experiences (control group). The P2 latency (initial perceptual processing) was longer in the ACE + HA group for the go trials. The HA group had a larger N2 (response inhibition) than controls for the no-go trials. Error-related negativity (error processing) in the ACE + HA and HA groups was smaller than that of controls for false alarm go trials. Lastly, the ACE + HA group had shorter error-related negativity latencies than controls for false alarm trials. Overall, our results reveal the neural correlates of executive function in aggressive individuals with ACEs.

  8. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as is traditionally done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20× reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  9. The extended fronto-striatal model of obsessive compulsive disorder: convergence from event-related potentials, neuropsychology and neuroimaging

    PubMed Central

    Melloni, Margherita; Urbistondo, Claudia; Sedeño, Lucas; Gelormini, Carlos; Kichic, Rafael; Ibanez, Agustin

    2012-01-01

    In this work, we explored convergent evidence supporting the fronto-striatal model of obsessive-compulsive disorder (FSMOCD) and the contribution of event-related potential (ERP) studies to this model. First, we considered minor modifications to the FSMOCD model based on neuroimaging and neuropsychological data. We noted the brain areas most affected in this disorder -anterior cingulate cortex (ACC), basal ganglia (BG), and orbito-frontal cortex (OFC) and their related cognitive functions, such as monitoring and inhibition. Then, we assessed the ERPs that are directly related to the FSMOCD, including the error-related negativity (ERN), N200, and P600. Several OCD studies present enhanced ERN and N2 responses during conflict tasks as well as an enhanced P600 during working memory (WM) tasks. Evidence from ERP studies (especially regarding ERN and N200 amplitude enhancement), neuroimaging and neuropsychological findings suggests abnormal activity in the OFC, ACC, and BG in OCD patients. Moreover, additional findings from these analyses suggest dorsolateral prefrontal and parietal cortex involvement, which might be related to executive function (EF) deficits. Thus, these convergent results suggest the existence of a self-monitoring imbalance involving inhibitory deficits and executive dysfunctions. OCD patients present an impaired ability to monitor, control, and inhibit intrusive thoughts, urges, feelings, and behaviors. In the current model, this imbalance is triggered by an excitatory role of the BG (associated with cognitive or motor actions without volitional control) and inhibitory activity of the OFC as well as excessive monitoring of the ACC to block excitatory impulses. This imbalance would interact with the reduced activation of the parietal-DLPC network, leading to executive dysfunction. ERP research may provide further insight regarding the temporal dynamics of action monitoring and executive functioning in OCD. PMID:23015786

  10. Theta oscillations are sensitive to both early and late conflict processing stages: effects of alcohol intoxication.

    PubMed

    Kovacevic, Sanja; Azma, Sheeva; Irimia, Andrei; Sherfey, Jason; Halgren, Eric; Marinkovic, Ksenija

    2012-01-01

    Prior neuroimaging evidence indicates that decision conflict activates medial and lateral prefrontal and parietal cortices. Theoretical accounts of cognitive control highlight anterior cingulate cortex (ACC) as a central node in this network. However, a better understanding of the relative primacy and functional contributions of these areas to decision conflict requires insight into the neural dynamics of successive processing stages including conflict detection, response selection and execution. Moderate alcohol intoxication impairs cognitive control as it interferes with the ability to inhibit dominant, prepotent responses when they are no longer correct. To examine the effects of moderate intoxication on successive processing stages during cognitive control, spatio-temporal changes in total event-related theta power were measured during Stroop-induced conflict. Healthy social drinkers served as their own controls by participating in both alcohol (0.6 g/kg ethanol for men, 0.55 g/kg for women) and placebo conditions in a counterbalanced design. An anatomically constrained magnetoencephalography (aMEG) approach was applied to complex power spectra for theta (4-7 Hz) frequencies. The principal generator of event-related theta power to conflict was estimated to the ACC, with contributions from fronto-parietal areas. The ACC was uniquely sensitive to conflict during both early conflict detection, and later response selection and execution stages. Alcohol attenuated theta power to conflict across successive processing stages, suggesting that alcohol-induced deficits in cognitive control may result from theta suppression in the executive network. Slower RTs were associated with attenuated theta power estimated to the ACC, indicating that alcohol impairs motor preparation and execution subserved by the ACC. In addition to their relevance for the currently prevailing accounts of cognitive control, our results suggest that alcohol-induced impairment of top-down strategic processing underlies poor self-control and inability to refrain from drinking.

  11. A Computational Model of Event Segmentation from Perceptual Prediction

    ERIC Educational Resources Information Center

    Reynolds, Jeremy R.; Zacks, Jeffrey M.; Braver, Todd S.

    2007-01-01

    People tend to perceive ongoing continuous activity as series of discrete events. This partitioning of continuous activity may occur, in part, because events correspond to dynamic patterns that have recurred across different contexts. Recurring patterns may lead to reliable sequential dependencies in observers' experiences, which then can be used…

  12. Monitoring outcomes with relational databases: does it improve quality of care?

    PubMed

    Clemmer, Terry P

    2004-12-01

    There are 3 key ingredients in improving quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and used in a scientific process of quality improvement by a front line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.

  13. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1988-01-01

    The use and implementation of Ada were investigated in distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software and underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  14. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor to processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
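
    The fault-masking scheme described (majority voting over replicated results, with processors that disagree retired from service) reduces to a few lines. The sketch below is illustrative Python, not the Bendix BDX930 executive software; processor names and values are hypothetical.

      # Minimal sketch of SIFT-style fault masking: vote over replicated results
      # and retire processors that disagree with the majority.
      from collections import Counter

      def vote(results):
          """results: {processor_id: value}. Returns (majority_value, dissenters)."""
          majority, _ = Counter(results.values()).most_common(1)[0]
          dissenters = {p for p, v in results.items() if v != majority}
          return majority, dissenters

      active = {"p1", "p2", "p3"}
      results = {"p1": 42, "p2": 42, "p3": 41}   # p3 is faulty
      value, faulty = vote(results)
      active -= faulty                            # reassign work to non-faulty processors
      assert value == 42 and active == {"p1", "p2"}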

  15. History of Reliability and Quality Assurance at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Childers, Frank M.

    2004-01-01

    This Kennedy Historical Document (KHD) provides a unique historical perspective of the organizational and functional responsibilities for the manned and unmanned programs at Kennedy Space Center, Florida. As systems become more complex and hazardous, the attention to detailed planning and execution continues to be a challenge. The need for a robust reliability and quality assurance program will always be a necessity to ensure mission success. As new space missions are defined and technology allows for continued access to space, these programs cannot be compromised. The organizational structure that has provided the reliability and quality assurance functions for both the manned and unmanned programs has seen many changes since the first group came to Florida in the 1950s. The roles of government and contractor personnel have changed with each program and organizational alignment has changed based on that responsibility. The organizational alignment of the personnel performing these functions must ensure independent assessment of the processes.

  16. Real-time stereo matching using orthogonal reliability-based dynamic programming.

    PubMed

    Gong, Minglun; Yang, Yee-Hong

    2007-03-01

    A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60% to 80% of pixels at a rate of approximately 10 to 20 frames per second. If needed, the algorithm can be configured for generating full density disparity maps.
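
    The core idea (a dynamic programming pass along a scanline whose best-path tracing is replaced by a per-pixel local minimum search) can be sketched as follows. This is a simplified single-pass CPU illustration with assumed cost and penalty terms, not the GPU implementation evaluated in the paper.

      # Simplified sketch of one dynamic-programming pass along a scanline:
      # accumulate matching cost with a smoothness penalty, then take the
      # per-pixel minimum instead of tracing a single best path.
      import numpy as np

      def scanline_dp(left, right, max_disp=16, penalty=2.0):
          w = len(left)
          cost = np.full((w, max_disp + 1), np.inf)
          for x in range(w):
              for d in range(min(max_disp, x) + 1):
                  cost[x, d] = abs(float(left[x]) - float(right[x - d]))
          acc = cost.copy()
          for x in range(1, w):
              best_prev = acc[x - 1].min()
              for d in range(max_disp + 1):
                  acc[x, d] += min(acc[x - 1, d], best_prev + penalty)
          return acc.argmin(axis=1)   # local minimum per pixel, parallel-friendly

      left = np.array([5, 5, 9, 9, 7, 7, 3, 3], dtype=float)
      right = np.array([9, 9, 7, 7, 3, 3, 5, 5], dtype=float)
      print(scanline_dp(left, right, max_disp=4))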

  17. An Awful and Impressive Spectacle: Crime Scene Executions in Scotland, 1801-1841

    PubMed Central

    Bennett, Rachel

    2018-01-01

    Early nineteenth-century Britain witnessed rising numbers of offenders facing capital punishment and a plethora of legal and public discourse debating the criminal justice system. This article will examine a distinct Scottish response to the problem in the form of crime scene executions. By the turn of the nineteenth century it had long been the established practice of the Scottish courts to order that capitally convicted offenders would be executed at an established ‘common place’. However, between 1801 and 1841, the decision was taken to execute 37 offenders at the scene of their crimes. This article argues that in the face of an unprecedented number of offenders facing the hangman’s noose the Scottish judges chose to exercise this penal option which had not been used to a similar extent since the mid-eighteenth century. In turn these events had a multiplicity of impact and provoked responses ranging from a morbid curiosity to witness the spectacle to anxiety and outright disdain at its intrusion into areas previously unsullied by the last punishment of the law. PMID:29780278

  18. Anthropological analysis of the Second World War skeletal remains from three karst sinkholes located in southern Croatia.

    PubMed

    Jerković, Ivan; Bašić, Željana; Bečić, Kristijan; Jambrešić, Gordana; Grujić, Ivan; Alujević, Antonio; Kružić, Ivana

    2016-11-01

    Although in cases of war crimes the main effort goes to the identification of victims, it is crucial to consider the execution event as a whole. Thus, the goal of the research was to determine the trauma type and probable cause of death on skeletal remains of civilians executed by partisans during the Second World War, found in three karst sinkholes, and to explain the context in which the injuries occurred. We determined biological profiles, pathological conditions, and traumas, and assessed their lethality. Nineteen skeletons were found; 68.4% had at least one perimortem trauma, classified as lethal or lethal if untreated in 69.2% of cases. The type of execution and the violence administered proved to be age- and health-dependent: the elderly and diseased were executed with the intention to kill, by gunshot while facing the victims, whilst more violent behavior towards younger and healthy individuals was indicated by a higher frequency of blunt force trauma. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Cardiovascular fitness and executive control during task-switching: an ERP study.

    PubMed

    Scisco, Jenna L; Leynes, P Andrew; Kang, Jie

    2008-07-01

    Cardiovascular fitness recently has been linked to executive control function in older adults. The present study examined the relationship between cardiovascular fitness and executive control in young adults using event-related potentials (ERPs). Participants completed a two-part experiment. In part one, a graded exercise test (GXT) was administered using a cycle ergometer to obtain VO(2)max, a measure of maximal oxygen uptake. High-fit participants had VO(2)max measures at or above the 70th percentile based on age and sex, and low-fit participants had VO(2)max measures at or below the 30th percentile. In part two, a task-switching paradigm was used to investigate executive control. Task-switching trials produced slower response times and greater amplitude for both the P3a and P3b components of the ERP relative to a non-switch trial block. No ERP components varied as a function of fitness group. These findings, combined with results from previous research, suggest that the relationship between greater cardiovascular fitness and better cognitive function emerges after early adulthood.

  20. Infant polysomnography: reliability and validity of infant arousal assessment.

    PubMed

    Crowell, David H; Kulp, Thomas D; Kapuniai, Linda E; Hunt, Carl E; Brooks, Lee J; Weese-Mayer, Debra E; Silvestri, Jean; Ward, Sally Davidson; Corwin, Michael; Tinsley, Larry; Peucker, Mark

    2002-10-01

    Infant arousal scoring based on the Atlas Task Force definition of transient EEG arousal was evaluated to determine (1) whether transient arousals can be identified and assessed reliably in infants and (2) whether arousal and no-arousal epochs scored previously by trained raters can be validated reliably by independent sleep experts. Phase I for inter- and intrarater reliability scoring was based on two datasets of sleep epochs selected randomly from nocturnal polysomnograms of healthy full-term, preterm, idiopathic apparent life-threatening event cases, and siblings of Sudden Infant Death Syndrome infants of 35 to 64 weeks postconceptional age. After training, test set 1 reliability was assessed and discrepancies identified. After retraining, test set 2 was scored by the same raters to determine interrater reliability. Later, three raters from the trained group rescored test set 2 to assess inter- and intrarater reliabilities. Interrater and intrarater reliability kappas, with 95% confidence intervals, ranged from substantial to almost perfect levels of agreement. Interrater reliabilities for spontaneous arousals were initially moderate and then substantial. During the validation phase, 315 previously scored epochs were presented to four sleep experts to rate as containing arousal or no-arousal events. Interrater expert agreements were diverse and considered as noninterpretable. Concordance in sleep experts' agreements, based on identification of the previously sampled arousal and no-arousal epochs, was used as a secondary evaluative technique. Results showed agreement by two or more experts on 86% of the Collaborative Home Infant Monitoring Evaluation Study arousal-scored events. Conversely, only 1% of the Collaborative Home Infant Monitoring Evaluation Study-scored no-arousal epochs were rated as an arousal. In summary, this study presents an empirically tested model with procedures and criteria for attaining improved reliability in transient EEG arousal assessments in infants using the modified Atlas Task Force standards. With training based on specific criteria, substantial inter- and intrarater agreement in identifying infant arousals was demonstrated. Corroborative validation results were too disparate for meaningful interpretation. Alternate evaluation based on concordance agreements supports reliance on infant EEG criteria for assessment. Results mandate additional confirmatory validation studies with specific training on infant EEG arousal assessment criteria.
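
    The agreement statistic reported throughout this record is Cohen's kappa, which discounts chance agreement from the observed agreement. A minimal sketch, with hypothetical arousal/no-arousal labels:

      # Minimal sketch of the inter-rater agreement statistic (Cohen's kappa)
      # used to evaluate arousal scoring; the labels below are illustrative.
      from collections import Counter

      def cohen_kappa(r1, r2):
          n = len(r1)
          po = sum(a == b for a, b in zip(r1, r2)) / n              # observed agreement
          c1, c2 = Counter(r1), Counter(r2)
          pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2 # chance agreement
          return (po - pe) / (1 - pe)

      rater1 = ["arousal", "none", "arousal", "none", "none", "arousal"]
      rater2 = ["arousal", "none", "arousal", "arousal", "none", "arousal"]
      print(round(cohen_kappa(rater1, rater2), 2))   # 0.67: "substantial" agreement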

  1. Modeling Airport Ground Operations using Discrete Event Simulation (DES) and X3D Visualization

    DTIC Science & Technology

    2008-03-01

    scenes. It is written in open-source Java and XML using the Netbeans platform, which gave the features of being suitable as standalone applications...and as a plug-in module for the Netbeans integrated development environment (IDE). X3D Graphics is the tool used for the elaboration the creation of...process is shown in Figure 2. To create a new event graph in Viskit, first, the Viskit tool must be launched via Netbeans or from the executable
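
    Beneath tools like Viskit, discrete event simulation reduces to a time-ordered pending-event list drained by a run loop. A minimal sketch of that pattern, with hypothetical airport events rather than the model in this report:

      # Minimal sketch of the discrete-event-simulation pattern that event
      # graphs compile down to: a time-ordered event list and a run loop.
      import heapq

      class Simulator:
          def __init__(self):
              self.now, self._queue, self._n = 0.0, [], 0

          def schedule(self, delay, handler, *args):
              self._n += 1                       # tie-breaker keeps ordering stable
              heapq.heappush(self._queue, (self.now + delay, self._n, handler, args))

          def run(self, until=float("inf")):
              while self._queue and self._queue[0][0] <= until:
                  self.now, _, handler, args = heapq.heappop(self._queue)
                  handler(*args)

      sim = Simulator()
      def arrive(flight):
          print(f"t={sim.now:5.1f}  {flight} arrives")
          sim.schedule(30.0, depart, flight)     # arrival schedules a departure
      def depart(flight):
          print(f"t={sim.now:5.1f}  {flight} departs")

      sim.schedule(0.0, arrive, "UA101")
      sim.schedule(12.5, arrive, "DL202")
      sim.run()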

  2. CARA Risk Assessment Thresholds

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
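
    The threshold logic described here amounts to routing a conjunction event by its collision probability (Pc). A minimal sketch, where the numeric thresholds are placeholders rather than CARA's operational values:

      # Minimal sketch of Pc-threshold routing; threshold values are assumed
      # for illustration, not the operational CARA numbers.
      def assess(pc, red=1e-4, yellow=1e-7):
          if pc >= red:
              return "RED: issue warning, consider and usually execute remediation"
          if pc >= yellow:
              return "YELLOW: analyze event, seek additional data if warranted"
          return "GREEN: no action"

      print(assess(3e-4))   # RED
      print(assess(5e-6))   # YELLOW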

  3. Sex Differences during Visual Scanning of Occlusion Events in Infants

    ERIC Educational Resources Information Center

    Wilcox, Teresa; Alexander, Gerianne M.; Wheeler, Lesley; Norvell, Jennifer M.

    2012-01-01

    A growing number of sex differences in infancy have been reported. One task on which they have been observed reliably is the event-mapping task. In event mapping, infants view an occlusion event involving 1 or 2 objects, the occluder is removed, and then infants see 1 object. Typically, boys are more likely than girls to detect an inconsistency…

  4. Automatic Imitation in Rhythmical Actions: Kinematic Fidelity and the Effects of Compatibility, Delay, and Visual Monitoring

    PubMed Central

    Eaves, Daniel L.; Turgeon, Martine; Vogt, Stefan

    2012-01-01

    We demonstrate that observation of everyday rhythmical actions biases subsequent motor execution of the same and of different actions, using a paradigm where the observed actions were irrelevant for action execution. The cycle time of the distractor actions was subtly manipulated across trials, and the cycle time of motor responses served as the main dependent measure. Although distractor frequencies reliably biased response cycle times, this imitation bias was only a small fraction of the modulations in distractor speed, as well as of the modulations produced when participants intentionally imitated the observed rhythms. Importantly, this bias was not only present for compatible actions, but was also found, though numerically reduced, when distractor and executed actions were different (e.g., tooth brushing vs. window wiping), or when the dominant plane of movement was different (horizontal vs. vertical). In addition, these effects were equally pronounced for execution at 0, 4, and 8 s after action observation, a finding that contrasts with the more short-lived effects reported in earlier studies. The imitation bias was also unaffected when vision of the hand was occluded during execution, indicating that this effect most likely resulted from visuomotor interactions during distractor observation, rather than from visual monitoring and guidance during execution. Finally, when the distractor was incompatible in both dimensions (action type and plane) the imitation bias was not reduced further, in an additive way, relative to the single-incompatible conditions. This points to a mechanism whereby the observed action’s impact on motor processing is generally reduced whenever this is not useful for motor planning. We interpret these findings in the framework of biased competition, where intended and distractor actions can be represented as competing and quasi-encapsulated sensorimotor streams. PMID:23071623

  5. Dysexecutive performance of healthy oldest old subjects on the Frontal Assessment Battery.

    PubMed

    Iavarone, Alessandro; Lorè, Elisa; De Falco, Caterina; Milan, Graziella; Mosca, Raffaela; Pappatà, Sabina; Galeone, Filomena; Sorrentino, Paolo; Scognamiglio, Mario; Postiglione, Alfredo

    2011-01-01

    Frontal lobes and executive functions appear to be more vulnerable to normal aging than other cerebral regions and domains. The aim of the study was to evaluate executive functions by the Frontal Assessment Battery (FAB) in healthy oldest old subjects free of dementia. Thirty-two healthy oldest old subjects (age range 85-97 yrs) and 32 young old subjects (aged 61-74 yrs) were studied. All subjects were living with their families or alone and were considered normal, since they were fully independent in their activities of daily living and without signs or symptoms characteristic of any type of dementia. Mental status was assessed by the Mini- Mental State Examination (MMSE) and executive functions by the FAB. Mean MMSE scores were 23.12 ± 4.68 in oldest old and 26.78 ± 2.60 in young old subjects (p<0.005). Delayed recall was the most impaired domain, followed by executive (Serial 7). Mean FAB scores were 9.37 ± 4.14 in the oldest old and 13.53 ± 2.12 in the young old (p<0.0001). Among the FAB subtests, conceptualization was the most impaired in both groups, with sensitivity to interference and inhibitory control exhibiting higher discrimination between the oldest old and young old. Education influenced performance on MMSE and FAB in both groups. On the FAB test, healthy oldest old subjects showed executive impairment with respect to the young olds, due to the involvement of functions depending on activities of different regions of the frontal lobes. FAB results were consistent with the hypothesis that frontal lobes have a high vulnerability to normal aging. Short composite batteries like the FAB are suitable for rapid and reliable description of patterns of executive functioning in the oldest old.

  6. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    PubMed

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

    Evaluation of postural adjustments during single leg lift requires identification of the initiation of heel lift (T1). T1 measured by means of a motion analysis system is the most reliable approach. However, this method involves considerable workspace, expensive cameras, and substantial time for processing data and setting up the laboratory. The use of ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its data processing and set-up are less time consuming. Further, kinetic data are normally collected using sample frequencies higher than 1000 Hz, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. Displacement of a single heel marker was captured by means of ten Vicon cameras. Kinetic and kinematic data were collected using a sample frequency of 1000 Hz. Data were analysed in two stages: identifying key events in the kinetic data, and assessing the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with a narrow 95% CI and limits of agreement against the reference standard T1, was the Baseline COPy event. The Baseline COPy event was obtained using one force plate and presented excellent between-tester reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
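
    The kinetic alternative described (identifying T1 from a baseline event in the COPy signal) can be illustrated as a baseline-deviation detector: estimate the quiet-stance mean and spread, then flag the first sample that leaves the band. The window length and the 5-SD band below are assumed choices, not the authors' exact criteria.

      # Minimal sketch of a baseline-deviation onset detector for a COPy trace.
      import numpy as np

      def detect_t1(copy_signal, fs, baseline_s=0.5, n_sd=5.0):
          n0 = int(baseline_s * fs)
          base = copy_signal[:n0]
          mu, sd = base.mean(), base.std()
          crossings = np.nonzero(np.abs(copy_signal[n0:] - mu) > n_sd * sd)[0]
          return (n0 + crossings[0]) / fs if crossings.size else None  # T1 in seconds

      fs = 1000.0                                     # kinetic data at 1000 Hz
      t = np.arange(0, 2.0, 1 / fs)
      cop = 0.1 * np.random.default_rng(1).standard_normal(t.size)
      cop[int(1.2 * fs):] += 5.0                      # simulated leg-lift shift
      print(detect_t1(cop, fs))                       # ~1.2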

  7. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.

    2016-01-01

    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.

  8. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE PAGES

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; ...

    2017-01-24

    We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.
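
    The simulation-based side of such an assessment can be illustrated with a Monte Carlo loop: sample uncertain boundary conditions (flood depth, decay heat), run a physics model, and count trials that exceed a failure metric. Everything below (the stand-in temperature model, the distributions, and the limit) is illustrative, not the Argonne analysis.

      # Minimal sketch: estimate passive-system failure probability by sampling
      # uncertain boundary conditions through a stand-in physics model.
      import random

      def peak_temp(flood_depth_m, decay_power_mw):
          blocked = max(0.0, min(1.0, flood_depth_m / 3.0))   # flooding degrades cooling ducts
          return 400.0 + 60.0 * decay_power_mw / (1.0 - 0.6 * blocked)

      random.seed(0)
      trials, failures, limit = 100_000, 0, 650.0
      for _ in range(trials):
          depth = random.uniform(0.0, 4.0)          # uncertain site flooding (m)
          power = random.gauss(2.0, 0.2)            # uncertain decay heat (MW)
          if peak_temp(depth, power) > limit:
              failures += 1
      print(f"estimated failure probability: {failures / trials:.4f}")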

  9. I. SPATIAL SKILLS, THEIR DEVELOPMENT, AND THEIR LINKS TO MATHEMATICS.

    PubMed

    Verdine, Brian N; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy; Newcombe, Nora S

    2017-03-01

    Understanding the development of spatial skills is important for promoting school readiness and improving overall success in STEM (science, technology, engineering, and mathematics) fields (e.g., Wai, Lubinski, Benbow, & Steiger, 2010). Children use their spatial skills to understand the world, including visualizing how objects fit together, and can practice them via spatial assembly activities (e.g., puzzles or blocks). These skills are incorporated into measures of overall intelligence and have been linked to success in subjects like mathematics (Mix & Cheng, 2012) and science (Pallrand & Seeber, 1984; Pribyl & Bodner, 1987). This monograph sought to answer four questions about early spatial skill development: 1) Can we reliably measure spatial skills in 3- and 4-year-olds?; 2) Do spatial skills measured at 3 predict spatial skills at age 5?; 3) Do preschool spatial skills predict mathematics skills at age 5?; and 4) What factors contribute to individual differences in preschool spatial skills (e.g., SES, gender, fine-motor skills, vocabulary, and executive function)? Longitudinal data generated from a new spatial skill test for 3-year-old children, called the TOSA (Test of Spatial Assembly), show that it is a reliable and valid measure of early spatial skills that provides strong prediction to spatial skills measured with established tests at age 5. New data using this measure finds links between early spatial skill and mathematics, language, and executive function skills. Analyses suggest that preschool spatial experiences may play a central role in children's mathematical skills around the time of school entry. Executive function skills provide an additional unique contribution to predicting mathematical performance. In addition, individual differences, specifically socioeconomic status, are related to spatial and mathematical skill. We conclude by exploring ways of providing rich early spatial experiences to children. © 2017 The Society for Research in Child Development, Inc.

  10. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  11. Executive working memory load induces inattentional blindness.

    PubMed

    Fougnie, Daryl; Marois, René

    2007-02-01

    When attention is engaged in a task, unexpected events in the visual scene may go undetected, a phenomenon known as inattentional blindness (IB). At what stage of information processing must attention be engaged for IB to occur? Although manipulations that tax visuospatial attention can induce IB, the evidence is more equivocal for tasks that engage attention at late, central stages of information processing. Here, we tested whether IB can be specifically induced by central executive processes. An unexpected visual stimulus was presented during the retention interval of a working memory task that involved either simply maintaining verbal material or rearranging the material into alphabetical order. The unexpected stimulus was more likely to be missed during manipulation than during simple maintenance of the verbal information. Thus, the engagement of executive processes impairs the ability to detect unexpected, task-irrelevant stimuli, suggesting that IB can result from central, amodal stages of processing.

  12. Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.

    2010-11-01

    We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible lightweight component model that streamlines the integration of stand-alone codes into coupled simulations. Stand-alone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation, configuration, task, and data management, asynchronous event management, simulation monitoring, and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
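
    The loosely coupled pattern described (components registered with a framework that supplies method invocation and publish/subscribe event services) can be sketched in a few lines. Class and method names below are hypothetical, not the IPS API.

      # Minimal sketch of a component framework with invocation and event services.
      class Framework:
          def __init__(self):
              self.components, self._subs = {}, {}

          def register(self, name, comp):
              self.components[name] = comp
              comp.services = self

          def call(self, name, method, *args):                 # inter-component invocation
              return getattr(self.components[name], method)(*args)

          def subscribe(self, topic, handler):
              self._subs.setdefault(topic, []).append(handler)

          def publish(self, topic, payload):                   # asynchronous event service
              for handler in self._subs.get(topic, []):
                  handler(payload)

      class EquilibriumSolver:
          def step(self, t):
              self.services.publish("state_updated", {"t": t, "source": "equilibrium"})
              return f"equilibrium advanced to t={t}"

      fw = Framework()
      fw.register("equilibrium", EquilibriumSolver())
      fw.subscribe("state_updated", lambda e: print("monitor saw:", e))
      print(fw.call("equilibrium", "step", 0.1))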

  13. Method for distributed object communications based on dynamically acquired and assembled software components

    NASA Technical Reports Server (NTRS)

    Sundermier, Amy (Inventor)

    2002-01-01

    A method for acquiring and assembling software components at execution time into a client program, where the components may be acquired from remote networked servers is disclosed. The acquired components are assembled according to knowledge represented within one or more acquired mediating components. A mediating component implements knowledge of an object model. A mediating component uses its implemented object model knowledge, acquired component class information and polymorphism to assemble components into an interacting program at execution time. The interactions or abstract relationships between components in the object model may be implemented by the mediating component as direct invocations or indirect events or software bus exchanges. The acquired components may establish communications with remote servers. The acquired components may also present a user interface representing data to be exchanged with the remote servers. The mediating components may be assembled into layers, allowing arbitrarily complex programs to be constructed at execution time.

  14. Superfund: Evaluating the Impact of Executive Order 12898

    PubMed Central

    O’Neil, Sandra George

    2007-01-01

    Background: The U.S. Environmental Protection Agency (EPA) addresses uncontrolled and abandoned hazardous waste sites throughout the country. Sites that are perceived to be a significant threat to both surrounding populations and the environment can be placed on the U.S. EPA Superfund list and qualify for federal cleanup funds. The equitability of the Superfund program has been questioned; the representation of minority and low-income populations in this cleanup program is lower than would be expected. Thus, minorities and low-income populations may not be benefiting proportionately from this environmental cleanup program. In 1994 President Clinton signed Executive Order 12898 requiring that the U.S. EPA and other federal agencies implement environmental justice policies. These policies were to specifically address the disproportionate environmental effects of federal programs and policies on minority and low-income populations. Objective and Methods: I use event history analysis to evaluate the impact of Executive Order 12898 on the equitability of the Superfund program. Discussion: Findings suggest that despite environmental justice legislation, Superfund site listings in minority and poor areas are even less likely for sites discovered since the 1994 Executive Order. Conclusion: The results of this study indicate that Executive Order 12898 for environmental justice has not increased the equitability of the Superfund program. PMID:17637927

  15. State survey of medical boards regarding abrupt loss of a prescriber of controlled substances.

    PubMed

    Sera, Leah; Brown, Micke; McPherson, Mary Lynn; Walker, Kathryn A; Klein-Schwartz, Wendy

    The purpose of the study was to evaluate states' experiences with abrupt changes in controlled substances (CS) prescribing, to determine whether states have action plans in place to manage such situations, and to describe the components of any such plans. A survey of executive directors of 51 medical boards was conducted to evaluate states' experiences with abrupt changes in CS prescribing, the extent of consumer complaints attributed to these events, and the types of plans in place to manage these situations. Forty-six executive directors of medical boards responded. Twenty boards (43.5 percent) confirmed that their state had experienced abrupt loss of CS providers and 11 (55 percent) of these executive directors indicated that the loss resulted in increased consumer complaints. The majority of executive directors (86 percent) had no action plan. Six executive directors reported some type of action plan or process consisting of regulatory action, patient-provider connection, professional education, patient education, or public notice. Most states do not have operational plans in place. However, a few have key strategies that may be useful in addressing potential problems following abrupt loss of a CS prescriber. State medical boards can play a significant role in the development of comprehensive preparedness plans to mitigate damage from the loss of CS prescribers in the community.

  16. Contingency Management Requirements Document: Preliminary Version. Revision F

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This is the High Altitude, Long Endurance (HALE) Remotely Operated Aircraft (ROA) Contingency Management (CM) Functional Requirements document. This document applies to HALE ROA operating within the National Airspace System (NAS) limited at this time to enroute operations above 43,000 feet (defined as Step 1 of the Access 5 project, sponsored by the National Aeronautics and Space Administration). A contingency is an unforeseen event requiring a response. The unforeseen event may be an emergency, an incident, a deviation, or an observation. Contingency Management (CM) is the process of evaluating the event, deciding on the proper course of action (a plan), and successfully executing the plan.

  17. Experiences with hypercube operating system instrumentation

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Rudolph, David C.

    1989-01-01

    The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
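
    The instrumentation pattern described (record timestamped program events during execution, then compile summary statistics from the trace) looks roughly like the following. The event names and single-process stand-in are illustrative; real traces would be gathered per processor node.

      # Minimal sketch of event-trace instrumentation and summary statistics.
      import time
      from collections import defaultdict

      trace = []
      def record(event, node):
          trace.append((time.perf_counter(), node, event))

      def summarize(trace):
          busy = defaultdict(float)
          start = {}
          for ts, node, event in trace:
              if event == "compute_begin":
                  start[node] = ts
              elif event == "compute_end":
                  busy[node] += ts - start.pop(node)
          return dict(busy)

      for node in range(4):                    # stand-in for per-processor hooks
          record("compute_begin", node)
          time.sleep(0.01 * (node + 1))
          record("compute_end", node)
      print(summarize(trace))                  # per-node busy time -> efficiency view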

  18. Fast grasping of unknown objects using cylinder searching on a single point cloud

    NASA Astrophysics Data System (ADS)

    Lei, Qujiang; Wisse, Martijn

    2017-03-01

    Grasping of unknown objects with neither appearance data nor object models given in advance is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to do post treatment on the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp searching, surface normal of the target object is then used to constrain the synthetization of the cylinder grasp candidates. Operability analysis is then used to select out all executable grasp candidates followed by force balance optimization to choose the most reliable grasp as the final grasp execution. In order to verify the effectiveness of our algorithm, Simulations on a Universal Robot arm UR5 and an under-actuated Lacquey Fetch gripper are used to examine the performance of this algorithm, and successful results are obtained.

  19. The Non-Profit Sector: An Interview with Joanna Lennon.

    ERIC Educational Resources Information Center

    Kielsmeier, Jim

    1986-01-01

    Interviews Joanna Lennon, executive director of the East Bay Conservation Corps, an exemplary program among approximately 40 youth service and conservation corps around the country. Discusses her vision for social change through education. Traces key events and people influencing her personal and career development. (JHZ)

  20. Increasing Student/Corporate Engagement

    ERIC Educational Resources Information Center

    Janicki, Thomas N.; Cummings, Jeffrey W.

    2017-01-01

    Increasing dialog and interaction between the corporate community and students is a key strategic goal of many universities. This paper describes an event that has been specifically designed to increase student and corporate engagement. It describes the process of planning and executing a targeted career day for information systems and information…

  1. 76 FR 7123 - Eleventh Coast Guard District Annual Marine Events

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-09

    ... Federal Government and Indian tribes. Energy Effects We have analyzed this proposed rule under Executive Order 13211, Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use. We have determined that it is not a ``significant energy action'' under that order because it is...

  2. 76 FR 53329 - Eleventh Coast Guard District Annual Marine Events

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    .... Energy Effects We have analyzed this rule under Executive Order 13211, Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use. We have determined that it is not a ``significant energy action'' under that order because it is not a ``significant regulatory action'' under...

  3. 76 FR 30575 - Eleventh Coast Guard District Annual Marine Events

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-26

    ... Federal Government and Indian Tribes. Energy Effects We have analyzed this proposed rule under Executive Order 13211, Actions Concerning Regulations That Significantly Affect Energy Supply, Distribution, or Use. We have determined that it is not a ``significant energy action'' under that order because it is...

  4. 24 CFR 55.2 - Terminology.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... or inoperative during flood and storm events (e.g., data storage centers, generating plants...” (§ 55.2(b)(5)). When FEMA provides interim flood hazard data, such as Advisory Base Flood Elevations... data may be used as “best available information” in accordance with Executive Order 11988. However, a...

  5. Back from the future: Volitional postdiction of perceived apparent motion direction.

    PubMed

    Sun, Liwei; Frank, Sebastian M; Hartstein, Kevin C; Hassan, Wassim; Tse, Peter U

    2017-11-01

    Among physical events, it is impossible that an event could alter its own past for the simple reason that past events precede future events, and not vice versa. Moreover, to do so would invoke impossible self-causation. However, mental events are constructed by physical neuronal processes that take a finite duration to execute. Given this fact, it is conceivable that later brain events could alter the ongoing interpretation of previous brain events if they arrive within this finite duration of interpretive processing, before a commitment is made to what happened. In the current study, we show that humans can volitionally influence how they perceive an ambiguous apparent motion sequence, as long as the top-down command occurs up to 300ms after the occurrence of the actual motion event in the world. This finding supports the view that there is a temporal integration period over which perception is constructed on the basis of both bottom-up and top-down inputs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and to compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
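
    The abstaining policy described can be illustrated independently of the Gaussian-process machinery: act only when the uncertainty interval around the estimated event probability lies entirely on one side of the decision threshold. The threshold and interval bounds below are hypothetical, not the paper's derived criteria.

      # Minimal sketch of an abstaining decision policy over an uncertain
      # event-probability estimate.
      def decide(p_low, p_high, threshold=0.3):
          """p_low/p_high: credible-interval bounds on the event probability."""
          if p_low > threshold:
              return "alert"                   # confidently above the action threshold
          if p_high < threshold:
              return "no alert"                # confidently below it
          return "abstain"                     # uncertainty straddles the threshold

      print(decide(p_low=0.38, p_high=0.52))   # alert
      print(decide(p_low=0.12, p_high=0.44))   # abstain: interval spans the threshold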

  7. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Gregory, S. T.; Urquhart, J. I. A.

    1984-01-01

    The use and implementation of Ada (a trademark of the US Dept. of Defense) in distributed environments in which the hardware is assumed to be unreliable were investigated. In particular, the possibility was examined that a distributed system may be programmed entirely in Ada so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the underlying hardware.

  8. Using patient acuity data to manage patient care outcomes and patient care costs.

    PubMed

    Van Slyck, A; Johnson, K R

    2001-01-01

    This article describes actual reported uses for patient acuity data that go beyond historical uses in determining staffing allocations. These expanded uses include managing patient care outcomes and health care costs. The article offers the patient care executive examples of how objective, valid, and reliable data are used to drive approaches to effectively influence decision making in an increasingly competitive health care environment.

  9. Candidate Technologies for the Integrated Health Management Program

    NASA Technical Reports Server (NTRS)

    Johnson, Neal F., Jr.; Martin, Fred H.

    1993-01-01

    The purpose of this report is to assess Vehicle Health Management (VHM) technologies for implementation as a demonstration. Extensive studies have been performed to determine technologies which could be implemented on the Atlas and Centaur vehicles as part of a bridging program. This paper discusses areas where VHM can be implemented today for benefits in reliability, performance, and cost reduction. VHM options are identified and one demonstration is recommended for execution.

  10. USSOCOM Fact Book: Special Operations Forces

    DTIC Science & Technology

    2010-01-01

    units include the 75th Ranger Regiment, headquartered at Fort Benning, Ga.; 160th Special Operations Aviation Regiment (Airborne) at Fort Campbell, Ky...throughout the world Rangers are the masters of special light infantry operations. This lethal, agile, and flexible force is capable of executing a wide...responsiveness and reliability define the Ranger Regiment as the versatile and adaptive force of choice for missions of high risk and strategic importance in

  11. Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.

    PubMed

    Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie

    2010-07-01

    Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy to use, easy to learn, and error-resistant EHR systems to the users. To evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task into Mental (Internal) or Physical (External) operators. This analysis was performed by two analysts independently and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, they would spend about 22 min (independent of system response time) for data entry, of which 11 min are spent on more effortful mental operators. The inter-rater reliability analysis performed for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals and identifies the following finding related to the performance of AHLTA: (1) large number of average total steps to complete common tasks, (2) high average execution time and (3) large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort, required for the tasks. 2010 Elsevier Ireland Ltd. All rights reserved.
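
    The KLM estimation step is simple enough to sketch: encode a task as a sequence of standard operators and sum their published average times. The operator values below follow Card, Moran, and Newell's classic averages; the task encoding itself is hypothetical, not an AHLTA task.

      # Minimal sketch of a Keystroke Level Model (KLM) execution-time estimate.
      KLM_SECONDS = {"K": 0.28,   # keystroke (average typist)
                     "P": 1.10,   # point with mouse
                     "B": 0.10,   # mouse button press
                     "H": 0.40,   # home hands between devices
                     "M": 1.35}   # mental preparation operator

      def klm_time(steps):
          return sum(KLM_SECONDS[op] for op in steps)

      # e.g. point at a field, click, think, then type four characters
      task = ["P", "B", "M"] + ["K"] * 4
      print(f"{klm_time(task):.2f} s")   # 3.67 s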

  12. Objects and events as determinants of parallel processing in dual tasks: evidence from the backward compatibility effect.

    PubMed

    Ellenbogen, Ravid; Meiran, Nachshon

    2011-02-01

    The backward-compatibility effect (BCE) is a major index of parallel processing in dual tasks and is related to the dependency of Task 1 performance on Task 2 response codes (Hommel, 1998). The results of four dual-task experiments showed that a BCE occurs when the stimuli of both tasks are included in the same visual object (Experiments 1 and 2) or belong to the same perceptual event (Experiments 3 and 4). Thus, the BCE may be modulated by factors that influence whether both task stimuli are included in the same perceptual event (objects, as studied in cognitive experiments, being special cases of events). As with objects, drawing attention to a (selected) event results in the processing of its irrelevant features and may interfere with task execution. (c) 2010 APA, all rights reserved.

  13. Using Mean Orbit Period in Mars Reconnaissance Orbiter Maneuver Design

    NASA Technical Reports Server (NTRS)

    Chung, Min-Kun J.; Menon, Premkumar R.; Wagner, Sean V.; Williams, Jessica L.

    2014-01-01

    Mars Reconnaissance Orbiter (MRO) has provided communication relays for a number of Mars spacecraft. In 2016 MRO is expected to support a relay for NASA's Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) spacecraft. In addition, support may be needed by another mission, ESA's ExoMars EDL Demonstrator Module (EDM), only 21 days after the InSight coverage. The close proximity of these two events presents a unique challenge to a conventional orbit synchronization maneuver where one deterministic maneuver is executed prior to each relay. Since the two events are close together and the difference in required phasing between InSight and EDM may be up to half an orbit (yielding a large execution error), the downtrack timing error can increase rapidly at the EDM encounter. Thus, a new maneuver strategy that does not require a deterministic maneuver in between the two events (with only a small statistical cleanup) is proposed in the paper. This proposed strategy rests heavily on the stability of the mean orbital period. The ability to search for and set the specified mean period is fundamental to the proposed maneuver design as well as to understanding the scope of the problem. The proposed strategy is explained and its result is used to understand and solve the problem in the flight operations environment.
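
    The sensitivity driving this strategy is easy to quantify: a mean-period error dP accumulates into a downtrack timing error of roughly n * dP after n orbits. The numbers below are assumed for illustration, not MRO flight values.

      # Back-of-the-envelope: downtrack timing error growth from a period error.
      period_s = 6780.0            # ~113-minute low Mars orbit, assumed
      dP_s = 0.5                   # assumed mean-period error after a maneuver
      orbits_per_day = 86400.0 / period_s
      days = 21.0                  # gap between the two relay events
      n = orbits_per_day * days
      print(f"timing error after {days:.0f} days: {n * dP_s:.0f} s")   # ~134 s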

  14. Phase-Space Detection of Cyber Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Jimenez, Jarilyn M; Ferber, Aaron E; Prowell, Stacy J

    Energy Delivery Systems (EDS) are a network of processes that produce, transfer and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed a successful detection of cyber events.
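
    The analysis pattern described (time-delay embedding of a timing series per Takens' theorem, with discretized states as graph nodes and state-to-state transitions as links) can be sketched as follows; the delay, dimension, and binning choices are illustrative.

      # Minimal sketch: Takens-style delay embedding of a scalar series, then a
      # state-transition graph whose invariants can be compared across runs.
      import numpy as np

      def delay_embed(x, dim=3, tau=2):
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      def state_graph(x, dim=3, tau=2, bins=4):
          emb = delay_embed(np.asarray(x, float), dim, tau)
          # discretize each embedded point into a symbolic state
          edges = np.digitize(emb, np.linspace(emb.min(), emb.max(), bins + 1)[1:-1])
          states = [tuple(row) for row in edges]
          graph = {}
          for a, b in zip(states, states[1:]):
              graph.setdefault(a, set()).add(b)
          return graph

      rng = np.random.default_rng(2)
      baseline = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
      g = state_graph(baseline)
      print(len(g), "nodes;", sum(map(len, g.values())), "edges")  # graph invariants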

  15. In Silico, Experimental, Mechanistic Model for Extended-Release Felodipine Disposition Exhibiting Complex Absorption and a Highly Variable Food Interaction

    PubMed Central

    Kim, Sean H. J.; Jackson, Andre J.; Hunt, C. Anthony

    2014-01-01

    The objective of this study was to develop and explore new, in silico experimental methods for deciphering complex, highly variable absorption and food interaction pharmacokinetics observed for a modified-release drug product. Toward that aim, we constructed an executable software analog of study participants to whom product was administered orally. The analog is an object- and agent-oriented, discrete event system, which consists of grid spaces and event mechanisms that map abstractly to different physiological features and processes. Analog mechanisms were made sufficiently complicated to achieve prespecified similarity criteria. An equation-based gastrointestinal transit model with nonlinear mixed effects analysis provided a standard for comparison. Subject-specific parameterizations enabled each executed analog’s plasma profile to mimic features of the corresponding six individual pairs of subject plasma profiles. All achieved prespecified, quantitative similarity criteria, and outperformed the gastrointestinal transit model estimations. We observed important subject-specific interactions within the simulation and mechanistic differences between the two models. We hypothesize that mechanisms, events, and their causes occurring during simulations had counterparts within the food interaction study: they are working, evolvable, concrete theories of dynamic interactions occurring within individual subjects. The approach presented provides new, experimental strategies for unraveling the mechanistic basis of complex pharmacological interactions and observed variability. PMID:25268237

  16. Event-Related Potentials in a Cued Go-NoGo Task Associated with Executive Functions in Adolescents with Autism Spectrum Disorder; A Case-Control Study.

    PubMed

    Høyland, Anne L; Øgrim, Geir; Lydersen, Stian; Hope, Sigrun; Engstrøm, Morten; Torske, Tonje; Nærland, Terje; Andreassen, Ole A

    2017-01-01

    Executive functions are often affected in autism spectrum disorders (ASD). The underlying biology is, however, not well known. In the DSM-5, ASD is characterized by difficulties in two domains: Social Interaction and Repetitive and Restricted Behavior, RRB. Insistence of Sameness is part of RRB and has been reported related to executive functions. We aimed to identify differences between ASD and typically developing (TD) adolescents in Event Related Potentials (ERPs) associated with response preparation, conflict monitoring and response inhibition using a cued Go-NoGo paradigm. We also studied the effect of age and emotional content of paradigm related to these ERPs. We investigated 49 individuals with ASD and 49 TD aged 12-21 years, split into two groups below (young) and above (old) 16 years of age. ASD characteristics were quantified by the Social Communication Questionnaire (SCQ) and executive functions were assessed with the Behavior Rating Inventory of Executive Function (BRIEF), both parent-rated. Behavioral performance and ERPs were recorded during a cued visual Go-NoGo task which included neutral pictures (VCPT) and pictures of emotional faces (ECPT). The amplitudes of ERPs associated with response preparation, conflict monitoring, and response inhibition were analyzed. The ASD group showed markedly higher scores than TD in both SCQ and BRIEF. Behavioral data showed no case-control differences in either the VCPT or ECPT in the whole group. While there were no significant case-control differences in ERPs from the combined VCPT and ECPT in the whole sample, the Contingent Negative Variation (CNV) was significantly enhanced in the old ASD group (p = 0.017). When excluding ASD with comorbid ADHD we found a significantly increased N2 NoGo (p = 0.016) and N2-effect (p = 0.023) for the whole group. We found no case-control differences in the P3-components. Our findings suggest increased response preparation in adolescents with ASD older than 16 years and enhanced conflict monitoring in ASD without comorbid ADHD during a Go-NoGo task. The current findings may be related to Insistence of Sameness in ASD. The pathophysiological underpinnings of executive dysfunction should be further investigated to learn more about how this phenomenon is related to core characteristics of ASD.

  17. The reliability and validity of subjective notational analysis in comparison to global positioning system tracking to assess athlete movement patterns.

    PubMed

    Doğramac, Sera N; Watsford, Mark L; Murphy, Aron J

    2011-03-01

    Subjective notational analysis can be used to track players and analyse movement patterns during match-play of team sports such as futsal. The purpose of this study was to establish the validity and reliability of the Event Recorder for subjective notational analysis. A course replicating ten minutes of futsal match-play movement patterns was designed, and ten participants completed it. The course allowed data derived from subjective notational analysis to be compared with the known distances of the course and with GPS data. The study analysed six locomotor activity categories, focusing on total distance covered, total duration of activities and total frequency of activities. The values from the known measurements and the Event Recorder were similar, whereas the majority of significant differences were found between the Event Recorder and GPS values. The reliability of subjective notational analysis was established by analysing all ten participants on two occasions, as well as by analysing five randomly selected futsal players twice during match-play. Subjective notational analysis is a valid and reliable method of tracking player movements, and may be a preferred and more effective method than GPS, particularly for indoor sports such as futsal, and field sports where short distances and changes in direction are observed.

  18. Seeking high reliability in primary care: Leadership, tools, and organization.

    PubMed

    Weaver, Robert R

    2015-01-01

    Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliable-seeking culture across an organization. Progress toward a reliability-seeking, system-oriented approach to care remains ongoing, and movement in that direction requires deliberate and sustained effort by committed leaders in health care.

  19. An object-oriented approach to risk and reliability analysis : methodology and aviation safety applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane

    2003-09-01

    This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
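
    To make the idea of probabilistic branching concrete, here is a toy enumeration in the spirit of OBEST; the events, outcome probabilities, and incursion condition are invented for illustration and do not come from the OBEST aviation model.

        from itertools import product

        events = {   # each event's possible outcomes and their probabilities
            "clearance_readback": [("correct", 0.98), ("misheard", 0.02)],
            "crew_crosscheck":    [("caught", 0.90), ("missed", 0.10)],
            "tower_monitoring":   [("intervenes", 0.70), ("late", 0.30)],
        }

        scenarios = []
        for combo in product(*events.values()):
            outcomes = dict(zip(events, (o for o, _ in combo)))
            prob = 1.0
            for _, p in combo:     # likelihood accumulates along each branch
                prob *= p
            # in this toy model, an incursion requires all three barriers to fail
            incursion = (outcomes["clearance_readback"] == "misheard"
                         and outcomes["crew_crosscheck"] == "missed"
                         and outcomes["tower_monitoring"] == "late")
            scenarios.append((outcomes, prob, incursion))

        p_incursion = sum(p for _, p, hit in scenarios if hit)
        print(f"P(runway incursion) = {p_incursion:.6f}")   # 0.02 * 0.10 * 0.30 = 0.0006

    Because every branch carries its probability, each generated scenario arrives with its likelihood attached, which is the property the abstract highlights.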

  20. Neural Correlates of Working Memory Performance in Adolescents and Young Adults with Dyslexia

    ERIC Educational Resources Information Center

    Vasic, Nenad; Lohr, Christina; Steinbrink, Claudia; Martin, Claudia; Wolf, Robert Christian

    2008-01-01

    Behavioral studies indicate deficits in phonological working memory (WM) and executive functioning in dyslexics. However, little is known about the underlying functional neuroanatomy. In the present study, neural correlates of WM in adolescents and young adults with dyslexia were investigated using event-related functional magnetic resonance…

  1. 76 FR 61717 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-05

    ... computer science based technology that may provide the capability of detecting untoward events such as... is comprised of a dedicated computer server that executes specially designed software with input data... computer assisted clinical ordering. J Biomed Inform. 2003 Feb-Apr;36(1-2):4-22. [PMID 14552843...

  2. Why Are Faculty Development Workshops a Waste of Time?

    ERIC Educational Resources Information Center

    Berk, Ronald A.

    2010-01-01

    This article discusses how to design and execute a faculty development workshop. The author first describes the characteristics of the faculty development event that can sabotage or facilitate attendance. They relate to: (a) format and frequency; (b) venues; (c) technical support; and (d) competing activities. Then, the author presents ten…

  3. OCLC Annual Report 1998/99. A Great Time for Libraries!

    ERIC Educational Resources Information Center

    OCLC Online Computer Library Center, Inc., Dublin, OH.

    Beginning this annual report is a letter to OCLC members from OCLC President and Chief Executive Jay Jordan. The report contains the following sections: (1) program and financial highlights; (2) the year in review, including membership events, online services, strategic alliances, Forest Press, preservation resources, research, and the OCLC…

  4. 78 FR 69517 - U.S. Advisory Commission on Public Diplomacy; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-19

    ... Commission may conduct studies, inquiries, and meetings, as it deems necessary. It may assemble and..., in consultation with the Executive Director. The Advisory Commission may undertake foreign travel in pursuit of its studies and coordinate, sponsor, or oversee projects, studies, events, or other activities...

  5. 77 FR 12456 - Eighth Coast Guard District Annual Marine Events and Safety Zones

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-01

    ... restricting and governing vessel movements are also short in duration. Additionally, the public is given.... Protection of Children We have analyzed this rule under Executive Order 13045, Protection of Children from... create an environmental risk to health or risk to safety that may disproportionately affect children...

  6. State of STEM

    NASA Image and Video Library

    2013-02-13

    NASA Deputy Administrator Lori Garver listens to a question during the first-ever State of Science, Technology, Engineering and Math Event (SoSTEM) held at the Eisenhower Executive Office Building, Wednesday, Feb. 13, 2013 in Washington. Garver was part of a panel that took questions from a crowd of STEM students. Photo Credit: (NASA/Bill Ingalls)

  7. Israel: Background and U.S. Relations

    DTIC Science & Technology

    2012-11-07

    often seek to determine how regional events and U.S. policy choices may affect Israel’s security, and Congress provides active oversight of executive...14 Issues Affecting U.S.-Israel Relations...Several reports identify Hezbollah as the perpetrator of the July 2012 suicide bus bombing in Burgas, Bulgaria that targeted an Israeli tourist

  8. 36 CFR 1004.13 - Obstructing traffic.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13... road, except as authorized by the Executive Director, or in the event of an accident or other condition...

  9. 36 CFR 1004.13 - Obstructing traffic.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Obstructing traffic. 1004.13 Section 1004.13 Parks, Forests, and Public Property PRESIDIO TRUST VEHICLES AND TRAFFIC SAFETY § 1004.13... road, except as authorized by the Executive Director, or in the event of an accident or other condition...

  10. Adult Brain and Spine Tumor Research - Facebook Live Event

    Cancer.gov

    Dr. Mark Gilbert, Chief, and Dr. Terri Armstrong, Senior Investigator, of the NCI Center for Cancer Research, Neuro-Oncology Branch, joined by moderator David Arons, Chief Executive Officer of the National Brain Tumor Society, led a discussion on adult brain and spine tumor research and treatment.

  11. Planning in Young Children: A Review and Synthesis

    ERIC Educational Resources Information Center

    McCormack, Teresa; Atance, Cristina M.

    2011-01-01

    Research on the development of planning is reviewed in the context of a framework that considers the role of three types of cognitive flexibility in planning development: event-independent temporal representation, executive function, and self-projection. It is argued that the emergence of planning abilities in the preschool period is dependent…

  12. Time management situation assessment (TMSA)

    NASA Technical Reports Server (NTRS)

    Richardson, Michael B.; Ricci, Mark J.

    1992-01-01

    TMSA is a concept prototype developed to support NASA Test Directors (NTDs) in schedule execution monitoring during the later stages of a Shuttle countdown. The program detects qualitative and quantitative constraint violations in near real-time. The next version will support incremental rescheduling and reason over a substantially larger number of scheduled events.
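
    A minimal sketch of the kind of quantitative constraint check such a schedule monitor might perform; the event names and slip tolerances below are hypothetical, not from the TMSA prototype.

        from datetime import datetime, timedelta

        schedule = [  # (event, planned_start, maximum allowed slip)
            ("LOX_LOAD_START", datetime(1992, 1, 1, 6, 0),  timedelta(minutes=10)),
            ("CREW_INGRESS",   datetime(1992, 1, 1, 8, 30), timedelta(minutes=5)),
        ]

        def check_violations(actuals, schedule):
            """Flag events whose observed start slips past the allowed margin."""
            planned = {name: (start, slip) for name, start, slip in schedule}
            violations = []
            for name, observed in actuals.items():
                start, slip = planned[name]
                if observed - start > slip:
                    violations.append((name, observed - start))
            return violations

        actuals = {"LOX_LOAD_START": datetime(1992, 1, 1, 6, 12),
                   "CREW_INGRESS":   datetime(1992, 1, 1, 8, 33)}
        print(check_violations(actuals, schedule))   # LOX_LOAD_START slipped 12 minutes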

  13. Composing, Analyzing and Validating Software Models

    NASA Astrophysics Data System (ADS)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Grp). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
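
    As a flavor of the SPN techniques mentioned, the sketch below simulates a two-place stochastic Petri net for a repairable component; the net structure, rates, and availability question are invented here for illustration, not taken from the research described.

        import random

        def simulate(t_end=1000.0, seed=7):
            """Simulate a two-place SPN (working <-> failed) and estimate the
            steady-state availability of a repairable component."""
            random.seed(seed)
            places = {"working": 1, "failed": 0}
            transitions = [      # (name, input place, output place, rate)
                ("fail",   "working", "failed",  0.01),
                ("repair", "failed",  "working", 0.5),
            ]
            t, uptime = 0.0, 0.0
            while t < t_end:
                enabled = [tr for tr in transitions if places[tr[1]] > 0]
                # race of exponential clocks among the enabled transitions
                dt, name, src, dst = min((random.expovariate(rate), name, src, dst)
                                         for name, src, dst, rate in enabled)
                if places["working"]:
                    uptime += min(dt, t_end - t)
                t += dt
                if t < t_end:    # fire the winning transition
                    places[src] -= 1
                    places[dst] += 1
            return uptime / t_end

        print(f"estimated availability: {simulate():.3f}")   # analytic: 0.5 / 0.51, about 0.980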

  14. Design for Verification: Using Design Patterns to Build Reliable Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)

    2003-01-01

    Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.

  15. Composing, Analyzing and Validating Software Models

    NASA Technical Reports Server (NTRS)

    Sheldon, Frederick T.

    1998-01-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Grp). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  16. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
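
    The arithmetic behind these statements can be illustrated with a small, hypothetical calculation (the MTBF values below are invented, not the ISS database figures): under constant failure rates each technology contributes exp(-t/MTBF) to a serial system's reliability, and carrying n spares raises each term to the Poisson probability that failures do not exceed n.

        from math import exp, factorial

        HOURS = 8760                       # 1-year deep-space mission
        mtbf = {"CO2_removal": 3000, "O2_generation": 2500, "water_recovery": 4000}

        def p_enough_spares(expected_failures, spares):
            """P(failures <= spares) for a Poisson-distributed failure count."""
            return sum(expected_failures**k * exp(-expected_failures) / factorial(k)
                       for k in range(spares + 1))

        # single string, no spares: serial product of exp(-t/MTBF) terms
        r_single = 1.0
        for m in mtbf.values():
            r_single *= exp(-HOURS / m)
        print(f"single-string reliability: {r_single:.5f}")   # well under 1%

        # with n spares per technology, each term becomes P(failures <= n)
        for n in range(4):
            r = 1.0
            for m in mtbf.values():
                r *= p_enough_spares(HOURS / m, n)
            print(f"{n} spares each: system reliability = {r:.4f}")

    The loop makes the abstract's point visible: pushing system reliability up forces spares (and hence mass) onto every technology in the serial chain, not just the least reliable one.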

  17. Psychometric Properties of an Instrument to Measure Mother-Infant Togetherness After Childbirth.

    PubMed

    Lawrence, Carol L; Norris, Anne E

    2016-01-01

    The purpose of this research was to evaluate the psychometric properties of a new instrument to measure mother-infant togetherness, Mother-Infant Togetherness Survey (MITS). Stage 1 examined content validity. Stage 2 pretested the readability and understandability and further examined content validity. Stage 3 examined women's ability to accurately self-report on the Delivery Events subscale. Stages 4 and 5 examined construct validity. Good content validity was obtained at the scale/subscale level (CVI = .91-1.00). Internal consistency reliability was evaluated at the scale/subscale level (α = .62-.89). Construct validity was supported with known groups testing and factor analysis. Study findings provide support for the reliability and validity of the MITS. Future research should be done to improve the internal consistency reliability of the Postpartum Events subscale.
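
    For reference, the internal consistency statistic reported here (Cronbach's alpha) can be computed as below; the simulated item scores are purely illustrative, not MITS data.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) array of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(50, 1))                       # shared construct
        scores = latent + rng.normal(scale=0.8, size=(50, 6))   # 6 correlated items
        print(f"alpha = {cronbach_alpha(scores):.2f}")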

  18. Considering context: reliable entity networks through contextual relationship extraction

    NASA Astrophysics Data System (ADS)

    David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.

    2016-05-01

    Existing information extraction techniques can only partially address the problem of exploiting unreadably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through (1) more precise event and relationship interpretation, (2) more detailed information about extracted events and relationships, and (3) more reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.
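
    As a toy stand-in for the attribute labeler described (the real system uses trained statistical models and far richer linguistic features than these few patterns), a rule-based version of the labeling task might look like:

        import re

        RULES = [   # (pattern, attribute label) - illustrative cues only
            (r"\b(will|plans? to|scheduled to)\b", "planned"),
            (r"\b(could|might|may|threatens? to)\b", "potential"),
            (r"\b(every|weekly|annually|each)\b", "recurring"),
        ]

        def label_event(sentence):
            for pattern, label in RULES:
                if re.search(pattern, sentence, re.I):
                    return label
            return "asserted"   # default: a past or ongoing factual event

        print(label_event("The group plans to attack the depot."))   # planned
        print(label_event("Rebels shelled the city on Tuesday."))    # asserted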

  19. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L. K.; Vogel, R. M.

    2015-11-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of the corresponding average return periods, risks, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case, demonstrate how metrics such as reliability and average return period are impacted by nonstationarity, and discuss the implications for planning and design. Our theoretical analysis linking the hazard event series X with the corresponding failure time series T should have application to a wide class of natural hazards, with rich opportunities for future extensions.
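
    A small numeric sketch of the link between a time-varying exceedance probability and the reliability and average return period discussed above; the linear hazard trend assumed below is an arbitrary example, not the paper's GP-based derivation.

        import numpy as np

        def reliability(p):
            """P(no exceedance through year t), for each t, given yearly
            exceedance probabilities p[0], p[1], ..."""
            return np.cumprod(1.0 - p)

        def mean_return_period(p):
            """E[T], where T is the first year containing an exceedance."""
            r = reliability(p)
            f = p * np.concatenate(([1.0], r[:-1]))   # P(T = t)
            t = np.arange(1, len(p) + 1)
            return (t * f).sum() / f.sum()

        years = np.arange(1, 2001)
        p_stationary = np.full(years.shape, 0.01)         # a "100-year" event
        p_trending = 0.01 * (1.0 + 0.01 * (years - 1))    # +1% per year hazard trend
        print(mean_return_period(p_stationary))   # about 100 years, as expected
        print(mean_return_period(p_trending))     # substantially shorter under the trend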

  20. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system was designed for the H1 Experiment to identify, within the GRID, the resources best suited to executing CPU-intensive Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computer Elements (CEs), Storage Elements (SEs), WMS-servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, submitted through various WMSs as well as directly to the CREAM-CEs. Real H1 MC Production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows problems at the GRID sites to be identified and promptly addressed (for example, by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl, with a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions and the percentage of failed and finished jobs relative to all jobs on the CEs, and determine the average values of waiting and running time for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
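
    The probe-and-score cycle described above can be sketched schematically as follows; submit and poll are stand-ins for the actual gLite/EMI job submission and status commands (which are not reproduced here), and the queue name and thresholds are invented.

        THRESHOLDS = {"max_wait_s": 3600, "max_fail_fraction": 0.2}

        def probe(ce_queue, submit, poll, n_jobs=5):
            """Submit short test MC jobs to one CE queue and collect statistics."""
            stats = {"failed": 0, "waits": [], "runs": []}
            handles = [submit(ce_queue) for _ in range(n_jobs)]
            for h in handles:
                status = poll(h)                # blocks until the job is terminal
                if status["state"] != "DONE":
                    stats["failed"] += 1
                else:
                    stats["waits"].append(status["wait_s"])
                    stats["runs"].append(status["run_s"])
            return stats

        def usable(stats, n_jobs=5):
            """Decide whether the CE goes into the production configuration
            or into the exception table."""
            if stats["failed"] / n_jobs > THRESHOLDS["max_fail_fraction"]:
                return False
            return all(w <= THRESHOLDS["max_wait_s"] for w in stats["waits"])

        # fake submit/poll so the sketch runs without a GRID; real code would
        # invoke the gLite/EMI tooling here
        def fake_submit(queue):
            return {"queue": queue}

        def fake_poll(handle):
            return {"state": "DONE", "wait_s": 600, "run_s": 1200}

        stats = probe("ce.example.org/cream-pbs-hone", fake_submit, fake_poll)
        print(usable(stats))   # True -> included in the production configuration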
