Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are becoming ever more frequent as the feature sizes of ICs continue to shrink. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot properly balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by embedding self-checking logic into the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP detects up to 100% of particle-induced soft errors and recovers from nearly 95% of them; the remaining 5% enter a specific trap.
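To make the detect-and-roll-back flow above concrete, here is a minimal behavioral sketch of a DMR-style stage comparison with checkpoint rollback. The class layout, the 1-cycle vs. 5-cycle recovery split, and the trap fallback are our illustrative assumptions, not the SRDP RTL.

```python
# Behavioral sketch only: two redundant pipeline copies are compared at the
# stage registers; a mismatch triggers rollback to the last committed state.

class DualPipelineCore:
    def __init__(self):
        self.state = {"pc": 0, "regs": [0] * 32}
        self.checkpoint = None               # last known-good state

    def commit(self):
        self.checkpoint = {"pc": self.state["pc"],
                           "regs": list(self.state["regs"])}

    def stage_check(self, copy_a, copy_b, pre_writeback):
        """Compare stage-register contents of the two pipeline copies."""
        if copy_a == copy_b:
            self.commit()
            return ("commit", 0)
        if self.checkpoint is not None:
            # Restore architectural state; assumed cost: 1 cycle if the
            # error is caught before writeback, 5 cycles for a full replay.
            self.state = {"pc": self.checkpoint["pc"],
                          "regs": list(self.checkpoint["regs"])}
            return ("rollback", 1 if pre_writeback else 5)
        return ("trap", None)                # the ~5% unrecoverable cases
```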
A simulation for gravity fine structure recovery from low-low GRAVSAT SST data
NASA Technical Reports Server (NTRS)
Estes, R. H.; Lancaster, E. R.
1976-01-01
Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. A 5 degree by 5 degree surface density block representation of the high order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the estimation strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.
Utilizing semantic networks to database and retrieve generalized stochastic colored Petri nets
NASA Technical Reports Server (NTRS)
Farah, Jeffrey J.; Kelley, Robert B.
1992-01-01
Previous work introduced the Planning Coordinator (PCOORD), a coordinator functioning within the hierarchy of the Intelligent Machine Model. Within the structure of the Planning Coordinator resides the Primitive Structure Database (PSDB), which provides the primitive structures the Planning Coordinator uses in establishing error recovery or on-line path plans. This report further explores the Primitive Structure Database and establishes the potential of utilizing semantic networks as a means of efficiently storing and retrieving the Generalized Stochastic Colored Petri Nets from which the error recovery plans are derived.
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up from such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
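As a rough illustration of how recovered stresses drive the Zienkiewicz-Zhu estimator mentioned above, the sketch below computes an element-wise energy-norm-like indicator from the difference between raw and recovered stresses. For brevity it omits the material compliance matrix (equivalently, assumes it is the identity), which a real implementation would include.

```python
import numpy as np

def zz_element_indicator(sigma_fe, sigma_rec, wdetj):
    """Error indicator for one element from quadrature-point stresses.

    sigma_fe  : raw FE stresses, shape (n_qp, n_components)
    sigma_rec : recovered (smoothed) stresses at the same points
    wdetj     : quadrature weight times Jacobian determinant, shape (n_qp,)
    """
    diff = sigma_rec - sigma_fe
    return np.sqrt(np.sum(wdetj * np.sum(diff ** 2, axis=1)))

def mark_for_refinement(indicators, fraction=0.3):
    # Adaptive refinement: flag the worst `fraction` of elements.
    cutoff = np.quantile(indicators, 1.0 - fraction)
    return indicators >= cutoff
```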
Measuring Error Identification and Recovery Skills in Surgical Residents.
Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L
2017-02-01
Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Chien, Andrew A.; Balaji, Pavan; Dun, Nan; ...
2016-09-08
Exascale studies project reliability challenges for future HPC systems. We present the Global View Resilience (GVR) system, a library for portable resilience. GVR begins with a subset of the Global Arrays interface, and adds new capabilities to create versions, name versions, and compute on version data. Applications can focus versioning where and when it is most productive, and customize for each application structure independently. This control is portable, and its embedding in application source makes it natural to express and easy to maintain. The ability to name multiple versions and “partially materialize” them efficiently makes ambitious forward-recovery based on “data slices” across versions or data structures both easy to express and efficient. Using several large applications (OpenMC, preconditioned conjugate gradient (PCG) solver, ddcMD, and Chombo), we evaluate the programming effort to add resilience. The required changes are small (< 2% lines of code (LOC)), localized and machine-independent, and perhaps most important, require no software architecture changes. We also measure the overhead of adding GVR versioning and show that overheads < 2% are generally achieved. This overhead suggests that GVR can be implemented in large-scale codes and support portable error recovery with modest investment and runtime impact. Our results are drawn from both IBM BG/Q and Cray XC30 experiments, demonstrating portability. We also present two case studies of flexible error recovery, illustrating how GVR can be used for multi-version rollback recovery, and several different forward-recovery schemes. GVR’s multi-version enables applications to survive latent errors (silent data corruption) with significant detection latency, and forward recovery can make that recovery extremely efficient. Lastly, our results suggest that GVR is scalable, portable, and efficient. GVR interfaces are flexible, supporting a variety of recovery schemes, and altogether GVR embodies a gentle-slope path to tolerate growing error rates in future extreme-scale systems.
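The versioned-array usage pattern the abstract describes can be sketched as below: create an array, advance named versions, and roll back to the last version that passes an application check. The class and method names (VersionedArray, new_version, restore) are illustrative assumptions modeled loosely on the Global Arrays style, not GVR's actual API, and "partial materialization" is approximated by a full copy.

```python
import numpy as np

class VersionedArray:
    def __init__(self, shape):
        self.current = np.zeros(shape)
        self.versions = {}                  # version name -> snapshot

    def new_version(self, name):
        self.versions[name] = self.current.copy()

    def restore(self, name):
        self.current = self.versions[name].copy()

def solve_with_rollback(arr, step, check, max_iters=100):
    """Multi-version rollback recovery: keep stepping; when a check fails
    (e.g. detected silent data corruption), restore the last good version."""
    arr.new_version("v0")
    last_good = "v0"
    for i in range(1, max_iters + 1):
        step(arr.current)
        if check(arr.current):
            last_good = f"v{i}"
            arr.new_version(last_good)      # name a new known-good version
        else:
            arr.restore(last_good)          # rollback recovery
    return arr.current
```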
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
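The reported trade-off between N-version voting and recovery blocks can be illustrated with a toy Monte Carlo comparison. The failure probabilities below are made-up illustration values, and the acceptance test is modeled as failing independently of the components, matching the condition under which the abstract finds recovery blocks superior.

```python
import random

def n_version(versions):
    # Majority vote over independently developed versions' outputs.
    outputs = [v() for v in versions]
    return max(set(outputs), key=outputs.count)

def recovery_block(alternates, acceptable):
    # Run alternates in order; return the first output the check accepts.
    for alt in alternates:
        out = alt()
        if acceptable(out):
            return out
    return None                     # all alternates rejected: signal failure

# Example: each version returns the right answer (42) with probability 0.9;
# the acceptance test errs with independent probability 0.01.
version = lambda: 42 if random.random() < 0.9 else 0
accept = lambda out: (out == 42) if random.random() > 0.01 else (out != 42)

trials = 10_000
nv_ok = sum(n_version([version] * 3) == 42 for _ in range(trials)) / trials
rb_ok = sum(recovery_block([version] * 3, accept) == 42
            for _ in range(trials)) / trials
print(f"N-version success ~{nv_ok:.3f}, recovery block ~{rb_ok:.3f}")
```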
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The C⁻¹-continuous (interelement-discontinuous) stress fields obtained from finite element analyses are in general of lower-order accuracy than the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachments: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M
The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete an LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < 0.001). For the second procedure, recovery attempts increased for action and procedure errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.001). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
STARS 2.0: 2nd-generation open-source archiving and query software
NASA Astrophysics Data System (ADS)
Winegar, Tom
2008-07-01
The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in 1st-generation STARS archiving software: a complex and inflexible table structure and uncoordinated system administration for our business model: taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools and sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can be transferred only once in-house, with little status and error reporting and no error recovery, to a stored search result that can be monitored, transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.
Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.
2006-01-01
Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
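The error structure the authors describe has a simple generative form, sketched below: detections are Binomial in the true redd count and false identifications are Poisson. The detection probability 0.83 echoes the reported mean; the false-identification rate and the method-of-moments correction are our illustrative assumptions, not the fitted Swan River estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_observed_count(true_redds, p_detect=0.83, lam_false=1.0):
    # Observed count = Binomial true detections + Poisson false IDs.
    detected = rng.binomial(true_redds, p_detect)
    false_ids = rng.poisson(lam_false)
    return detected + false_ids

def corrected_estimate(observed, p_detect=0.83, lam_false=1.0):
    # Method of moments: E[obs] = p*N + lambda  =>  N ~ (obs - lambda) / p.
    return max(0.0, (observed - lam_false) / p_detect)

true_n = 50
obs = [simulate_observed_count(true_n) for _ in range(1000)]
est = [corrected_estimate(o) for o in obs]
print(f"mean observed {np.mean(obs):.1f}, mean corrected {np.mean(est):.1f} "
      f"(true {true_n})")
```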
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
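A behavioral sketch of the mechanism this record describes, with a parity bit standing in for the error detection circuitry (the actual detection scheme is not specified here) and a plain list standing in for the arithmetic pipeline; all names are illustrative assumptions.

```python
class MirroredRegisterFile:
    def __init__(self, n=32):
        self.primary = [0] * n
        self.mirror = [0] * n           # written in lockstep with primary
        self.parity = [0] * n

    @staticmethod
    def _parity(v):
        return bin(v & 0xFFFFFFFF).count("1") & 1

    def write(self, r, value):
        self.primary[r] = value
        self.mirror[r] = value
        self.parity[r] = self._parity(value)

    def read(self, r, pipeline):
        value = self.primary[r]
        if self._parity(value) != self.parity[r]:       # corrupted data
            # Insert an error recovery instruction into the pipeline; it
            # replaces the corrupted entry with the mirror's copy.
            pipeline.append(("recover", r))
            self.primary[r] = self.mirror[r]
            value = self.primary[r]
        return value
```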
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
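The decomposition the abstract describes can be stated compactly; the notation below is ours, not necessarily the authors' exact formulation. Each indicator is split into a common part and a uniqueness term, and the component is a weighted composite of indicators with their unique parts removed:

```latex
z_j = c_j + u_j, \qquad j = 1, \dots, p, \qquad
\gamma = \sum_{j=1}^{p} w_j \,(z_j - u_j),
```

where z_j is the j-th indicator, c_j its common part, u_j the uniqueness term absorbing measurement error (as in common factor analysis), and w_j the weights GSCAM estimates.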
1988-09-01
analysis phase of the software life cycle (16:1-1). While editing a SADT diagram, the tool should be able to check whether or not structured analysis ... diagrams are valid for the SADT's syntax, produce error messages, do error recovery, and perform editing suggestions. Thus, this tool must have the ... directed editors are editors which use the syntax of the programming language while editing a program. While text editors treat programs as text, syntax
A framework for software fault tolerance in real-time systems
NASA Technical Reports Server (NTRS)
Anderson, T.; Knight, J. C.
1983-01-01
A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
NASA Technical Reports Server (NTRS)
Long, Junsheng
1994-01-01
This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Module Redundancy (NMR) scheme uses to mask errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluation have shown that this strategy can provide a nearly error-free execution time even under faults, with a lower average redundancy than NMR.
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2015-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058
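PES itself is straightforward to compute from trial data; below is a generic sketch (not the authors' pipeline) with fabricated reaction times, where the ~15% commission-error rate echoes the abstract and the injected 25 ms slowing is an arbitrary illustration value.

```python
import numpy as np

def post_error_slowing(rt, correct):
    """Mean RT after errors minus mean RT after correct responses.

    rt      : reaction time per trial (ms)
    correct : boolean per trial (False = commission error)
    """
    rt = np.asarray(rt, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    post_err = rt[1:][~correct[:-1]]
    post_cor = rt[1:][correct[:-1]]
    return post_err.mean() - post_cor.mean()

rng = np.random.default_rng(0)
correct = rng.random(400) > 0.15            # ~15% commission errors
rt = rng.normal(350, 40, 400)               # fabricated RTs (ms)
rt[1:][~correct[:-1]] += 25                 # inject 25 ms post-error slowing
print(f"PES ~ {post_error_slowing(rt, correct):.1f} ms")
```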
Barriers and facilitators to recovering from e-prescribing errors in community pharmacies.
Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A
2015-01-01
To explore barriers and facilitators to recovery from e-prescribing errors in community pharmacies and to explore practical solutions for work system redesign to ensure successful recovery from errors. Cross-sectional qualitative design using direct observations, interviews, and focus groups. Five community pharmacies in Wisconsin. 13 pharmacists and 14 pharmacy technicians. Observational field notes and transcribed interviews and focus groups were subjected to thematic analysis guided by the Systems Engineering Initiative for Patient Safety (SEIPS) work system and patient safety model. Barriers and facilitators to recovering from e-prescription errors in community pharmacies. Organizational factors, such as communication, training, teamwork, and staffing levels, play an important role in recovering from e-prescription errors. Other factors that could positively or negatively affect recovery of e-prescription errors include level of experience, knowledge of the pharmacy personnel, availability or usability of tools and technology, interruptions and time pressure when performing tasks, and noise in the physical environment. The SEIPS model sheds light on key factors that may influence recovery from e-prescribing errors in pharmacies, including the environment, teamwork, communication, technology, tasks, and other organizational variables. To be successful in recovering from e-prescribing errors, pharmacies must provide the appropriate working conditions that support recovery from errors.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
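A toy software model of the mechanism, using checkpoint identifiers (CIDs) and a recovery stack as the abstract describes; the data layout and method names are our illustrative assumptions, not the paper's hardware design.

```python
class CheckpointCache:
    def __init__(self):
        self.lines = {}           # addr -> (value, cid)
        self.recovery_stack = []  # (addr, previous line contents or None)
        self.cid = 0

    def checkpoint(self):
        # Establish a new consistent recovery point; older saved values
        # are no longer needed.
        self.cid += 1
        self.recovery_stack.clear()

    def write(self, addr, value):
        old = self.lines.get(addr)
        if old is None or old[1] < self.cid:
            # First modification of this line since the checkpoint:
            # preserve the pre-checkpoint contents for possible rollback.
            self.recovery_stack.append((addr, old))
        self.lines[addr] = (value, self.cid)

    def rollback(self):
        # A transient processor error was detected: undo post-checkpoint
        # writes so execution restarts from the checkpointed state.
        while self.recovery_stack:
            addr, old = self.recovery_stack.pop()
            if old is None:
                del self.lines[addr]
            else:
                self.lines[addr] = old
```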
Task planning with uncertainty for robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Cao, Tiehua
1993-01-01
In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net for robotic assembly tasks, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainties in robotic systems are modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables, which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and automatically select a retry or an alternative error recovery sequence when errors occur. Task sequencing and task execution with error recovery capability for one and multiple soft components in robotic systems are investigated.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
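To make the server (traffic shaper) idea concrete, here is a minimal sketch of budget-regulated retransmission; the budget/period parameters and the elementary-cycle interface are our illustrative assumptions, not the FTT-CAN implementation.

```python
from collections import deque

class RetransmissionServer:
    """Regulates how much bandwidth corrupted/omitted messages may consume,
    so recovery traffic cannot destroy overall schedulability."""

    def __init__(self, budget_msgs=2, period_cycles=5):
        self.capacity = budget_msgs     # retransmissions per period
        self.budget = budget_msgs
        self.period = period_cycles
        self.queue = deque()            # messages awaiting retransmission

    def report_error(self, msg):
        # Called when a transmission is detected as corrupted or omitted.
        self.queue.append(msg)

    def elementary_cycle(self, cycle):
        # Replenish at period boundaries, then serve queued retransmissions
        # within the remaining budget; the scheduler places the returned
        # messages into this cycle's transmission window.
        if cycle % self.period == 0:
            self.budget = self.capacity
        sent = []
        while self.queue and self.budget > 0:
            sent.append(self.queue.popleft())
            self.budget -= 1
        return sent
```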
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
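The correction being tested has a simple per-site form, sketched below: with error rate e, the observed base equals the true base with probability 1 - e, and each other base appears with probability e/3. These tip likelihood vectors would feed a pruning-algorithm likelihood computation; the code is a generic illustration, not the authors' implementation.

```python
import numpy as np

BASES = "ACGT"

def tip_likelihoods(observed_base, error_rate):
    """P(observed | true = b) for each candidate true base b."""
    i = BASES.index(observed_base)
    lik = np.full(4, error_rate / 3.0)   # observed arose by miscall
    lik[i] = 1.0 - error_rate            # observed is the true base
    return lik

print(tip_likelihoods("A", 0.01))  # [0.99, 0.00333, 0.00333, 0.00333]
```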
Automatic Camera Orientation and Structure Recovery with Samantha
NASA Astrophysics Data System (ADS)
Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.
2011-09-01
SAMANTHA is a software system capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, comparing them with laser scans serving as ground truth.
Nikolic, Mark I; Sarter, Nadine B
2007-08-01
To examine operator strategies for diagnosing and recovering from errors and disturbances as well as the impact of automation design and time pressure on these processes. Considerable efforts have been directed at error prevention through training and design. However, because errors cannot be eliminated completely, their detection, diagnosis, and recovery must also be supported. Research has focused almost exclusively on error detection. Little is known about error diagnosis and recovery, especially in the context of event-driven tasks and domains. With a confederate pilot, 12 airline pilots flew a 1-hr simulator scenario that involved three challenging automation-related tasks and events that were likely to produce erroneous actions or assessments. Behavioral data were compared with a canonical path to examine pilots' error and disturbance management strategies. Debriefings were conducted to probe pilots' system knowledge. Pilots seldom followed the canonical path to cope with the scenario events. Detection of a disturbance was often delayed. Diagnostic episodes were rare because of pilots' knowledge gaps and time criticality. In many cases, generic inefficient recovery strategies were observed, and pilots relied on high levels of automation to manage the consequences of an error. Our findings describe and explain the nature and shortcomings of pilots' error management activities. They highlight the need for improved automation training and design to achieve more timely detection, accurate explanation, and effective recovery from errors and disturbances. Our findings can inform the design of tools and techniques that support disturbance management in various complex, event-driven environments.
Reliability, Safety and Error Recovery for Advanced Control Software
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2003-01-01
For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.
Logic design for dynamic and interactive recovery.
NASA Technical Reports Server (NTRS)
Carter, W. C.; Jessep, D. C.; Wadia, A. B.; Schneider, P. R.; Bouricius, W. G.
1971-01-01
Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel concepts embodied in the hardware and software functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.
Simulations in site error estimation for direction finders
NASA Astrophysics Data System (ADS)
López, Raúl E.; Passi, Ranjit M.
1991-08-01
The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.
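As a rough illustration of the recovery problem (not the authors' algorithm), the sketch below simulates azimuths from a four-DF network with site-specific biases and re-estimates the biases jointly with the event locations by nonlinear least squares. The geometry, noise levels, and solver choice are invented for the example, and recovery is only approximate, consistent with the inherent limitations the abstract notes.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
df_sites = np.array([[0, 0], [100, 0], [50, 80], [0, 100]], float)  # km
true_bias = np.radians([2.0, -1.5, 0.5, 1.0])        # site errors (rad)
events = rng.uniform(10, 90, size=(200, 2))          # true event locations

def azimuth(site, pt):
    d = pt - site
    return np.arctan2(d[0], d[1])                    # azimuth from north

# Simulated measured azimuths: geometry + site bias + 0.5 deg noise.
meas = np.array([[azimuth(s, e) + b + rng.normal(0, np.radians(0.5))
                  for s, b in zip(df_sites, true_bias)] for e in events])

def residuals(params):
    bias = params[:4]
    locs = params[4:].reshape(-1, 2)
    return np.array([meas[i, j] - (azimuth(df_sites[j], locs[i]) + bias[j])
                     for i in range(len(events)) for j in range(4)])

x0 = np.concatenate([np.zeros(4),
                     events.ravel() + rng.normal(0, 2, events.size)])
fit = least_squares(residuals, x0)
print(np.degrees(fit.x[:4]))   # recovered site errors (deg), approximate
```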
A simulation for gravity fine structure recovery from high-low GRAVSAT SST data
NASA Technical Reports Server (NTRS)
Estes, R. H.; Lancaster, E. R.
1976-01-01
Covariance error analysis techniques were applied to investigate estimation strategies for the high-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved-for parameters. Surface density blocks of 5 deg x 5 deg and 2 1/2 deg x 2 1/2 deg resolution were utilized to represent the high order geopotential, with the drag-free GRAVSAT configured in a nearly circular polar orbit at 250 km altitude. GEOPAUSE and geosynchronous satellites were considered as high relay spacecraft. It is demonstrated that knowledge of gravitational fine structure can be significantly improved at 5 deg x 5 deg resolution using SST data from a high-low configuration with reasonably accurate orbits for the low GRAVSAT. The gravity fine structure recoverability of the high-low SST mission is compared with that of the low-low configuration and shown to be superior.
NASA Technical Reports Server (NTRS)
Farah, Jeffrey J.
1992-01-01
Developing a robust, task level, error recovery and on-line planning architecture is an open research area. There is previously published work on both error recovery and on-line planning; however, none incorporates error recovery and on-line planning into one integrated platform. The integration of these two functionalities requires an architecture that possesses the following characteristics. The architecture must provide for the inclusion of new information without the destruction of existing information. The architecture must provide for the relating of pieces of information, old and new, to one another in a non-trivial rather than trivial manner (e.g., object one is related to object two under the following constraints, versus, yes, they are related; no, they are not related). Finally, the architecture must be not only a stand alone architecture, but also one that can be easily integrated as a supplement to some existing architecture. This thesis proposal addresses architectural development. Its intent is to integrate error recovery and on-line planning onto a single, integrated, multi-processor platform. This intelligent x-autonomous platform, called the Planning Coordinator, will be used initially to supplement existing x-autonomous systems and eventually replace them.
Factors Influencing Error Recovery in Collections Databases: A Museum Case Study
ERIC Educational Resources Information Center
Marty, Paul F.
2005-01-01
This article offers an analysis of the process of error recovery as observed in the development and use of collections databases in a university museum. It presents results from a longitudinal case study of the development of collaborative systems and practices designed to reduce the number of errors found in the museum's databases as museum…
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong
2018-01-01
Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
A representation for error detection and recovery in robot task plans
NASA Technical Reports Server (NTRS)
Lyons, D. M.; Vijaykumar, R.; Venkataraman, S. T.
1990-01-01
A general definition is given of the problem of error detection and recovery in robot assembly systems, and a general representation is developed for dealing with the problem. This invariant representation involves a monitoring process which is concurrent, with one monitor per task plan. A plan hierarchy is discussed, showing how diagnosis and recovery can be handled using the representation.
NASA Astrophysics Data System (ADS)
Salas, P. J.; Sanz, A. L.
2004-05-01
In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via an isotropic depolarizing channel. The ranges of values considered are 10⁻⁴ ⩽ ε ⩽ 10⁻² for memory errors and 3 × 10⁻⁵ ⩽ γ/7 ⩽ 10⁻² for gate errors. After the correction we calculate the fidelity as a quality criterion for the recovered qubit. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault-tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.
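For reference, the noise model named above has a compact form; the sketch below applies a single-qubit depolarizing channel and evaluates the fidelity criterion against a pure reference state. The encoding and ancilla-based correction steps are beyond this sketch, and the state choice is an arbitrary illustration.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    # Isotropic depolarizing channel: identity with prob. 1 - p,
    # X, Y, or Z each with prob. p / 3.
    return ((1 - p) * rho
            + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z))

def fidelity(psi, rho):
    # For a pure reference state |psi>, F = <psi| rho |psi>.
    return float(np.real(psi.conj() @ rho @ psi))

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # stored qubit |+>
rho = np.outer(psi, psi.conj())
for eps in (1e-4, 1e-3, 1e-2):                       # memory-error range above
    print(eps, fidelity(psi, depolarize(rho, eps)))  # F = 1 - 2*eps/3 here
```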
Fault tolerance in an inner-outer solver: A GVR-enabled case study
Zhang, Ziming; Chien, Andrew A.; Teranishi, Keita
2015-04-18
Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they take much of the time of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos’ solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Lastly, experimental results validate correct execution with low performance overhead under varied error conditions.
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe a preliminary experiment with a system that they designed and constructed.
Carrier recovery methods for a dual-mode modem: A design approach
NASA Technical Reports Server (NTRS)
Richards, C. W.; Wilson, S. G.
1984-01-01
A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that introduce little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of a 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. The results show that an energy increase of roughly 4 dB is needed to maintain an SER of 1 x 10(-5) for QPSK with 20 deg of phase error and for 16-QASK with 7 deg of phase error.
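The QPSK half of such a degradation analysis can be reproduced from the standard phase-offset bit-error expression; the sketch below uses it to find the Eb/N0 needed to hold a target SER at a given static phase error. The Gray-coding SER approximation and the search grid are our assumptions, and the 16-QASK case would require the full constellation geometry.

```python
import numpy as np
from math import erfc, sqrt, cos, sin, radians

def Q(x):
    # Gaussian tail probability.
    return 0.5 * erfc(x / sqrt(2))

def qpsk_ser(ebn0_db, phase_err_deg):
    # Standard QPSK bit error probability with carrier phase error phi:
    #   Pb = 0.5*[Q(g*(cos phi + sin phi)) + Q(g*(cos phi - sin phi))],
    # with g = sqrt(2*Eb/N0); SER ~ 1 - (1 - Pb)^2 under Gray coding.
    g = sqrt(2 * 10 ** (ebn0_db / 10))
    phi = radians(phase_err_deg)
    pb = 0.5 * (Q(g * (cos(phi) + sin(phi))) + Q(g * (cos(phi) - sin(phi))))
    return 1 - (1 - pb) ** 2

for phi in (0, 10, 20):
    grid = np.arange(5, 20, 0.01)                    # Eb/N0 search grid (dB)
    sers = np.array([qpsk_ser(e, phi) for e in grid])
    need = grid[np.argmax(sers < 1e-5)]              # first dB meeting target
    print(f"phase error {phi:2d} deg: Eb/N0 ~ {need:.2f} dB for SER 1e-5")
```

Running this shows roughly a 4 dB penalty at 20 deg of phase error relative to 0 deg, consistent with the figure quoted above.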
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). Also, a technique for a posteriori numerical error estimation of the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.
Exception handling for sensor fusion
NASA Astrophysics Data System (ADS)
Chavez, G. T.; Murphy, Robin R.
1993-08-01
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
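A minimal sketch of the two-module structure described above, with generate-and-test classification feeding a cached recovery scheme; the class, the strategy names, and the trigger for alerting the planner are illustrative assumptions, not the paper's implementation.

```python
class SensingExceptionHandler:
    def __init__(self, recovery_cache):
        # Maps an isolated error source to a scheme that repairs or
        # replaces the current sensing configuration.
        self.recovery_cache = recovery_cache

    def classify(self, observation, candidate_sources, test):
        # Generate-and-test: propose each candidate failure source
        # (sensor malfunction, environment change, errant expectation)
        # and keep the first hypothesis consistent with the observation.
        for source in candidate_sources:
            if test(source, observation):
                return source
        return None                          # cannot isolate the source

    def handle(self, observation, candidate_sources, test, planner_alert):
        source = self.classify(observation, candidate_sources, test)
        if source is None or source == "errant_expectation":
            planner_alert(observation)       # defer to the planner
            return None
        return self.recovery_cache.get(source)
```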
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable errors vs. time-constant errors, across-scale errors vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.
Regional Brain Dysfunction Associated with Semantic Errors in Comprehension.
Shahid, Hinna; Sebastian, Rajani; Tippett, Donna C; Saxena, Sadhvi; Wright, Amy; Hanayik, Taylor; Breining, Bonnie; Bonilha, Leonardo; Fridriksson, Julius; Rorden, Chris; Hillis, Argye E
2018-02-01
Here we illustrate how investigation of individuals acutely after stroke, before structure/function reorganization through recovery or rehabilitation, can be helpful in answering questions about the role of specific brain regions in language functions. Although there is converging evidence from a variety of sources that the left posterior-superior temporal gyrus plays some role in spoken word comprehension, its precise role in this function has not been established. We hypothesized that this region is essential for distinguishing between semantically related words, because it is critical for linking the spoken word to the complete semantic representation. We tested this hypothesis in 127 individuals within 48 hours of acute ischemic stroke, before the opportunity for reorganization or recovery. We identified tissue dysfunction (acute infarct and/or hypoperfusion) in gray and white matter parcels of the left hemisphere, and we evaluated the association between the rate of semantic errors in a word-picture verification task and the extent of tissue dysfunction in each region. We found that, after correcting for lesion volume and multiple comparisons, the rate of semantic errors correlated with the extent of tissue dysfunction in the left posterior-superior temporal gyrus and retrolenticular white matter.
NASA Astrophysics Data System (ADS)
Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
The traditional clock recovery scheme achieves timing adjustment by digital interpolation, thereby recovering the sampling sequence. Building on this, an improved clock recovery architecture with joint channel equalization for coherent optical communication systems is presented in this paper. The loop differs from traditional clock recovery. To reduce the interpolation error caused by distortion in the frequency domain of the interpolator, and to suppress the spectral mirroring generated by the sampling rate change, the proposed algorithm jointly performs equalization and adaptive filtering, improves the original interpolator in the loop, and applies error compensation to the original signals according to the equalized pre-filtered signals. The signals are then adaptively interpolated through the feedback loop. Furthermore, the phase-splitting timing recovery algorithm is adopted: the timing error is calculated by the improved algorithm when there is no transition between adjacent symbols, making the calculated timing error more accurate. A coarse carrier synchronization module is placed before timing recovery to eliminate larger frequency offset interference, which effectively adjusts the sampling clock phase. Simulation results show that the timing error is greatly reduced after the loop is changed, and with the phase-splitting algorithm the BER and MSE are better than those of the unmodified architecture. In the fiber channel, using an MQAM modulation format after 100 km of single-mode fiber transmission, the algorithm shows good clock performance under different roll-off factors (ROFs), especially as the ROF tends to 0. When SNR values are less than 8, the BER reaches the 10^-2 to 10^-1 range. The proposed timing recovery is thus more suitable for situations with low SNR values.
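The paper's phase-splitting detector is not reproduced here, but the sketch below of a standard Gardner timing error detector (a common alternative operating at two samples per symbol) illustrates where the timing error signal that drives the interpolator comes from:

    import numpy as np

    def gardner_ted(x):
        """Gardner timing error detector at 2 samples/symbol.
        x: complex baseband samples, two per symbol."""
        strobes = x[2::2]       # on-time samples
        prev = x[0:-2:2]        # previous on-time samples
        mid = x[1:-1:2]         # half-symbol ("transition") samples
        e = np.real(np.conj(mid) * (strobes - prev))
        return np.mean(e)       # loop-filter input that drives the interpolator

The sign of the averaged output indicates the direction of the sampling-phase error, and the loop filter's output adjusts the fractional delay of the interpolator.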
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 C, i.e., when the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
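One plausible formulation of the two-experiment solve, assuming the classical semi-infinite-solid transient response (this is a sketch, not the thesis's actual data-reduction code; the wall properties, temperatures, and event times below are all illustrative), is:

    import numpy as np
    from scipy.optimize import fsolve
    from scipy.special import erfc

    rho, c, ks = 1190.0, 1470.0, 0.19     # acrylic wall: kg/m^3, J/kg-K, W/m-K
    T_i, T_ev = 20.0, 35.0                # initial and TLC event temps, C
    T_jet = np.array([60.0, 90.0])        # jet temps for the two runs, C
    t_ev = np.array([14.0, 6.0])          # measured TLC event times, s

    def surface_rise(h, t):
        """Dimensionless surface temperature rise of a semi-infinite solid."""
        beta = h * np.sqrt(t / (rho * c * ks))
        return 1.0 - np.exp(beta**2) * erfc(beta)

    def residuals(unknowns):
        h, dTr = unknowns                 # heat transfer coeff, recovery-temp rise
        T_aw = T_jet + dTr                # driving (adiabatic wall) temperature
        return (T_ev - T_i) / (T_aw - T_i) - surface_rise(h, t_ev)

    h, dTr = fsolve(residuals, x0=[100.0, 5.0])
    print(f"h = {h:.1f} W/m^2-K, recovery-temperature rise = {dTr:.2f} C")

Each experiment contributes one equation relating the measured TLC event time to the two unknowns, so two runs at different heating rates close the system.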
Distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A real-time multi-tasking digital control system with rapid recovery capability is disclosed. The control system includes a plurality of computing units comprising a plurality of redundant processing units, with each of the processing units configured to generate one or more redundant control commands. One or more internal monitors are employed for detecting data errors in the control commands. One or more recovery triggers are provided for initiating rapid recovery of a processing unit if data errors are detected. The control system also includes a plurality of actuator control units each in operative communication with the computing units. The actuator control units are configured to initiate a rapid recovery if data errors are detected in one or more of the processing units. A plurality of smart actuators communicates with the actuator control units, and a plurality of redundant sensors communicates with the computing units.
Error recovery to enable error-free message transfer between nodes of a computer network
Blumrich, Matthias A.; Coteus, Paul W.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd; Steinmacher-Burow, Burkhard; Vranas, Pavlos M.
2016-01-26
An error-recovery method to enable error-free message transfer between nodes of a computer network. A first node of the network sends a packet to a second node of the network over a link between the nodes, and the first node keeps a copy of the packet on a sending end of the link until the first node receives acknowledgment from the second node that the packet was received without error. The second node tests the packet to determine if the packet is error free. If the packet is not error free, the second node sets a flag to mark the packet as corrupt. The second node returns acknowledgement to the first node specifying whether the packet was received with or without error. When the packet is received with error, the link is returned to a known state and the packet is sent again to the second node.
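A toy sketch of this per-link retry logic (the sender retains a copy until positive acknowledgment; the receiver flags corrupt packets) might look as follows; the CRC framing here is illustrative, not the patent's encoding:

    import zlib

    def make_packet(payload: bytes) -> bytes:
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def receive(packet: bytes):
        payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
        ok = zlib.crc32(payload) == crc
        return ok, (payload if ok else None)   # ok=False marks the packet corrupt

    def send_reliably(payload: bytes, link, max_retries=16):
        pkt = make_packet(payload)             # sender retains this copy
        for _ in range(max_retries):
            ok, data = receive(link(pkt))      # link may corrupt the packet
            if ok:
                return data                    # positive ack: copy may be freed
            # negative ack: return the link to a known state and resend
        raise IOError("link failed")

    # Example: a link that corrupts the first transmission only
    flips = iter([True, False])
    noisy = lambda p: bytes([p[0] ^ 1]) + p[1:] if next(flips) else p
    print(send_reliably(b"hello", noisy))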
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance, since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that can be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
Progressive retry for software error recovery in distributed systems
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.
1993-01-01
In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
Tg and Structural Recovery of Single Ultrathin Films
NASA Astrophysics Data System (ADS)
Simon, Sindee
The behavior of materials confined at the nanoscale has been of considerable interest over the past two decades. Here, the focus is on recent results for single polystyrene ultrathin films studied with ultrafast scanning chip calorimetry. The Tg depression of a 20 nm-thick high-molecular-weight polystyrene film is found to be a function of cooling rate, decreasing with increasing cooling rate; whereas, at high enough cooling rates (e.g., 1000 K/s), Tg is the same as the bulk within the error of the measurements. Structural recovery is also performed with chip calorimetry as a function of aging time and temperature, and the evolution of the fictive temperature is followed. The advantages of the Flash DSC include sufficient sensitivity to measure enthalpy recovery for a single 20 nm-thick film, as well as extension of the measurements to aging temperatures as high as 15 K above nominal Tg and to aging times as short as 0.01 s. The aging behavior and relaxation time-temperature map for single ultrathin films are compared to those for bulk material. Comparison to behavior in other geometries will also be discussed.
Huang, Juan; Hung, Li-Fang; Smith, Earl L.
2012-01-01
This study aimed to investigate the changes in ocular shape and relative peripheral refraction during recovery from myopia produced by form deprivation (FD) and hyperopic defocus. FD was imposed in 6 monkeys by securing a diffuser lens over one eye; hyperopic defocus was produced in another 6 monkeys by fitting one eye with a -3 D spectacle lens. When unrestricted vision was re-established, the treated eyes recovered from the vision-induced central and peripheral refractive errors. The recovery of peripheral refractive errors was associated with corresponding changes in the shape of the posterior globe. The results suggest that vision can actively regulate ocular shape and the development of central and peripheral refractions in infant primates. PMID:23026012
Structured Matrix Completion with Applications to Genomic Data Integration.
Cai, Tianxi; Cai, T Tony; Zhang, Anru
2016-01-01
Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, the proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
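To illustrate the structured-missingness setting, the sketch below observes a subset of rows and columns of a low-rank matrix and recovers the missing block with the simple rule A22 = A21 pinv(A11) A12; this is only a schematic of the setting, not the paper's refined SMC estimator:

    import numpy as np

    # Rows 0..m1-1 and columns 0..n1-1 are observed; the bottom-right block is not.
    rng = np.random.default_rng(1)
    m, n, r, m1, n1 = 60, 50, 3, 20, 15
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r truth

    A11, A12 = A[:m1, :n1], A[:m1, n1:]
    A21, A22 = A[m1:, :n1], A[m1:, n1:]      # A22 is the unobserved block

    A22_hat = A21 @ np.linalg.pinv(A11) @ A12
    print("relative error:", np.linalg.norm(A22_hat - A22) / np.linalg.norm(A22))

For an exactly rank-r matrix whose rank is captured by the observed corner block, this reconstruction is exact; the SMC framework addresses the approximately low-rank, noisy case.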
Fault Injection Techniques and Tools
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.
1997-01-01
Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
System reliability and recovery.
DOT National Transportation Integrated Search
1971-06-01
The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...
Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion
NASA Astrophysics Data System (ADS)
Zou, Cuiming; Kou, Kit Ian
2018-05-01
Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption about the noise distribution. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, the proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution and can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
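A minimal sketch of MCC-based recovery via half-quadratic (iteratively reweighted least-squares) optimization is shown below; a generic random basis stands in for the PSWF basis, and the kernel width is arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 8
    Phi = rng.standard_normal((n, k))            # stand-in basis (columns)
    c_true = rng.standard_normal(k)
    y = Phi @ c_true
    y[rng.choice(n, 10, replace=False)] += 20.0  # impulsive outliers

    def mcc_fit(Phi, y, sigma=1.0, iters=30):
        c = np.linalg.lstsq(Phi, y, rcond=None)[0]     # MSE initialization
        for _ in range(iters):
            r = y - Phi @ c
            w = np.exp(-r**2 / (2 * sigma**2))         # Gaussian-kernel weights
            W = Phi * w[:, None]                       # diag(w) @ Phi
            c = np.linalg.solve(Phi.T @ W, W.T @ y)    # weighted least squares
        return c

    c_mse = np.linalg.lstsq(Phi, y, rcond=None)[0]
    c_mcc = mcc_fit(Phi, y)
    print("MSE error:", np.linalg.norm(c_mse - c_true))
    print("MCC error:", np.linalg.norm(c_mcc - c_true))

Samples with large residuals receive vanishing weights, which is why the correntropy criterion is insensitive to impulsive noise where plain least squares is not.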
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
Recovery from unusual attitudes: HUD vs. back-up display in a static F/A-18 simulator.
Huber, Samuel W
2006-04-01
Spatial disorientation (SD) remains one of the most important causes of fatal fighter aircraft accidents. The aim of this study was to give a recommendation for the use of the head-up display (HUD) or back-up attitude directional indicator (ADI) in a state of spatial disorientation based on the respective performance in an unusual attitude recovery task. Seven fighter pilots joining a conversion course to the F/A-18 participated in this study. Flight time will be presented as range (and mean in parentheses). Total military flight experience of the subjects was 835-1759 h (1412 h). Flight time on the F/A-18 was 41-123 h (70 h). The study was performed in a fixed base F/A-18D Weapons Tactics Trainer. We tested the recovery from 11 unusual attitudes and analyzed decision time (DT), total recovery time (TRT), and error rates for the HUD or the back-up ADI. We found no differences regarding either reaction times or error rates. For the HUD we found a DT (mean +/- SD) of 1.3 +/- 0.4 s, a TRT of 9.1 +/- 4.1 s, and an error rate of 29%. For the ADI the respective values were a DT of 1.4 +/- 0.4 s, a TRT of 8.3 +/- 3.8 s, and an error rate of 27%. Unusual attitude recoveries are performed equally well using the HUD or the back-up ADI. Switching from one instrument to the other during recovery should be avoided since it would probably result in a loss of time without benefit.
Implementation of an experimental fault-tolerant memory system
NASA Technical Reports Server (NTRS)
Carter, W. C.; Mccarthy, C. E.
1976-01-01
The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
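The restricted SEC/DED code itself is not specified in the abstract; as a generic illustration, the sketch below implements the textbook Hamming(7,4) code extended with an overall parity bit, which corrects any single-bit error and detects, without miscorrecting, double errors:

    def encode(d):                       # d: list of 4 data bits
        c = [0] * 8                      # c[1..7] Hamming, c[0] overall parity
        c[3], c[5], c[6], c[7] = d
        c[1] = c[3] ^ c[5] ^ c[7]
        c[2] = c[3] ^ c[6] ^ c[7]
        c[4] = c[5] ^ c[6] ^ c[7]
        c[0] = sum(c[1:]) % 2            # overall parity over the 7 code bits
        return c

    def decode(c):
        s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
          | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1 \
          | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2
        p = sum(c) % 2                   # 0 iff overall parity is still consistent
        if s == 0 and p == 0:
            status = "no error"
        elif p == 1:                     # single error: correctable
            if s:
                c[s] ^= 1
            else:
                c[0] ^= 1                # the overall parity bit itself flipped
            status = "corrected"
        else:                            # s != 0, p == 0: double error
            return None, "double error detected"
        return [c[3], c[5], c[6], c[7]], status

    word = encode([1, 0, 1, 1])
    word[6] ^= 1                         # inject a single-bit fault
    print(decode(word))                  # -> ([1, 0, 1, 1], 'corrected')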
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Dambach, M.
1998-01-01
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.
UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers
NASA Technical Reports Server (NTRS)
Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2017-01-01
This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: crossing conflict with intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.
Jitter model and signal processing techniques for pulse width modulation optical recording
NASA Technical Reports Server (NTRS)
Liu, Max M.-K.
1991-01-01
A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored by modulating the sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in detection errors if it is excessively large. A new approach is taken to data recovery: a high-speed counter clock first converts time marks to amplitude marks, and signal processing techniques are then used to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.
Measurement and analysis of operating system fault tolerance
NASA Technical Reports Server (NTRS)
Lee, I.; Tang, D.; Iyer, R. K.
1992-01-01
This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
How do Community Pharmacies Recover from E-prescription Errors?
Odukoya, Olufunmilola K.; Stone, Jamie A.; Chui, Michelle A.
2014-01-01
Background The use of e-prescribing is increasing annually, with over 788 million e-prescriptions received in US pharmacies in 2012. Approximately 9% of e-prescriptions have medication errors. Objective To describe the process used by community pharmacy staff to detect, explain, and correct e-prescription errors. Methods The error recovery conceptual framework was employed for data collection and analysis. Thirteen pharmacists and 14 technicians from five community pharmacies in Wisconsin participated in the study. A combination of data collection methods was utilized, including direct observations, interviews, and focus groups. The transcription and content analysis of recordings were guided by the three-step error recovery model. Results Most of the e-prescription errors were detected during the entering of information into the pharmacy system. These errors were detected by both pharmacists and technicians using a variety of strategies, which included: (1) performing double checks of e-prescription information; (2) printing the e-prescription to paper and confirming the information on the computer screen with information from the paper printout; and (3) using colored pens to highlight important information. Strategies used for explaining errors included: (1) careful review of the patient's medication history; (2) pharmacist consultation with patients; (3) consultation with another pharmacy team member; and (4) use of online resources. In order to correct e-prescription errors, participants made educated guesses of the prescriber's intent or contacted the prescriber via telephone or fax. When e-prescription errors were encountered in the community pharmacies, the primary goal of participants was to get the order right for patients by verifying the prescriber's intent. Conclusion Pharmacists and technicians play an important role in preventing e-prescription errors through the detection of errors and the verification of prescribers' intent. Future studies are needed to examine factors that facilitate or hinder recovery from e-prescription errors. PMID:24373898
Response of fish assemblages to decreasing acid deposition in Adirondack Mountain lakes
Baldigo, Barry P.; Roy, Karen; Driscoll, Charles T.
2016-01-01
The CAA and other federal regulations have clearly reduced emissions of NOx and SOx, acidic deposition, and the acidity and toxicity of waters in the ALTM lakes, but these changes have not triggered widespread recovery of brook trout populations or fish communities. The lack of detectable biological recovery appears to result from relatively recent chemical recovery and an insufficient period for species populations to take advantage of improved water quality. Recovery of extirpated species’ populations may simply require more time for individuals to migrate to and repopulate formerly occupied lakes. Supplemental stocking of selected species may be required in some lakes with no remnant (or nearby) populations or with physical barriers between the recovered lake and source populations. The lack of detectable biological recovery could also be related to our inability to calculate measures of uncertainty or error and, thus, examine temporal changes or differences in populations and community metrics in more depth (e.g., within individual lakes) using existing datasets. Indeed, recovery of brook trout populations and partial recovery of fish communities are documented in several lakes of the region, both with and without human intervention. Multiple fish surveys (annually or within the same year) or the use of mark and recapture methods within individual lakes would help alleviate the issue (provide measures of error for key fishery metrics) within the context of a more focused sampling strategy. Efforts to evaluate and detect recovery in fish assemblages from streams may be more effective than in lakes because various life stages, species’ populations, and entire assemblages are easier to quantify, with known levels of error, in streams than in lakes. Such long-term monitoring efforts could increase our ability to detect and quantify biological recovery in recovering (neutralizing) surface waters throughout the Adirondack Region.
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
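The completion step might be sketched as follows with plain proximal-gradient iterations and singular value thresholding as the proximal operator (the paper applies an accelerated variant; parameters and the low-rank test matrix below are illustrative):

    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: the prox of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete(M, mask, lam=0.05, iters=800):
        X = np.zeros_like(M)
        for _ in range(iters):
            grad = mask * (X - M)            # gradient of the data-fit term
            X = svt(X - grad, lam)           # step size 1 (Lipschitz constant is 1)
        return X

    # Tiny demo on a random low-rank matrix with half the entries observed
    rng = np.random.default_rng(3)
    B = rng.standard_normal((30, 2))
    M_full = B @ B.T                         # rank-2 symmetric matrix
    mask = rng.random(M_full.shape) < 0.5
    X = complete(mask * M_full, mask)
    print("relative recovery error:",
          np.linalg.norm(X - M_full) / np.linalg.norm(M_full))

In the protein setting, the completed distance matrix would then be converted to three-dimensional coordinates, for example by classical multidimensional scaling.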
Baran, Timothy M; Foster, Thomas H
2013-10-01
We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. © 2013 Wiley Periodicals, Inc.
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1977-01-01
The feasibility of recovering parameters from one-way range rate between two earth orbiting spacecraft during occultation of the tracking signal by the earth's lower atmosphere is investigated. The tracking data is inverted by an integral transformation (Abel transform) to obtain a vertical refractivity profile above the point of closest approach of the ray connecting the satellites. Pressure and temperature distributions can be obtained from values of dry refractivity using the hydrostatic equation and perfect gas law. Two methods are investigated for recovering pressure and temperature parameters. Results show that recovery is much more sensitive to satellite velocity errors than to satellite position errors. An error analysis is performed. An example is given demonstrating recovery of parameters from radio occultation data obtained during satellite-to-satellite tracking of Nimbus 6 by the ATS 6 satellite.
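The Abel-inversion step might be sketched numerically as follows; the synthetic bending-angle profile and grid are illustrative only:

    import numpy as np

    # ln n(x) = (1/pi) * integral_x^inf alpha(a) / sqrt(a^2 - x^2) da,
    # where alpha(a) is the bending angle at impact parameter a.
    a = np.linspace(6371e3 + 1e3, 6371e3 + 60e3, 400)   # impact parameters (m)
    alpha = 0.02 * np.exp(-(a - a[0]) / 7000.0)         # synthetic bending (rad)

    def abel_invert(a, alpha):
        ln_n = np.zeros_like(a)
        for i, x in enumerate(a[:-2]):
            s, al = a[i + 1:], alpha[i + 1:]             # skip the singular point
            f = al / np.sqrt(s**2 - x**2)
            ln_n[i] = np.sum((f[1:] + f[:-1]) * np.diff(s)) / (2 * np.pi)
        return ln_n

    N = 1e6 * (np.exp(abel_invert(a, alpha)) - 1.0)      # refractivity (N-units)
    print(N[:3])

Pressure and temperature then follow from the recovered refractivity via the hydrostatic equation and the perfect gas law, as the abstract notes.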
Experimental evaluation of multiprocessor cache-based error recovery
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. K.
1991-01-01
Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have been recently developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes in the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollably high variability in the checkpoint interval.
General specifications for the development of a PC-based simulator of the NASA RECON system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1984-01-01
The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.
Preventing Data Ambiguity in Infectious Diseases with Four-Dimensional and Personalized Evaluations
Iandiorio, Michelle J.; Fair, Jeanne M.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Trikka-Graphakos, Eleftheria; Charalampaki, Nikoletta; Sereti, Christina; Tegos, George P.; Hoogesteijn, Almira L.; Rivas, Ariel L.
2016-01-01
Background Diagnostic errors can occur, in infectious diseases, when anti-microbial immune responses involve several temporal scales. When responses span from nanosecond to week and larger temporal scales, any pre-selected temporal scale is likely to miss some (faster or slower) responses. Hoping to prevent diagnostic errors, a pilot study was conducted to evaluate a four-dimensional (4D) method that captures the complexity and dynamics of infectious diseases. Methods Leukocyte-microbial-temporal data were explored in canine and human (bacterial and/or viral) infections, with: (i) a non-structured approach, which measures leukocytes or microbes in isolation; and (ii) a structured method that assesses numerous combinations of interacting variables. Four alternatives of the structured method were tested: (i) a noise-reduction oriented version, which generates a single (one data point-wide) line of observations; (ii) a version that measures complex, three-dimensional (3D) data interactions; (iii) a non-numerical version that displays temporal data directionality (arrows that connect pairs of consecutive observations); and (iv) a full 4D (single line-, complexity-, directionality-based) version. Results In all studies, the non-structured approach revealed non-interpretable (ambiguous) data: observations numerically similar expressed different biological conditions, such as recovery and lack of recovery from infections. Ambiguity was also found when the data were structured as single lines. In contrast, two or more data subsets were distinguished and ambiguity was avoided when the data were structured as complex, 3D, single lines and, in addition, temporal data directionality was determined. The 4D method detected, even within one day, changes in immune profiles that occurred after antibiotics were prescribed. Conclusions Infectious disease data may be ambiguous. Four-dimensional methods may prevent ambiguity, providing earlier, in vivo, dynamic, complex, and personalized information that facilitates both diagnostics and selection or evaluation of anti-microbial therapies. PMID:27411058
Liggett, Kristen K; Gallimore, Jennie J
2002-02-01
Spatial disorientation (SD) refers to pilots' inability to accurately interpret the attitude of their aircraft with respect to Earth. Unfortunately, SD statistics have held constant for the past few decades, through the transition from the head-down attitude indicator (AI) to the head-up display (HUD) as the attitude instrument. The newest attitude-indicating device to find its way into military cockpits is the helmet-mounted display (HMD). HMDs were initially introduced into the cockpit to enhance target location and weapon-pointing, but there is currently an effort to make HMDs attitude reference displays so pilots need not go head-down to obtain attitude information. However, unintuitive information or inappropriate implementation of on-boresight attitude symbology on the HMD may contribute to the SD problem. The occurrence of control reversal errors (CREs) during unusual attitude recovery tasks when using an HMD to provide attitude information was investigated. The effect of such errors was evaluated in terms of altitude changes during recovery and time to recover. There were 12 pilot-subjects who completed 8 unusual attitude recovery tasks. Results showed that CREs did occur, and there was a significant negative effect of these errors on absolute altitude change, but not on total recovery time. Results failed to show a decrease in the number of CREs occurring when using the HMD as compared with data from other studies that used an AI or a HUD. Results suggest that new HMD attitude symbology needs to be designed to help reduce CREs and, perhaps, SD incidences.
Zhang, Ruirui; Mak, Winnie W S; Chan, Randolph C H
2017-01-01
Although people in recovery from mental illness can continue to live a personally meaningful life despite their mental illness, their perception of mental illness as being a threat to their basic needs may influence the way they view themselves as a person with mental illness and their sense of mastery over their condition. The present study explored the effects of perceived primal threat on the recovery of people with mental illness, considering the mediating roles of self-stigma and self-empowerment. Latent variable structural equation modeling was conducted among 376 individuals with mental illness in Hong Kong. The model had excellent fit to the data (χ2 = 123.96, df = 60, χ2/df = 2.07, comparative fit index [CFI] = .98, Tucker-Lewis index [TLI] = .97, root mean square error of approximation [RMSEA] = .05, standardized root mean squared residual [SRMR] = .04). The influence of perceived primal threat on recovery was mediated by self-stigma and self-empowerment. Specifically, perceived primal threat was associated positively with self-stigma, which was negatively related to recovery; in contrast, it was negatively related to self-empowerment, which was positively related to recovery. This study adds to the understanding of the mechanism underlying the influence of perceived primal threat on recovery and suggests that perceived primal threat should be considered in the recovery process among people with mental illness. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Analysis of MMU FDIR expert system
NASA Technical Reports Server (NTRS)
Landauer, Christopher
1990-01-01
This paper describes the analysis of a rulebase for fault diagnosis, isolation, and recovery for NASA's Manned Maneuvering Unit (MMU). The MMU is used by a human astronaut to move around a spacecraft in space. In order to provide maneuverability, there are several thrusters oriented in various directions, and hand-controlled devices for useful groups of them. The rulebase describes some error detection procedures, and corrective actions that can be applied in a few cases. The approach taken in this paper is to treat rulebases as symbolic objects and compute correctness and 'reasonableness' criteria that use the statistical distribution of various syntactic structures within the rulebase. The criteria should identify awkward situations, and otherwise signal anomalies that may be errors. The rulebase analysis algorithms are derived from mathematical and computational criteria that implement certain principles developed for rulebase evaluation. The principles are Consistency, Completeness, Irredundancy, Connectivity, and finally, Distribution. Several errors were detected in the delivered rulebase. Some of these errors were easily fixed. Some errors could not be fixed with the available information. A geometric model of the thruster arrangement is needed to show how to correct certain other distribution anomalies that are in fact errors. The investigations reported here were partially supported by The Aerospace Corporation's Sponsored Research Program.
Carbon Consequences of Forest Disturbance and Recovery Across the Conterminous United States
NASA Technical Reports Server (NTRS)
Williams, Christopher A.; Collatz, G. James; Masek, Jeffrey; Goward, Samuel N.
2012-01-01
Forests of North America are thought to constitute a significant long term sink for atmospheric carbon. The United States Forest Service Forest Inventory and Analysis (FIA) program has developed a large data base of stock changes derived from consecutive estimates of growing stock volume in the US. These data reveal a large and relatively stable increase in forest carbon stocks over the last two decades or more. The mechanisms underlying this national increase in forest stocks may include recovery of forests from past disturbances, net increases in forest area, and growth enhancement driven by climate or fertilization by CO2 and Nitrogen. Here we estimate the forest recovery component of the observed stock changes using FIA data on the age structure of US forests and carbon stocks as a function of age. The latter are used to parameterize forest disturbance and recovery processes in a carbon cycle model. We then apply the resulting disturbance/recovery dynamics to landscapes and regions based on the forest age distributions. The analysis centers on 28 representative climate settings spread about forested regions of the conterminous US. We estimate carbon fluxes for each region and propagate uncertainties in calibration data through to the predicted fluxes. The largest recovery-driven carbon sinks are found in the South Central, Pacific Northwest, and Pacific Southwest regions, with spatially averaged net ecosystem productivity (NEP) of about 100 g C/m^2/a driven by forest age structure. Carbon sinks from recovery in the Northeast and Northern Lake States remain moderate to large owing to the legacy of historical clearing and relatively low modern disturbance rates from harvest and fire. At the continental scale, we find a conterminous U.S. forest NEP of only 0.16 Pg C/a from age structure in 2005, or only 0.047 Pg C/a of forest stock change after accounting for fire emissions and harvest transfers. Recent estimates of NEP derived from inventory stock change, harvest, and fire data show twice the NEP sink we derive from forest age distributions. We discuss possible reasons for the discrepancies, including modeling errors and the possibility of climate and/or fertilization (CO2 or N) growth enhancements.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
Recovery from the DNA Replication Checkpoint
Chaudhury, Indrajit; Koepp, Deanna M.
2016-01-01
Checkpoint recovery is integral to a successful checkpoint response. Checkpoint pathways monitor progress during cell division so that in the event of an error, the checkpoint is activated to block the cell cycle and activate repair pathways. Intrinsic to this process is that once repair has been achieved, the checkpoint signaling pathway is inactivated and cell cycle progression resumes. We use the term “checkpoint recovery” to describe the pathways responsible for the inactivation of checkpoint signaling and cell cycle re-entry after the initial stress has been alleviated. The DNA replication or S-phase checkpoint monitors the integrity of DNA synthesis. When replication stress is encountered, replication forks are stalled, and the checkpoint signaling pathway is activated. Central to recovery from the S-phase checkpoint is the restart of stalled replication forks. If checkpoint recovery fails, stalled forks may become unstable and lead to DNA breaks or unusual DNA structures that are difficult to resolve, causing genomic instability. Alternatively, if cell cycle resumption mechanisms become uncoupled from checkpoint inactivation, cells with under-replicated DNA might proceed through the cell cycle, also diminishing genomic stability. In this review, we discuss the molecular mechanisms that contribute to inactivation of the S-phase checkpoint signaling pathway and the restart of replication forks during recovery from replication stress. PMID:27801838
Speed and Deceleration Trials of U.S.S. Los Angeles
NASA Technical Reports Server (NTRS)
De France, S J; Burgess, C P
1930-01-01
The trials reported in this report were instigated by the Bureau of Aeronautics of the Navy Department for the purpose of determining accurately the speed and resistance of the U. S. S. "Los Angeles" with and without water recovery apparatus, and to clear up the apparent discrepancies between the speed attained in service and in the original trials in Germany. The trials proved very conclusively that the water recovery apparatus increases the resistance about 20 per cent, which is serious, and shows the importance of developing a type of recovery having less resistance. Between the American and the German speed trials without water recovery there remains an unexplained discrepancy of nearly 6 per cent in speed at a given rate of engine revolutions. Warping of the propeller blades and small cumulative errors of observation seem the most probable causes of the discrepancy. It was found that the customary resistance coefficients C are 0.0242 and 0.0293 without and with the water recovery apparatus, respectively. The corresponding values of the propulsive coefficient K are 56.7 and 44.6. If there is an error in these figures, it is probably in a slight overestimate of C and an underestimate of K. The maximum errors are almost certainly less than 5 per cent. No scale effect was detected indicating variation of C with respect to velocity. (Author)
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 compared to 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models with respective mean absolute errors compared to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that modifying the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f) the modified model had the lowest absolute mean error when divided by 1.5 yielding 63 +/- 45% and 2.3 yielding 57 +/- 47%, respectively. These modified models reduced error and variability substantially and both have a strong correlation of r = 0.92.
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RB's) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRP's) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods are developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. We estimate the interval between two successive recovery lines for asynchronous RB's, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRP's are used.
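For reference, the recovery block construct being modeled can be sketched as follows; the dictionary-copy checkpointing and the example alternates are purely illustrative:

    def recovery_block(state, alternates, acceptance_test):
        checkpoint = dict(state)                 # establish a recovery point
        for attempt in alternates:
            try:
                result = attempt(dict(checkpoint))   # run from the checkpoint
                if acceptance_test(result):
                    return result                    # passed: discard checkpoint
            except Exception:
                pass                                 # treat faults as test failure
            # rollback: the next alternate starts from the same checkpoint
        raise RuntimeError("all alternates failed")

    # Example: a primary that fails on this input, then a safe fallback
    primary = lambda s: s["x"] / s["y"]
    fallback = lambda s: 0.0
    result = recovery_block({"x": 4.0, "y": 0.0}, [primary, fallback],
                            acceptance_test=lambda r: isinstance(r, float))
    print(result)   # primary raises ZeroDivisionError; fallback returns 0.0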
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
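The sensitivity of availability to the k-out-of-n threshold can be illustrated with a simple binomial model. This is a hedged stand-in for intuition only, with a hypothetical per-machine availability; the thesis derives its reward rates from measured error data, not from this independence assumption.

```python
# Hedged sketch: k-out-of-n availability under a toy binomial model.
from math import comb

def k_of_n_availability(k, n, p):
    """P(at least k of n independent machines are up), per-machine availability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

for k in range(7, 2, -1):   # 7-out-of-7 down to 3-out-of-7
    print(k, round(k_of_n_availability(k, 7, 0.95), 4))
```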
Entry flight control system downmoding evaluation
NASA Technical Reports Server (NTRS)
Barnes, H. A.
1978-01-01
A method to desensitize the entry flight control system to structural vibration feedback, which might induce an oscillatory instability, is described. Trends in vehicle response and handling characteristics as a function of gain combinations in the FCS forward and rate feedback loops, as observed in a man-in-the-loop simulation, are described. Among the conditions considered are the effects of downmoding with APU failures, off-nominal trajectory conditions, sensed angle-of-attack errors, the impact on RCS fuel consumption, performance in the presence of aero variations, recovery from large FCS upsets, and default gains.
USDA-ARS?s Scientific Manuscript database
Measurement error in assessment of sodium and potassium intake obscures associations with health outcomes. The level of this error in a diverse US Hispanic/Latino population is unknown. We investigated the measurement error in self-reported dietary intake of sodium and potassium and examined differe...
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
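For concreteness, the lowest-order member of this binomial code family, which protects against a single boson-loss event, can be written out explicitly (a sketch in the paper's notation, with |n⟩ denoting Fock states):

```latex
% Lowest-order binomial code: both code words are superpositions of Fock
% states with even photon number, so a single boson loss flips the
% generalized number parity and is detected without revealing the logical state.
\begin{equation}
  |W_\uparrow\rangle = \frac{|0\rangle + |4\rangle}{\sqrt{2}},
  \qquad
  |W_\downarrow\rangle = |2\rangle .
\end{equation}
```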
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed, which can recover the primary image itself. The novelty of the algorithm is that it provides both tamper localization and self-restoration, and the recovery achieves a very good effect. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of watermark data to embed. The watermark data is then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit-planes of the gray-level image, the watermarked image gains a nicer visual effect. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover the tampered image and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of similar algorithms, and robustness against attack was enhanced.
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
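Of the four basin filters, the two Gaussian ones are the simplest to reproduce. A common construction is the Jekeli/Wahr recursion for degree-dependent averaging weights; the sketch below is a generic version under that assumption (the radius and maximum degree are illustrative), not the exact filter used in the study.

```python
# Hedged sketch: Jekeli/Wahr-style Gaussian smoothing weights W_l, often used
# to damp noisy high-degree GRACE spherical harmonic coefficients.
import math

def gaussian_weights(radius_km, lmax, a_km=6371.0):
    b = math.log(2.0) / (1.0 - math.cos(radius_km / a_km))
    w = [1.0, (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for l in range(1, lmax):
        # three-term recursion; numerically unstable at high degree, so
        # production codes often clamp small or negative values to zero
        w.append(-(2 * l + 1) / b * w[l] + w[l - 1])
    return w[: lmax + 1]

print(gaussian_weights(500.0, 10))  # weights fall off with degree l
```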
The Watchdog Task: Concurrent error detection using assertions
NASA Technical Reports Server (NTRS)
Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.
1985-01-01
The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
Implementing forward recovery using checkpointing in distributed systems
NASA Technical Reports Server (NTRS)
Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.
1991-01-01
The paper describes the implementation of a forward recovery scheme using checkpoints and replicated tasks. The implementation is based on the concept of lookahead execution and rollback validation. In the experiment, two tasks are selected for the normal execution and one for rollback validation. It is shown that the recovery strategy has nearly error-free execution time and an average redundancy lower than TMR.
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
Does the Assessment of Recovery Capital scale reflect a single or multiple domains?
Arndt, Stephan; Sahker, Ethan; Hedden, Suzy
2017-01-01
The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from an existing de-identified cross-sectional program evaluation data set for 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10 domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components, suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.
Fault Modeling of Extreme Scale Applications Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.
Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. Here, this paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
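In outline, the methodology amounts to supervised classification over fault signatures. The sketch below shows the shape of such a pipeline with synthetic data; the features, labels, and random-forest learner are our illustrative assumptions, not the paper's exact attribute set or model.

```python
# Hedged sketch: learning a benign-vs-error predictor from fault signatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical signature: [bit position, time since write, access rate, size]
X = rng.random((2000, 4))
y = (X[:, 2] > 0.6).astype(int)   # stand-in label: 1 = fault manifests as error

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # invoke recovery or ignore?
```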
Mass change from GRACE: a simulated comparison of Level-1B analysis techniques
NASA Astrophysics Data System (ADS)
Andrews, Stuart B.; Moore, Philip; King, Matt. A.
2015-01-01
Spherical harmonic and mascon parameters have both been successfully applied in the recovery of time-varying gravity fields from the Gravity Recovery and Climate Experiment (GRACE). However, direct comparison of any mass flux is difficult with solutions generated by different groups using different codes and algorithms. It is therefore opportune to compare these methodologies, within a common software base, to understand potential limitations associated with each technique. Here we use simulations to recover a known monthly surface mass distribution from GRACE KBRR data. The ability of spherical harmonic and mascon parameters to resolve basin-level mass change is quantified, with an assessment of how the noise and errors inherent in GRACE solutions are handled. Recovery of a noise- and error-free GLDAS anomaly revealed no quantifiable difference between spherical harmonic and mascon parameters. Expansion of the GLDAS anomaly to degree and order 120 shows that both spherical harmonic and mascon parameters are affected by comparable omission errors. However, the inclusion of realistic KBRR noise and errors in the simulations reveals the advantage of the mascon parameters over spherical harmonics at reducing noise and errors in the higher degree and order harmonics, with an rms (in cm of EWH) relative to the GLDAS anomaly of 10.0 for the spherical harmonic solution and 8.8 (8.6) for the 4° (2°) mascon solutions. The introduction of a constraint matrix in the mascon solution, based on parameters that share geophysical similarities, is shown to further reduce the signal lost at all degrees. The recovery of a simulated Antarctic mass loss signal shows that the mascon methodology is superior to spherical harmonics for this region, with an rms (cm of EWH) of 8.7 for the 2° mascon solution compared to 10.0 for the spherical harmonic solution. Investigating the noise and errors for a month when the satellites were in resonance revealed that both the spherical harmonic and mascon methodologies are able to recover the GLDAS and Antarctic mass loss signals with either a comparable (spherical harmonic) or improved (mascon) rms compared to non-resonance periods.
Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Rao, Xiongbin; Lau, Vincent K. N.
2014-06-01
To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme in which the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through closed-form expressions we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
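The joint recovery step can be pictured as a simultaneous OMP in which every user's residual votes on the same support. The following is a minimal sketch under that reading (names and the fixed-sparsity stopping rule are ours); the paper's algorithm and analysis differ in detail.

```python
# Hedged sketch of simultaneous (joint) OMP: user channels are assumed to
# share a common support, so each iteration selects the dictionary column
# with the largest correlation energy summed over users.
import numpy as np

def joint_omp(A, Y, sparsity):
    """A: m x n sensing matrix; Y: m x u measurements (one column per user)."""
    n = A.shape[1]
    support, R = [], Y.copy()
    for _ in range(sparsity):
        scores = np.linalg.norm(A.conj().T @ R, axis=1)  # aggregate over users
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        X_s, *_ = np.linalg.lstsq(As, Y, rcond=None)     # joint LS refit
        R = Y - As @ X_s                                 # update residuals
    X = np.zeros((n, Y.shape[1]), dtype=Y.dtype)
    X[support, :] = X_s
    return X
```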
Carrier recovery techniques on satellite mobile channels
NASA Technical Reports Server (NTRS)
Vucetic, B.; Du, J.
1990-01-01
An analytical method and a stored channel model were used to evaluate error performance of uncoded quadrature phase shift keying (QPSK) and M-ary phase shift keying (MPSK) trellis coded modulation (TCM) over shadowed satellite mobile channels in the presence of phase jitter for various carrier recovery techniques.
Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach
NASA Astrophysics Data System (ADS)
Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing
2016-04-01
Past and current gravimetric satellite missions have contributed greatly to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on the accuracy, homogeneity, and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. With respect to other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging is considered in the transfer coefficients via convolution, including the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns caused by different orientations of the cartwheel for range-rate and range acceleration. This work presents the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.
QOS-aware error recovery in wireless body sensor networks using adaptive network coding.
Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah
2014-12-29
Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
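At its simplest, network-coding-based recovery sends, for every generation of k data packets, one or more coded packets formed as XOR combinations, so a sink can repair losses without retransmission. The sketch below shows the single-parity case (packet contents and framing are illustrative); an adaptive scheme such as the one proposed would vary the amount of redundancy with the observed error rate and the application's QoS class.

```python
# Hedged sketch: one XOR parity packet per generation repairs a single loss.
from functools import reduce

def xor_packets(packets):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

def recover(received, parity):
    """received: list of equal-length packets with exactly one replaced by None."""
    missing = received.index(None)
    known = [p for p in received if p is not None]
    repaired = xor_packets(known + [parity])   # XOR of all others + parity
    return received[:missing] + [repaired] + received[missing + 1:]

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_packets(data)
print(recover([data[0], None, data[2]], parity))  # reconstructs b"\x10\x20"
```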
MacCourt, Duncan; Bernstein, Joseph
2009-01-01
The current medical malpractice system is broken. Many patients injured by malpractice are not compensated, whereas some patients who recover in tort have not suffered medical negligence; furthermore, the system's failures demoralize patients and physicians. But most importantly, the system perpetuates medical error because the adversarial nature of litigation induces a so-called "Culture of Silence" in physicians eager to shield themselves from liability. This silence leads to the pointless repetition of error, as the open discussion and analysis of the root causes of medical mistakes does not take place as fully as it should. In 1993, President Clinton's Task Force on National Health Care Reform considered a solution characterized by Enterprise Medical Liability (EML), Alternative Dispute Resolution (ADR), some limits on recovery for non-pecuniary damages (Caps), and offsets for collateral source recovery. Yet this list of ingredients did not include a strategy to surmount the difficulties associated with each element. Specifically, EML might be efficient, but none of the enterprises contemplated to assume responsibility, i.e., hospitals and payers, control physician behavior enough so that it would be fair to foist liability on them. Likewise, although ADR might be efficient, it will be resisted by individual litigants who perceive themselves as harmed by it. Finally, while limitations on collateral source recovery and damages might effectively reduce costs, patients and trial lawyers likely would not accept them without recompense. The task force also did not place error reduction at the center of malpractice tort reform, a logical and strategic error in our view. In response, we propose a new system that employs the ingredients suggested by the task force but also addresses the problems with each. We also explicitly consider steps to rebuff the Culture of Silence and promote error reduction. We assert that patients would be better off with a system where physicians cede their implicit "right to remain silent", even if some injured patients will receive less than they do today. Likewise, physicians will be happier with a system that avoids blame, even if this system places strict requirements for high-quality care and disclosure of error. We therefore conceive of a de facto trade between patients and physicians, a Pareto improvement, taking form via the establishment of "Societies of Quality Medicine." Physicians working within these societies would consent to onerous processes for disclosing, rectifying and preventing medical error. Patients would in turn contractually agree to assert their claims in arbitration and with limits on recovery. The role of plaintiffs' lawyers would be unchanged, but due to increased disclosure, discovery costs would diminish and the likelihood of prevailing would more than triple. This article examines the legal and policy issues surrounding the establishment of Societies of Quality Medicine, particularly the issues of contracting over liability, and outlines a means of overcoming the theoretical and practical difficulties with enterprise liability, alternative dispute resolution and the imposition of limits on recovery for non-pecuniary damages. We aim to build a welfare-enhancing system that rebuffs the culture of silence and promotes error reduction, a system that is at the same time legally sound, fiscally prudent and politically possible.
Characterizing the impact of model error in hydrologic time series recovery inverse problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased "observation noise" term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
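The forward model here is discrete convolution, which for a causal impulse response is a lower-triangular Toeplitz system. The sketch below sets up that system and a generic Tikhonov-regularized solve to show how an approximate transfer function biases the recovery; the regularization choice is ours, not the paper's, and the signals are synthetic.

```python
# Hedged sketch: time series recovery d = G s with a mis-specified kernel.
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(g, n):
    return toeplitz(g[:n], np.zeros(n))       # lower-triangular Toeplitz

def recover_series(d, g_assumed, alpha=1e-2):
    G = convolution_matrix(g_assumed, len(d))
    # Tikhonov-regularized least squares: (G^T G + alpha I) s = G^T d
    return np.linalg.solve(G.T @ G + alpha * np.eye(len(d)), G.T @ d)

t = np.arange(50, dtype=float)
g_true = np.exp(-t / 5.0)
g_wrong = np.exp(-t / 6.0)                    # systematic model error
s_true = np.sin(t / 4.0) ** 2
d = convolution_matrix(g_true, 50) @ s_true
print(np.abs(recover_series(d, g_wrong) - s_true).max())  # bias from model error
```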
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
Open quantum systems and error correction
NASA Astrophysics Data System (ADS)
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. The thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP); this complements the second chapter and is published in [Shabani and Lidar, 2007]. In chapter 7, the last before the conclusion, a formulation for evaluating the performance of quantum error correcting codes for a general error model is presented, also published in [Shabani, 2005]. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. In particular, we consider Calderbank-Shor-Steane codes and observe better performance in the presence of correlated errors, depending on the timing of the error recovery.
Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)
NASA Astrophysics Data System (ADS)
Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael
2017-04-01
The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities coming along with a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first applications to benefit from such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and long-term perspective for the determination of the Earth's gravitational field, using it as a synergy of one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. Although first planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would remarkably extend the capability of such a mission constellation. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite-tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction of the inter-satellite links, as well as the potential of an improved gravity recovery with higher spatial and temporal resolution. The major error contributors in temporal gravity retrieval are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean and ocean tides). In this context, we investigate adequate methods to reduce these errors. We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period long-wavelength gravity field signals, indicating this to be a powerful method for non-tidal aliasing reduction.
QPPM receiver for free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, J. M.; Mohamed, J. H.; Nagy, L. A.; Lizanich, P. J.; Mortensen, D. J.
1994-01-01
A prototype receiver developed at NASA Lewis Research Center for direct detection and demodulation of quaternary pulse position modulated (QPPM) optical carriers is described. The receiver enables dual-channel communications at 325 megabits per second (Mbps) per channel. The optical components of the prototype receiver are briefly described. The electronic components, comprising the analog signal conditioning, slot clock recovery, matched filter, and maximum likelihood data recovery circuits, are described in more detail. A novel digital symbol clock recovery technique is presented as an alternative to conventional analog methods. Simulated link degradations, including noise and pointing-error-induced amplitude variations, are applied. The bit-error-rate performance of the electronic portion of the prototype receiver under varying optical signal-to-noise power ratios is found to be within 1.5 dB of theory. Implementation of the receiver as a hybrid of analog and digital application-specific integrated circuits is planned.
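In QPPM, each symbol places a pulse in one of four slots, so maximum likelihood data recovery reduces, after matched filtering, to picking the slot with the largest output and emitting its two-bit index. A toy sketch of that decision step follows (slot alignment from the recovered clock is assumed, and the bit mapping is illustrative):

```python
# Hedged sketch: maximum-likelihood slot decision for QPPM symbols.
import numpy as np

def qppm_detect(matched_filter_out):
    """matched_filter_out: array of shape (n_symbols, 4) slot energies."""
    slots = np.argmax(matched_filter_out, axis=1)   # ML slot decision
    return [(s >> 1, s & 1) for s in slots]         # two bits per symbol

energies = np.array([[0.1, 0.9, 0.2, 0.15],         # pulse in slot 1
                     [0.8, 0.1, 0.1, 0.10]])        # pulse in slot 0
print(qppm_detect(energies))                        # -> [(0, 1), (0, 0)]
```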
Acousto-thermometric recovery of the deep temperature profile using heat conduction equations
NASA Astrophysics Data System (ADS)
Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Dvornikova, M. V.; Dvornikova, V. V.; Kazanskii, A. S.; Kuryatnikova, N. A.; Mansfel'd, A. D.
2012-09-01
In a model experiment using the acousto-thermographic method, deep temperature profiles varying in time are recovered. In the recovery algorithm, we used a priori information in the form of a requirement that the calculated temperature must satisfy the heat conduction equation. The problem is reduced to determining two parameters: the initial temperature and the thermal diffusivity (temperature conductivity) coefficient of the object under consideration (a plasticine band). During the experiment, there was independent verification using electronic thermometers mounted inside the plasticine. The error in the thermal diffusivity coefficient was about 17%, and the error in the initial temperature determination was less than one degree. Such recovery results allow this approach to be applied to a number of medical problems. It is experimentally shown that acoustic inhomogeneities also influence the acousto-thermometric results. However, in the chosen experimental scheme (which corresponds to measurements of human muscle tissue), this influence can be neglected.
The implementation and use of ADA on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1985-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. A new linguistic construct, the colloquy, is introduced, which solves the problems identified in an earlier proposal, the conversation. It is shown that the colloquy is not only at least as powerful as recovery blocks but also as powerful as all the other language facilities proposed for situations requiring backward error recovery: deadlines, generalized exception handlers, traditional conversations, s-conversations, and exchanges. The major features that distinguish the colloquy are described. Sample programs that were written, but not executed, using the colloquy show that extensive backward error recovery can be included in these programs simply and elegantly. These ideas are being implemented in an experimental Ada test bed.
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
2011-09-30
From a Department of Defense Inspector General review of Recovery Act reporting: officials reported the Treasury Appropriation Fund Symbol (TAFS) to the www.recovery.gov Web site. As a result of the review, officials at AFCESA took action to correct the errors in the reported data. Funding was authorized for projects in a timely manner, and the funding authorization documents properly identified a Recovery Act designation and cited a TAFS.
Advanced NASA Earth Science Mission Concept for Vegetation 3D Structure, Biomass and Disturbance
NASA Technical Reports Server (NTRS)
Ranson, K. Jon
2007-01-01
Carbon in forest canopies represents about 85% of the total carbon in the Earth's aboveground biomass (Olson et al., 1983). A major source of uncertainty in global carbon budgets derives from large errors in the current estimates of these carbon stocks (IPCC, 2001). The magnitudes and distributions of terrestrial carbon storage along with changes in sources and sinks for atmospheric CO2 due to land use change remain the most significant uncertainties in Earth's carbon budget. These uncertainties severely limit accurate terrestrial carbon accounting; our ability to evaluate terrestrial carbon management schemes; and the veracity of atmospheric CO2 projections in response to further fossil fuel combustion and other human activities. Measurements of vegetation three-dimensional (3D) structural characteristics over the Earth's land surface are needed to estimate biomass and carbon stocks and to quantify biomass recovery following disturbance. These measurements include vegetation height, the vertical profile of canopy elements (i.e., leaves, stems, branches), and/or the volume scattering of canopy elements. They are critical for reducing uncertainties in the global carbon budget. Disturbance by natural phenomena, such as fire or wind, as well as by human activities, such as forest harvest, and subsequent recovery, complicate the quantification of carbon storage and release. The resulting spatial and temporal heterogeneity of terrestrial biomass and carbon in vegetation make it very difficult to estimate terrestrial carbon stocks and quantify their dynamics. Vegetation height profiles and disturbance recovery patterns are also required to assess ecosystem health and characterize habitat. The three-dimensional structure of vegetation provides habitats for many species and is a control on biodiversity. Canopy height and structure influence habitat use and specialization, two fundamental processes that modify species richness and abundance across ecosystems. Accurate and consistent 3D measurements of forest structure at the landscape scale are needed for assessing impacts to animal habitats and biodiversity following disturbance.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of segmentation algorithms. The aim of this work was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow segments to be prioritized. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
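The control flow of the self-checking pair plus recovery cache can be sketched in a few lines. This toy models only the comparison-and-rollback logic in software (function names are ours); the proposed architecture performs it in FPGA hardware, transparently to the programs being run.

```python
# Hedged sketch: lock-step duplicate execution with checkpoint rollback.
import copy

def run_self_checking(state, steps, max_retries=3):
    recovery_cache = copy.deepcopy(state)            # last committed checkpoint
    for step in steps:
        for _attempt in range(max_retries):
            a = step(copy.deepcopy(recovery_cache))  # CPU A
            b = step(copy.deepcopy(recovery_cache))  # CPU B, same inputs
            if a == b:                               # comparator: no error signal
                recovery_cache = a                   # commit new checkpoint
                break
        else:
            raise RuntimeError("persistent mismatch: hard fault suspected")
    return recovery_cache
```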
Bedoya, Cesar; Cardona, Andrés; Galeano, July; Cortés-Mancera, Fabián; Sandoz, Patrick; Zarzycki, Artur
2017-12-01
The wound healing assay is widely used for the quantitative analysis of highly regulated cellular events. In this assay, a wound is deliberately produced on a confluent cell monolayer, and the rate of wound reduction (WR) is then characterized by processing images of the same regions of interest (ROIs) recorded at different time intervals. In this method, sharp-image ROI recovery is indispensable to compensate for displacements of the cell cultures, due either to the exploration of multiple sites of the same culture or to transfers from the microscope stage to a cell incubator. ROI recovery is usually done manually and, although a low-magnification microscope objective (10x) is generally used, repositioning imperfections constitute a major source of errors detrimental to the WR measurement accuracy. We address this ROI recovery issue by using pseudoperiodic patterns fixed onto the cell culture dishes, allowing easy localization of ROIs and accurate quantification of positioning errors. The method is applied to a tumor-derived cell line, and the WR rates are measured by means of two different image processing software packages. Sharp ROI recovery based on the proposed method is found to improve significantly the accuracy of the WR measurement and the positioning under the microscope.
Software fault tolerance in computer operating systems
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar K.; Lee, Inhwan
1994-01-01
This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
NASA Astrophysics Data System (ADS)
Endress, E.; Weigelt, S.; Reents, G.; Bayerl, T. M.
2005-01-01
Measurements of very slow diffusive processes in membranes, such as the diffusion of integral membrane proteins, by fluorescence recovery after photobleaching (FRAP) are hampered by bleaching of the probe during the read-out of the fluorescence recovery. In the limit of long observation time (very slow diffusion, as in the case of large membrane proteins), this bleaching may introduce errors into the recovery function and thus yield error-prone diffusion coefficients. In this work we present a new approach to a two-dimensional closed-form analytical solution of the reaction-diffusion equation, based on the addition of a dissipative term to the conventional diffusion equation. The calculation was done assuming (i) a Gaussian laser beam profile for bleaching the spot and (ii) that the fluorescence intensity profile emerging from the spot can be approximated by a two-dimensional Gaussian. The detection scheme derived from the analytical solution allows diffusion measurements without the constraint of observation bleaching. Recovery curves of experimental FRAP data obtained under non-negligible read-out bleaching for native membranes (rabbit endoplasmic reticulum) on a planar solid support showed excellent agreement with the analytical solution and allowed the calculation of the lipid diffusion coefficient.
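The paper's closed-form 2-D solution is not reproduced here; the sketch below only fits a simplified phenomenological recovery curve with an extra exponential factor for observation bleaching, which conveys the qualitative role of the added dissipative term. The model form and parameter values are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, f_inf, tau_d, k_bleach):
        # (1 - exp(-t/tau_d)) mimics diffusive refill of the bleached spot;
        # exp(-k_bleach * t) models probe loss during read-out (assumed form).
        return f_inf * (1.0 - np.exp(-t / tau_d)) * np.exp(-k_bleach * t)

    t = np.linspace(0, 200, 80)                  # seconds (synthetic data)
    true = recovery(t, 1.0, 40.0, 0.002)
    data = true + 0.01 * np.random.default_rng(1).normal(size=t.size)
    popt, _ = curve_fit(recovery, t, data, p0=(1.0, 30.0, 0.001))
    print(popt)                                  # recovered (f_inf, tau_d, k_bleach)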
A new coherent demodulation technique for land-mobile satellite communications
NASA Technical Reports Server (NTRS)
Yoshida, Shousei; Tomita, Hideho
1990-01-01
An advanced coherent demodulation technique is described for land mobile satellite (LMS) communications. The proposed technique features a combined narrow/wide-band dual open-loop carrier phase estimator, which effectively compensates for fast carrier phase fluctuations caused by fading, at the cost of an increased phase slip rate. The open-loop structure also enables quick carrier and clock reacquisition after shadowing. Its bit error rate (BER) performance is superior to that of existing detection schemes, showing a BER of 1 x 10^-2 at 6.3 dB E_b/N_0 over a Rician channel with 10 dB C/M and a 200 Hz (1/16 of the modulation rate) fading pitch f_d for QPSK. The proposed scheme consists of a fast-response carrier recovery and a quick bit-timing recovery with interpolation. An experimental terminal model was developed to evaluate its performance under fading conditions. The results are quite satisfactory, giving good prospects for future LMS applications.
Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, G. M.
2002-01-01
We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement, and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.
NASA Astrophysics Data System (ADS)
Mazarico, Erwan; Genova, Antonio; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.
2015-05-01
The fundamental scientific objectives for future spacecraft exploration of Jupiter's moon Europa include confirmation of the existence of a subsurface ocean beneath the surface ice shell and constraints on the physical properties of the ocean. Here we conduct a comprehensive simulation of a multiple-flyby mission. We demonstrate that radio tracking data can provide an estimate of the gravitational tidal Love number k2 with sufficient precision to confirm the presence of a liquid layer. We further show that a capable long-range laser altimeter can improve determination of the spacecraft position, improve the k2 determination (<1% error), and enable the estimation of the planetary shape and Love number h2 (3-4% error), which is directly related to the amplitude of the surface tidal deformation. These measurements, in addition to the global shape accurately constrained by the long altimetric profiles, can yield further constraints on the interior structure of Europa.
76 FR 67315 - Supplemental Nutrition Assistance Program: Quality Control Error Tolerance Threshold
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
This direct final rule amends the Quality Control (QC) review error threshold in our regulations from $25.00 to $50.00. The purpose of raising the QC error threshold is to make permanent the temporary threshold change that was required by the American Recovery and Reinvestment Act of 2009. This change does not have an impact on the public. The QC system measures the accuracy of the eligibility system for the Supplemental Nutrition Assistance Program (SNAP).
Drift Recovery and Station Keeping for the CanX-4 & CanX-5 Nanosatellite Formation Flying Mission
NASA Astrophysics Data System (ADS)
Newman, Joshua Zachary
Canadian Advanced Nanospace eXperiments 4 & 5 (CanX-4&5) are a pair of formation flying nanosatellites that demonstrated autonomous sub-metre formation control at ranges of 1000 to 50 m. To facilitate the autonomous formation flight mission, it is necessary that the two spacecraft be brought within a few kilometres of one another, with a low relative velocity. Therefore, a system to calculate fuel-efficient recovery trajectories and produce the corresponding spacecraft commands was required. This system was also extended to provide station keeping capabilities. In this thesis, the overall drift recovery strategy is outlined, and the design of the controller is detailed. A method of putting the formation into a passively safe state, where the spacecraft cannot collide, is also presented. Monte-Carlo simulations are used to estimate the fuel losses associated with navigational and attitude errors. Finally, on-orbit results are presented, validating both the design and the error expectations.
NASA Astrophysics Data System (ADS)
Mazarico, Erwan; Rowlands, David D.; Sabaka, Terence J.; Getzandanner, Kenneth M.; Rubincam, David P.; Nicholas, Joseph B.; Moreau, Michael C.
2017-10-01
The goal of the OSIRIS-REx mission is to return a sample of asteroid material from near-Earth asteroid (101955) Bennu. The role of the navigation and flight dynamics team is critical for the spacecraft to execute a precisely planned sampling maneuver over a specifically selected landing site. In particular, the orientation of Bennu needs to be recovered with good accuracy during orbital operations to contribute as small an error as possible to the landing error budget. Although Bennu is well characterized from Earth-based radar observations, its orientation dynamics are not sufficiently known to exclude the presence of a small wobble. To better understand this contingency and evaluate how well the orientation can be recovered in the presence of a large 1° wobble, we conduct a comprehensive simulation with the NASA GSFC GEODYN orbit determination and geodetic parameter estimation software. We describe the dynamic orientation modeling implemented in GEODYN in support of OSIRIS-REx operations and show how both altimetry and imagery data can be used as either undifferenced (landmark, direct altimetry) or differenced (image crossover, altimetry crossover) measurements. We find that these two different types of data contribute differently to the recovery of instrument pointing or planetary orientation. When upweighted, the absolute measurements help reduce the geolocation errors, despite poorer astrometric (inertial) performance. We find that with no wobble present, all the geolocation requirements are met. While the presence of a large wobble is detrimental, the recovery is still reliable thanks to the combined use of altimetry and imagery data.
Boyle, Todd A; Mahaffey, Thomas; Mackinnon, Neil J; Deal, Heidi; Hallstrom, Lars K; Morgan, Holly
2011-03-01
Evidence suggests that the underreporting of medication errors and near misses, collectively referred to as medication incidents (MIs), in the community pharmacy setting is high. Despite the obvious negative implications, MIs present opportunities for pharmacy staff and regulatory authorities to learn from these mistakes and take steps to reduce the likelihood that they reoccur. However, these activities can only take place if such errors are reported and openly discussed. This research proposes a model of factors influencing the reporting, service recovery, and organizational learning resulting from MIs within Canadian community pharmacies. The conceptual model is based on a synthesis of the literature and findings from a pilot study conducted among pharmacy management, pharmacists, and pharmacy technicians from 13 community pharmacies in Nova Scotia, Canada. The purpose of the pilot study was to identify various actions that should be taken to improve MI reporting, and it included staff perceptions of the strengths and weaknesses of their current MI-reporting process, desired characteristics of a new process, and broader external and internal activities that would likely improve reporting. Of the 109 surveys sent, 72 usable surveys were returned (66.1% response rate). Multivariate analysis of variance found no significant differences among staff types in their perceptions of the current or desired new system, although differences were found for broader initiatives to improve MI reporting. These findings were used for a proposed structural equation model (SEM). The SEM proposes that individual perceived self-efficacy, MI process capability, MI process support, organizational culture, management support, and regulatory authority all influence the completeness of MI reporting, which, in turn, influences MI service recovery and learning. This model may eventually be used to enable pharmacy managers to make better decisions. By identifying risk factors that contribute to low MI reporting, recovery, and learning, it will be possible for regulators to focus their efforts on high-risk sectors and begin to undertake preventative educational interventions rather than relying solely on remedial activities. Copyright © 2011 Elsevier Inc. All rights reserved.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transform domain, e.g., spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs makes it possible to recover the image from significantly undersampled k-space after transformation. The challenge, however, is that random undersampling produces incoherent artifacts, adding noise-like interference to the sparsely represented image. The recovery algorithms in the literature are not capable of fully removing these artifacts, so it is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
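A minimal sketch of the singular value thresholding step used as the denoiser, under the usual soft-thresholding definition of SVT; the rank-1 test matrix and threshold value are illustrative.

    import numpy as np

    def svt(img, tau):
        u, s, vt = np.linalg.svd(img, full_matrices=False)
        s_thresh = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        return (u * s_thresh) @ vt                     # low-rank reconstruction

    rng = np.random.default_rng(0)
    clean = np.outer(rng.normal(size=64), rng.normal(size=64))   # rank-1 "image"
    noisy = clean + 0.1 * rng.normal(size=(64, 64))
    denoised = svt(noisy, tau=2.0)
    print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))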
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
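A toy sketch of the inferential viewpoint, assuming a linear source-receptor model C = A q + e with Gaussian measurement/model error and a Gaussian prior on the emission rates; under those assumptions the posterior is Gaussian and available in closed form. The dispersion matrix, rates, and noise levels below are made up for illustration, and this is not the paper's dispersion model.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.uniform(0.1, 1.0, size=(8, 4))     # dispersion matrix: 8 sensors, 4 sources
    q_true = np.array([2.0, 0.5, 1.0, 3.0])    # emission rates (illustrative units)
    sigma_e, sigma_q = 0.05, 10.0              # noise level, prior width (assumed)
    c = A @ q_true + sigma_e * rng.normal(size=8)

    prec = A.T @ A / sigma_e**2 + np.eye(4) / sigma_q**2   # posterior precision
    cov = np.linalg.inv(prec)
    q_mean = cov @ (A.T @ c) / sigma_e**2                   # posterior mean
    print(q_mean, np.sqrt(np.diag(cov)))                    # rates and 1-sigma errors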
Software fault tolerance for real-time avionics systems
NASA Technical Reports Server (NTRS)
Anderson, T.; Knight, J. C.
1983-01-01
Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.
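A sketch of the classic recovery-block pattern referenced above: run the primary routine, check its result with an acceptance test, and on failure restore the saved state and try an alternate. The helper names and toy routines are illustrative, not the paper's avionics design.

    import copy

    def recovery_block(state, alternates, acceptance_test):
        saved = copy.deepcopy(state)               # recovery point (state restoration)
        for routine in alternates:
            try:
                result = routine(copy.deepcopy(saved))
                if acceptance_test(result):
                    return result                  # primary (or alternate) accepted
            except Exception:
                pass                               # a raised error counts as a failure
        raise RuntimeError("all alternates exhausted")

    # Toy usage: primary divides, alternate falls back to a safe default.
    primary   = lambda s: s["x"] / s["y"]
    alternate = lambda s: 0.0
    print(recovery_block({"x": 4, "y": 0}, [primary, alternate], lambda r: r >= 0))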
Standardization of Freeze Frame TV Codecs
1990-06-01
[Flattened comparison table; recoverable entries:] Codecs compared: Kodak SV9600 Still Video Transceiver; Colorado Video, Inc. 286 Digital Transceiver; Image Data Corp. CP-200 Photophone; Interand Corp. DISCON Imagephone. Attributes include proprietary error recovery by retransmission and sequential image build-up. ... and information transfer is effected among terminals. An indication of the function and power of these commands can be obtained by reviewing the table in the source document.
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. These run-time recovery mechanisms consist of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with efficiently by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
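A toy sketch of the role a tree barrier plays in GVT computation: GVT is, conceptually, the minimum over every processor's local virtual time and the timestamps of messages still in transit, and a tree reduction computes that minimum in O(log n) combining steps rather than through a central collector. This is neither Samadi's algorithm nor the paper's implementation; names and values are illustrative.

    def tree_min(values):
        level = list(values)
        while len(level) > 1:                      # pairwise combine up the tree
            nxt = [min(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:
                nxt.append(level[-1])              # odd node passes straight up
            level = nxt
        return level[0]

    local_virtual_times = [120.0, 95.5, 143.2, 101.0]
    transient_msg_stamps = [98.0, 130.5]           # sent but not yet received
    gvt = tree_min(local_virtual_times + transient_msg_stamps)
    print(gvt)                                     # 95.5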
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
Self-recovery reversible image watermarking algorithm
Sun, He; Gao, Shangbing; Jin, Shenghua
2018-01-01
The integrity of image content is essential, yet most watermarking algorithms can authenticate an image but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image texture. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. A correlation attack is detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
Respiratory-gated CT as a tool for the simulation of breathing artifacts in PET and PET/CT.
Hamill, J J; Bosmans, G; Dekker, A
2008-02-01
Respiratory motion in PET and PET/CT blurs the images and can cause attenuation-related errors in quantitative parameters such as standard uptake values. In rare instances, this problem even causes localization errors and the disappearance of tumors that should be detectable. Attenuation errors are severe near the diaphragm and can be enhanced when the attenuation correction is based on a CT series acquired during a breath-hold. To quantify the errors and identify the parameters associated with them, the authors performed a simulated PET scan based on respiratory-gated CT studies of five lung cancer patients. Diaphragmatic motion ranged from 8 to 25 mm in the five patients. The CT series were converted to 511-keV attenuation maps which were forward-projected and exponentiated to form sinograms of PET attenuation factors at each phase of respiration. The CT images were also segmented to form a PET object, moving with the same motion as the CT series. In the moving PET object, spherical 20 mm mobile tumors were created in the vicinity of the dome of the liver and immobile 20 mm tumors in the midchest region. The moving PET objects were forward-projected and attenuated, then reconstructed in several ways: phase-matched PET and CT, gated PET with ungated CT, ungated PET with gated CT, and conventional PET. Spatial resolution and statistical noise were not modeled. In each case, tumor uptake recovery factor was defined by comparing the maximum reconstructed pixel value with the known correct value. Mobile 10 and 30 mm tumors were also simulated in the case of a patient with 11 mm of breathing motion. Phase-matched gated PET and CT gave essentially perfect PET reconstructions in the simulation. Gated PET with ungated CT gave tumors of the correct shape, but recovery was too large by an amount that depended on the extent of the motion, as much as 90% for mobile tumors and 60% for immobile tumors. Gated CT with ungated PET resulted in blurred tumors and caused recovery errors between -50% and +75%. Recovery in clinical scans would be 0%-20% lower than stated because spatial resolution was not included in the simulation. Mobile tumors near the dome of the liver were subject to the largest errors in either case. Conventional PET for 20 mm tumors was quantitative in cases of motion less than 15 mm because of canceling errors in blurring and attenuation, but the recovery factors were too low by as much as 30% in cases of motion greater than 15 mm. The 10 mm tumors were blurred by motion to a greater extent, causing a greater SUV underestimation than in the case of 20 mm tumors, and the 30 mm tumors were blurred less. Quantitative PET imaging near the diaphragm requires proper matching of attenuation information to the emission information. The problem of missed tumors near the diaphragm can be reduced by acquiring attenuation-correction information near end expiration. A simple PET/CT protocol requiring no gating equipment also addresses this problem.
Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2008-01-01
A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
BI-sparsity pursuit for robust subspace recovery
Bian, Xiao; Krim, Hamid
2015-09-01
Here, the success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, thus inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem, and provide, as a result, a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.
Increased User Satisfaction Through an Improved Message System
NASA Technical Reports Server (NTRS)
Weissert, C. L.
1997-01-01
With all of the enhancements in software methodology and testing, there is no guarantee that software can be delivered such that no user errors occur. How to handle these errors when they occur has become a major research topic within human-computer interaction (HCI). Users of the Multimission Spacecraft Analysis Subsystem (MSAS) at the Jet Propulsion Laboratory (JPL), a system of X and Motif graphical user interfaces for analyzing spacecraft data, complained about the lack of information about error causes and suggested that recovery actions be included in the system error messages... The system was evaluated through usability surveys and was shown to be successful.
Errors made by animals in memory paradigms are not always due to failure of memory.
Wilkie, D M; Willson, R J; Carr, J A
1999-01-01
It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.
A fault-tolerant information processing concept for space vehicles.
NASA Technical Reports Server (NTRS)
Hopkins, A. L., Jr.
1971-01-01
A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Finally, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Sea-Based Automated Launch and Recovery System Virtual Testbed
2013-12-02
[Fragments from the report documentation page:] ... integrated with an Extended Kalman Filter to study sensor fusion in a fixed-wing aircraft shipboard recovery scenario. ... The sensors and filter performance are graded both on pure estimation error and by examining the touchdown performance of the aircraft on the ship. ... v, and w body-axis velocity components of the aircraft, while the velocities applied to the extremities are used to calculate estimated rotational ...
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
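A very reduced sketch of the two-phase idea under stated assumptions: a coarse sparse-recovery pass (here a basic orthogonal matching pursuit stand-in, not the paper's solver) flags candidate grids whose recovered coefficients exceed a threshold; the fine pass would then subdivide and refine each candidate by least squares. The RSS dictionary below is a random placeholder rather than a propagation model.

    import numpy as np

    def omp(Phi, y, n_iter):
        """Basic orthogonal matching pursuit for the coarse recovery vector."""
        resid, idx = y.copy(), []
        for _ in range(n_iter):
            idx.append(int(np.argmax(np.abs(Phi.T @ resid))))
            coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
            resid = y - Phi[:, idx] @ coef
        theta = np.zeros(Phi.shape[1])
        theta[idx] = coef
        return theta

    rng = np.random.default_rng(7)
    Phi = rng.normal(size=(20, 64))                 # stand-in dictionary: 20 sensors, 64 grids
    theta_true = np.zeros(64); theta_true[[5, 40]] = [1.0, 0.8]   # two targets
    y = Phi @ theta_true + 0.01 * rng.normal(size=20)

    theta = omp(Phi, y, n_iter=4)
    candidates = np.flatnonzero(np.abs(theta) > 0.3)  # threshold -> candidate grids
    print(candidates)                                  # ideally grids 5 and 40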
Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument
NASA Astrophysics Data System (ADS)
Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.
2015-06-01
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. To do so, we incorporate the training signals into the Tikhonov regularization framework and perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may easily be derived. This is the key to introducing and proving the formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
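A compact sketch of the closed-form recovery described above, under assumed notation: learn a PCA basis from training signals, then solve a Tikhonov-regularized least-squares fit of the basis coefficients to the few acquired samples. The training data, basis size, and regularization weight are illustrative, not the detector's actual waveforms.

    import numpy as np

    rng = np.random.default_rng(2)
    train = rng.normal(size=(500, 64)).cumsum(axis=1)      # stand-in training signals
    mean = train.mean(axis=0)
    u, s, vt = np.linalg.svd(train - mean, full_matrices=False)
    B = vt[:8].T                                           # 64x8 PCA basis

    sample_idx = np.array([4, 12, 25, 40, 55])             # the few acquired samples
    x_true = train[0]                                      # signal to recover
    y = x_true[sample_idx]

    A = B[sample_idx]                                      # measurement of coefficients
    lam = 1e-2                                             # Tikhonov weight (assumed)
    coef = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ (y - mean[sample_idx]))
    x_hat = mean + B @ coef
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error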
Shear Recovery Accuracy in Weak-Lensing Analysis with the Elliptical Gauss-Laguerre Method
NASA Astrophysics Data System (ADS)
Nakajima, Reiko; Bernstein, Gary
2007-04-01
We implement the elliptical Gauss-Laguerre (EGL) galaxy-shape measurement method proposed by Bernstein & Jarvis and quantify the shear recovery accuracy in weak-lensing analysis. This method uses a deconvolution fitting scheme to remove the effects of the point-spread function (PSF). The test simulates >10^7 noisy galaxy images convolved with anisotropic PSFs and attempts to recover an input shear. The tests are designed to be immune to statistical (random) distributions of shapes, selection biases, and crowding, in order to test more rigorously the effects of detection significance (signal-to-noise ratio [S/N]), PSF, and galaxy resolution. The systematic error in shear recovery is divided into two classes, calibration (multiplicative) and additive, with the latter arising from PSF anisotropy. At S/N > 50, the deconvolution method measures the galaxy shape and input shear to ~1% multiplicative accuracy and suppresses >99% of the PSF anisotropy. These systematic errors increase to ~4% for the worst conditions, with poorly resolved galaxies at S/N ≈ 20. The EGL weak-lensing analysis has the best demonstrated accuracy to date, sufficient for the next generation of weak-lensing surveys.
Mapping GRACE Accelerometer Error
NASA Astrophysics Data System (ADS)
Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.
2017-12-01
After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.
Unequal error control scheme for dimmable visible light communication systems
NASA Astrophysics Data System (ADS)
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of very large bandwidth, high security, and freedom from licensing restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible [1]. We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO) [2]. EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems [3].
Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin
2016-03-26
In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on the medium access control (MAC) layer. However, some collided tag signals can be recovered on the physical (PHY) layer, which enhances the identification efficiency of the RFID system. For recovery on the PHY layer, channel estimation is a critical issue: good channel estimation helps to recover the collided signals. Existing channel estimators work well for two collided tags. When the number of collided tags is beyond two, however, the existing estimators suffer larger estimation errors. In this paper, we propose a novel channel estimator for the UHF RFID system. It constructs an orthogonal matrix from the preamble information known to the reader and applies a minimum-mean-square-error (MMSE) criterion to estimate the channels. From the estimated channels, we can accurately separate the collided signals and recover them. By means of numerical results, we show that the proposed estimator has lower estimation error and higher separation efficiency than the existing estimators.
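A small sketch of MMSE channel estimation from known preambles, assuming the model y = P h + n with column k of P holding tag k's preamble and a unit-variance channel prior; the orthogonal preamble matrix and noise level are illustrative stand-ins for the paper's design.

    import numpy as np

    rng = np.random.default_rng(5)
    n_sym, n_tags, noise_var = 16, 4, 0.01
    P = np.linalg.qr(rng.normal(size=(n_sym, n_tags)))[0]   # orthogonal preambles
    h = (rng.normal(size=n_tags) + 1j * rng.normal(size=n_tags)) / np.sqrt(2)
    y = P @ h + np.sqrt(noise_var / 2) * (rng.normal(size=n_sym)
                                          + 1j * rng.normal(size=n_sym))

    # MMSE estimate under a unit-variance channel prior (assumed):
    h_hat = np.linalg.solve(P.conj().T @ P + noise_var * np.eye(n_tags),
                            P.conj().T @ y)
    print(np.abs(h_hat - h))                                # per-tag estimation error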
Locked-mode avoidance and recovery without momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hughes, J.; Marmar, E.; Phillips, P.; Rowan, W.
2015-11-01
Error-field-induced locked modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode onset and locked-mode recovery have been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to "unlock" the mode; in the presence of an error field the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, also avoiding the density pump-out. The effects of ion heating observed on unlocking the core plasma may be due to ICRH-induced flows in the plasma boundary, or modifications of plasma profiles that changed the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at the University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.
Recovery of Sparse Positive Signals on the Sphere from Low Resolution Measurements
NASA Astrophysics Data System (ADS)
Bendory, Tamir; Eldar, Yonina C.
2015-12-01
This letter considers the problem of recovering a positive stream of Diracs on a sphere from its projection onto the space of low-degree spherical harmonics, namely, from its low-resolution version. We suggest recovering the Diracs via a tractable convex optimization problem. The resulting recovery error is proportional to the noise level and depends on the density of the Diracs. We validate the theory by numerical experiments.
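A loose illustration only: the letter solves a convex program on the sphere, while the sketch below discretizes to a candidate grid and uses nonnegative least squares, which at least captures the role of the positivity constraint in the recovery. The projection operator here is a random placeholder, not low-degree spherical harmonics.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    L = rng.normal(size=(12, 40))                    # stand-in low-resolution operator
    x_true = np.zeros(40)
    x_true[[7, 29]] = [1.5, 0.6]                     # positive stream of Diracs
    y = L @ x_true + 0.01 * rng.normal(size=12)

    x_hat, _ = nnls(L, y)                            # positivity-constrained fit
    print(np.flatnonzero(x_hat > 0.1))               # recovered spikes (ideally 7, 29)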
Isobaric Reconstruction of the Baryonic Acoustic Oscillation
NASA Astrophysics Data System (ADS)
Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li
2017-06-01
In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after the shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.
Telecommunications end-to-end systems monitoring on TOPEX/Poseidon: Tools and techniques
NASA Technical Reports Server (NTRS)
Calanche, Bruno J.
1994-01-01
The TOPEX/Poseidon Project Satellite Performance Analysis Team's (SPAT) roles and responsibilities have grown to include functions that are typically performed by other teams on JPL Flight Projects. In particular, SPAT Telecommunication's role has expanded beyond the nominal function of monitoring, assessing, characterizing, and trending the spacecraft (S/C) RF/Telecom subsystem to one of End-to-End Information Systems (EEIS) monitoring. This has been accomplished by taking advantage of the spacecraft and ground data system structures and protocols. By processing both the received spacecraft telemetry minor-frame ground-generated CRC flags and the NASCOM block poly error flags, bit error rates (BER) for each link segment can be determined. This provides the capability to characterize the separate link segments, determine science data recovery, and perform fault/anomaly detection and isolation. By monitoring and managing the links, TOPEX has successfully recovered approximately 99.9 percent of the science data with an integrity (BER) of better than 1 x 10^-8. This paper presents the algorithms used to process the above flags and the techniques used for EEIS monitoring.
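A back-of-envelope sketch of per-segment BER estimation from frame error flags, assuming each flagged frame carries at least one errored bit and frames have a known payload size; all counts and sizes below are made up for illustration.

    def estimate_ber(frames_total, frames_flagged, bits_per_frame):
        # Lower-bound style estimate: at least one errored bit per flagged frame.
        return frames_flagged / (frames_total * bits_per_frame)

    # Space link from spacecraft CRC flags, ground link from NASCOM block poly
    # error flags (illustrative counts, not mission data):
    print(f"space link BER ~ {estimate_ber(2_000_000, 3, 8960):.2e}")
    print(f"ground link BER ~ {estimate_ber(2_000_000, 1, 4800):.2e}")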
Bayesian Inference for Source Reconstruction: A Real-World Application
Yee, Eugene; Hoffman, Ian; Ungar, Kurt
2014-01-01
This paper applies a Bayesian probabilistic inferential methodology for the reconstruction of the location and emission rate from an actual contaminant source (emission from the Chalk River Laboratories medical isotope production facility) using a small number of activity concentration measurements of a noble gas (Xenon-133) obtained from three stations that form part of the International Monitoring System radionuclide network. The sampling of the resulting posterior distribution of the source parameters is undertaken using a very efficient Markov chain Monte Carlo technique that utilizes a multiple-try differential evolution adaptive Metropolis algorithm with an archive of past states. It is shown that the principal difficulty in the reconstruction lay in the correct specification of the model errors (both scale and structure) for use in the Bayesian inferential methodology. In this context, two different measurement models for incorporation of the model error of the predicted concentrations are considered. The performance of both of these measurement models with respect to their accuracy and precision in the recovery of the source parameters is compared and contrasted. PMID:27379292
Digital Mirror Device Application in Reduction of Wave-front Phase Errors
Zhang, Yaping; Liu, Yan; Wang, Shuxue
2009-01-01
In order to correct the image distortion created by the mixing/shear layer, creative and effective correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. By combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources. PMID:22574016
76 FR 35006 - Recovery Policy RP9523.4, Demolition of Private Structures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-15
...] Recovery Policy RP9523.4, Demolition of Private Structures AGENCY: Federal Emergency Management Agency, DHS... (FEMA) is accepting comments on Recovery Policy RP9523.4, Demolition of Private Structures. DATES... guidance in determining the eligibility of demolition of private structures under the provisions of the...
Efforts to recover SOHO spacecraft continue as investigation board focuses on most likely causes
NASA Astrophysics Data System (ADS)
1998-07-01
Meanwhile, the ESA/NASA investigation board concentrates its inquiry on three errors that appear to have led to the interruption of communications with SOHO on June 25. Officials remain hopeful that, based on ESA's successful recovery of the Olympus spacecraft after four weeks under similar conditions in 1991, recovery of SOHO may be possible. The SOHO Mission Interruption Joint ESA/NASA Investigation Board has determined that the first two errors were contained in preprogrammed command sequences executed on ground system computers, while the last error was a decision to send a command to the spacecraft in response to unexpected telemetry readings. The spacecraft is controlled by the Flight Operations Team, based at NASA's Goddard Space Flight Center, Greenbelt, MD. The first error was in a preprogrammed command sequence that lacked a command to enable an on-board software function designed to activate a gyro needed for control in Emergency Sun Reacquisition (ESR) mode. ESR mode is entered by the spacecraft in the event of anomalies. The second error, which was in a different preprogrammed command sequence, resulted in incorrect readings from one of the spacecraft's three gyroscopes, which in turn triggered an ESR. At the current stage of the investigation, the board believes that the two anomalous command sequences, in combination with a decision to send a command to SOHO to turn off a gyro in response to unexpected telemetry values, caused the spacecraft to enter a series of ESRs, and ultimately led to the loss of control. The efforts of the investigation board are now directed at identifying the circumstances that led to the errors, and at developing a recovery plan should efforts to regain contact with the spacecraft succeed. ESA and NASA engineers believe the spacecraft is currently spinning with its solar panels nearly edge-on towards the Sun, and thus not generating any power. Since the spacecraft is spinning around a fixed axis, as the spacecraft progresses in its orbit around the Sun, the orientation of the panels with respect to the Sun should gradually change. The orbit of the spacecraft and the seasonal change in the spacecraft-Sun alignment should result in the increased solar illumination of the spacecraft solar arrays over the next few months. The engineers predict that in late September 1998, illumination of the solar arrays and, consequently, power supplied to the spacecraft, should approach a maximum. The probability of successfully establishing contact reaches a maximum at this point. After this time, illumination of the solar arrays gradually diminishes as the spacecraft-Sun alignment continues to change. In an attempt to recover SOHO as soon as possible, the Flight Operations Team is uplinking commands to the spacecraft via NASA's Deep Space Network, managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, approximately 12 hours per day with no success to date. A recovery plan is under development by ESA and NASA to provide for orderly restart of the spacecraft and to mitigate risks involved. The recovery of the Olympus spacecraft by ESA in 1991 under similar conditions leads to optimism that the SOHO spacecraft may be recoverable once contact is re-established. In May 1991, ESA's Olympus telecommunications satellite experienced a similar major anomaly which resulted in the loss of attitude, leading to intermittent power availability. As a consequence, there was inadequate communication, and the batteries and fuel froze. 
From analysis of the data available prior to the loss, there was confidence that the power situation would improve over the coming months. A recovery plan was prepared, supported by laboratory tests, to assess the characteristics of thawing batteries and propellants. Telecommand access of Olympus was regained four weeks later, and batteries and propellant tanks were thawed out progressively over the next four weeks. The attitude was then fully recovered and the payload switched back on three months after the incident. Equipment damage was sustained as a result of the low temperatures, but nothing significant enough to prevent the successful resumption of the mission. The experience of Olympus is being applied, where possible, to SOHO and increases the hope of also recovering this mission. Estimating the probability of recovery is made difficult by a number of unknown spacecraft conditions. Like Olympus, the hydrazine fuel and batteries may be frozen. Thermal stress may have damaged some of the scientific instruments as well. If the rate of spin is excessive, there may have been structural damage. SOHO engineers can reliably predict the spacecraft's orbit through November 1998. After that time, the long-term orbital behavior becomes dependent on the initial velocity conditions of the spacecraft at the time of the telemetry loss. These are not known precisely, due to spacecraft thruster activity that continued after loss of telemetry, so orbital prediction becomes very difficult.
Spitzer Telemetry Processing System
NASA Technical Reports Server (NTRS)
Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.
2013-01-01
The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.
Economic analysis for transmission operation and planning
NASA Astrophysics Data System (ADS)
Zhou, Qun
2011-12-01
Restructuring of the electric power industry has caused dramatic changes in the use of transmission system. The increasing congestion conditions as well as the necessity of integrating renewable energy introduce new challenges and uncertainties to transmission operation and planning. Accurate short-term congestion forecasting facilitates market traders in bidding and trading activities. Cost sharing and recovery issue is a major impediment for long-term transmission investment to integrate renewable energy. In this research, a new short-term forecasting algorithm is proposed for predicting congestion, LMPs, and other power system variables based on the concept of system patterns. The advantage of this algorithm relative to standard statistical forecasting methods is that structural aspects underlying power market operations are exploited to reduce the forecasting error. The advantage relative to previously proposed structural forecasting methods is that data requirements are substantially reduced. Forecasting results based on a NYISO case study demonstrate the feasibility and accuracy of the proposed algorithm. Moreover, a negotiation methodology is developed to guide transmission investment for integrating renewable energy. Built on Nash Bargaining theory, the negotiation of investment plans and payment rate can proceed between renewable generation and transmission companies for cost sharing and recovery. The proposed approach is applied to Garver's six bus system. The numerical results demonstrate fairness and efficiency of the approach, and hence can be used as guidelines for renewable energy investors. The results also shed light on policy-making of renewable energy subsidies.
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modeling and analysis methods used to predict thermal/structural induced errors, and the comparisons that show that predictions are within the error budgets.
Self-checking self-repairing computer nodes using the mirror processor
NASA Technical Reports Server (NTRS)
Tamir, Yuval
1992-01-01
Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
1990-02-01
copies P1, ..., Pn of a multiple module fp resolve nondeterminism (local or global) in an identical manner. 5. The copies P1, ..., Pn are physically ... recovery block. A recovery block consists of a conventional block (as in ALGOL or PL/I) which is provided with a means of error detection, called an ... improved failures model for communicating processes. In Proceedings, NSF-SERC Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science
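The recovery block construct mentioned in this fragment pairs a primary routine with alternates and an acceptance test, restoring the saved state before each retry. A minimal, system-agnostic sketch:

```python
# A minimal sketch of the recovery block construct: the primary
# routine runs first; if its result fails the acceptance test (or it
# raises), state is restored to the checkpoint and the next
# alternate is tried.
def recovery_block(state, alternates, acceptance_test):
    checkpoint = dict(state)              # save state on entry
    for routine in alternates:            # primary first, then alternates
        trial = dict(checkpoint)          # restore the checkpoint
        try:
            routine(trial)
            if acceptance_test(trial):
                state.clear()
                state.update(trial)       # commit the accepted result
                return True
        except Exception:
            pass                          # an exception also triggers the next alternate
    return False                          # all alternates failed
```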
NASA Astrophysics Data System (ADS)
Liu, Tingting; Zhang, Ling; Wang, Shutao; Cui, Yaoyao; Wang, Yutian; Liu, Lingfei; Yang, Zhe
2018-03-01
Qualitative and quantitative analysis of polycyclic aromatic hydrocarbons (PAHs) was carried out by three-dimensional fluorescence spectroscopy combined with Alternating Weighted Residue Constraint Quadrilinear Decomposition (AWRCQLD). The experimental subjects were acenaphthene (ANA) and naphthalene (NAP). First, to reduce the redundancy of the three-dimensional fluorescence spectral data, the wavelet transform was used to compress the data in preprocessing. Then, four-dimensional data were constructed from the excitation-emission fluorescence spectra of PAHs at different concentrations. The sample data were obtained from three solvents: methanol, ethanol, and ultra-pure water. The four-dimensional spectral data were analyzed by AWRCQLD, and the recovery rates of the PAHs in the three solvents were obtained and compared. The results showed, on one hand, that PAHs can be measured more accurately from the higher-order data, with a higher recovery rate; on the other hand, AWRCQLD better demonstrates the superiority of the four-dimensional algorithm over second-order calibration and other third-order calibration algorithms. The recovery rate of ANA was 96.5%-103.3% and the root mean square error of prediction was 0.04 μg L-1. The recovery rate of NAP was 96.7%-115.7% and the root mean square error of prediction was 0.06 μg L-1.
Timing Recovery Strategies in Magnetic Recording Systems
NASA Astrophysics Data System (ADS)
Kovintavewat, Piya
At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the PSP-based timing recovery strategies perform better than the conventional ones and are thus worth employing in magnetic recording systems.
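For contrast with the PSP-based strategies reviewed here, the conventional decision-directed loops that degrade at low SNR can be sketched in a few lines. The following is a simplified Mueller-Muller detector with a first-order loop, not the paper's algorithm, and it assumes a real-valued antipodal signal with small timing error:

```python
# A conventional decision-directed timing loop (Mueller-Muller TED)
# with linear interpolation between samples. Simplified baseline, not
# the PSP method discussed in the paper.
def mm_timing_loop(samples, sps=2, gain=0.05):
    """samples: oversampled baseband signal; sps: samples per symbol."""
    mu = 0.0                     # fractional timing offset, in samples
    idx = 0                      # symbol-spaced base index
    out, prev_y, prev_d = [], 0.0, 0.0
    while True:
        i = idx + int(mu)
        if i + 1 >= len(samples):
            break
        frac = mu - int(mu)
        y = (1 - frac) * samples[i] + frac * samples[i + 1]  # interpolate at loop phase
        d = 1.0 if y >= 0 else -1.0                          # hard decision (antipodal)
        err = y * prev_d - prev_y * d                        # Mueller-Muller timing error
        mu += gain * err                                     # first-order loop update
        prev_y, prev_d = y, d
        out.append(y)
        idx += sps
    return out
```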
An anthropomorphic phantom for quantitative evaluation of breast MRI.
Freed, Melanie; de Zwart, Jacco A; Loud, Jennifer T; El Khouli, Riham H; Myers, Kyle J; Greene, Mark H; Duyn, Jeff H; Badano, Aldo
2011-02-01
In this study, the authors aim to develop a physical, tissue-mimicking phantom for quantitative evaluation of breast MRI protocols. The objective of this phantom is to address the need for improved standardization in breast MRI and provide a platform for evaluating the influence of image protocol parameters on lesion detection and discrimination. Quantitative comparisons between patient and phantom image properties are presented. The phantom is constructed using a mixture of lard and egg whites, resulting in a random structure with separate adipose- and glandular-mimicking components. T1 and T2 relaxation times of the lard and egg components of the phantom were estimated at 1.5 T from inversion recovery and spin-echo scans, respectively, using maximum-likelihood methods. The image structure was examined quantitatively by calculating and comparing spatial covariance matrices of phantom and patient images. A static, enhancing lesion was introduced by creating a hollow mold with stereolithography and filling it with a gadolinium-doped water solution. Measured phantom relaxation values fall within 2 standard errors of human values from the literature and are reasonably stable over 9 months of testing. Comparison of the covariance matrices of phantom and patient data demonstrates that the phantom and patient data have similar image structure. Their covariance matrices are the same to within error bars in the anterior-posterior direction and to within about two error bars in the right-left direction. The signal from the phantom's adipose-mimicking material can be suppressed using active fat-suppression protocols. A static, enhancing lesion can also be included with the ability to change morphology and contrast agent concentration. The authors have constructed a phantom and demonstrated its ability to mimic human breast images in terms of key physical properties that are relevant to breast MRI. This phantom provides a platform for the optimization and standardization of breast MRI imaging protocols for lesion detection and characterization.
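The T1 estimation step can be illustrated with a synthetic inversion-recovery fit. The study uses maximum-likelihood estimation, whereas the sketch below uses ordinary least squares on the standard magnitude recovery curve, with made-up inversion times and a made-up T1:

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit of the magnitude inversion-recovery curve
# |S0 * (1 - 2 exp(-TI/T1))| to synthetic data. Numbers are invented.
def ir_signal(TI, S0, T1):
    return np.abs(S0 * (1.0 - 2.0 * np.exp(-TI / T1)))

TI = np.array([50, 100, 200, 400, 800, 1600, 3200], dtype=float)  # inversion times, ms
true_S0, true_T1 = 1.0, 300.0                                     # hypothetical values
data = ir_signal(TI, true_S0, true_T1) + 0.01 * np.random.randn(TI.size)

(p_S0, p_T1), _ = curve_fit(ir_signal, TI, data, p0=(1.0, 500.0))
print(f"estimated T1 = {p_T1:.0f} ms")
```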
Recovery Characteristics of Anomalous Stress-Induced Leakage Current of 5.6 nm Oxide Films
NASA Astrophysics Data System (ADS)
Inatsuka, Takuya; Kumagai, Yuki; Kuroda, Rihito; Teramoto, Akinobu; Sugawa, Shigetoshi; Ohmi, Tadahiro
2012-04-01
Anomalous stress-induced leakage current (SILC), which has a much larger current density than average SILC, causes severe bit errors in flash memories. To suppress anomalous SILC, detailed evaluations are strongly required. We evaluate the characteristics of anomalous SILC in 5.6 nm oxide films using a fabricated array test pattern, and recovery characteristics are observed. Some characteristics of typical anomalous cells in the time domain are measured, and the recovery characteristics of average and anomalous SILCs are examined. Some of the anomalous cells exhibit random telegraph signals (RTSs) of gate leakage current, which are characterized as discrete and random switching phenomena. The dependence of RTSs on the applied electric field is investigated, and the recovery tendency of anomalous SILC with and without RTSs is also discussed.
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity, and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with an error model gives significantly more accurate predictions along with reasonable credible intervals.
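As an illustration of the surrogate idea, a Gaussian-process emulator can stand in for the expensive simulator during inference. The sketch below is generic, not the authors' pipeline: the stand-in `expensive_model`, the design sizes, and the kernel choice are all invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Train a cheap Gaussian-process emulator on a handful of
# (parameter, response) pairs, then query it instead of the full
# groundwater simulation during MCMC-style inference.
def expensive_model(theta):
    return np.sin(3 * theta[0]) + 0.5 * theta[1] ** 2   # placeholder response

rng = np.random.default_rng(0)
thetas = rng.uniform(-1, 1, size=(40, 2))               # design points
heads = np.array([expensive_model(t) for t in thetas])  # "simulator" runs

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
surrogate.fit(thetas, heads)

pred, sd = surrogate.predict(rng.uniform(-1, 1, size=(1, 2)), return_std=True)
print(pred, sd)   # emulator prediction plus its own uncertainty
```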
Foot Structure in Japanese Speech Errors: Normal vs. Pathological
ERIC Educational Resources Information Center
Miyakoda, Haruko
2008-01-01
Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…
Large-Area Visually Augmented Navigation for Autonomous Underwater Vehicles
2005-06-01
constrain position drift. Corrections of errors in position and orientation are made each time the mosaic is updated, which occurs every Lth video frame. They ... are the greatest strength of a VAN methodology. It is these measurements which help to correct dead-reckoned drift error and enforce recovery of a ... systems. [The remainder of the record is an extraction-damaged sensor-specification table (instrument, variable, internal?, update rate, precision, range, drift); only an acoustic-altimeter row is partially legible.]
Error Detection and Recovery for Robot Motion Planning with Uncertainty.
1987-07-01
plans for these problems. This intuition (which is a heuristic claim, so the reader is advised to proceed with caution) should be verified or disproven ... that might work, but fail in a "reasonable" way when they cannot. While EDR is largely motivated by the problems of uncertainty and model error, its ... definition for EDR strategies and show how they can be computed. This theory represents what is perhaps the first systematic attack on the problem of
Adaptive artificial neural network for autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.
Electrocortical measures of information processing biases in social anxiety disorder: A review.
Harrewijn, Anita; Schmidt, Louis A; Westenberg, P Michiel; Tang, Alva; van der Molen, Melle J W
2017-10-01
Social anxiety disorder (SAD) is characterized by information processing biases; however, their underlying neural mechanisms remain poorly understood. The goal of this review was to give a comprehensive overview of the most frequently studied EEG spectral and event-related potential (ERP) measures in social anxiety during rest, anticipation, stimulus processing, and recovery. A Web of Science search yielded 35 studies reporting on electrocortical measures in individuals with social anxiety or related constructs. Social anxiety was related to increased delta-beta cross-frequency correlation during anticipation and recovery, and to information processing biases during early processing of faces (P1) and errors (error-related negativity). These electrocortical measures are discussed in relation to the persistent cycle of information processing biases maintaining SAD. Future research should further investigate the mechanisms of this persistent cycle and study the utility of electrocortical measures in early detection, prevention, treatment, and endophenotype research.
Modelling the viability of heat recovery from combined sewers.
Abdel-Aal, M; Smits, R; Mohamed, M; De Gussem, K; Schellart, A; Tait, S
2014-01-01
Wastewater temperatures along a sewer pipe were modelled using energy balance equations under steady-state conditions. Model predictions of the temperature drop, compared with measurements in three combined sewers, had an overall root mean squared error of 0.37 K. Downstream measured wastewater temperature was plotted against modelled values; the line gradients were within the range 0.9995-1.0012. The ultimate aim of the modelling is to assess the viability of recovering heat from sewer pipes. This is done by evaluating an appropriate location for a heat exchanger within a sewer network that can recover heat without impacting negatively on the downstream wastewater treatment plant (WWTP). Long sewers may prove to be more viable for heat recovery, as lost heat can be reclaimed before the wastewater reaches the WWTP.
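For a single pipe, a steady-state energy balance of this kind reduces to an exponential approach of the wastewater temperature toward the surrounding soil temperature. A minimal sketch with illustrative parameter values, not the study's calibrated ones:

```python
import numpy as np

# Steady-state temperature drop along a sewer pipe: heat loss to the
# surroundings gives an exponential approach to the soil temperature.
# All parameter values below are illustrative.
def downstream_temperature(T_up, T_soil, k, perimeter, length, m_dot, c_p):
    """k: overall heat transfer coefficient (W/m2K); m_dot: mass flow (kg/s)."""
    decay = np.exp(-k * perimeter * length / (m_dot * c_p))
    return T_soil + (T_up - T_soil) * decay

T_down = downstream_temperature(T_up=15.0, T_soil=10.0, k=5.0,
                                perimeter=1.5, length=2000.0,
                                m_dot=50.0, c_p=4180.0)
print(f"predicted downstream temperature: {T_down:.2f} degC")  # ~0.35 K drop
```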
AND/OR graph representation of assembly plans
NASA Astrophysics Data System (ADS)
Homem de Mello, Luiz S.; Sanderson, Arthur C.
1990-04-01
A compact representation of all possible assembly plans of a product using AND/OR graphs is presented as a basis for efficient planning algorithms that allow an intelligent robot to pick a course of action according to instantaneous conditions. The AND/OR graph is equivalent to a state transition graph but requires fewer nodes and simplifies the search for feasible plans. Three applications are discussed: (1) the preselection of the best assembly plan, (2) the recovery from execution errors, and (3) the opportunistic scheduling of tasks. An example of an assembly with four parts illustrates the use of the AND/OR graph representation in assembly-plan preselection, based on the weighting of operations according to complexity of manipulation and stability of subassemblies. A hypothetical error situation is discussed to show how a bottom-up search of the AND/OR graph leads to an efficient recovery.
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued, phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-sensitive NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging demonstrate the usefulness of the algorithm.
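The first-order estimate is compact because a linear phase ramp appears as a constant phase in the lag-one autocorrelation. A 1-D toy version; note the paper extracts the zero-order term from a phase histogram, whereas the mean phase is used below for brevity:

```python
import numpy as np

# 1-D illustration: estimate a linear phase ramp from the angle of
# the lag-one autocorrelation, then the residual constant phase.
n = np.arange(256)
true_slope, true_offset = 0.02, 0.7                          # synthetic phase error
img = np.random.rand(256) * np.exp(1j * (true_slope * n + true_offset))

slope_est = np.angle(np.sum(np.conj(img[:-1]) * img[1:]))    # first-order term
img1 = img * np.exp(-1j * slope_est * n)                     # first-order correction

offset_est = np.angle(np.sum(img1))                          # zero-order term (mean phase)
corrected = img1 * np.exp(-1j * offset_est)
print(slope_est, offset_est)                                 # ~0.02, ~0.7
```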
Testing and performance analysis of a 650 Mbps QPPM modem for free-space laser communications
NASA Astrophysics Data System (ADS)
Mortensen, Dale J.
1994-08-01
The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation (QPPM) at 325 megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measurements indicate that the receiver's automatic gain control (AGC), phase-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits together introduce only 1.5 dB of implementation loss in bit-error-rate (BER) performance measurements. Pseudorandom bit sequences and real-time high quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in an expected manner. Overall, the needed technologies for a high-speed laser communications modem were demonstrated.
NASA Astrophysics Data System (ADS)
Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi
2018-02-01
A new feedback symbol timing recovery technique using timing estimation jointly with equalization is proposed for digital receivers with two samples/symbol or higher sampling rates. Unlike traditional methods, the clock recovery algorithm in this paper adopts an additional step that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module for eliminating phase modulation interference before timing estimation further reduces the variance, resulting in a smoothed timing estimate. The Mean Square Error (MSE) and Bit Error Rate (BER) of the resulting timing estimate are simulated and demonstrate satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and Roll-off Factors (ROF) close to 0. In the back-to-back system, with ROF = 0, the maximum MSE obtained with the proposed approach is 0.0125. After 100-km fiber transmission, the BER decreases to 10^-3 with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performance improves.
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
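The attenuation effect the authors describe is easy to reproduce: adding white measurement noise to a simulated AR(1) series biases the naive lag-1 estimate of the autoregressive parameter toward zero. A minimal sketch with invented parameter values:

```python
import numpy as np

# Simulate a latent AR(1) process, observe it with white measurement
# noise, and compare the naive lag-1 autocorrelation estimates.
rng = np.random.default_rng(1)
phi, n = 0.7, 10_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()   # latent AR(1)

y = x + 1.0 * rng.standard_normal(n)                # observed with measurement error

def lag1(z):
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

print(f"phi-hat without error: {lag1(x):.2f}")      # close to 0.7
print(f"phi-hat with error:    {lag1(y):.2f}")      # attenuated toward 0 (~0.46 here)
```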
Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.
2015-01-01
Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212
CSAC Characterization and Its Impact on GNSS Clock Augmentation Performance
Fernández, Enric; Calero, David; Parés, M. Eulàlia
2017-01-01
Chip Scale Atomic Clocks (CSAC) are recently-developed electronic instruments that, when used together with a Global Navigation Satellite Systems (GNSS) receiver, help improve the performance of GNSS navigation solutions in certain conditions (i.e., low satellite visibility). Current GNSS receivers include a Temperature Compensated Crystal Oscillator (TCXO) clock characterized by a short-term stability (τ = 1 s) of 10−9 s that leads to an error of 0.3 m in pseudorange measurements. The CSAC can achieve a short-term stability of 2.5 × 10−12 s, which implies a range error of 0.075 m, making for an 87.5% improvement over TCXO. Replacing the internal TCXO clock of GNSS receivers with a higher frequency-stability clock such as a CSAC oscillator improves the navigation solution in terms of low-satellite-visibility positioning accuracy, solution availability, signal recovery (holdover), multipath and jamming mitigation, and spoofing attack detection. However, CSAC suffers from internal systematic instabilities and errors that should be minimized if optimal performance is desired. Hence, to operate the CSAC at its best, its deterministic errors need to be properly modelled. Currently, this modelling is done by determining and predicting the clock frequency stability (i.e., clock bias and bias rate) within the positioning estimation process. The research presented in this paper aims to go a step further, analysing the correlation between temperature and clock stability noise and the impact of its proper modelling on the holdover recovery time and on the positioning performance. Moreover, it shows the potential of fine clock coasting modelling. With the proposed model, an improvement in vertical positioning precision of around 50% with only three satellites can be achieved. Moreover, an increase in navigation solution availability is observed, and the holdover recovery time can be reduced from dozens of seconds to only a few. PMID:28216600
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t \ge 4/3$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, or $\delta_{tr}^M < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, or $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, $\delta_{tr}^M < \sqrt{(t-1)/t}$ are also shown to be sufficient respectively for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in this chapter also have implications for other related statistical problems. An application to the estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. In the third part of the thesis, we consider another setting of low-rank matrix completion. The current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed.
We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.
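The constrained ℓ1 minimization analyzed in the first part is, in the noiseless case, a linear-equality-constrained convex program. A minimal sketch using cvxpy; problem sizes and the Gaussian measurement ensemble are illustrative choices:

```python
import cvxpy as cp
import numpy as np

# Recover a k-sparse signal from noiseless random measurements y = A x
# by constrained l1 minimization.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)      # measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print(f"recovery error: {np.linalg.norm(x.value - x_true):.2e}")
```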
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place that report failing grid jobs, including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed by association rules, is visualized in a web interface. This work decreases fault recovery time and improves the grid's reliability.
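As an illustration of the approach, job records one-hot encoded by site, queue, and outcome can be mined with a standard apriori implementation; rules whose consequent is the failure flag point to suspect components. The sketch below uses mlxtend with invented data, not the authors' grid monitoring schema:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy job-monitoring table: each row is a job, each column a boolean
# attribute. The data and attribute names are made up.
jobs = pd.DataFrame({
    "site=CERN":  [1, 1, 0, 0, 1, 0, 1, 0],
    "site=FNAL":  [0, 0, 1, 1, 0, 1, 0, 1],
    "queue=long": [1, 0, 1, 1, 1, 1, 0, 1],
    "failed":     [1, 0, 1, 1, 0, 1, 0, 1],
}).astype(bool)

itemsets = apriori(jobs, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Rules whose consequent is "failed" localize suspect components.
print(rules[rules["consequents"] == {"failed"}])
```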
SOHO Mission Interruption Joint NASA/ESA Investigation Board
NASA Technical Reports Server (NTRS)
1998-01-01
Contact with the SOlar Heliospheric Observatory (SOHO) spacecraft was lost in the early morning hours of June 25, 1998, Eastern Daylight Time (EDT), during a planned period of calibrations, maneuvers, and spacecraft reconfigurations. Prior to this the SOHO operations team had concluded two years of extremely successful science operations. A joint European Space Agency (ESA)/National Aeronautics and Space Administration (NASA) engineering team has been planning and executing recovery efforts since loss of contact with some success to date. ESA and NASA management established the SOHO Mission Interruption Joint Investigation Board to determine the actual or probable cause(s) of the SOHO spacecraft mishap. The Board has concluded that there were no anomalies on-board the SOHO spacecraft but that a number of ground errors led to the major loss of attitude experienced by the spacecraft. The Board finds that the loss of the SOHO spacecraft was a direct result of operational errors, a failure to adequately monitor spacecraft status, and an erroneous decision which disabled part of the on-board autonomous failure detection. Further, following the occurrence of the emergency situation, the Board finds that insufficient time was taken by the operations team to fully assess the spacecraft status prior to initiating recovery operations. The Board discovered that a number of factors contributed to the circumstances that allowed the direct causes to occur. The Board strongly recommends that the two Agencies proceed immediately with a comprehensive review of SOHO operations addressing issues in the ground procedures, procedure implementation, management structure and process, and ground systems. This review process should be completed and process improvements initiated prior to the resumption of SOHO normal operations.
The navigation system of the JPL robot
NASA Technical Reports Server (NTRS)
Thompson, A. M.
1977-01-01
The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative data base for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.
A proposal of an architecture for the coordination level of intelligent machines
NASA Technical Reports Server (NTRS)
Beard, Randall; Farah, Jeff; Lima, Pedro
1993-01-01
The issue of obtaining a practical, structured, and detailed description of an architecture for the Coordination Level of the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) Testbed Intelligent Controller is addressed. Previous theoretical and implementation work was the departure point for the discussion. The document is organized as follows: after the introductory section, section 2 summarizes the overall view of the Intelligent Machine (IM) as a control system, proposing a performance measure on which to base its design. Section 3 addresses implementation issues in some detail. A hierarchic Petri net with feedback-based learning capabilities is proposed. Finally, section 4 addresses the feedback problem. Feedback is used for two functions: error recovery and reinforcement learning of the correct translations for the Petri net transitions.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
NASA Astrophysics Data System (ADS)
Wiese, D. N.; McCullough, C. M.
2017-12-01
Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subjected to constraints during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
Dynamics of functional failures and recovery in complex road networks
NASA Astrophysics Data System (ADS)
Zhan, Xianyuan; Ukkusuri, Satish V.; Rao, P. Suresh C.
2017-11-01
We propose a new framework for modeling the evolution of functional failures and recoveries in complex networks, with traffic congestion on road networks as the case study. Unlike conventional approaches, we transform the evolution of functional states into an equivalent dynamic structural process: dual-vertex splitting and coalescing embedded within the original network structure. The proposed model successfully explains traffic congestion and recovery patterns at the city scale based on high-resolution data from two megacities. Numerical analysis shows that certain network structural attributes can amplify or suppress cascading functional failures. Our approach represents a new general framework to model functional failures and recoveries in flow-based networks and allows understanding of the interplay between structure and function in flow-induced failure propagation and recovery.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned, conditioned on the previous model state, during the minimization process, allowing the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and are shown to provide improved analyses and forecasts.
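For reference, the single-scale Lorenz '96 system at the core of such testbeds integrates in a few lines; the experiments above use the two-scale version, with fast sub-grid variables coupled to each slow one:

```python
import numpy as np

# Single-scale Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
# with cyclic indices, integrated with classical RK4.
def l96_rhs(x, forcing=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.01):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 + 0.01 * np.random.randn(40)   # perturbed rest state, 40 slow variables
for _ in range(1000):
    x = rk4_step(x)
```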
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error behaviour of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have the greatest impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also determined. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method thoroughly discloses the process of error transmission and enhances the efficacy of accuracy design and calibration.
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most traditional methods. However, their relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. Experimental results on three data sets show that the recovery error is reduced by 20%-25%, demonstrating the effectiveness of the proposed method.
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
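The abstract does not spell out the virtual backpointer construction, so the following is only a loose illustration in the same spirit, with all names and the XOR encoding assumed for illustration: each node stores a redundant field derived from its neighbours' identifiers, letting a forward traversal detect a corrupted pointer within a small checking window.

```python
# Illustrative scheme (not the paper's exact construction): each node
# stores, besides its forward pointer, a "virtual" field equal to
# prev_id XOR next_id. A forward traversal that remembers the previous
# node recomputes each expected successor and checks it locally.
# Node ids must be nonzero; 0 serves as the null sentinel.
class Node:
    def __init__(self, nid):
        self.nid = nid        # node id (stands in for an address)
        self.next = 0         # forward pointer (node id, 0 = null)
        self.virt = 0         # virtual backpointer: prev_id XOR next_id

def build(ids):
    nodes = {i: Node(i) for i in ids}
    for prev_id, nid, next_id in zip([0] + ids[:-1], ids, ids[1:] + [0]):
        nodes[nid].next = next_id
        nodes[nid].virt = prev_id ^ next_id
    return nodes

def check_forward(nodes, head):
    prev_id, cur = 0, head
    while cur:
        node = nodes[cur]
        if (node.virt ^ prev_id) != node.next:
            return cur                    # inconsistency localized at this node
        prev_id, cur = cur, node.next
    return None                           # structure consistent

nodes = build([3, 5, 9])
print(check_forward(nodes, 3))            # None: consistent
nodes[5].next = 7                         # corrupt a forward pointer
print(check_forward(nodes, 3))            # 5: error detected locally
```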
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program was strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases this appears to be a direct relationship, such as the effect of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect, through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships between the structure and culture of medical group practices and prescription drug errors, and it provides direction for future research. Research focused on the factors influencing the high error rates in rural areas, and on how the interaction of practice structural and cultural attributes influences error rates, would add important insights to our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data on medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were drawn from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and on harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition-period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and the transcription and documentation of orders.
Green, Rebekah; Bates, Lisa K; Smyth, Andrew
2007-12-01
In the aftermath of Hurricane Katrina, a rapid succession of plans put forward a host of recovery options for the Upper and Lower Ninth Ward in New Orleans. Much of the debate focused on catastrophic damage to residential structures and discussions of the capacity of low-income residents to repair their neighbourhoods. This article examines impediments to the current recovery process of the Upper and Lower Ninth Ward, reporting results of an October 2006 survey of 3,211 plots for structural damage, flood damage and post-storm recovery. By examining recovery one year after Hurricane Katrina, and by doing so in the light of flood and structural damage, it is possible to identify impediments to recovery that may disproportionately affect these neighbourhoods. This paper concludes with a discussion of how pre- and post-disaster inequalities have slowed recovery in the Lower Ninth Ward and of the implications this has for post-disaster recovery planning there and elsewhere.
Chow, Gary C C; Yam, Timothy T T; Chung, Joanne W Y; Fong, Shirley S M
2017-02-01
This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), the RWI group (25.0 °C), or the no-immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. There were no significant within- or between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It appears that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention.
Skutan, Stefan; Aschenbrenner, Philipp
2012-12-01
Components with extraordinarily high analyte contents, for example copper metal from wires or plastics stabilized with heavy metal compounds, are presumed to be a crucial source of errors in refuse-derived fuel (RDF) analysis. In order to study the error generation of those 'analyte carrier components', synthetic samples spiked with defined amounts of carrier materials were mixed, milled in a high speed rotor mill to particle sizes <1 mm, <0.5 mm and <0.2 mm, respectively, and analyzed repeatedly. Copper (Cu) metal and brass were used as Cu carriers, three kinds of polyvinylchloride (PVC) materials as lead (Pb) and cadmium (Cd) carriers, and paper and polyethylene as bulk components. In most cases, samples <0.2 mm delivered good recovery rates (rec), and low or moderate relative standard deviations (rsd), i.e. metallic Cu 87-91% rec, 14-35% rsd, Cd from flexible PVC yellow 90-92% rec, 8-10% rsd and Pb from rigid PVC 92-96% rec, 3-4% rsd. Cu from brass was overestimated (138-150% rec, 13-42% rsd), Cd from flexible PVC grey underestimated (72-75% rec, 4-7% rsd) in <0.2 mm samples. Samples <0.5 mm and <1 mm spiked with Cu or brass produced errors of up to 220% rsd (<0.5 mm) and 370% rsd (<1 mm). In the case of Pb from rigid PVC, poor recoveries (54-75%) were observed in spite of moderate variations (rsd 11-29%). In conclusion, time-consuming milling to <0.2 mm can reduce variation to acceptable levels, even given the presence of analyte carrier materials. Yet, the sources of systematic errors observed (likely segregation effects) remain uncertain.
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
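As a rough illustration of the bias described above, the following sketch (assuming numpy and statsmodels; the parameter values are arbitrary) simulates an AR(1) process observed with white measurement noise and compares a naive AR(1) fit against an ARMA(1,1) fit that absorbs the noise:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
phi, n = 0.6, 300                 # true autoregressive parameter, series length
x = np.zeros(n)
for t in range(1, n):             # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.normal(scale=1.0)
y = x + rng.normal(scale=1.0, size=n)   # observed series with measurement error

# Naive AR(1) fit on the noisy series: the AR coefficient is biased toward zero
naive = ARIMA(y, order=(1, 0, 0)).fit()
# ARMA(1,1) fit: the MA term absorbs the white measurement noise
arma = ARIMA(y, order=(1, 0, 1)).fit()
print("naive AR(1) phi:", naive.params[1])   # index 1 = AR coefficient (after constant)
print("ARMA(1,1) phi:  ", arma.params[1])
```

With these settings the naive estimate lands well below 0.6 (roughly the true value scaled by the signal-to-total-variance ratio), while the ARMA fit recovers a value much closer to the truth.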
Using Utility Functions to Control a Distributed Storage System
2008-05-01
Pinheiro et al. [2007] suggest this is not an accurate assumption. Nicola and Goyal [1990] examined correlated failures across multiversion software... Nicola, V. F. and Goyal, A. (1990). Modeling of correlated failures and community error recovery in multiversion software. IEEE Transactions on Software...
An experimental evaluation of the REE SIFT environment for spaceborne applications
NASA Technical Reports Server (NTRS)
Whistnant, K.; Iyer, R. K.; Jones, P.; Some, R.; Rennels, D.
2002-01-01
This paper presents an experimental evaluation of a software-implemented fault tolerance environment built around a set of self-checking ARMOR processes running on different machines that provide error detection and recovery services to themselves and to spaceborne scientific applications.
Aphasia with elation, hypermusia, musicophilia and compulsive whistling.
Jacome, D E
1984-01-01
A musically naive patient with dominant fronto-temporal and anterior parietal infarct developed transcortical mixed aphasia. From early convalescence, he exhibited elated mood with hyperprosody and repetitive, spontaneous whistling and whistling in response to questions. He often spontaneously sang without error in pitch, melody, rhythm and lyrics, and spent long periods of time listening to music. His behaviour progressively improved in parallel with very good recovery of verbal skills. Musicality and singing are rarely tested at the bedside. Preservation of these abilities in aphasics might portend eventual recovery. PMID:6707680
NASA Astrophysics Data System (ADS)
Mortensen, Dale J.
1995-04-01
The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation at 325 Megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measured results indicate that the receiver's automatic gain control (AGC), phase-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits were found to have only 1.5 dB combined implementation loss during bit-error-rate (BER) performance measurements. Pseudorandom bit sequences and real-time high quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in an expected manner. Overall, the needed technologies for a high-speed laser communications modem were demonstrated.
High-Resolution Gravity and Time-Varying Gravity Field Recovery using GRACE and CHAMP
NASA Technical Reports Server (NTRS)
Shum, C. K.
2002-01-01
This progress report summarizes the research work conducted under NASA's Solid Earth and Natural Hazards Program 1998 (SENH98) entitled High Resolution Gravity and Time Varying Gravity Field Recovery Using GRACE (Gravity Recovery and Climate Experiment) and CHAMP (Challenging Mini-satellite Package for Geophysical Research and Applications), which included a no-cost extension time period. The investigation conducted pilot studies using simulated GRACE and CHAMP data and other in situ and space geodetic observables, satellite altimeter data, and ocean mass variation data to study the dynamic processes of the Earth which affect climate change. Results from this investigation include: (1) a new method to use the energy approach for expressing gravity mission data as in situ measurements, with the possibility of enhancing the spatial resolution of the gravity signal; (2) a test of the method using CHAMP, validated through the development of a mean gravity field model from CHAMP data; (3) elaborate simulations to quantify errors of tides and atmosphere and to recover hydrological and oceanic signals using GRACE; the results show significant aliasing effects and amplified errors in the GRACE resonant geopotential coefficients, which are not trivial to remove; and (4) quantification of oceanic and ice sheet mass changes in a geophysical constraint study to assess their contributions to global sea level change; while the results improved significantly over previous studies that used only the SLR (Satellite Laser Ranging)-determined zonal gravity change data, the constraint could be further improved with additional information on mantle rheology, PGR (Post-Glacial Rebound) and ice loading history. A list of relevant presentations and publications is attached, along with a summary of the SENH investigation generated in 2000.
NASA Astrophysics Data System (ADS)
Schout, Gilian; Drijver, Benno; Gutierrez-Neri, Mariene; Schotting, Ruud
2014-01-01
High-temperature aquifer thermal energy storage (HT-ATES) is an important technique for energy conservation. A controlling factor for the economic feasibility of HT-ATES is the recovery efficiency. Due to the effects of density-driven flow (free convection), HT-ATES systems applied in permeable aquifers typically have lower recovery efficiencies than conventional (low-temperature) ATES systems. For a reliable estimation of the recovery efficiency it is, therefore, important to take the effect of density-driven flow into account. A numerical evaluation of the prime factors influencing the recovery efficiency of HT-ATES systems is presented. Sensitivity runs evaluating the effects of aquifer properties, as well as operational variables, were performed to deduce the most important factors that control the recovery efficiency. A correlation was found between the dimensionless Rayleigh number (a measure of the relative strength of free convection) and the calculated recovery efficiencies. Based on a modified Rayleigh number, two simple analytical solutions are proposed to calculate the recovery efficiency, each one covering a different range of aquifer thicknesses. The analytical solutions accurately reproduce all numerically modeled scenarios with an average error of less than 3 %. The proposed method can be of practical use when considering or designing an HT-ATES system.
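The paper's modified Rayleigh number and its fitted analytical solutions are not reproduced in the abstract; the sketch below only illustrates the general shape of such a calculation, pairing a textbook porous-medium Rayleigh number with a purely hypothetical efficiency mapping (all coefficients and property values are placeholders):

```python
import numpy as np

def rayleigh_number(k, H, dT, beta=4.0e-4, rho=1000.0, g=9.81,
                    mu=5.0e-4, lam=2.5, c=4186.0):
    """Classic porous-medium Rayleigh number Ra = rho*g*beta*dT*k*H / (mu*D),
    with thermal diffusivity D = lam/(rho*c). Property values are rough
    defaults for warm groundwater, not the paper's."""
    D = lam / (rho * c)
    return rho * g * beta * dT * k * H / (mu * D)

def recovery_efficiency(Ra, a=0.9, b=1.5e-4):
    """Hypothetical monotone-decreasing mapping from Ra to recovery
    efficiency; the paper reports a correlation of this general character,
    but its fitted analytical form is not reproduced here."""
    return a * np.exp(-b * Ra)

Ra = rayleigh_number(k=1e-12, H=20.0, dT=60.0)   # permeability, thickness, dT
print(Ra, recovery_efficiency(Ra))
```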
The Electrolyte Genome project: A big data approach in battery materials discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi
2015-06-01
We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinities (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G(*) level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which bring the failure rate from 9.2% down to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals on the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization and B3LYP energy evaluation requires less computational cost and yields nearly identical results compared to a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals for predicting IP/EA, with RMSEs of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that the functional group effect and substitution position effect can be separated for the IP/EA of quinoxaline derivatives, and that the most sensitive position is different for IP and EA. Published by Elsevier B.V.
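A minimal sketch of an automated failure-recovery loop of the kind described; the job fields, error codes, and handler fixes below are hypothetical illustrations, not the project's or QChem's actual interfaces:

```python
# Hypothetical error handlers keyed by error code; each mutates the job spec.
HANDLERS = {
    "SCF_NOT_CONVERGED":   lambda job: job.update(scf_algorithm="GDM"),
    "OPT_CYCLES_EXCEEDED": lambda job: job.update(max_opt_cycles=500),
    "BAD_BASIS_READ":      lambda job: job.update(regenerate_input=True),
}

def run_with_recovery(job, run, max_restarts=3):
    """Run a calculation; on failure, apply a matching fix and restart."""
    for _ in range(max_restarts + 1):
        result = run(job)           # `run` is a user-supplied callable
        if result.ok:
            return result
        fix = HANDLERS.get(result.error_code)
        if fix is None:             # unknown failure: stop, flag for a human
            break
        fix(job)                    # adjust the job spec and retry
    raise RuntimeError(f"unrecoverable: {result.error_code}")
```

The design point is simply that most failures fall into a small set of recognizable classes, so a lookup table of per-class fixes plus a bounded restart loop recovers the bulk of them automatically, which is consistent with the large drop in failure rate the authors report.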
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
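A minimal sketch of the idea, assuming the virtual backpointer of a node is the XOR of its predecessor and successor indices (one plausible encoding; the paper's exact definition may differ):

```python
# Double-linked list with a virtual backpointer vbp(i) = prev(i) XOR next(i).
# Index 0 is a reserved null node.
class Node:
    def __init__(self, key):
        self.key, self.next, self.prev, self.vbp = key, 0, 0, 0

def link(nodes, order):
    """Thread the nodes in the given index order and compute each vbp."""
    for a, b in zip(order, order[1:]):
        nodes[a].next, nodes[b].prev = b, a
    for i in order:
        nodes[i].vbp = nodes[i].prev ^ nodes[i].next

def check_forward(nodes, i):
    """During a forward move from node i, verify structural consistency:
    the successor's stored prev must point back at i, and i's own prev
    must agree with the value its vbp implies."""
    j = nodes[i].next
    derived_prev = nodes[i].vbp ^ nodes[i].next   # recovers prev(i)
    return nodes[j].prev == i and nodes[i].prev == derived_prev

nodes = [Node(k) for k in range(6)]
link(nodes, [1, 2, 3, 4, 5])
nodes[3].prev = 9                                  # inject a pointer error
print([check_forward(nodes, i) for i in (1, 2, 3)])  # -> [True, False, False]
```

The redundancy is what buys local detection: a single corrupted pointer breaks the XOR identity at the adjacent nodes, so the error is caught during an ordinary traversal without a global audit.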
NASA Astrophysics Data System (ADS)
Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan
2012-10-01
The accuracy of the Earth's gravitational field recovered from the Gravity field and steady-state Ocean Circulation Explorer (GOCE), up to degree 250, is compared for the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), based on an analytical error model and numerical simulation, respectively. First, new analytical error models of the cumulative geoid height are established for the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij. Up to degree 250, the GOCE cumulative geoid height error obtained from the radial gravity gradient Vzz is about 2½ times higher than that obtained from the three-dimensional gravity gradient Vij. Second, the Earth's gravitational field from GOCE is recovered completely up to degree 250 by numerical simulation using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij. The results show that when the measurement error of the gravity gradient is 3 × 10^-12/s^2, the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the Earth's gravitational field recovery based on the radial and three-dimensional gravity gradients show no substantial difference in order of magnitude. It is therefore feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^-13/s^2 to 10^-15/s^2 for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
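A minimal sketch of this kind of power calculation, assuming organisms are Poisson-distributed in well-mixed ballast water (the authors' model also treats type I/II error levels and recovery errors more fully):

```python
from scipy.stats import poisson

def power_to_detect(true_conc, limit, volume, alpha=0.05):
    """Probability of declaring noncompliance when the true organism
    concentration exceeds the discharge limit. Compliance is rejected when
    the observed count exceeds the largest count still consistent with the
    limit at significance level alpha."""
    k_crit = poisson.ppf(1 - alpha, limit * volume)      # critical count under H0
    return 1 - poisson.cdf(k_crit, true_conc * volume)   # detection probability

# E.g. a limit of 10 organisms/m^3 and a true discharge at 3x the limit:
for v in (0.1, 0.5, 1.0, 5.0):    # sampled volume in m^3
    print(v, round(power_to_detect(30, 10, v), 3))
```

The numbers make the paper's point concrete: at small sample volumes the expected counts are so low that even a threefold exceedance is detected only a minority of the time, which is why a rigorous lower limit on sampling volume matters.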
NASA Astrophysics Data System (ADS)
Gurovich, B. A.; Kuleshova, E. A.; Frolov, A. S.; Maltsev, D. A.; Prikhodko, K. E.; Fedotova, S. V.; Margolin, B. Z.; Sorokin, A. A.
2015-10-01
A complex study of the structural state and properties of 18Cr-10Ni-Ti austenitic stainless steel after irradiation in the BOR-60 fast research reactor (in the temperature range 330-400 °C, up to damaging doses of 145 dpa) and in the VVER-1000 light water reactor (at a temperature of ∼320 °C and damaging doses of ∼12-14 dpa) was performed. The possibility of recovering the structural-phase state and mechanical properties to a level almost corresponding to the initial state by recovery annealing was studied. The principal possibility of recovery annealing of pressurized water reactor internals, ensuring almost complete recovery of their mechanical properties and microstructure, was shown. The optimal recovery annealing mode was established: 1000 °C for 120 h.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert
2012-01-01
The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Therefore conventional error budget methods could not be applied. Instead, high fidelity Monte-Carlo simulation was performed and error bounds were determined based on the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for FBC jettison deploy.
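A toy Monte-Carlo version of such an error-budget analysis (the standard-atmosphere inversion is textbook; the error magnitudes below are placeholders, not Orion's actual budget):

```python
import numpy as np

def pressure_to_altitude(p, p0=101325.0, T0=288.15, L=0.0065,
                         g=9.80665, R=8.31446, M=0.0289644):
    """Standard-atmosphere (troposphere) inversion of the barometric formula."""
    return (T0 / L) * (1.0 - (p / p0) ** (L * R / (g * M)))

rng = np.random.default_rng(1)
n = 100_000
p_true = 70_000.0                        # ~3 km altitude, for illustration
# Illustrative, non-independent-free error sources (values are placeholders):
sensor_bias  = rng.normal(0, 150, n)     # Pa, per-unit calibration bias
sensor_noise = rng.normal(0, 50, n)      # Pa, measurement noise
aero_error   = rng.normal(300, 200, n)   # Pa, local-flow (aerodynamic) error

h  = pressure_to_altitude(p_true + sensor_bias + sensor_noise + aero_error)
h0 = pressure_to_altitude(p_true)
err = h - h0
print(np.percentile(err, [0.135, 50, 99.865]))   # ~3-sigma altitude error bounds
```

Note how the biased aerodynamic term shifts the whole error distribution rather than widening it, which is exactly why a conventional root-sum-square budget (which assumes independent, zero-mean contributors) was not applicable and Monte-Carlo bounds were used instead.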
Specification-based Error Recovery: Theory, Algorithms, and Usability
2013-02-01
transmuting the specification to an implementation at run-time and reducing the performance overhead. A suite of techniques and tools were designed...
NASA Astrophysics Data System (ADS)
Allchin, Douglas
2003-05-01
Using several familiar examples - Gregor Mendel, H. B. D. Kettlewell, Alexander Fleming, Ignaz Semmelweis, and William Harvey - I analyze how educators currently frame historical stories to portray the process of science. They share a rhetorical architecture of myth, which misleads students about how science derives its authority. Narratives of error and recovery from error, alternatively, may importantly illustrate the nature of science, especially its limits. Contrary to recent claims for reform, we do not need more history in science education. Rather, we need different types of history that convey the nature of science more effectively.
Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).
Domené, E A; Martínez, O E
2013-01-01
An innovative focus error detection method is presented that is sensitive only to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which depends only on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).
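A sketch of the curve-fitting step, with a hypothetical single-pole model for the defocusing signal and an assumed relation between cutoff frequency, spot size, and diffusivity (neither is taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical single-pole model for defocusing amplitude vs. modulation
# frequency; the paper's actual fit function is not given in the abstract.
def defocus_model(f, A, fc):
    return A / np.sqrt(1.0 + (f / fc) ** 2)

f = np.logspace(1, 5, 40)                 # pump modulation frequencies (Hz)
fc_true, A_true = 2.0e3, 1.0
rng = np.random.default_rng(2)
signal = defocus_model(f, A_true, fc_true) + rng.normal(0, 0.01, f.size)

(A_fit, fc_fit), _ = curve_fit(defocus_model, f, signal, p0=[1.0, 1e3])
# With fc in hand, diffusivity would follow from the pump spot size w via a
# relation of the assumed form D ~ pi * fc * w**2 (illustrative only).
w = 5e-6
print(fc_fit, np.pi * fc_fit * w ** 2)
```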
Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1989-01-01
Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.
Hsu, Nina S.; Novick, Jared M.
2016-01-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes though, initial interpretations turn out wrong. How do listeners revise misinterpretations of language input moment-by-moment, to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry, and then initiating behavioral adjustments accordingly. However, no research has investigated a cause-and-effect interplay between cognitive control engagement and overriding erroneous interpretations in real-time. Using a novel cross-task paradigm, we show that Stroop-conflict detection, which mobilizes cognitive control procedures, subsequently facilitates listeners’ incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed Stroop-incongruent versus-congruent items, listeners’ eye-movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive control engagement therefore accelerates sentence re-interpretation processes, even as linguistic input is still unfolding. PMID:26957521
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
Software design for automated assembly of truss structures
NASA Technical Reports Server (NTRS)
Herstrom, Catherine L.; Grantham, Carolyn; Allen, Cheryl L.; Doggett, William R.; Will, Ralph W.
1992-01-01
Concern over the limited intravehicular activity time has increased the interest in performing in-space assembly and construction operations with automated robotic systems. A technique being considered at LaRC is a supervised-autonomy approach, which can be monitored by an Earth-based supervisor that intervenes only when the automated system encounters a problem. A test-bed to support evaluation of the hardware and software requirements for supervised-autonomy assembly methods was developed. This report describes the design of the software system necessary to support the assembly process. The software is hierarchical and supports both automated assembly operations and supervisor error-recovery procedures, including the capability to pause and reverse any operation. The software design serves as a model for the development of software for more sophisticated automated systems and as a test-bed for evaluation of new concepts and hardware components.
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
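The paper's iterative alignment and deletion-correcting codes are not detailed in the abstract. As a taste of deletion-correcting coding, the sketch below uses the classic Varshamov-Tenengolts (VT) single-deletion code with a brute-force decoder; this is a standard textbook construction, not the authors':

```python
def vt_syndrome(bits):
    """Varshamov-Tenengolts checksum: sum of i*x_i (1-indexed) mod (n+1)."""
    return sum(i * b for i, b in enumerate(bits, 1)) % (len(bits) + 1)

def correct_single_deletion(received, n, a=0):
    """Recover the unique length-n VT(a) codeword from which one bit was
    deleted, by trying every reinsertion (simple but O(n^2); practical VT
    decoders run in linear time)."""
    out = set()
    for pos in range(n):
        for bit in (0, 1):
            cand = received[:pos] + [bit] + received[pos:]
            if vt_syndrome(cand) == a:
                out.add(tuple(cand))
    assert len(out) == 1, "VT codes guarantee a unique correction"
    return list(out.pop())

word = [0, 1, 1, 1, 0, 0, 0, 0]
assert vt_syndrome(word) == 0            # a valid VT(0) codeword of length 8
received = word[:3] + word[4:]           # nanopore-style single deletion
print(correct_single_deletion(received, len(word)) == word)   # -> True
```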
Recoveries of rat lymph FA after administration of specific structured 13C-TAG.
Vistisen, Bodil; Mu, Huiling; Høy, Carl-Erik
2003-09-01
The potential of the specific structured TAG MLM [where M = caprylic acid (8:0) and L = linoleic acid (18:2n-6)] is the simultaneous delivery of energy and EFA. Compared with long-chain TAG (LLL), they may be more rapidly hydrolyzed and absorbed. This study examined the lymphatic recoveries of intragastrically administered L*L*L*, M*M*M*, ML*M, and ML*L* (where * = 13C-labeled FA) in rats. Lymph lipids were separated into lipid classes and analyzed by GC combustion isotope ratio MS. The recoveries of lymph TAG 18:2n-6 8 h after administration of L*L*L*, ML*M, and ML*L* were 38.6, 48.4, and 49.1%, respectively, whereas after 24 h the recoveries were approximately 50% in all experimental groups. The exogenous contribution to lymph TAG 18:2n-6 was approximately 80 and 60% at maximum absorption of the specific structured TAG and L*L*L*, respectively, 3-6 h after administration. The tendency toward more rapid recovery of exogenous long-chain FA following administration of specific structured TAG compared with long-chain TAG was probably due to fast hydrolysis. The lymphatic recovery of 8:0 was 2.2% 24 h after administration of M*M*M*. This minor lymphatic recovery of exogenous 8:0 was probably due to low stimulation of chylomicron formation. These results demonstrate tendencies toward faster lymphatic recovery of long-chain FA after administration of specific structured TAG compared with long-chain TAG.
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
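A minimal sketch of the detection logic as described (the z-score significance measure and the threshold values are illustrative choices, not the patent's specification):

```python
import numpy as np

def detect_anomaly(measured, expected, sigma, z_thresh=3.0, persist_thresh=5):
    """Flag a structural anomaly when the modeling error stays statistically
    significant for persist_thresh consecutive samples."""
    persistence = 0
    for t, (m, e) in enumerate(zip(measured, expected)):
        z = abs(m - e) / sigma                       # error significance
        persistence = persistence + 1 if z > z_thresh else 0
        if persistence >= persist_thresh:
            return t          # sample index at which the anomaly is declared
    return None

rng = np.random.default_rng(3)
expected = np.sin(np.linspace(0, 20, 400))           # model prediction
measured = expected + rng.normal(0, 0.05, 400)       # healthy sensor data
measured[250:] += 0.4                                # simulated structural change
print(detect_anomaly(measured, expected, sigma=0.05))
```

The persistence requirement is the key design choice: a single noisy outlier clears the counter, so only sustained disagreement between measurement and model raises the flag.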
High-speed clock recovery unit based on a phase aligner
NASA Astrophysics Data System (ADS)
Tejera, Efrain; Esper-Chain, Roberto; Tobajas, Felix; De Armas, Valentin; Sarmiento, Roberto
2003-04-01
Nowadays, clock recovery units are key elements in high-speed digital communication systems. For efficient operation, these units should generate a low-jitter clock from the received NRZ data and be tolerant of long absences of transitions. Architectures based on Hogge phase detectors have been widely used; nevertheless, they are very sensitive to jitter in the received data and have limited tolerance of the absence of transitions. This paper presents a novel high-speed clock recovery unit based on a phase aligner. The system allows very fast clock recovery with low jitter and is very resistant to absences of transitions. The design is based on eight phases obtained from a reference clock running at the nominal frequency of the received signal. This high-speed reference clock is generated using a crystal and a clock multiplier unit. The phase alignment system chooses, as a starting point, the two phases closest to the data phase. This allows a maximum error of 45 degrees between the clock and data signal phases. Furthermore, the system includes a feedback loop that interpolates the chosen phases to reduce the phase error to zero. Due to the high stability and tight tolerance of the local reference clock, the jitter obtained is greatly reduced and the system remains able to operate under long absences of transitions. This performance makes the design suitable for systems such as high-speed serial link technologies. The system has been designed in 0.25 μm CMOS at 1.25 GHz and has been verified through HSpice simulations.
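A behavioral sketch of the phase-selection and interpolation step (a software model only; the actual design is mixed-signal CMOS):

```python
import numpy as np

N_PHASES = 8
PHASES = np.arange(N_PHASES) * 45.0    # eight reference clock phases, degrees

def align(data_phase_deg):
    """Pick the two reference phases bracketing the measured data phase
    (initial error at most 45 degrees), then interpolate between them;
    in hardware a feedback loop drives the residual error toward zero."""
    d = data_phase_deg % 360.0
    lo = int(d // 45.0) % N_PHASES
    hi = (lo + 1) % N_PHASES
    frac = (d - PHASES[lo]) / 45.0     # interpolation weight in [0, 1)
    clock_phase = (PHASES[lo] + 45.0 * frac) % 360.0
    return lo, hi, clock_phase

print(align(97.0))   # -> phases 2 and 3 selected, interpolated clock at 97 deg
```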
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, which provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
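A simplified motion-vector-extrapolation sketch in the spirit of (but not identical to) HMVE, assuming per-block motion vectors from the previous frame are available:

```python
import numpy as np

def extrapolate_frame_mvs(prev_mvs, block=16, h=288, w=352):
    """Estimate motion vectors for a lost frame by projecting each block of
    the previous frame along its own motion and accumulating votes on the
    blocks it lands on. A simplified MVE scheme, not the full hybrid one."""
    gh, gw = h // block, w // block
    votes = np.zeros((gh, gw, 2))
    counts = np.zeros((gh, gw))
    for by in range(gh):
        for bx in range(gw):
            mv = prev_mvs[by, bx]                   # (dy, dx) of this block
            cy = by * block + block // 2 + mv[0]    # projected block center
            cx = bx * block + block // 2 + mv[1]
            ty, tx = int(cy) // block, int(cx) // block
            if 0 <= ty < gh and 0 <= tx < gw:
                votes[ty, tx] += mv
                counts[ty, tx] += 1
    mvs = np.zeros_like(votes)
    covered = counts > 0
    mvs[covered] = votes[covered] / counts[covered, None]
    return mvs, covered    # uncovered blocks need a fallback (e.g. zero MV)

prev = np.zeros((288 // 16, 352 // 16, 2))
prev[..., 1] = 4.0                       # uniform rightward pan, 4 px/frame
est, covered = extrapolate_frame_mvs(prev)
print(est[0, 0], covered.mean())         # recovered MV and coverage fraction
```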
Runtime Verification in Context: Can Optimizing Error Detection Improve Fault Diagnosis
NASA Technical Reports Server (NTRS)
Dwyer, Matthew B.; Purandare, Rahul; Person, Suzette
2010-01-01
Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.
NASA Technical Reports Server (NTRS)
Stanley, H. R.; Martin, C. F.; Roy, N. A.; Vetter, J. R.
1971-01-01
Error analyses were performed to examine the height error in a relative sea-surface profile as determined by a combination of land-based multistation C-band radars and optical lasers and one ship-based radar tracking the GEOS 2 satellite. It was shown that two relative profiles can be obtained: one using available south-to-north passes of the satellite and one using available north-to-south type passes. An analysis of multi-station tracking capability determined that only Antigua and Grand Turk radars are required to provide satisfactory orbits for south-to-north type satellite passes, while a combination of Merritt Island, Bermuda, and Wallops radars provide secondary orbits for north-to-south passes. Analysis of ship tracking capabilities shows that high elevation single pass range-only solutions are necessary to give only moderate sensitivity to systematic error effects.
PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.
Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T
2016-01-01
The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one-year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078 respectively. The three expert raters agreed that T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques. The potential to visualize the likely course of recovery has implications for clinical decision-making, as well as trial enrichment.
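A sketch of the leave-one-lesion-out cross-validation used to score predictive error (synthetic data and a plain linear model stand in for the study's imaging/clinical predictors):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def loo_cv_rmse(X, y):
    """Leave-one-out cross-validated RMSE: refit with each lesion held out,
    predict it, and pool the prediction errors."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = LinearRegression().fit(X[mask], y[mask])
        errs[i] = y[i] - model.predict(X[i:i + 1])[0]
    return np.sqrt(np.mean(errs ** 2))

rng = np.random.default_rng(4)
X = rng.normal(size=(401, 5))     # 401 lesions, 5 stand-in features
y = X @ rng.normal(size=5) + rng.normal(0, 0.5, size=401)
print(loo_cv_rmse(X, y))
```

Comparing this cross-validated RMSE against the known measurement error, as the study does for T1w and MTR, is what shows whether the model predicts about as well as the data can be measured.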
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
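The RS/RCPC concatenation itself is beyond a short sketch; as a stand-in for the packet-erasure protection layer, the following illustrates recovery of one lost packet per group using a single XOR parity packet (far weaker than an actual RS code, purely illustrative):

```python
import numpy as np

def add_parity(packets):
    """Append one XOR parity packet per group; any single lost packet in
    the group can then be rebuilt from the survivors."""
    parity = np.bitwise_xor.reduce(packets, axis=0)
    return np.vstack([packets, parity])

def recover(group, lost):
    """Rebuild the single packet at index `lost` by XOR-ing the survivors."""
    survivors = np.delete(group, lost, axis=0)
    return np.bitwise_xor.reduce(survivors, axis=0)

rng = np.random.default_rng(5)
packets = rng.integers(0, 256, size=(4, 188), dtype=np.uint8)  # video packets
group = add_parity(packets)
print(np.array_equal(recover(group, 2), packets[2]))            # -> True
```

A real RS(n, k) erasure code generalizes this to tolerate n-k losses per group, while the RCPC inner code handles residual bit errors inside delivered packets, which is the division of labor the paper exploits.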
75 FR 18492 - Investing in Innovation Fund; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-12
... those disciplines, we intended to include computer science rather than science. To correct this error... ``including computer science.'' Program Authority: Section 14007 of division A of the American Recovery and....g., braille, large print, audiotape, or computer diskette) on request to the contact listed in this...
Resources and Long-Range Forecasts
ERIC Educational Resources Information Center
Smith, Waldo E.
1973-01-01
The author argues that forecasts of quick depletion of resources in the environment as a result of overpopulation and increased usage may not be free from error. Ignorance still exists in understanding the recovery mechanisms of nature. Long-range forecasts are likely to be wrong in such situations. (PS)
38 CFR 3.343 - Continuance of total disability ratings.
Code of Federal Regulations, 2013 CFR
2013-07-01
... error, without examination showing material improvement in physical or mental condition. Examination... period during which a total rating is provided will not be able to maintain inactivity of the disease... reason thereof unless there is received evidence of marked improvement or recovery in physical or mental...
38 CFR 3.343 - Continuance of total disability ratings.
Code of Federal Regulations, 2014 CFR
2014-07-01
... error, without examination showing material improvement in physical or mental condition. Examination... period during which a total rating is provided will not be able to maintain inactivity of the disease... reason thereof unless there is received evidence of marked improvement or recovery in physical or mental...
38 CFR 3.343 - Continuance of total disability ratings.
Code of Federal Regulations, 2012 CFR
2012-07-01
... error, without examination showing material improvement in physical or mental condition. Examination... period during which a total rating is provided will not be able to maintain inactivity of the disease... reason thereof unless there is received evidence of marked improvement or recovery in physical or mental...
38 CFR 3.343 - Continuance of total disability ratings.
Code of Federal Regulations, 2011 CFR
2011-07-01
... error, without examination showing material improvement in physical or mental condition. Examination... period during which a total rating is provided will not be able to maintain inactivity of the disease... reason thereof unless there is received evidence of marked improvement or recovery in physical or mental...
Simultaneous Translation: Idiom Interpretation and Parsing Heuristics.
ERIC Educational Resources Information Center
McDonald, Janet L.; Carpenter, Patricia A.
1981-01-01
Presents a model of interpretation, parsing and error recovery in simultaneous translation using two experts and two amateur German-English bilingual translators orally translating from English to German. Argues that the translator first comprehends the text in English and divides it into meaningful units before translating. Study also…
Investigating Learning with an Interactive Tutorial: A Mixed-Methods Strategy
ERIC Educational Resources Information Center
de Villiers, M. R.; Becker, Daphne
2017-01-01
From the perspective of parallel mixed-methods research, this paper describes interactivity research that employed usability-testing technology to analyse cognitive learning processes; personal learning styles and times; and errors-and-recovery of learners using an interactive e-learning tutorial called "Relations." "Relations"…
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10^-15 sec sec^-1 (or approx. 0.002 mm sec^-1) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
An assessment of gravity model improvements using TOPEX/Poseidon TDRSS observations
NASA Technical Reports Server (NTRS)
Putney, B. H.; Teles, J.; Eddy, W. F.; Klosko, S. M.
1992-01-01
The contribution of TOPEX/Poseidon (T/P) TDRSS data to geopotential model recovery is assessed. Simulated TDRSS one-way and Bilateration Ranging Transponder System (BRTS) observations have been generated and orbitally reduced to form normal equations for geopotential parameters. These normals have been combined with those of the latest prelaunch T/P gravity model solution using data from over 30 satellites. A study of the resulting solution error covariance shows that TDRSS can make important contributions to geopotential recovery, especially for improving T/P-specific effects like those arising from orbital resonance. It is argued that future effort is desirable both to establish TDRSS orbit determination limits in a reference frame compatible with that used for the precise laser/DORIS orbits, and to reduce these TDRSS data for geopotential recovery.
2013-01-01
Background: Cardiovascular magnetic resonance (CMR) T1 mapping indices, such as T1 time and partition coefficient (λ), have shown potential to assess diffuse myocardial fibrosis. The purpose of this study was to investigate how scanner and field strength variation affect the accuracy and precision/reproducibility of T1 mapping indices. Methods: CMR studies were performed on two 1.5T and three 3T scanners. Eight phantoms were made to mimic the T1/T2 of pre- and post-contrast myocardium and blood at 1.5T and 3T. T1 mapping using MOLLI was performed with simulated heart rate of 40-100 bpm. Inversion recovery spin echo (IR-SE) was the reference standard for T1 determination. Accuracy was defined as the percent error between MOLLI and IR-SE, and scan/re-scan reproducibility was defined as the relative percent mean difference between repeat MOLLI scans. Partition coefficient was estimated by ΔR1myocardium phantom/ΔR1blood phantom. Generalized linear mixed model was used to compare the accuracy and precision/reproducibility of T1 and λ across field strength, scanners, and protocols. Results: Field strength significantly affected MOLLI T1 accuracy (6.3% error for 1.5T vs. 10.8% error for 3T, p<0.001) but not λ accuracy (8.8% error for 1.5T vs. 8.0% error for 3T, p=0.11). Partition coefficients of MOLLI were not different between two 1.5T scanners (47.2% vs. 47.9%, p=0.13), and showed only slight variation across three 3T scanners (49.2% vs. 49.8% vs. 49.9%, p=0.016). Partition coefficient also had significantly lower percent error for precision (better scan/re-scan reproducibility) than measurement of individual T1 values (3.6% for λ vs. 4.3%-4.8% for T1 values, approximately, for pre/post blood and myocardium values). Conclusion: Based on phantom studies, T1 errors using MOLLI ranged from 6-14% across various MR scanners while errors for partition coefficient were less (6-10%). Compared with absolute T1 times, partition coefficient showed less variability across platforms and field strengths as well as higher precision. PMID:23890156
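The partition coefficient itself is a one-line computation from pre- and post-contrast T1 times, with R1 = 1/T1; a minimal sketch with illustrative values (not the study's measurements):

```python
def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    """lambda = dR1(myocardium) / dR1(blood), with R1 = 1/T1 (times in ms)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return d_r1_myo / d_r1_blood

# Illustrative pre/post-contrast T1 times (ms), not values from the study:
print(partition_coefficient(950.0, 450.0, 1550.0, 300.0))   # ~0.44
```

Because λ is a ratio of contrast-induced R1 changes, scanner-specific and field-strength-specific offsets in T1 tend to cancel, which is consistent with the lower cross-platform variability the study reports.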
Sequence-structure mapping errors in the PDB: OB-fold domains
Venclovas, Česlovas; Ginalski, Krzysztof; Kang, Chulhee
2004-01-01
The Protein Data Bank (PDB) is the single most important repository of structural data for proteins and other biologically relevant molecules. Therefore, it is critically important to keep the PDB data, as much as possible, error-free. In this study, we have analyzed PDB crystal structures possessing oligonucleotide/oligosaccharide binding (OB)-fold, one of the highly populated folds, for the presence of sequence-structure mapping errors. Using energy-based structure quality assessment coupled with sequence analyses, we have found that there are at least five OB-structures in the PDB that have regions where sequences have been incorrectly mapped onto the structure. We have demonstrated that the combination of these computation techniques is effective not only in detecting sequence-structure mapping errors, but also in providing guidance to correct them. Namely, we have used results of computational analysis to direct a revision of X-ray data for one of the PDB entries containing a fairly inconspicuous sequence-structure mapping error. The revised structure has been deposited with the PDB. We suggest use of computational energy assessment and sequence analysis techniques to facilitate structure determination when homologs having known structure are available to use as a reference. Such computational analysis may be useful in either guiding the sequence-structure assignment process or verifying the sequence mapping within poorly defined regions. PMID:15133161
Network topology and resilience analysis of South Korean power grid
NASA Astrophysics Data System (ADS)
Kim, Dong Hwan; Eisenberg, Daniel A.; Chun, Yeong Han; Park, Jeryang
2017-01-01
In this work, we present topological and resilience analyses of the South Korean power grid (KPG) across a broad range of voltage levels. While topological analysis of the KPG restricted to high-voltage infrastructure shows an exponential degree distribution, providing further empirical evidence on power grid topology, the inclusion of low-voltage components generates a distribution with a larger variance and a smaller average degree. This result suggests that the topology of a power grid may converge to a highly skewed degree distribution if more low-voltage data are considered. Moreover, when compared to ER random and BA scale-free networks, the KPG has lower efficiency and a higher clustering coefficient, implying that a highly clustered structure does not necessarily guarantee the functional efficiency of a network. Error and attack tolerance analysis, evaluated with efficiency, indicates that the KPG is more vulnerable to random or degree-based attacks than to betweenness-based intentional attack. Cascading failure analysis with a recovery mechanism demonstrates that the resilience of the network depends on both tolerance capacity and recovery initiation time. Also, when the two factors are fixed, the KPG is the most vulnerable among the three networks. Based on our analysis, we propose that the topology of power grids should be designed so that loads are homogeneously distributed, or so that functional hubs and their neighbors have high tolerance capacity, to enhance resilience.
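An illustrative version of the efficiency-based attack-tolerance analysis, using networkx on stand-in ER and BA topologies (the KPG data itself is not reproduced here):

```python
import networkx as nx

def attack_tolerance(G, frac=0.05, mode="degree"):
    """Remove the top-frac nodes by the chosen centrality and report the
    drop in global efficiency."""
    G = G.copy()
    if mode == "degree":
        rank = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
    else:  # betweenness-based intentional attack
        rank = sorted(nx.betweenness_centrality(G).items(),
                      key=lambda kv: kv[1], reverse=True)
    e0 = nx.global_efficiency(G)
    G.remove_nodes_from([n for n, _ in rank[: int(frac * len(G))]])
    return e0, nx.global_efficiency(G)

# Stand-in topologies with comparable size and edge count:
er = nx.gnm_random_graph(500, 1000, seed=0)
ba = nx.barabasi_albert_graph(500, 2, seed=0)
for name, g in [("ER", er), ("BA", ba)]:
    print(name, attack_tolerance(g, mode="degree"))
```

Running both attack modes on the same graph gives the kind of comparison the paper draws: hub-targeted (degree-based) removal degrades efficiency far more on the skewed BA topology than random failures do.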
Simulation studies of the fidelity of biomolecular structure ensemble recreation
NASA Astrophysics Data System (ADS)
Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.
2006-12-01
We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple replica simulation algorithm using input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low resolution data, which function as the "experimental" ϕ values, were first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input in the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble, from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high barrier transition state with two distinct transition state ensembles, the single replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single replica case, the multiple replicas are constrained to reproduce the average ϕ values, but allow fluctuations in ϕ for each individual copy. These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged) and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.
Linking models and data on vegetation structure
NASA Astrophysics Data System (ADS)
Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.
2010-06-01
For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach that utilizes a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse-resolution averaging lead to model initialization error, which in turn propagates into subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.
Kernelized Elastic Net Regularization: Generalization Bounds and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study of KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present a refined error analysis of the generalization performance of KENReg. The main difficulty in analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct an elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg, including sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg can be simultaneously stable and sparse, which makes it attractive theoretically and practically.
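The optimization at the heart of KENReg can be sketched with a generic proximal-gradient (ISTA) solver; the kernel choice, regularization weights, and toy data below are assumptions for illustration, not the authors' implementation:

    import numpy as np

    def gaussian_kernel(X, Z, gamma=1.0):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def ken_reg(K, y, lam1=0.1, lam2=0.1, iters=500):
        # ISTA on min_c ||y - K c||^2 + lam2 ||c||^2 + lam1 ||c||_1
        L = 2 * (np.linalg.norm(K, 2) ** 2 + lam2)   # Lipschitz constant of smooth part
        c = np.zeros(K.shape[1])
        for _ in range(iters):
            z = c - (2 * K.T @ (K @ c - y) + 2 * lam2 * c) / L
            c = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
        return c

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (80, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(80)
    c = ken_reg(gaussian_kernel(X, X), y)
    print("nonzero coefficients: %d of %d" % ((np.abs(c) > 1e-8).sum(), c.size))

The print line exposes the sparseness property the abstract refers to: the ℓ1 term drives most dictionary coefficients to exactly zero.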
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data in structural vibration response time histories often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. In addition, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the data themselves, the structural vibration responses, to address this inverse problem. The key is an empirical, but often practically valid, observation: typically only a few modes are active in the structural vibration responses; hence the single-channel data vector has a sparse representation in the frequency domain, and the multi-channel data matrix has a low-rank structure (revealed by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparsity or inter-channel low rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
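A minimal single-channel sketch of the ℓ1-minimization alternative (toy signal and solver settings are assumed; the papers' data sets are not reproduced here): a response with a few active modes is randomly subsampled and recovered by ISTA in a unitary DFT basis.

    import numpy as np

    n = 512
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 47 * t)  # few active modes
    keep = np.random.default_rng(0).random(n) > 0.5                    # ~50% data missing
    y = x[keep]

    F = np.fft.fft(np.eye(n)) / np.sqrt(n)         # unitary DFT matrix
    A = F.conj().T[keep, :]                        # subsampled synthesis operator
    lam, L = 0.1, 2.0                              # step size uses spectral norm of A <= 1
    z = np.zeros(n, dtype=complex)
    for _ in range(300):                           # ISTA iterations
        w = z - 2 * A.conj().T @ (A @ z - y) / L
        z = w * np.maximum(1 - lam / L / np.maximum(np.abs(w), 1e-12), 0.0)
    x_rec = np.real(F.conj().T @ z)
    print("recovery RMSE: %.4f" % np.sqrt(np.mean((x_rec - x) ** 2)))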
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work has three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity-modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and the 3D gamma test. To test the error sensitivity of the 3D QA protocol, two prostate and two thoracic step-and-shoot IMRT patients were investigated. Errors introduced into each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine whether QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; no more than 4% of any structure volume may have an absolute dose difference greater than 6% (ADD6); and no more than 4% of any structure volume may fail the 3D gamma test with 3%/3 mm DTA parameters. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 of 18 introduced errors; 2D QA detected 11 of 18 errors, and 3D QA detected 14 of 18 errors. PACS number: 87.56.Fc PMID:26699299
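For reference, the pass/fail core of a gamma test can be sketched in one dimension as below (illustrative only; clinical 3D gamma implementations search volumetrically and handle interpolation):

    import numpy as np

    def gamma_pass_rate(d_ref, d_eval, x, dose_tol=0.03, dist_tol=3.0):
        """Global 3%/3 mm gamma pass rate for 1-D dose profiles on positions x (mm)."""
        norm = d_ref.max()
        passed = []
        for i, xi in enumerate(x):
            dd = (d_eval - d_ref[i]) / (dose_tol * norm)   # dose-difference term
            dx = (x - xi) / dist_tol                       # distance-to-agreement term
            passed.append(np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0)
        return np.mean(passed)

    x = np.linspace(0, 100, 201)                # mm
    ref = np.exp(-((x - 50) / 20) ** 2)         # toy reference dose profile
    ev = 1.02 * np.exp(-((x - 51) / 20) ** 2)   # measured: 1 mm shift, +2% dose
    print("gamma pass rate: %.1f%%" % (100 * gamma_pass_rate(ref, ev, x)))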
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.
1994-01-01
It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDEs) may, because of boundary conditions, lead to deterioration of accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDEs (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy and is applied to the case of a 4th-order explicit finite difference scheme.
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described, and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to Rician fading is shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits performance similar to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Arneson, Michael R [Chippewa Falls, WI; Bowman, Terrance L [Sumner, WA; Cornett, Frank N [Chippewa Falls, WI; DeRyckere, John F [Eau Claire, WI; Hillert, Brian T [Chippewa Falls, WI; Jenkins, Philip N [Eau Claire, WI; Ma, Nan [Chippewa Falls, WI; Placek, Joseph M [Chippewa Falls, WI; Ruesch, Rodney [Eau Claire, WI; Thorson, Gregory M [Altoona, WI
2007-07-24
The present invention is directed toward a communications channel comprising a link level protocol, a driver, a receiver, and a canceller/equalizer. The link level protocol provides logic for DC-free signal encoding and recovery and supports many features, including CRC error detection and message resend, to accommodate infrequent bit errors across the medium. The canceller/equalizer provides equalization for destabilized data signals and also provides simultaneous bi-directional data transfer. The receiver provides bit deskewing by removing synchronization error, or skew, between data signals. The driver provides impedance control by monitoring characteristics of the communications medium, such as voltage or temperature, and providing a matching output impedance in the signal driver so that fewer distortions occur while the data travel across the communications medium.
Moran, Galia S; Zisman-Ilani, Yaara; Garber-Epstein, Paula; Roe, David
2014-03-01
Recovery is supported by relationships that are characterized by human centeredness, empowerment, and a hopeful approach. The Recovery Promoting Relationships Scale (RPRS; Russinova, Rogers, & Ellison, 2006) assesses consumer-provider relationships from the consumer perspective. Here we present the adaptation and psychometric assessment of a Hebrew version of the RPRS. The RPRS was translated into Hebrew (RPRS-Heb) using multiple strategies to assure conceptual soundness. Then 216 mental health consumers were administered the RPRS-Heb as part of a larger project implementing an illness management and recovery (IMR) intervention in community settings. Psychometric testing included assessment of the factor structure, reliability, and validity using the Hope Scale, the Working Alliance Inventory, and the Recovery Assessment Scale. The RPRS-Heb replicated the two-factor structure found in the original scale, with minor exceptions. Reliability estimates were good: Cronbach's alpha was 0.94 for the total scale, 0.93 for the Recovery-Promoting Strategies factor, and 0.86 for the Core Relationship factor. Concurrent validity was confirmed using the Working Alliance Scale (rp = .51, p < .001) and the Hope Scale (rp = .43, p < .001). Criterion validity was examined using the Recovery Assessment Scale (rp = .355, p < .05). The study yielded a 23-item RPRS-Heb version with a psychometrically sound factor structure, satisfactory reliability, and concurrent validity tested against the Hope, Alliance, and Recovery Assessment scales. Outcomes are discussed in the context of the original scale properties and a similar Dutch initiative. The RPRS-Heb can serve as a valuable tool for studying recovery-promoting relationships in Hebrew-speaking populations.
Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A
2015-10-01
Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with recovery in the absence of the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressed sensing along with the peak detection technique and wavelet transform at these sampling rates reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
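The recovery-block technique mentioned above has a simple canonical shape, sketched here (a generic illustration, not tied to any particular system): run the primary version from a checkpoint, check its output with an acceptance test, and fall back to independently written alternates if the test fails.

    def recovery_block(state, versions, acceptance_test):
        checkpoint = dict(state)                 # establish the recovery point
        for version in versions:                 # primary first, then alternates
            try:
                result = version(dict(checkpoint))   # each runs from the checkpoint
                if acceptance_test(result):
                    return result
            except Exception:
                pass                             # a crash counts as a failed test
        raise RuntimeError("all versions failed the acceptance test")

    primary = lambda s: s["x"] * 0.5             # fast but faulty approximation
    alternate = lambda s: s["x"] ** 0.5          # slower, independently written version
    ok = lambda r: abs(r * r - 16.0) < 1e-6      # acceptance test
    print(recovery_block({"x": 16.0}, [primary, alternate], ok))   # -> 4.0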
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure in the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Short-term memory capacity in networks via the restricted isometry property.
Charles, Adam S; Yap, Han Lun; Rozell, Christopher J
2014-06-01
Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when statistical data are reduced to deterministic values and then combined algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because the means and variations of the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
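The point can be illustrated numerically. Using hypothetical stress contributions, the sketch below contrasts serially adding tolerance limits (mean + k*sigma per stage) with combining the variations by the error propagation law (root-sum-square):

    import numpy as np

    means = np.array([100.0, 40.0, 25.0])    # hypothetical stress contributions
    sigmas = np.array([5.0, 3.0, 2.0])       # their standard deviations
    k = 3.0                                  # tolerance-limit factor

    serial = np.sum(means + k * sigmas)                       # deterministic practice
    rss = np.sum(means) + k * np.sqrt(np.sum(sigmas ** 2))    # error propagation law
    print("serially added limit: %.1f" % serial)              # 195.0
    print("RSS-propagated limit: %.1f" % rss)                 # ~183.5

The serial sum is always at least as large as the RSS value, which is the positive, cumulative conservatism described above.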
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2009-10-01
The American Recovery and Reinvestment Act of 2009 (Recovery Act) was established to jumpstart the U.S. economy, create or save millions of jobs, spur technological advances in health and science, and invest in the Nation's energy future. The Department of Energy (Department) will receive an unprecedented $37 billion in Recovery Act funding to support a variety of science, energy, and environmental initiatives. The majority of the funding received by the Department will be allocated to various recipients through grants, cooperative agreements, contracts, and other financial instruments. To ensure transparency and accountability, the Office of Management and Budget (OMB) requires that recipients report on their receipt and use of Recovery Act funds on a quarterly basis to FederalReporting.gov. OMB also specifies that Federal agencies should develop and implement formal procedures to help ensure the quality of recipient reported information. Data that must be reported by recipients includes total funding received; funds expended or obligated; projects or activities for which funds were obligated or expended; and the number of jobs created and/or retained. OMB requires that Federal agencies perform limited data quality reviews of recipient data to identify material omissions and/or significant reporting errors and notify the recipients of the need to make appropriate and timely changes to erroneous reports. As part of a larger audit of recipient Recovery Act reporting and performance measurement and in support of a Government-wide review sponsored by the Recovery Accountability and Transparency Board, we completed an interim review to determine whether the Department had established a process to ensure the quality and accuracy of recipient reports. Our review revealed that the Department had developed a quality assurance process to facilitate the quarterly reviews of recipient data. The process included procedures to compare existing information from the Department's financial information systems with that reported to FederalReporting.gov by recipients. In addition, plans were in place to notify recipients of anomalies and/or errors exposed by the quality assurance process. While the Department has made a good deal of progress in this area, we did, however, identify several issues which could, if not addressed, impact the effectiveness of the quality assurance process.
Thermally Activated Composite with Two-Way and Multi-Shape Memory Effects
Basit, Abdul; L’Hostis, Gildas; Pac, Marie José; Durand, Bernard
2013-01-01
The use of shape memory polymer composites is growing rapidly in smart structure applications. In this work, an active asymmetric composite called "controlled behavior composite material (CBCM)" is used as the shape memory polymer composite. The programming and the corresponding initial fixity of the composite structure are obtained during a bending test by heating the CBCM above the glass transition temperature of the epoxy polymer used. The shape memory properties of these composites are investigated by a bending test. Three types of recovery tests are conducted: two classical tests, unconstrained recovery and constrained recovery, and a new test of partial recovery under load. During recovery, high recovery displacement and force are produced, which enables the composite to perform strong two-way actuation along with a multi-shape memory effect. The recovery force confirms full recovery with two-way actuation even under a high load. This unique property of CBCM is characterized by the recovered mechanical work. PMID:28788316
Iwata, Akira; Fuchioka, Satoshi; Hiraoka, Koichi; Masuhara, Mitsuhiko; Kami, Katsuya
2010-05-01
Although numerous studies have aimed to elucidate the mechanisms used to repair the structure and function of injured skeletal muscles, it remains unclear how and when movement recovers following damage. We performed a temporal analysis to characterize the changes in movement, muscle function, and muscle structure after muscle injury induced by the drop-mass technique. At each time-point, movement recovery was determined by ankle kinematic analysis of locomotion, and functional recovery was represented by isometric force. As a histological analysis, the cross-sectional area of myotubes was measured to examine structural regeneration. The dorsiflexion angle of the ankle, as assessed by kinematic analysis of locomotion, increased after injury and then returned to control levels by day 14 post-injury. The isometric force returned to normal levels by day 21 post-injury. However, the size of the myotubes did not reach normal levels, even at day 21 post-injury. These results indicate that recovery of locomotion occurs prior to recovery of isometric force and that functional recovery occurs earlier than structural regeneration. Thus, it is suggested that recovery of the movement and function of injured skeletal muscles might be insufficient as markers for estimating the degree of neuromuscular system reconstitution.
Structure and Processing in Tunisian Arabic: Speech Error Data
ERIC Educational Resources Information Center
Hamrouni, Nadia
2010-01-01
This dissertation presents experimental research on speech errors in Tunisian Arabic. The nonconcatenative morphology of Arabic shows interesting interactions of phrasal and lexical constraints with morphological structure during language production. The central empirical questions revolve around properties of "exchange errors". These…
An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-01-01
To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods. PMID:27669250
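The two-stage iteration described above can be sketched as follows (dimensions, learning rate, and penalty weight are assumptions; this is not the authors' implementation): orthogonal matching pursuit performs the sparse coding step, and a gradient step with a ||D^T D - I||_F^2 penalty performs the coherence-constrained dictionary update.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    n, k, s = 32, 48, 4                        # signal dim, atoms, sparsity
    D_true = rng.standard_normal((n, k))
    D_true /= np.linalg.norm(D_true, axis=0)   # hidden generating dictionary
    D = rng.standard_normal((n, k))
    D /= np.linalg.norm(D, axis=0)             # learned dictionary, random start

    def sparse_batch(m=16):
        C = np.zeros((k, m))
        for j in range(m):
            idx = rng.choice(k, s, replace=False)
            C[idx, j] = rng.standard_normal(s)
        return D_true @ C + 0.01 * rng.standard_normal((n, m))   # noisy sensed signals

    lam, lr = 0.1, 0.005
    for _ in range(200):                               # online: one mini-batch at a time
        Y = sparse_batch()
        C = orthogonal_mp(D, Y, n_nonzero_coefs=s)     # stage 1: sparse coding
        grad = (D @ C - Y) @ C.T                       # data-fit gradient
        grad += lam * 4 * D @ (D.T @ D - np.eye(k))    # self-coherence penalty gradient
        D -= lr * grad                                 # stage 2: dictionary update
        D /= np.linalg.norm(D, axis=0)
    print("max off-diagonal coherence:", np.abs(D.T @ D - np.eye(k)).max())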
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) (the 'power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series (the 'infinite radius' approximation) and the power-law approximation may recover the shear rate with accuracy equal to that of the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet-loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
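The erasure-recovery principle behind parity packets can be shown with a single XOR parity over a packet group (a deliberately simplified stand-in for the scalable, rate-allocated parity codes of the ER-UEP scheme):

    import secrets

    def xor_bytes(a, b):
        return bytes(u ^ v for u, v in zip(a, b))

    packets = [secrets.token_bytes(64) for _ in range(4)]   # one enhancement-layer group
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)                       # parity packet for the group

    lost = 2                                                # packet 2 dropped in transit
    rebuilt = parity
    for i, p in enumerate(packets):
        if i != lost:
            rebuilt = xor_bytes(rebuilt, p)
    assert rebuilt == packets[lost]                         # exact loss recovery

A single XOR parity repairs any one lost packet per group; stronger codes such as Reed-Solomon, as used in practice, trade more overhead for recovery from multiple losses.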
Karsten, Bettina; Baker, Jonathan; Naclerio, Fernando; Klose, Andreas; Bianco, Antonino; Nimmerichter, Alfred
2018-02-01
To investigate single-day time-to-exhaustion (TTE)- and time-trial (TT)-based laboratory test values of critical power (CP), W prime (W'), and the respective oxygen-uptake kinetic responses. Twelve cyclists performed a maximal ramp test followed by 3 TTE and 3 TT efforts interspersed with 60 min of recovery between efforts. Oxygen uptake (V̇O2) was measured during all trials. The mean response time was calculated as a description of the overall V̇O2 kinetic response from the onset to 2 min of exercise. TTE-determined CP was 279 ± 52 W, and TT-determined CP was 276 ± 50 W (P = .237). Values of W' were 14.3 ± 3.4 kJ (TTE W') and 16.5 ± 4.2 kJ (TT W') (P = .028). While a high level of agreement (-12 to 17 W) and a low prediction error of 2.7% were established for CP, the limits of agreement for W' were markedly lower (-8 to 3.7 kJ), with a prediction error of 18.8%. The mean standard error for TTE CP values was significantly higher than that for TT CP values (2.4% ± 1.9% vs. 1.2% ± 0.7%). The standard errors for TTE W' and TT W' were 11.2% ± 8.1% and 5.6% ± 3.6%, respectively. The V̇O2 response was significantly faster during TT (~22 s) than TTE (~28 s). The TT protocol with a 60-min recovery period offers a valid, time-saving, and less error-prone alternative to conventional and more recent testing methods. Results, however, cannot be transferred to W'.
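Both protocols rest on the standard two-parameter critical-power model, in which the total work of an exhaustive trial is W = CP * t + W'. A minimal sketch with hypothetical trial data (chosen to resemble the magnitudes reported above) fits CP and W' by linear regression of work on time:

    import numpy as np

    power = np.array([350.0, 320.0, 300.0])   # W, exhaustive trial powers (assumed)
    t_lim = np.array([180.0, 300.0, 480.0])   # s, times to exhaustion (assumed)
    work = power * t_lim                       # J, total work of each trial

    cp, w_prime = np.polyfit(t_lim, work, 1)   # slope = CP (W), intercept = W' (J)
    print("CP = %.0f W, W' = %.1f kJ" % (cp, w_prime / 1000))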
NASA Astrophysics Data System (ADS)
Xu, Tianhua; Jacobsen, Gunnar; Popov, Sergei; Li, Jie; Liu, Tiegen; Zhang, Yimo
2016-10-01
The performance of long-haul high-speed coherent optical fiber communication systems is significantly degraded by laser phase noise and equalization enhanced phase noise (EEPN). In this paper, an analysis of one-tap normalized least-mean-square (LMS) carrier phase recovery (CPR) is carried out, and a closed-form expression is derived for quadrature phase shift keying (QPSK) coherent optical fiber communication systems, compensating both laser phase noise and equalization enhanced phase noise. Numerical simulations have also been implemented to verify the theoretical analysis. It is found that the one-tap normalized least-mean-square algorithm gives the same analytical expression for predicting CPR bit-error-rate (BER) floors as traditional differential carrier phase recovery when both the laser phase noise and the equalization enhanced phase noise are taken into account.
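A minimal decision-directed sketch of one-tap normalized LMS carrier phase recovery for QPSK is given below (a toy channel with a phase-noise random walk only; EEPN and the paper's closed-form BER analysis are beyond this illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))   # QPSK symbols
    phase = np.cumsum(0.01 * rng.standard_normal(n))                    # laser phase walk
    noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    rx = tx * np.exp(1j * phase) + noise

    mu, w = 0.1, 1.0 + 0.0j
    err2 = np.empty(n)
    for k in range(n):
        y = w * rx[k]                                       # one-tap equalized sample
        quad = np.round((np.angle(y) - np.pi / 4) / (np.pi / 2)) % 4
        d = np.exp(1j * (np.pi / 4 + np.pi / 2 * quad))     # nearest QPSK decision
        e = d - y
        w += mu * e * np.conj(rx[k]) / np.abs(rx[k]) ** 2   # normalized LMS update
        err2[k] = np.abs(e) ** 2
    print("steady-state |e|^2: %.4f" % err2[-1000:].mean())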
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when synthesizing for more than five precision points of the coupler, and such syntheses incur large structural error. A higher-precision solution can be reached by applying optimization techniques. The work presented here considers methods for optimizing the structural error of a closed kinematic chain with a single degree of freedom generating functions such as log(x), e^x, tan(x), and sin(x) with five precision points. The Freudenstein equation with Chebyshev spacing is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained that verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative analysis of the structural error optimized by the least-squares method and by the extended Freudenstein-Chebyshev method is presented.
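The least-squares synthesis step can be sketched as follows, assuming one common sign convention of the Freudenstein equation, K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi), and hypothetical linear mappings from the function variables to the input and output angles. With five precision points and three unknowns the system is overdetermined, so the link parameters are solved in the least-squares sense:

    import numpy as np

    # Chebyshev spacing of five precision points for y = log10(x) on [1, 10]
    j = np.arange(1, 6)
    x = 5.5 + 4.5 * np.cos((2 * j - 1) * np.pi / 10)
    y = np.log10(x)

    # Hypothetical linear mappings from function variables to link angles
    phi = np.radians(30 + (x - 1) / 9 * 90)    # input angle over 30..120 deg
    psi = np.radians(60 + y * 90)              # output angle over 60..150 deg

    A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones(5)])
    b = np.cos(phi - psi)
    (K1, K2, K3), res, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("K1=%.4f K2=%.4f K3=%.4f residual=%s" % (K1, K2, K3, res))

The residual of this fit is one measure of the structural error that the optimization methods above seek to minimize.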
Piller, Kyle R; Geheber, Aaron D
2015-01-01
Anthropogenic perturbations impact aquatic systems causing wide-ranging responses, from assemblage restructuring to assemblage recovery. Previous studies indicate the duration and intensity of disturbances play a role in the dynamics of assemblage recovery. In August 2011, the Pearl River, United States, was subjected to a weak black liquor spill from a paper mill which resulted in substantial loss of fish in a large stretch of the main channel. We quantified resilience and recovery of fish assemblage structure in the impacted area following the event. We compared downstream (impacted) assemblages to upstream (unimpacted) assemblages to determine initial impacts on structure. Additionally, we incorporated historic fish collections (1988–2011) to examine impacts on assemblage structure across broad temporal scales. Based on NMDS, upstream and downstream sites generally showed similar assemblage structure across sample periods with the exception of the 2 months postdischarge, where upstream and downstream sites visually differed. Multivariate analysis of variance (PERMANOVA) indicated significant seasonal variation among samples, but found no significant interaction between impacted and unimpacted assemblages following the discharge event. However, multivariate dispersion (MVDISP) showed greater variance among assemblage structure following the discharge event. These results suggest that 2 months following the disturbance represent a time period of stochasticity in regard to assemblage structure dynamics, and this was followed by rapid recovery. We term this dynamic the “hangover effect” as it represents the time frame from the cessation of the perturbation to the assemblage's return to predisturbance conditions. The availability and proximity of tributaries and upstream refugia, which were not affected by the disturbance, as well as the rapid recovery of abiotic parameters likely played a substantial role in assemblage recovery. This study not only demonstrates rapid recovery in an aquatic system, but further demonstrates the value of continuous, long-term, data collections which enhance our understanding of assemblage dynamics. PMID:26120432
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
At the request of the Office of Solid Wastes (OSW), the SAB's Environmental Engineering Committee reviewed a draft Agency guidance for the establishment of Alternate Concentration Limits (ACL) for Resource Conservation and Recovery Act (RCRA) facilities, and two case studies demonstrating applications of the guidance. The Committee identified only obvious technical errors or omissions, which are explained in detail in the report.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can itself cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite-buffer queue supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Hsu, Nina S; Novick, Jared M
2016-04-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes, though, initial interpretations turn out to be wrong. How do listeners revise misinterpretations of language input moment by moment to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry and then initiating behavioral adjustments accordingly. However, no research to date has investigated a cause-and-effect interplay between cognitive-control engagement and the overriding of erroneous interpretations in real time. Using a novel cross-task paradigm, we showed that Stroop-conflict detection, which mobilizes cognitive-control procedures, subsequently facilitates listeners' incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed incongruent Stroop items, compared with congruent Stroop items, listeners' eye movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive-control engagement therefore accelerates sentence-reinterpretation processes, even as linguistic input is still unfolding. © The Author(s) 2016.
Study of fault tolerant software technology for dynamic systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Zacharias, G. L.
1985-01-01
The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.
Locked-mode avoidance and recovery without external momentum input
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L.; Gates, D. A.; Wolfe, S.; Rice, J. E.; Gao, C.; Wukitch, S.; Greenwald, M.; Hughes, J.; Marmar, E.; Scott, S.
2014-10-01
Error-field-induced locked modes (LMs) have been studied in C-Mod at ITER toroidal fields without NBI fueling and momentum input. The use of ICRH heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW and a transition into H-mode when PICRH > 2 MW. The recovery experiments consisted of applying ICRH power during the LM non-rotating phase, successfully unlocking the core plasma. The "induced" toroidal rotation was in the counter-current direction, restoring the direction and magnitude of the toroidal flow before the LM formation, but contrary to the expected Rice scaling in the co-current direction. However, the LM occurs near the LOC/SOC transition, where rotation reversals are commonly observed. Once PICRH is turned off, the core plasma "locks" at later times depending on the evolution of ne and Vt. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT and DE-AC02-09CH11466 at PPPL.
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
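The structure described above is easy to reproduce in a Monte Carlo sketch (all coefficients are assumptions for illustration): the outcome depends only on the strong factor Z, yet correlated measurement errors distort the estimated effect of the inconsequential factor X.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    z = rng.standard_normal(n)                   # true strong risk factor (confounder)
    x = 0.5 * z + rng.standard_normal(n)         # true inconsequential factor
    y = z + rng.standard_normal(n)               # outcome depends only on Z

    cov = [[1.0, 0.6], [0.6, 1.0]]               # correlated measurement errors
    e = rng.multivariate_normal([0.0, 0.0], cov, n)
    z_obs, x_obs = z + e[:, 0], x + e[:, 1]

    X = np.column_stack([np.ones(n), z_obs, x_obs])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print("estimated effects (true values: Z=1, X=0):", beta[1:])

Varying the off-diagonal error covariance shows the bias on X adding to, diminishing, or reversing the pure measurement-error bias, as the abstract describes.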
NASA Astrophysics Data System (ADS)
Hosseini, E.; Ghafoori, E.; Leinenbach, C.; Motavalli, M.; Holdsworth, S. R.
2018-02-01
The stress recovery and cyclic deformation behaviour of Fe-17Mn-5Si-10Cr-4Ni-1(V,C) shape memory alloy (Fe-SMA) strips, which are often used for pre-stressed strengthening of structural members, were studied. The evolution of recovery stress under different constraint conditions was examined. The results showed that the magnitude of the tensile stress in the Fe-SMA member during thermal activation can have a significant effect on the final recovery stress. The higher the tensile load in the Fe-SMA (e.g., caused by dead load or thermal expansion of the parent structure during the heating phase), the lower the final recovery stress. Furthermore, this study investigated the cyclic behaviour of the activated SMA followed by a second thermal activation. Although the magnitude of the recovery stress decreased during cyclic loading, the second thermal activation could retrieve a significant part of the relaxed recovery stress. This observation suggests that the relaxation of recovery stress during cyclic loading is due to reversible phase-transformation-induced deformation (i.e., forward austenite-to-martensite transformation) rather than irreversible dislocation-induced plasticity. Retrieval of the relaxed recovery stress by the reactivation process has important practical implications, as the prestressing loss in pre-stressed civil structures can simply be recovered by reheating the Fe-SMA elements.
Code of Federal Regulations, 2010 CFR
2010-04-01
20 CFR 30.607 (Employees' Benefits; Office of Workers' Compensation Programs...): How is a structured settlement (that is, a... the recovery? In this situation, the recovery to be reported is the present value of the right to receive all...
18 CFR 35.35 - Transmission infrastructure investment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... base; (iii) Recovery of prudently incurred pre-commercial operations costs; (iv) Hypothetical capital structure; (v) Accelerated depreciation used for rate recovery; (vi) Recovery of 100 percent of prudently... of the public utility; (vii) Deferred cost recovery; and (viii) Any other incentives approved by the...
18 CFR 35.35 - Transmission infrastructure investment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... base; (iii) Recovery of prudently incurred pre-commercial operations costs; (iv) Hypothetical capital structure; (v) Accelerated depreciation used for rate recovery; (vi) Recovery of 100 percent of prudently... of the public utility; (vii) Deferred cost recovery; and (viii) Any other incentives approved by the...
18 CFR 35.35 - Transmission infrastructure investment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... base; (iii) Recovery of prudently incurred pre-commercial operations costs; (iv) Hypothetical capital structure; (v) Accelerated depreciation used for rate recovery; (vi) Recovery of 100 percent of prudently... of the public utility; (vii) Deferred cost recovery; and (viii) Any other incentives approved by the...
Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel
2010-05-01
Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
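The k-mer intuition behind KEC can be sketched in a few lines (a toy illustration, not the published tool): k-mers below a count threshold are treated as likely errors and corrected to a frequent Hamming-distance-1 neighbour when one exists.

    from collections import Counter
    from itertools import product

    K, MIN_COUNT = 5, 3

    def kmers(read):
        return [read[i:i + K] for i in range(len(read) - K + 1)]

    def neighbours(kmer):
        for i, base in product(range(K), "ACGT"):
            if kmer[i] != base:
                yield kmer[:i] + base + kmer[i + 1:]

    def correct(kmer, counts):
        if counts[kmer] >= MIN_COUNT:
            return kmer                      # trusted k-mer, leave unchanged
        hits = [(counts[nb], nb) for nb in neighbours(kmer) if counts[nb] >= MIN_COUNT]
        return max(hits)[1] if hits else kmer

    reads = ["ACGTACGTGG"] * 9 + ["ACGTACCTGG"]     # last read carries a G->C error
    counts = Counter(k for r in reads for k in kmers(r))
    print([correct(k, counts) for k in kmers(reads[-1])])   # error k-mers corrected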
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
Recovery capital pathways: Modelling the components of recovery wellbeing.
Cano, Ivan; Best, David; Edwards, Michael; Lehman, John
2017-12-01
In recent years, there has been recognition that recovery is a journey that involves the growth of recovery capital. Thus, recovery capital has become a commonly used term in addiction treatment and research, yet its operationalization and measurement have been limited. Due to these limitations, there is little understanding of long-term recovery pathways and their clinical application. We used the data of 546 participants from eight different recovery residences spread across Florida, USA. We calculated internal consistency for recovery capital and wellbeing, then assessed their factor structure via confirmatory factor analysis. The relationships between time, recovery barriers and strengths, wellbeing, and recovery capital, as well as the moderating effect of gender, were estimated using structural equation modelling. The proposed model obtained an acceptable fit (χ²(141, N = 546) = 533.642, p < 0.001; CMIN/DF = 3.785; CFI = 0.915; TLI = 0.896; RMSEA = 0.071). Findings indicate a pathway to recovery capital that involves greater time in residence ('retention'), linked to an increase in meaningful activities and a reduction in barriers to recovery and unmet needs that, in turn, promote recovery capital and positive wellbeing. Gender differences were observed. We tested the pathways to recovery for residents in the recovery housing population. Our results have implications not only for retention as a predictor of sustained recovery and wellbeing but also for the importance of meaningful activities in promoting recovery capital and wellbeing. Copyright © 2017 Elsevier B.V. All rights reserved.
Phase and Pupil Amplitude Recovery for JWST Space-Optics Control
NASA Technical Reports Server (NTRS)
Dean, B. H.; Zielinski, T. P.; Smith, J. S.; Bolcar, M. R.; Aronstein, D. L.; Fienup, J. R.
2010-01-01
This slide presentation reviews phase and pupil amplitude recovery for the James Webb Space Telescope (JWST) Near Infrared Camera (NIRCam). It includes views of the Integrated Science Instrument Module (ISIM), the NIRCam, examples of Phase Retrieval Data, Ghost Irradiance, Pupil Amplitude Estimation, Amplitude Retrieval, Initial Plate Scale Estimation using the Modulation Transfer Function (MTF), Pupil Amplitude Estimation vs. lambda, Pupil Amplitude Estimation vs. Number of Images, Pupil Amplitude Estimation vs. Rotation (clocking), and Typical Phase Retrieval Results. Also included is information about the phase retrieval approach, Non-Linear Optimization (NLO) Optimized Diversity Functions, and Least Square Error vs. Starting Pupil Amplitude.
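As background, the core alternating-projection idea of image-based phase retrieval can be sketched with the classic Gerchberg-Saxton iteration (a simplified relative of the NLO and diversity-based methods named in the slides; sizes and aberration below are toy assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    yy, xx = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
    pupil_amp = (np.hypot(xx, yy) < n / 4).astype(float)       # known pupil support
    true_phase = 0.3 * rng.standard_normal((n, n))             # toy aberration
    field = pupil_amp * np.exp(1j * true_phase)
    psf_amp = np.abs(np.fft.fft2(field))                       # "measured" image amplitude

    est = pupil_amp.astype(complex)                            # start with zero phase
    print("initial residual: %.3f" % np.linalg.norm(np.abs(np.fft.fft2(est)) - psf_amp))
    for _ in range(200):
        F = np.fft.fft2(est)
        F = psf_amp * np.exp(1j * np.angle(F))                 # impose measured amplitude
        est = np.fft.ifft2(F)
        est = pupil_amp * np.exp(1j * np.angle(est))           # impose pupil support
    print("final residual:   %.3f" % np.linalg.norm(np.abs(np.fft.fft2(est)) - psf_amp))

A single image leaves well-known ambiguities; the diversity functions discussed in the slides are one way practical pipelines resolve them.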
Sensor fault detection and recovery in satellite attitude control
NASA Astrophysics Data System (ADS)
Nasrolahi, Seiied Saeed; Abdollahi, Farzaneh
2018-04-01
This paper proposes integrated sensor fault detection and recovery for a satellite attitude control system. A nonlinear observer is introduced to provide healthy sensor measurements. Considering the attitude dynamics and kinematics, a novel observer is developed to detect faults in angular rate as well as attitude sensors, individually or simultaneously. There is no restriction on the type or configuration of the attitude sensors. By designing a state-feedback control signal and applying a Lyapunov stability criterion, uniform ultimate boundedness of the tracking errors in the presence of sensor faults is guaranteed. Finally, simulation results are presented to illustrate the performance of the integrated scheme.
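A linear stand-in illustrates the observer-based detection principle (the paper's observer is nonlinear and attitude-specific; the gain, threshold, and fault size below are assumptions): a Luenberger observer tracks the plant, and a jump in the output residual flags a bias fault injected into the sensor.

    import numpy as np

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])      # angle/rate double integrator
    C = np.array([[1.0, 0.0]])                 # attitude sensor only
    L = np.array([[0.5], [0.5]])               # gain keeps eig(A - L C) inside unit circle

    rng = np.random.default_rng(0)
    x = np.array([0.1, 0.02])
    xh = x.copy()                              # observer assumed converged at start
    for k in range(100):
        fault = 0.3 if k >= 60 else 0.0        # sensor bias fault injected at step 60
        y = C @ x + fault + 0.005 * rng.standard_normal(1)
        r = y - C @ xh                         # output residual
        if abs(r[0]) > 0.1:
            print("fault flagged at step", k)
            break
        xh = A @ xh + L @ r                    # observer update
        x = A @ x                              # plant update (no process noise)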
NASA Technical Reports Server (NTRS)
Padilla, Peter A.
1991-01-01
An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, our scheme can compute the prediction error modification threshold required for a given embedding capacity, providing accurate capacity control. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control capability.
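For orientation, the core prediction-error-expansion step is compact enough to sketch. The variant below uses a trivial previous-sample predictor (so decoding is single-pass) and omits the paper's differential-evolution-optimized predictor, histogram shifting, and overflow handling:

```python
import numpy as np

def pee_embed(x, bits):
    """Prediction error expansion with a previous-sample predictor.
    x: integer audio samples; embeds one bit per sample from index 1 on."""
    y = x.astype(np.int64).copy()
    for i, b in enumerate(bits, start=1):
        p = int(y[i - 1])            # predictor value known to the decoder
        e = int(x[i]) - p            # prediction error
        y[i] = p + 2 * e + b         # expanded error carries the bit
    return y

def pee_extract(y, nbits):
    x = y.astype(np.int64).copy()
    bits = []
    for i in range(1, nbits + 1):
        p = int(y[i - 1])            # same predictor the encoder used
        e2 = int(y[i]) - p
        bits.append(e2 & 1)          # embedded bit is the parity
        x[i] = p + (e2 >> 1)         # invert the expansion exactly
    return x, bits

audio = np.array([100, 103, 101, 99, 102])
marked = pee_embed(audio, [1, 0, 1])
restored, payload = pee_extract(marked, 3)
assert (restored == audio).all() and payload == [1, 0, 1]  # lossless
```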
Ximénez, Carmen
2016-01-01
This article extends previous research on the recovery of weak factor loadings in confirmatory factor analysis (CFA) by exploring the effects of adding the mean structure. This issue has not been examined in previous research. This study is based on the framework of Yung and Bentler (1999) and aims to examine the conditions that affect the recovery of weak factor loadings when the model includes the mean structure, compared to analyzing the covariance structure alone. A simulation study was conducted in which several constraints were defined for one-, two-, and three-factor models. Results show that adding the mean structure improves the recovery of weak factor loadings and reduces the asymptotic variances for the factor loadings, particularly for the models with a smaller number of factors and a small sample size. Therefore, under certain circumstances, modeling the means should be seriously considered for covariance models containing weak factor loadings. PMID:26779071
To Err Is Human; To Structurally Prime from Errors Is Also Human
ERIC Educational Resources Information Center
Slevc, L. Robert; Ferreira, Victor S.
2013-01-01
Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…
Model predictive and reallocation problem for CubeSat fault recovery and attitude control
NASA Astrophysics Data System (ADS)
Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina
2018-01-01
In recent years, thanks to growing know-how in machine-learning techniques and advances in on-board computational capabilities, computationally expensive algorithms, such as Model Predictive Control, have begun to spread in space applications, even on small on-board processors. The paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. The algorithm uses optimization techniques to obtain the optimal recovery solution and a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit: attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows redistribution of the control action when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of implementing the Model Predictive approach to control the attitude of the satellite.
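The reallocation idea can be sketched as a least-squares redistribution of the commanded torque over the healthy actuators; the actuation matrix below is invented for illustration (three magnetorquer axes with slight cross-coupling plus one reaction wheel on z), not the paper's model:

```python
import numpy as np

# Columns of B map actuator commands to body torque (hypothetical values).
B = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.1, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 1.0]])
tau_cmd = np.array([1e-5, -2e-5, 5e-6])      # demanded torque [N m]

def allocate(B, tau, failed=()):
    """Zero the failed actuator columns and re-solve in the least-squares
    sense, redistributing effort over the remaining actuators."""
    Bf = B.copy()
    for j in failed:
        Bf[:, j] = 0.0
    u, *_ = np.linalg.lstsq(Bf, tau, rcond=None)
    return u

print(allocate(B, tau_cmd))                  # nominal commands
print(allocate(B, tau_cmd, failed=(1,)))     # torquer 2 lost: others take over
```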
Error Reduction Analysis and Optimization of Varying GRACE-Type Micro-Satellite Constellations
NASA Astrophysics Data System (ADS)
Widner, M. V., IV; Bettadpur, S. V.; Wang, F.; Yunck, T. P.
2017-12-01
The Gravity Recovery and Climate Experiment (GRACE) mission has been a principal contributor to the study and quantification of Earth's time-varying gravity field. Both GRACE and its successor, GRACE Follow-On, are limited by their paired-satellite design, which provides a full map of Earth's gravity field only approximately every thirty days and at spatial resolutions coarser than 300 km. Micro-satellite technology makes it feasible to improve the architecture of future missions to address these issues by implementing a constellation of satellites with characteristics similar to GRACE. To optimize the constellation's architecture, several scenarios are evaluated to determine how this configuration affects the resulting gravity field maps and to characterize which instrument system errors improve, which do not, and how changes in constellation architecture affect these errors.
NASA Astrophysics Data System (ADS)
Wang, Liuzheng; He, Xiang; Zhang, Wei; Liu, Yong; Banks, Craig E.; Zhang, Ying
2018-02-01
The structure-property relationship between biomineralized calcium phosphate compounds employed in a fluorescence quenching-recovery platform and their distinct crystalline structures and surface functional groups is investigated. A fluorescence-based sensing platform is shown to be viable for the sensing of 8-hydroxy-2-deoxy-guanosine in simulated systems.
Park, Jin Ho; Dao, Trung Dung; Lee, Hyung-il; Jeong, Han Mo; Kim, Byung Kyu
2014-01-01
Shape memory behavior of crystalline shape memory polyurethane (SPU) reinforced with graphene, which utilizes the melting temperature as the shape recovery temperature, was examined with various external actuating stimuli such as direct heating, resistive heating, and infrared (IR) heating. The compatibility of graphene with crystalline SPU was adjusted by altering the structure of the hard segment of the SPU, by changing the structure of the graphene, and by changing the preparation method of the graphene/SPU composite. The SPU made of aromatic 4,4′-diphenylmethane diisocyanate (MSPU) exhibited better compatibility with graphene, which has an aromatic structure, than the SPU made of aliphatic hexamethylene diisocyanate. The finely dispersed graphene effectively reinforced MSPU, improved the shape recovery of MSPU, and served effectively as a filler triggering shape recovery by resistive or IR heating. Compatibility was enhanced when the graphene was modified with methanol. This improved shape recovery by direct heating but worsened the conductivity of the composite, and consequently the efficiency of resistive heating for shape recovery also declined. Graphene modified with methanol was more effective than pristine graphene in terms of shape recovery by IR heating. PMID:28788529
Effects of state recovery on creep buckling under variable loading
NASA Technical Reports Server (NTRS)
Robinson, D. N.; Arnold, S. M.
1986-01-01
Structural alloys embody internal mechanisms that allow recovery of state with varying stress and elevated temperature, i.e., they can return to a softer state following periods of hardening. Such material behavior is known to strongly influence structural response under some important thermomechanical loadings, for example, that involving thermal ratchetting. The influence of dynamic and thermal recovery on the creep buckling of a column under variable loading is investigated. The column is taken as the idealized (Shanley) sandwich column. The constitutive model, unlike the commonly employed Norton creep model, incorporates a representation of both dynamic and thermal (state) recovery. The material parameters of the constitutive model are chosen to characterize Narloy Z, a representative copper alloy used in thrust nozzle liners of reusable rocket engines. Variable loading histories include rapid cyclic unloading/reloading sequences and intermittent reductions of load for extended periods of time; these are superimposed on a constant load. The calculated results show that state recovery significantly affects creep buckling under variable loading.
Isochoric structural recovery in molecular glasses and its analog in colloidal glasses
NASA Astrophysics Data System (ADS)
Banik, Sourya; McKenna, Gregory B.
2018-06-01
Concentrated colloidal dispersions have been regarded as models for molecular glasses. One of the many ways to compare the behavior of these two different systems is by comparing their structural recovery or physical aging behavior. However, recent investigations from our group examining structural recovery in thermosensitive colloidal dispersions have shown contrasting results between the colloidal and the molecular glasses. The differences in the behaviors of the two systems have led us to pose this question: Is structural recovery behavior in colloidal glasses truly distinct from that of molecular glasses, or is the conventional experimental condition (isobaric temperature-jumps) for determining structural recovery in molecular glasses simply different from the experimental condition in the colloidal experiments (concentration- or volume fraction-jumps); i.e., are colloidal glasses inherently different from molecular glasses or not? To address the question, we resort to model calculations of structural recovery in a molecular glass under constant volume (isochoric) conditions following temperature-only and simultaneous volume- and temperature-jumps, which are closer to the volume fraction-jump conditions used in the thermosensitive-colloidal experiments. The current model predictions are then compared with the signatures of structural recovery under the conventional isobaric state in a molecular glass and with structural recovery behavior in colloidal glasses following volume fraction-jumps. We show that the results obtained from the experiments conducted by our group contrasted with classical molecular glass behavior because the basis of our comparisons was incorrect (the histories were not analogous). The present calculations (with analogous histories) are qualitatively closer to the colloidal behavior. The signatures of "intrinsic isotherms" and "asymmetry of approach" in the current isochoric model predictions are quite different from those in the classical isobaric conditions, while the "memory" signatures remain essentially the same. While there are qualitative similarities between the current isochoric model predictions and results from colloidal glasses, it appears from the calculations that the origins of these are different. The isochoric histories in the molecular glasses have compensating effects of pressure and departure from equilibrium, which determine the structure dependence of molecular mobility. In the colloids, on the other hand, it appears that the volume fraction-jump conditions simply do not exhibit such a structure-mobility dependence. The determining interplay of thermodynamic phase variables in colloidal and molecular systems might be very different, or at least their correlations are yet to be ascertained. This topic requires further investigation to bring the similarities and differences between molecular and colloidal glass formers into fuller clarity.
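The abstract does not spell out the constitutive model, but calculations of this kind typically build on a fictive-temperature description of structure. As a rough, hypothetical stand-in, the sketch below integrates a Tool-Narayanaswamy-style relaxation after a temperature down-jump (all parameters invented; the isochoric pressure coupling central to the paper is omitted):

```python
import math

# Tool-Narayanaswamy-style structural recovery after a temperature
# down-jump (illustrative only; every parameter here is hypothetical).
x, dh_R, A = 0.5, 4.0e4, 1.0e-54   # nonlinearity, dh/R [K], prefactor [s]
T0, T1 = 310.0, 300.0              # equilibrate at T0, jump down to T1 [K]

Tf, t, dt = T0, 0.0, 1.0           # fictive temperature tracks structure
while abs(Tf - T1) > 0.01:
    tau = A * math.exp(x * dh_R / T1 + (1.0 - x) * dh_R / Tf)
    Tf += dt * (T1 - Tf) / tau     # self-retarding approach to equilibrium
    t += dt
print(f"structure recovered to Tf = {Tf:.2f} K after ~{t:.0f} s")
```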
Animal movement constraints improve resource selection inference in the presence of telemetry error
Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.
2016-01-01
Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.
ADAPTATION AND GENERALIZATION TO OPPOSING PERTURBATIONS IN WALKING
Bhatt, T.; Wang, T.-Y.; Yang, F.; Pai, Y.-C.
2013-01-01
Little is known about how the CNS selects its movement options when a person faces a novel or recurring perturbation of two opposing types (slip or trip) while walking. The purposes of this study were (1) to determine whether young adults' adaptation to repeated slips would interfere with their recovery from a novel trip, and (2) to investigate the generalized strategies after they were exposed to mixed training with both types of perturbation. Thirty-two young adults were assigned either to the training group, which first underwent repeated-slip training before encountering a novel, unannounced trip while walking, or to the control group, which only experienced the same novel, unannounced trip. The training group then experienced a mix of repeated trips and slips. The results indicated that prior adaptation to slips had only limited interference during the initial phase of trip recovery. In fact, the prior repeated-slip exposure had primed their reaction, which mitigated any error resulting from early interference. As a result, they did not have to take a longer compensatory step for trip recovery than did the controls. After the mixed training, subjects were able to effectively converge the motion state of their center of mass (in its position and velocity space) to a stable, generalized "middle ground" steady state. Such movement strategies not only further strengthened their robust reactive control of stability, but also reduced the CNS's overall reliance on accurate context prediction and on feedback correction of perturbation-induced movement error. PMID:23603517
Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing
2017-08-05
Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks are seriously overlapping. As an effective method for quantitative analysis, partial least squares (PLS) can extract latent variables from both the independent and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. First, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic compounds with seriously overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the number of iterations, and the number of samples in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially when the calibration set contains few samples. Copyright © 2017 Elsevier B.V. All rights reserved.
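The exact EWPLS/VWPLS weighting is not given in the abstract, so the sketch below covers only the first step, leave-one-out screening of the calibration set, using scikit-learn's ordinary PLS on synthetic data standing in for the overlapping spectra:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def loo_outlier_mask(X, Y, n_components=3, z_cut=2.5):
    """Flag calibration samples whose held-out PLS prediction error is
    extreme (leave-one-out cross-validation, as in the paper's first step)."""
    n = len(X)
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        pls = PLSRegression(n_components=n_components).fit(X[keep], Y[keep])
        errs[i] = np.linalg.norm(Y[i] - pls.predict(X[i:i + 1])[0])
    z = (errs - errs.mean()) / errs.std()
    return z < z_cut                       # True = keep in calibration set

rng = np.random.default_rng(0)             # synthetic 3-component mixtures
X = rng.normal(size=(40, 50))              # 40 spectra, 50 spectral channels
Y = X[:, :3] @ rng.normal(size=(3, 3))     # 3 concentrations per sample
mask = loo_outlier_mask(X, Y)
model = PLSRegression(n_components=3).fit(X[mask], Y[mask])
```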
Elison, Sarah; Davies, Glyn; Ward, Jonathan
2016-07-28
There is a growing literature around substance use disorder treatment outcome measures. Various constructs have been suggested as appropriate for measuring recovery outcomes, including "recovery capital" and "treatment progression." However, these previously proposed constructs do not measure changes in psychosocial functioning during the recovery process. Therefore, a new psychometric assessment, the "Recovery Progression Measure" (RPM), has been developed to measure this recovery-oriented psychosocial change. The aims of this study were to evaluate the reliability and factor structure of the RPM via data collected from 2218 service users being treated for substance dependence. Data were collected from service users accessing the Breaking Free Online (BFO) substance use disorder treatment and recovery program, which has within its baseline assessment a 36-item psychometric measure previously developed by the authors to assess the six areas of functioning described in the RPM. Reliability analyses and exploratory factor analyses (EFA) were conducted to examine the underlying factor structure of the RPM measure. Internal reliability of the RPM measure was found to be excellent (α > .70), with the overall assessment having reliability α = .89 and item-total correlations revealing moderate to excellent reliability of individual items. EFA revealed the RPM to contain an underlying factor structure of eight components. This study provides initial data to support the reliability of the RPM as a recovery measure. Further work is now underway to extend these findings, including convergent and predictive validity analyses.
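For reference, the internal-consistency statistic quoted here (Cronbach's α) has a simple closed form; a small sketch on synthetic one-factor data, since the RPM item responses themselves are not available from the abstract:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)                   # 2218 respondents, 36 items
trait = rng.normal(size=(2218, 1))               # one common factor
scores = trait + rng.normal(size=(2218, 36))     # item-specific noise
print(f"alpha = {cronbach_alpha(scores):.2f}")
```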
The Design of an Interactive Data Retrieval System for Casual Users.
ERIC Educational Resources Information Center
Radhakrishnan, T.; And Others
1982-01-01
Describes an interactive data retrieval system which was designed and implemented for casual users and which incorporates a user-friendly interface, aids to train beginners in use of the system, versatility in output, and error recovery protocols. A 14-item reference list and two figures illustrating system operation and output are included. (JL)
Age-Appropriate Cues Facilitate Source-Monitoring and Reduce Suggestibility in 3- To 7-Year-Olds
ERIC Educational Resources Information Center
Bright-Paul, A.; Jarrold, C.; Wright, D.B.
2005-01-01
Providing cues to facilitate the recovery of source information can reduce postevent misinformation effects in adults, implying that errors in source-monitoring contribute to suggestibility (e.g., [Lindsay, D. S., & Johnson, M. K. (1989). The eyewitness suggestibility effect and memory for source. Memory & Cognition, 17, 349-358]). The present…
Testing Predictions of the Interactive Activation Model in Recovery from Aphasia after Treatment
ERIC Educational Resources Information Center
Jokel, Regina; Rochon, Elizabeth; Leonard, Carol
2004-01-01
This paper presents preliminary results of pre- and post-treatment error analysis from an aphasic patient with anomia. The Interactive Activation (IA) model of word production (Dell, Schwartz, Martin, Saffran, & Gagnon, 1997) is utilized to make predictions about the anticipated changes on a picture naming task and to explain emerging patterns.…
NASA Technical Reports Server (NTRS)
Moore, Rachel; Stenger, Michael; Platts, Steven; Lee, Stuart
2013-01-01
Bed rest (BR) and space flight (SF) cause a significant decrease in BEI, and BR causes changes in BEI similar to those of SF. BEI may not correlate with subjects experiencing presyncope, but the error is high and n is low. Compression garments have the potential to restore BEI after short-duration BR, but do not prevent recovery.
The Consequences of Ignoring Item Parameter Drift in Longitudinal Item Response Models
ERIC Educational Resources Information Center
Lee, Wooyeol; Cho, Sun-Joo
2017-01-01
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
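The two recovery criteria named here have the usual Monte Carlo definitions; a brief sketch (the replication values below are synthetic):

```python
import numpy as np

def bias_and_rmse(estimates, true_value):
    """Recovery criteria for a Monte Carlo IRT study: mean signed error
    and root mean-square error of the replicated estimates."""
    est = np.asarray(estimates, dtype=float)
    bias = (est - true_value).mean()
    rmse = np.sqrt(((est - true_value) ** 2).mean())
    return bias, rmse

# e.g. 500 replications of an item difficulty whose generating value is 0.8
reps = np.random.default_rng(2).normal(loc=0.85, scale=0.10, size=500)
print(bias_and_rmse(reps, 0.8))   # a positive bias appears if drift is ignored
```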
Space shuttle navigation analysis. Volume 2: Baseline system navigation
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.
1980-01-01
Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.
A Recovery-Oriented Approach to Dependable Services: Repairing Past Errors with System-Wide Undo
2003-12-01
[Table-of-contents fragments and a truncated passage are all that survive of this report's text: "Handling propagating paradoxes: the squash interface" (4.5.3), "Discussion" (4.6), "Compensating for paradoxes" (6.3.3), "Squashing propagating..." (6.3.4), plus a passage on replicating the service and comparing the behavior of the replicas to detect and squash misbehaving replicas under Byzantine fault tolerance.]
Hindrances to precise recovery of cellular forces in fibrous biopolymer networks.
Zhang, Yunsong; Feng, Jingchen; Heizler, Shay I; Levine, Herbert
2018-01-11
How cells move through the three-dimensional extracellular matrix (ECM) is of increasing interest in attempts to understand important biological processes such as cancer metastasis. Just as in motion on flat surfaces, it is expected that experimental measurements of cell-generated forces will provide valuable information for uncovering the mechanisms of cell migration. However, the recovery of forces in fibrous biopolymer networks may suffer from large errors. Here, within the framework of lattice-based models, we explore possible issues in force recovery by solving the inverse problem: how can one determine the forces cells exert to their surroundings from the deformation of the ECM? Our results indicate that irregular cell traction patterns, the uncertainty of local fiber stiffness, the non-affine nature of ECM deformations and inadequate knowledge of network topology will all prevent the precise force determination. At the end, we discuss possible ways of overcoming these difficulties.
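A stripped-down linear stand-in (not the paper's fibrous-network lattice model) shows why the inversion is delicate: with a smoothing, Green's-function-like compliance, direct inversion of noisy displacements blows up, while Tikhonov regularization damps the noise at the cost of smearing the point forces:

```python
import numpy as np

# Toy force-recovery inverse problem: displacements u = G f for an
# assumed compliance G; recover the force pattern f from noisy u.
rng = np.random.default_rng(3)
n = 100
idx = np.arange(n)
G = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * 5.0 ** 2))
G += 1e-6 * np.eye(n)                    # small jitter keeps G invertible

f_true = np.zeros(n)
f_true[30], f_true[70] = 1.0, -1.0       # point forces from the "cell"
u_obs = G @ f_true + rng.normal(scale=1e-3, size=n)

f_naive = np.linalg.solve(G, u_obs)      # direct inversion amplifies noise
lam = 1e-4                               # Tikhonov weight (tuning assumed)
f_reg = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u_obs)

for name, f in (("naive", f_naive), ("regularized", f_reg)):
    print(name, np.linalg.norm(f - f_true))
# even the regularized estimate smears the point forces, echoing the
# paper's point that precise recovery is hindered
```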
Method and system for redundancy management of distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2012-01-01
A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
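In spirit, and much simplified (invented tolerance, none of the patent's asynchronous restoration detail), the lane-comparison monitor reduces to:

```python
# Toy dual-lane compare: each computing unit runs the same control law in
# two redundant computational lanes; a miscompare withholds the blended
# output and asserts a recovery trigger.
TOL = 1e-6   # assumed miscompare tolerance

def monitor_lanes(lane_a: float, lane_b: float):
    if abs(lane_a - lane_b) > TOL:
        return None, True                  # no output; trigger recovery
    return 0.5 * (lane_a + lane_b), False  # blended command, no trigger

cmd, trigger = monitor_lanes(0.4213967, 0.4213969)
print("recovery trigger" if trigger else f"actuator command {cmd:.7f}")
```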
A bi-articular model for scapular-humeral rhythm reconstruction through data from wearable sensors.
Lorussi, Federico; Carbonaro, Nicola; De Rossi, Danilo; Tognetti, Alessandro
2016-04-23
Patient-specific performance assessment of arm movements in daily life activities is fundamental for neurological rehabilitation therapy. In most applications, the shoulder movement is simplified as a socket-ball joint, neglecting the movement of the scapular-thoracic complex. This may lead to significant errors. We propose an innovative bi-articular model of the human shoulder for estimating the position of the hand in relation to the sternum. The model takes into account both the scapular-thoracic and gleno-humeral movements and their ratio, governed by the scapular-humeral rhythm, fusing the information of inertial and textile-based strain sensors. To feed the reconstruction algorithm based on the bi-articular model, an ad-hoc sensing shirt was developed, equipped with two inertial measurement units (IMUs) and an integrated textile strain sensor. We built the bi-articular model starting from the data obtained in two planar movements (arm abduction and flexion in the sagittal plane), analysing the error between the reference data, measured through an optical reference system, and the socket-ball approximation of the shoulder. The 3D model was developed by extending the behaviour of the kinematic chain revealed in the planar trajectories through a parameter identification that takes into account the body structure of the subject. The bi-articular model was evaluated in five subjects in comparison with the optical reference system. The errors were computed as the distance between the reference position of the trochlea (end-effector) and the corresponding model estimate. The introduced method remarkably improved the estimation of the position of the trochlea (and consequently the estimation of the hand position during reaching activities), reducing position errors from 11.5 cm to 1.8 cm. Thanks to the developed bi-articular model, we demonstrated a reliable estimation of upper arm kinematics with a minimal sensing system suitable for daily-life monitoring of recovery.
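A planar toy comparison makes the size of the socket-ball error plausible; the segment lengths and the fixed scapular share of elevation below are illustrative values, not the subject-specific parameters identified in the paper:

```python
import numpy as np

L_CLAV, L_ARM = 0.15, 0.60            # sternum->shoulder, shoulder->hand [m]

def hand_socket_ball(elevation):
    """Shoulder as a fixed ball joint at the end of the clavicle."""
    return np.array([L_CLAV + L_ARM * np.cos(elevation),
                     L_ARM * np.sin(elevation)])

def hand_biarticular(elevation, scap_share=1.0 / 3.0):
    """Part of the elevation comes from scapular-thoracic rotation,
    which also displaces the gleno-humeral pivot itself."""
    scap = scap_share * elevation
    pivot = L_CLAV * np.array([np.cos(scap), np.sin(scap)])
    return pivot + L_ARM * np.array([np.cos(elevation), np.sin(elevation)])

ab = np.deg2rad(120.0)
err = np.linalg.norm(hand_socket_ball(ab) - hand_biarticular(ab))
print(f"socket-ball error at 120 deg elevation: {err * 100:.1f} cm")
# roughly 10 cm, the same order of magnitude as the misestimate above
```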
A method for determination of [Fe3+]/[Fe2+] ratio in superparamagnetic iron oxide
NASA Astrophysics Data System (ADS)
Jiang, Changzhao; Yang, Siyu; Gan, Neng; Pan, Hongchun; Liu, Hong
2017-10-01
Superparamagnetic iron oxide nanoparticles (SPION) are a class of nanophase materials widely used in biomedical applications such as magnetic resonance imaging (MRI), drug delivery, and magnetic-field-assisted therapy. The magnetic properties of SPION are closely connected with the crystal structure, namely the ratio of the Fe3+ and Fe2+ ions that form the SPION. A simple way to determine the content of Fe3+ and Fe2+ is therefore important for studying the properties of SPION. This work covers a method for determining the Fe3+/Fe2+ ratio in SPION by UV-vis spectrophotometry, based on the reaction of Fe2+ with 1,10-phenanthroline. A standard curve for Fe with R² = 0.9999 is used to determine the content of Fe2+ and of total iron, with 2.5 mL of 0.01% (w/v) SPION digested by HCl, 10 mL of pH 4.30 HOAc-NaAc buffer, and 5 mL of 0.01% (w/v) 1,10-phenanthroline, plus 1 mL of 10% (w/v) ascorbic acid for the independent total-iron determination. However, the presence of Fe3+ interferes with obtaining the actual value of Fe2+ (an error close to 9%). We designed a calibration curve to eliminate this error by preparing a series of solutions with different [Fe3+]/[Fe2+] ratios. Using the calibration curve, the error between the measured and actual values can be reduced to 0.4%. The linearity (R²) of the method is 0.99441 and 0.99929 for Fe2+ and total iron, respectively. The errors in recovery accuracy and in inter-day and intra-day precision are both below 2%, demonstrating the reliability of the determination method.
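A small numerical sketch of the two-step determination; the standard-curve points and sample absorbances below are synthetic, since the paper's actual calibration data are not given in the abstract:

```python
import numpy as np

# Synthetic stand-in for the 1,10-phenanthroline standard curve
# (absorbance vs. iron concentration; the paper reports R^2 = 0.9999).
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # ug/mL (assumed)
absb = np.array([0.002, 0.199, 0.401, 0.598, 0.801])  # absorbance (assumed)
slope, intercept = np.polyfit(absb, conc, 1)          # invert: conc(A)

def iron(a):
    return slope * a + intercept

fe2_apparent = iron(0.35)      # sample without reduction -> Fe2+ (plus bias)
fe_total = iron(0.52)          # after ascorbic acid reduction -> total Fe
fe3 = fe_total - fe2_apparent  # difference gives Fe3+
print(fe2_apparent, fe3, fe3 / fe2_apparent)
# the paper's extra calibration curve (apparent error vs. [Fe3+]/[Fe2+])
# then corrects fe2_apparent from a ~9% error down to ~0.4%
```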
Schweizer, Tom A; Vogel-Sprott, Muriel
2008-06-01
Much research on the effects of a dose of alcohol has shown that motor skills recover from impairment as blood alcohol concentrations (BACs) decline and that acute tolerance to alcohol impairment can develop during the course of the dose. Comparable alcohol research on cognitive performance is sparse but has increased with the development of computerized cognitive tasks. This article reviews the results of recent research using these tasks to test the development of acute tolerance in cognitive performance and recovery from impairment during declining BACs. Results show that speed and accuracy do not necessarily agree in detecting cognitive impairment, and this mismatch most frequently occurs during declining BACs. Speed of cognitive performance usually recovers from impairment to drug-free levels during declining BACs, whereas alcohol-increased errors fail to diminish. As a consequence, speed of cognitive processing tends to develop acute tolerance, but no such tendency is shown in accuracy. This "acute protracted error" phenomenon has not previously been documented. The findings pose a challenge to the theory of alcohol tolerance on the basis of physiological adaptation and raise new research questions concerning the independence of speed and accuracy of cognitive processes, as well as hemispheric lateralization of alcohol effects. The occurrence of alcohol-induced protracted cognitive errors long after speed returned to normal is identified as a potential threat to the safety of social drinkers that requires urgent investigation.
Vanin, Evgeny; Jacobsen, Gunnar
2010-03-01
The bit-error-ratio (BER) floor caused by laser phase noise in an optical fiber communication system with differential quadrature phase shift keying (DQPSK) and coherent detection followed by digital signal processing (DSP) is analytically evaluated. An in-phase and quadrature (I&Q) receiver with carrier phase recovery using DSP is considered. The carrier phase recovery is based on a phase estimation of a finite sum (block) of the signal samples raised to the power of four, and on phase unwrapping at transitions between blocks. It is demonstrated that errors generated at block transitions make the dominant contribution to the system BER floor when the impact of the additive noise is negligibly small in comparison with the effect of the laser phase noise. The BER floor for the case in which phase unwrapping is omitted is also derived analytically and used to emphasize the crucial importance of this signal processing operation. The analytical results are verified by full Monte Carlo simulations. The BER for another type of DQPSK receiver, based on differential phase detection, is also obtained in analytical form using the principle of conditional probability, which is justified in this case because the laser-phase-noise-induced signal phase error and the additive noise contributions are statistically independent. Based on these analytical results, the laser linewidth tolerance is calculated for different system cases.
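The block-wise fourth-power estimator with inter-block unwrapping can be sketched directly from the description above; the block length, phase-noise level, and synthetic data are assumptions (and the `period` argument of `np.unwrap` needs NumPy 1.21 or later):

```python
import numpy as np

def qpsk_carrier_phase(z, block=64):
    """Block-wise fourth-power carrier phase estimation with unwrapping
    at block transitions (parameters assumed, not the paper's values)."""
    nblk = len(z) // block
    est = np.empty(nblk)
    for k in range(nblk):
        s = np.sum(z[k * block:(k + 1) * block] ** 4)  # data wiped by ^4
        est[k] = np.angle(s) / 4.0 - np.pi / 4.0       # mod pi/2 estimate
    # unwrap across blocks: omitting this step creates the transition
    # errors that, per the abstract, dominate the BER floor
    return np.repeat(np.unwrap(est, period=np.pi / 2), block)

rng = np.random.default_rng(4)
n = 4096
data = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
phase = np.cumsum(rng.normal(scale=0.01, size=n))      # Wiener phase noise
phi_hat = qpsk_carrier_phase(data * np.exp(1j * phase))
# phi_hat tracks `phase` up to the inherent pi/2 ambiguity, which the
# differential encoding of DQPSK tolerates
```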
Kaplan, H S
2005-11-01
Safety and reliability in blood transfusion are not static but are dynamic non-events. Since performance deviations continually occur in complex systems, their detection and correction must be accomplished over and over again. Non-conformance must be detected early enough to allow for recovery or mitigation. Near-miss events afford early detection of possible system weaknesses and provide an early chance at correction. National event reporting systems, both voluntary and involuntary, have begun to include near-miss reporting in their classification schemes, raising awareness of their detection. MERS-TM is a voluntary safety reporting initiative in transfusion. Currently 22 hospitals submit reports anonymously to a central database which supports analysis of a hospital's own data and that of an aggregate database. The system encourages reporting of near-miss events, where the patient is protected from receiving an unsuitable or incorrect blood component by a planned or unplanned recovery step. MERS-TM data suggest approximately 90% of events are near-misses, with 10% caught after issue but before transfusion. Near-miss reporting may increase total reports ten-fold. The ratio of near-misses to events with harm is 339:1, consistent with other industries' ratio of 300:1, which has been proposed as a measure of reporting in event reporting systems. Use of a risk matrix and an event's relation to protective barriers allows prioritization of these events. Near-misses recovered by planned barriers occur ten times more frequently than unplanned recoveries. A bedside check of the patient's identity against that on the blood component is an essential, final barrier. How the typical two-person check is performed is critical. Even properly done, this check is ineffective against sampling and testing errors. Blood testing at the bedside just prior to transfusion minimizes the risk of such upstream events. However, even with simple and well-designed devices, training may be a critical issue. Sample errors account for more than half of reported events. The most dangerous miscollection is a blood sample passing acceptance with no previous patient results for comparison. Bar code labels or collection of a second sample may counter this upstream vulnerability. Further upstream barriers have been proposed to counter the precariousness of urgent blood sample collection in a changing, unstable situation: one, a linking device, allows safer labeling of tubes away from the bedside; the second, a forcing function, prevents omission of critical patient identification steps. Errors in the blood bank itself account for 15% of errors, with a high potential severity. In one such event, a component incorrectly issued, but safely detected prior to transfusion, focused attention on multitasking's contribution to laboratory error. In sum, use of near-miss information, by enhancing barriers supporting error prevention and mitigation, increases our capacity to get the right blood to the right patient.
Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe
2017-12-01
The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.
Cavelti, M; Wirtz, M; Corrigan, P; Vauth, R
2017-03-01
The recovery framework has found its way into local and national mental health services and policies around the world, especially in English-speaking countries. To promote this process, it is necessary to assess personal recovery validly and reliably. The Recovery Assessment Scale (RAS) is the most established measure in recovery research. The aim of the current study is to examine the factor structure of the German version of the RAS (RAS-G). One hundred and fifty-six German-speaking clients with schizophrenia or schizoaffective disorder from a community mental health service completed the RAS-G plus measures of recovery attitudes, self-stigma, psychotic symptoms, depression, and functioning. A confirmatory factor analysis of the original 24-item RAS version was conducted to examine its factor structure, followed by reliability and validity testing of the extracted factors. The CFA yielded five factors capturing 14 items, which showed a substantial overlap with the original subscales Personal Confidence and Hope, Goal and Success Orientation, Willingness to Ask for Help, Reliance on Others, and No Domination by Symptoms. The factors demonstrated moderate to excellent reliability (0.59-0.89) and satisfactory criterion validity, shown by positive correlations with measures of recovery attitudes and functioning and negative correlations with measures of self-stigma and psychotic and depressive symptoms. The study results are discussed in the light of other studies examining the factor structure of the RAS. Overall, they support the use of the RAS-G as a means to promote recovery-oriented services, policies, and research in German-speaking countries. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Teoh, Joanne Ee Mei; Zhao, Yue; An, Jia; Chua, Chee Kai; Liu, Yong
2017-12-01
Shape memory polymers (SMPs) have gained a presence in additive manufacturing due to their role in 4D printing. They can be printed either in multiple materials for multi-stage shape recovery or in a single material for single-stage shape recovery. When printed in multiple materials, the material or material-based design is used as the controlling factor for multi-stage shape recovery. However, when printed in a single material, it is difficult to design multi-stage shape recovery due to the lack of a controlling factor. In this research, we explore the use of geometric thickness as a controlling factor to design smart structures possessing multi-stage shape recovery using a single SMP. L-shaped hinges with thicknesses ranging from 0.3 to 2 mm were designed and printed in four different SMPs. The effect of thickness on the SMPs' response time was examined via both experiment and finite element analysis using Ansys transient thermal simulation. A method was developed to measure the response time accurately, with millisecond resolution. Temperature distribution and heat transfer in specimens during thermal activation were also simulated and discussed. Finally, a spiral square and an artificial flower consisting of a single SMP were designed and printed with appropriate thickness variation to demonstrate controlled multi-stage shape recovery. Experimental results indicated that smart structures printed in a single material with controlled thickness parameters can achieve controlled shape recovery characteristics similar to those printed with multiple materials and uniform geometric thickness. Hence, the geometric parameter can be used to increase the degree of freedom in designing future smart structures possessing complex shape recovery characteristics.
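A back-of-envelope lumped-capacitance estimate shows why thickness works as a timing knob: for a thin slab heated by convection, the thermal time constant grows linearly with thickness. The material properties and heat-transfer coefficient below are hypothetical, not those of the printed SMPs, and the estimate is only rough once the Biot number approaches 1:

```python
# Lumped-capacitance time constant of a thin SMP slab heated on one face:
# tau = rho * c_p * t / h, valid for small Biot number Bi = h * t / k.
rho, c_p, k = 1200.0, 1800.0, 0.2     # assumed density, heat capacity, conductivity (SI)
h = 50.0                              # assumed convection coefficient [W/m^2 K]

for t_mm in (0.3, 1.0, 2.0):          # hinge thicknesses from the study
    t = t_mm / 1000.0
    print(f"{t_mm} mm: Bi = {h * t / k:.3f}, tau ~ {rho * c_p * t / h:.1f} s")
```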
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
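The FLAWLESS/REFINED pipeline itself is not spelled out in the abstract; the sketch below is only a generic stand-in for the intensity-based (k-means) flagging step, with the cluster count, the minority-cluster heuristic, and the synthetic field map all assumed:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy intensity-based error detection in a B0 field map: cluster the
# field-map values and flag a small cluster far from the dominant mode
# as a candidate swap region to reinitialize.
rng = np.random.default_rng(5)
fmap = rng.normal(0.0, 10.0, size=(64, 64))       # Hz, smooth background
fmap[40:50, 40:50] += 220.0                       # simulated swap region

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(fmap.reshape(-1, 1)).reshape(fmap.shape)
minority = np.argmin(np.bincount(labels.ravel()))
error_mask = labels == minority                   # region to reinitialize
print(error_mask.sum(), "pixels flagged")
```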
Physical activity among adults with obesity: testing the Health Action Process Approach.
Parschau, Linda; Barz, Milena; Richert, Jana; Knoll, Nina; Lippke, Sonia; Schwarzer, Ralf
2014-02-01
This study tested the applicability of the Health Action Process Approach (HAPA) in a sample of obese adults in the context of physical activity. Physical activity was assessed along with motivational and volitional variables specified in the HAPA (motivational self-efficacy, outcome expectancies, risk perception, intention, maintenance self-efficacy, action planning, coping planning, recovery self-efficacy, social support) in a sample of 484 obese men and women (body mass index ≥ 30 kg/m²). Structural equation modeling showed a satisfactory fit of the HAPA model: χ²(191) = 569.93, p < .05, χ²/df = 2.98, comparative fit index = .91, normed fit index = .87, and root mean square error of approximation = .06 (90% CI [.06, .07]), explaining 30% of the variance in intention and 18% of the variance in physical activity. Motivational self-efficacy, outcome expectancies, and social support were related to intention. An association between maintenance self-efficacy and coping planning was found. Recovery self-efficacy and social support were associated with physical activity. No relationships were found between risk perception and intention or between planning and physical activity. The assumptions derived from the HAPA were partly confirmed, and the HAPA may therefore constitute a theoretical backdrop for intervention designs to promote physical activity in adults with obesity. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.
Ding, Cherng G; Jane, Ten-Der
2012-09-01
In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capability of modeling change over time for latent constructs measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; better assessment of model fit; and flexibility in specifying error covariance structures. The strength of PROC CALIS comes with technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.
Bui, Tuan V; Stifani, Nicolas; Akay, Turgay; Brownstone, Robert M
2016-01-01
The spinal cord has the capacity to coordinate motor activities such as locomotion. Following spinal transection, functional activity can be regained, to a degree, following motor training. To identify microcircuits involved in this recovery, we studied a population of mouse spinal interneurons known to receive direct afferent inputs and project to intermediate and ventral regions of the spinal cord. We demonstrate that while dI3 interneurons are not necessary for normal locomotor activity, locomotor circuits rhythmically inhibit them and dI3 interneurons can activate these circuits. Removing dI3 interneurons from spinal microcircuits by eliminating their synaptic transmission left locomotion more or less unchanged, but abolished functional recovery, indicating that dI3 interneurons are a necessary cellular substrate for motor system plasticity following transection. We suggest that dI3 interneurons compare inputs from locomotor circuits with sensory afferent inputs to compute sensory prediction errors that then modify locomotor circuits to effect motor recovery. DOI: http://dx.doi.org/10.7554/eLife.21715.001 PMID:27977000
Roy Chowdhury, Sankhanilay; Witte, Peter T; Blank, Dave H A; Alsters, Paul L; Ten Elshof, Johan E
2006-04-03
The recovery of homogeneous polyoxometallate (POM) oxidation catalysts from aqueous and non-aqueous media by a nanofiltration process using mesoporous gamma-alumina membranes is reported. The recovery of Q12[WZn3(ZnW9O34)2] (Q = [MeN(n-C8H17)3]+) from toluene-based media was quantitative within experimental error, while up to 97% of Na12[WZn3(ZnW9O34)2] could be recovered from water. The toluene-soluble POM catalyst was used repeatedly in the conversion of cyclooctene to cyclooctene oxide and separated from the product mixture after each reaction. The catalytic activity increased steadily with the number of times that the catalyst had been recycled, which was attributed to partial removal of the excess QCl that is known to have a negative influence on the catalytic activity. Differences in the permeability of the membrane for different liquid media can be attributed to viscosity differences and/or capillary condensation effects. The influence of membrane pore radius on permeability and recovery is discussed.
Structural power flow measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falter, K.J.; Keltie, R.F.
Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited, which were picked up by the accelerometers and altered the power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
Teaching Children with Hearing Loss in Reading Recovery
ERIC Educational Resources Information Center
Charlesworth, Ann; Charlesworth, Robert; Raban, Bridie; Rickards, Field
2006-01-01
This study quantitatively analyzed the structure of Reading Recovery lessons for children with hearing loss by examining and comparing the supportive interactions of three Reading Recovery teachers of 12 children with hearing loss and three Reading Recovery teachers of 12 hearing children. All of the children were in the second year of primary…
Exploring the Factor Structure of a Recovery Assessment Measure among Substance-Abusing Youth.
Gonzales, Rachel; Hernandez, Mayra; Douglas, Samantha B; Yu, Chong Ho
2015-01-01
To date, the measurement of recovery in the field of substance abuse is limited. Youth recovery from substance abuse is an important area to consider, given the complexities of such issues. The Recovery Assessment Scale (RAS) has been validated with mental health patient populations; however, its measurement characteristics have not been examined for individuals in substance abuse treatment. The current study explored the factor structure of the RAS with a sample of 80 substance-abusing youth who participated in a pilot aftercare study (mean age = 20.5 years, SD = 3.5; 71.3% male). Reliability analysis showed an internal consistency of α = .90 for the entire RAS measure among the youth sample. Results of exploratory factor analysis identified the following four factors: personal determination, skills for recovery, self-control in recovery, and social support/moving beyond recovery. The RAS also demonstrated sound convergent and divergent validity in comparison to other validated measures of functioning, sobriety, and well-being. Collectively, the results support that the RAS has adequate psychometric properties for measuring recovery among substance-abusing youth.
Factors influencing recovery of left ventricular structure in patients with chronic heart failure.
Duan, Hong-Yan; Wu, Xue-Si; Han, Zhi-Hong; Guo, Yong-Fang; Fang, Shan-Juan; Zhang, Xiao-Xia; Wang, Chun-Mei
2011-09-01
Angiotensin converting enzyme (ACE) inhibitors and β-blockers (βB) have beneficial effects on left ventricular (LV) remodeling, alleviate symptoms, and reduce morbidity and mortality in patients with chronic heart failure (CHF). However, the correlation between the dosages of ACE inhibitors and βB and recovery of LV structure remains controversial. Clinical factors associated with recovery of normal ventricular structure in CHF patients receiving medical therapy are poorly defined. Here we aimed to identify variables associated with recovery of normal or near-normal structure in patients with CHF. We recruited 231 consecutive CHF outpatients with left ventricular ejection fraction (LVEF) ≤ 40% and left ventricular end-diastolic diameter (LVEDD) > 55/50 mm (male/female) who were receiving optimal pharmacotherapy between January 2001 and June 2009, and followed them until December 31, 2009. They were divided into three groups according to LVEDD and whether they were still alive at final follow-up: group A, LVEDD ≤ 60/55 mm (male/female); group B, LVEDD > 60/55 mm (male/female); and group C, those who died before final follow-up. Excluding group C, univariate analysis was performed, followed by logistic multivariate analysis, to determine the predictors of recovery of LV structure. A total of 217 patients completed follow-up, and the median follow-up time was 35 months (range, 6-108). Twenty-five patients died during that period; the all-cause mortality rate was 11.5%. Group A showed the following clinical characteristics: the shortest duration of disease and shortest QRS width, the lowest N-terminal brain natriuretic peptide (NT-proBNP) at baseline, the highest dose of βB usage, the highest systolic blood pressure (SBP) and diastolic blood pressure (DBP), and the lowest New York Heart Association (NYHA) classification, serum creatinine, uric acid, total bilirubin and NT-proBNP after treatment. Logistic multivariate analysis was performed according to recovery or non-recovery of LV structure. The data showed that LVEF at follow-up (P = 0.013), mitral regurgitation at baseline (P = 0.020), LVEDD at baseline (P = 0.031), and βB dosage (P = 0.041) were independently associated with recovery of LV diameter. Our study suggests that four clinical variables may predict recovery of LV structure to normal or near-normal values with optimal drug therapy alone, and may be used to discriminate between patients who should receive optimal pharmacotherapy and those who require more aggressive therapeutic interventions.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Evaluation of micro-GPS receivers for tracking small-bodied mammals
Shipley, Lisa A.; Forbey, Jennifer S.; Olsoy, Peter J.
2017-01-01
GPS telemetry markedly enhances the temporal and spatial resolution of animal location data, and recent advances in micro-GPS receivers permit their deployment on small mammals. One such technological advance, snapshot technology, allows for improved battery life by reducing the time to first fix via postponing recovery of satellite ephemeris (satellite location) data and processing of locations. However, no previous work has employed snapshot technology for small, terrestrial mammals. We evaluated performance of two types of micro-GPS (< 20 g) receivers (traditional and snapshot) on a small, semi-fossorial lagomorph, the pygmy rabbit (Brachylagus idahoensis), to understand how GPS errors might influence fine-scale assessments of space use and habitat selection. During stationary tests, microtopography (i.e., burrows) and satellite geometry had the largest influence on GPS fix success rate (FSR) and location error (LE). There was no difference between FSR while animals wore the GPS collars above ground (determined via light sensors) and FSR generated during stationary, above-ground trials, suggesting that animal behavior other than burrowing did not markedly influence micro-GPS errors. In our study, traditional micro-GPS receivers demonstrated similar FSR and LE to snapshot receivers; however, snapshot receivers operated inconsistently due to battery and software failures. In contrast, the initial traditional receivers deployed on animals experienced some breakages, but a modified collar design consistently functioned as expected. If such problems were resolved, snapshot technology could reduce the tradeoff between fix interval and battery life that occurs with traditional micro-GPS receivers. Our results suggest that micro-GPS receivers are capable of addressing questions about space use and resource selection by small mammals, but that additional techniques might be needed to identify use of habitat structures (e.g., burrows, tree cavities, rock crevices) that could affect micro-GPS performance and bias study results. PMID:28301495
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomaly derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slope of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in the current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. Radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to range observations when deriving sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a suitable window is an effective strategy to avoid introducing extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) Ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and extra tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
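The uniform-weight along-track smoothing recommended for the ionospheric and sea-state-bias corrections is, at its simplest, a running mean; below is a minimal sketch (the window length and all names are illustrative, not taken from the paper).

import numpy as np

def smooth_along_track(correction, window=21):
    # Uniform-weight running mean of an along-track correction series.
    # Smoothing the correction before differencing it into slopes keeps
    # its inherent noise from producing abnormal sea surface slopes.
    kernel = np.ones(window) / window
    return np.convolve(correction, kernel, mode="same")

# Slopes then follow from along-track differences of the smoothed series:
# slope_microrad = np.diff(smoothed) / spacing_m * 1e6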
Accounting for Relatedness in Family Based Genetic Association Studies
McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.
2007-01-01
Objective Assess the differences in point estimates, power and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness for a range of possible family based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening, although it comes at the cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration. PMID:17570925
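The inflation the authors report is easy to reproduce in a toy simulation, sketched below; this is not their variance-components code, and the sibship size, heritability, and allele frequency are invented. Siblings share both alleles and a family effect, so ordinary least squares underestimates the standard error of a null SNP's effect and rejects too often.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sibship_type1_rate(n_fam=200, sibs=4, h2=0.6, p=0.3,
                       reps=1000, alpha=0.05):
    # Empirical type 1 error of OLS that ignores sibship structure:
    # a SNP with no true effect is tested against a heritable trait.
    hits = 0
    rows = np.arange(n_fam)[:, None]
    for _ in range(reps):
        parents = rng.binomial(1, p, size=(n_fam, 4))  # 4 parental alleles
        pat = parents[rows, rng.integers(0, 2, size=(n_fam, sibs))]
        mat = parents[rows, 2 + rng.integers(0, 2, size=(n_fam, sibs))]
        g = (pat + mat).ravel().astype(float)          # sib genotypes, 0-2
        fam = np.repeat(rng.normal(0.0, np.sqrt(h2), n_fam), sibs)
        y = fam + rng.normal(0.0, np.sqrt(1.0 - h2), n_fam * sibs)
        hits += stats.linregress(g, y).pvalue < alpha
    return hits / reps

print(sibship_type1_rate())  # noticeably above the nominal 0.05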
An investigation of reports of Controlled Flight Toward Terrain (CFTT)
NASA Technical Reports Server (NTRS)
Porter, R. F.; Loomis, J. P.
1981-01-01
Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a more diverse nature and included a few instances of gross deviations from assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
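The augmentation step itself is compact; here is a minimal numerical sketch under stated assumptions (a damped Gauss-Newton step with placeholder dimensions; the authors' forward model, meshing, and regularization are not reproduced).

import numpy as np

def joint_update(J_sigma, J_elec, residual, lam=1e-3):
    # One damped Gauss-Newton step recovering conductivity and
    # electrode-position updates together.
    #   J_sigma : (n_meas, n_cond) Jacobian w.r.t. conductivity
    #   J_elec  : (n_meas, n_pos)  Jacobian w.r.t. electrode movement
    #   residual: (n_meas,)        measured minus modelled voltages
    J = np.hstack([J_sigma, J_elec])          # augmented Jacobian
    H = J.T @ J + lam * np.eye(J.shape[1])    # damped normal matrix
    delta = np.linalg.solve(H, J.T @ residual)
    n_cond = J_sigma.shape[1]
    return delta[:n_cond], delta[n_cond:]     # (d_conductivity, d_positions)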
Impact of cell size on inventory and mapping errors in a cellular geographic information system
NASA Technical Reports Server (NTRS)
Wehde, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The effect of grid position was found to be insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized, and two generations of models were tested under simplifying assumptions.
NASA Astrophysics Data System (ADS)
Wang, Chang; Wu, Hong-lin; Song, Yun-fei; He, Xing; Yang, Yan-qiang; Tan, Duo-wang
2015-11-01
A modified CARS technique with an intense nonresonant femtosecond laser is presented to drive the structural deformation of liquid nitromethane molecules and track their structural relaxation process. The CARS spectra reveal that the internal rotation of the molecule can couple with the CN symmetric stretching vibration and the molecules undergo ultrafast structural deformation of the CH3 groups from 'opened umbrella' to 'closed umbrella' shape, and then experience a structural recovery process within 720 fs.
Vistisen, Bodil; Mu, Huiling; Høy, Carl-Erik
2006-09-01
Specific structured triacylglycerols, MLM (M = medium-chain fatty acid, L = long-chain fatty acid), rapidly deliver energy and long-chain fatty acids to the body and are used for longer periods in human enteral feeding. In the present study rats were fed diets of 10 wt% MLM or LLL (L = oleic acid [18:1 n-9], M = caprylic acid [8:0]) for 2 wk. Then lymph was collected 24 h following administration of a single bolus of 13C-labeled MLM or LLL. The total lymphatic recovery of exogenous 18:1 n-9 24 h after administration of a single bolus of MLM or LLL was similar in rats on the LLL diet (43% and 45%, respectively). However, the recovery of exogenous 18:1 n-9 was higher after a single bolus of MLM compared with a bolus of LLL in rats on the MLM diet (40% and 24%, respectively, P = 0.009). The recovery of lymphatic 18:1 n-9 of the LLL bolus tended to depend on the diet triacylglycerol structure and composition (P = 0.07). This study demonstrated that with a diet containing specific structured triacylglycerol, the lymphatic recovery of 18:1 n-9 after a single bolus of fat was dependent on the triacylglycerol structure of the bolus. This indicates that the lymphatic recovery of long-chain fatty acids from a single meal depends on the overall long-chain fatty acid composition of the habitual diet. This could have implications for enteral feeding for longer periods.
Monitoring of self-healing composites: a nonlinear ultrasound approach
NASA Astrophysics Data System (ADS)
Malfense Fierro, Gian-Piero; Pinto, Fulvio; Dello Iacono, Stefania; Martone, Alfonso; Amendola, Eugenio; Meo, Michele
2017-11-01
Self-healing composites using a thermally mendable polymer based on the Diels-Alder reaction were fabricated and subjected to multiple damage loads. Unlike traditional destructive methods, this work presents a nonlinear ultrasound technique to evaluate the structural recovery of the proposed self-healing laminate structures. The results were compared to computer tomography and linear ultrasound methods. The laminates were subjected to multiple loading and healing cycles, and the induced damage and recovery at each stage were evaluated. The results highlight the advantage of using a nonlinear-based methodology to monitor the structural recovery of reversibly cross-linked epoxy with efficient recycling and multiple self-healing capability.
ERIC Educational Resources Information Center
Dando, Coral J.; Ormerod, Thomas C.; Wilcock, Rachel; Milne, Rebecca
2011-01-01
An experimental mock eyewitness study is reported that compared free and reverse-order recall of an empirically informed scripted crime event. Proponents of reverse-order recall suggest it facilitates recovery of script-incidental information and increases the total amount of information recalled. However, compared with free recall it was found to…
2011-02-01
µECD = gas chromatography - micro electron capture detector; HPAH = high molecular weight polyaromatic hydrocarbon; HOC = hydrophobic organic compound; IR = ...; ... hydrocarbon; PCB = polychlorinated biphenyl; PE = polyethylene; PED = polyethylene devices; PFC = perfluorinated chemical; POM = polyoxymethylene; PRC = performance reference compound; RMSE = root mean squared error; SPME = solid phase micro extraction; SERDP = Strategic Environmental Research and Development Program
Error Awareness and Recovery in Conversational Spoken Language Interfaces
2007-05-01
...important step towards constructing autonomously self-improving systems. Furthermore, we developed a scalable, data-driven approach that allows a system... problems in spoken dialog (as well as other interactive systems) and constitutes an important step towards building autonomously self-improving... implicitly-supervised learning approach is applicable to other problems, and represents an important step towards developing autonomous, self...
USDA-ARS?s Scientific Manuscript database
We investigated measurement error in the self-reported diets of US Hispanics/Latinos, who are prone to obesity and related comorbidities, by background (Central American, Cuban, Dominican, Mexican, Puerto Rican, and South American) in 2010–2012. In 477 participants aged 18–74 years, doubly labeled w...
Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.
2004-01-01
The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.
Effects of structural error on the estimates of parameters of dynamical systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
System for Configuring Modular Telemetry Transponders
NASA Technical Reports Server (NTRS)
Varnavas, Kosta A. (Inventor); Sims, William Herbert, III (Inventor)
2014-01-01
A system for configuring telemetry transponder cards uses a database of error checking protocol data structures, each containing data to implement at least one CCSDS protocol algorithm. Using a user interface, a user selects at least one telemetry specific error checking protocol from the database. A compiler configures an FPGA with the data from the data structures to implement the error checking protocol.
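As a concrete illustration of what one such database entry might implement, the sketch below pairs a protocol registry (a hypothetical structure, invented here) with CRC-16/CCITT-FALSE, the checksum form specified for the CCSDS frame error control field; a software model like this is the usual reference against which an FPGA implementation is checked.

def crc16_ccitt_false(data: bytes, crc: int = 0xFFFF) -> int:
    # CRC-16 with polynomial 0x1021 and all-ones preset, the form used
    # for the CCSDS frame error control field.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

# Hypothetical registry mapping a user-selected protocol name to the
# routine whose behavior the FPGA configuration must reproduce.
PROTOCOL_DB = {"ccsds-crc16": crc16_ccitt_false}

assert PROTOCOL_DB["ccsds-crc16"](b"123456789") == 0x29B1  # standard check value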
Decoupled recovery of energy and momentum with correction of n = 2 error fields
Paz-Soldan, Carlos A.; Logan, Nikolas C.; Lanctot, Matthew J.; ...
2015-07-06
Experiments applying known n = 2 “proxy” error fields (EFs) find that the rotation braking introduced by the proxy EF cannot be completely alleviated through optimal n = 2 correction with poorly matched poloidal spectra. This imperfect performance recovery demonstrates the importance of correcting multiple components of the n = 2 field spectrum and is in contrast to previous results with n = 1 EFs despite similar execution. Measured optimal n = 2 proxy EF correction currents are consistent with those required to null dominant mode coupling to the resonant surfaces and minimize the neoclassical toroidal viscosity (NTV) torque, calculated using ideal MHD plasma response computation. Unlike rotation braking, density pumpout can be fully corrected despite poorly matched spectra, indicating density pumpout is driven only by a single component proportional to the resonant coupling. Through precise n = 2 spectral control, density pumpout and rotation braking can thus be decoupled. Rotation braking with n = 2 fields is also found to be proportional to the level of concurrent toroidal rotation, consistent with NTV theory. Lastly, plasmas with modest countercurrent rotation are insensitive to the n = 2 field, with neither rotation braking nor density pumpout observed.
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the strengths of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset. Pre-equalization is placed before the carrier recovery loop so that the carrier recovery loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show the constellation diagrams and the bit error rate curves, both of which indicate that the proposed joint equalization algorithm outperforms the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
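The pre-equalizer's key property, insensitivity to frequency offset, comes from the constant modulus criterion, which ignores phase entirely; a minimal CMA tap-update sketch follows (tap count, step size, and modulus target are illustrative; the MMA post-equalizer and carrier loop are omitted).

import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, R2=1.0):
    # Constant modulus algorithm (CMA) pre-equalizer: penalize deviation
    # of the output modulus from R2; no carrier phase reference needed.
    x = np.asarray(x, dtype=complex)
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                  # center-spike initialization
    y = np.zeros_like(x)
    for n in range(num_taps, len(x)):
        xn = x[n - num_taps:n][::-1]        # tap-delay-line regressor
        y[n] = np.vdot(w, xn)               # output y = w^H x
        e = y[n] * (abs(y[n]) ** 2 - R2)    # CMA error term
        w -= mu * np.conj(e) * xn           # stochastic-gradient update
    return y, w

Because the CMA cost depends only on the output modulus, a constellation rotating under residual carrier offset does not disturb convergence; phase is then handled downstream by the carrier recovery loop and the post-equalizer.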
Optimization of processing parameters of UAV integral structural components based on yield response
NASA Astrophysics Data System (ADS)
Chen, Yunsheng
2018-05-01
To improve the overall strength of an unmanned aerial vehicle (UAV), it is necessary to optimize the machining parameters of UAV structural components, which are affected by the initial residual stress introduced during machining. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining of UAV integral structural components. A prediction model of workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied. Based on the stress state of the integral component, the yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analyzed. The simulation results show that this method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling. Machining error is reduced, and deformation prediction and error compensation of UAV integral structural parts are realized, thus improving machining quality.
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid- and near-infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
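The abstract does not spell out ECPR's estimator, so the sketch below instead shows the standard baseline for exploiting a measurement error covariance matrix in linear calibration: generalized least squares, which down-weights correlated (non-iid) noise. It is a related reference method, not the ECPR algorithm itself.

import numpy as np

def gls_fit(X, y, ecm):
    # Generalized least-squares regression weighted by the inverse of the
    # measurement error covariance matrix (ECM). With ecm proportional to
    # the identity (iid noise) this reduces to ordinary least squares;
    # under structured noise it is the more efficient estimator.
    W = np.linalg.inv(ecm)                  # precision matrix
    H = X.T @ W @ X
    return np.linalg.solve(H, X.T @ W @ y)  # regression coefficients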
Integrating automated structured analysis and design with Ada programming support environments
NASA Technical Reports Server (NTRS)
Hecht, Alan; Simmons, Andy
1986-01-01
Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It can also support the creation of reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
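Once the embedded transition probabilities, mean holding times, and per-state reward rates are estimated from the error data, the long-run reward rate of the semi-Markov model follows directly; below is a sketch with invented numbers (three states standing for normal operation, degraded operation, and recovery).

import numpy as np

def semi_markov_reward_rate(P, hold, reward):
    # Long-run reward rate of a semi-Markov process: weight each state's
    # reward rate by its embedded-chain visit frequency and holding time.
    vals, vecs = np.linalg.eig(P.T)
    nu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])  # nu = nu @ P
    nu /= nu.sum()
    return float(nu @ (reward * hold) / (nu @ hold))

P = np.array([[0.0, 0.7, 0.3],         # embedded transition matrix
              [0.6, 0.0, 0.4],
              [1.0, 0.0, 0.0]])
hold = np.array([100.0, 20.0, 5.0])    # mean holding time per state
reward = np.array([1.0, 0.5, 0.0])     # service-based reward rate per state
print(semi_markov_reward_rate(P, hold, reward))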
NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.
Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen
2017-04-01
We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation was wean from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors, typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Cabilan, C J; Kynoch, Kathryn
2017-09-01
Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors; hence, nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. To synthesize the best available evidence on nurses' experiences as second victims, and explore their experiences of the support they receive and the support they need. Participants were registered nurses who had made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors, conducted in any health care setting worldwide. The qualitative studies included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, limited to the English language and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. There were nine qualitative studies included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) the error brings a considerable emotional burden to the nurse that can last for a long time, and in some cases can alter nurses' perspectives and disrupt workplace relations; (ii) the type of support received influences how the nurse will feel about the error; often nurses choose to speak with colleagues who have had similar experiences, and strategies need to focus on helping them to overcome the negative emotions associated with being a second victim; (iii) after the error, nurses are confronted with the dilemma of disclosure, which is determined by the following factors: how nurses feel about the error, harm to the patient, the support available to the nurse, and how errors were dealt with in the past; and (iv) reconciliation is every nurse's endeavor, achieved predominantly by accepting fallibility, followed by acts of restitution, such as making positive changes in practice and disclosure to attain closure (see "Summary of findings"). Adverse errors were distressing for nurses, but they did not always receive the support they needed from colleagues. The lack of support had a significant impact on nurses' decisions on whether to disclose the error and on their recovery process. Therefore, a good support system is imperative in alleviating the emotional burden, promoting the disclosure process, and assisting nurses with reconciliation. This review also highlighted research gaps concerning the characteristics of the support system preferred by nurses, and the scarcity of studies worldwide.
Statistical analysis of AFE GN&C aeropass performance
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1990-01-01
Performance of the guidance, navigation, and control (GN&C) system used on the Aeroassist Flight Experiment (AFE) spacecraft has been studied with Monte Carlo techniques. The performance of the AFE GN&C is investigated with a 6-DOF numerical dynamic model which includes a Global Reference Atmospheric Model (GRAM) and a gravitational model with oblateness corrections. The study considers all the uncertainties due to the environment and the system itself. In the AFE's aeropass phase, perturbations on system performance arise from an error space with more than 20 dimensions of correlated and uncorrelated error sources. The goal of this study is to determine, in a statistical sense, how much flight path angle error can be tolerated at entry interface (EI) while retaining acceptable delta-V capability at exit to position the AFE spacecraft for recovery. Assuming there is fuel available to produce 380 ft/sec of delta-V at atmospheric exit, a 3-sigma standard deviation in flight path angle error of 0.04 degrees at EI would result in a 98-percent probability of mission success.
Lin, Le; Wang, Ying; Liu, Tianxue
2017-01-01
Much of the literature on recovery focuses on the economy, the physical environment and infrastructure at a macro level, which may ignore the personal experiences of affected individuals during recovery. This paper combines internal factors at a micro level and external factors at a macro level into a model for understanding perception of recovery (PoR). This study focuses on areas devastated by the 2008 Wenchuan earthquake in China. With respect to three recovery-related aspects (house recovery condition (HRC), family recovery power (FRP) and reconstruction investment (RI)), structural equation modeling (SEM) was applied. It was found that the three aspects (FRP, HRC and RI) effectively explain how earthquake-affected households perceive recovery. Internal factors associated with FRP contributed the most to favourable PoR, followed by external factors associated with HRC. Findings identified that, for PoR, the importance of active recovery within households outweighed an advantageous house recovery condition. At the same time, households trapped in unfavourable external conditions would invest more in housing recovery, which results in wealth accumulation and improved quality of life, leading to a high level of PoR. In addition, schooling in households showed a negative effect on improving PoR. This research contributes to current debates around post-disaster permanent housing policy. It is implied that a one-size-fits-all policy in disaster recovery may not be effective and more specific assistance should be provided to those people in need. PMID:28854217
Reading Recovery--German Style. Short Report.
ERIC Educational Resources Information Center
Chambers, Gary N.
1995-01-01
This paper describes the Reading Recovery program implemented at a special needs school in Kiel, Germany. The program is intended to offer primary-grade pupils a second chance to obtain reading and writing skills. The highly structured reading program involves isolation of difficulties, work on word structure, learning consonant-vowel combinations…
Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter
2016-01-01
Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45 second self-test, which can be conducted anywhere. A criterion validity, test re-test study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated this test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height2 (r2 = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a negative significant correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI and sex, age and height increased the explained variability up to 53% with a cross validation (CV) error of 0.532 L·min-1 and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height2 and age2; this had an adjusted r2 = 0.59, a CV error of 0.495 L·min-1 and substantial repeatability (ICC = 0.93). It also had a higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935
NASA Astrophysics Data System (ADS)
Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.
2017-06-01
In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to the radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
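The structure of such an energy balance can be written compactly; the following is a generic steady-state form under simplifying assumptions (a cylindrical fiber, linearized longwave exchange), not the paper's full model, with S incoming shortwave, \alpha albedo, \varepsilon emissivity, L^{\downarrow} incoming longwave, \sigma the Stefan-Boltzmann constant, h the convective heat transfer coefficient, and T_f, T_a the fiber and air temperatures:

(1 - \alpha)\, S + \varepsilon \left( L^{\downarrow} - \sigma T_f^4 \right) = h \left( T_f - T_a \right),
\qquad
\Delta T_{\mathrm{rad}} = T_f - T_a \approx \frac{(1 - \alpha)\, S + \varepsilon \left( L^{\downarrow} - \sigma T_a^4 \right)}{h + 4\, \varepsilon\, \sigma T_a^3}.

The linearized form makes the reported behavior plausible on inspection: the radiation error grows with incoming shortwave radiation and shrinks as wind speed raises h, matching the worst case found under strong sun and very weak winds.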
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.
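Setting the motion-adaptive transform aside, the underlying compressed-sensing recovery loop can be sketched with a plain temporal Fourier transform as the sparsifying transform, a common non-adaptive baseline rather than the authors' algorithm; the step size and threshold below are illustrative.

import numpy as np

def ista_dynamic_mri(y, mask, lam=0.01, step=1.0, iters=50):
    # Recover a dynamic image series x (nx, ny, nt) from undersampled
    # k-space y (zeros where unsampled; mask is the sampling pattern) by
    # iterative soft-thresholding: enforce sparsity of each voxel's time
    # profile in the temporal Fourier domain.
    x = np.fft.ifft2(y, axes=(0, 1))                  # zero-filled start
    for _ in range(iters):
        # (Scaled) gradient step on the data-fidelity term ||M F x - y||^2
        resid = mask * np.fft.fft2(x, axes=(0, 1)) - y
        x = x - step * np.fft.ifft2(resid, axes=(0, 1))
        # Proximal step: soft-threshold temporal Fourier coefficients
        s = np.fft.fft(x, axis=2)
        s = np.exp(1j * np.angle(s)) * np.maximum(np.abs(s) - lam, 0.0)
        x = np.fft.ifft(s, axis=2)
    return x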
Coral reef recovery dynamics in a changing world
NASA Astrophysics Data System (ADS)
Graham, N. A. J.; Nash, K. L.; Kool, J. T.
2011-06-01
Coral reef ecosystems are degrading through multiple disturbances that are becoming more frequent and severe. The complexities of this degradation have been studied in detail, but little work has assessed characteristics that allow reefs to bounce back and recover between pulse disturbance events. We quantitatively review recovery rates of coral cover from pulse disturbance events among 48 different reef locations, testing the relative roles of disturbance characteristics, reef characteristics, connectivity and anthropogenic influences. Reefs in the western Pacific Ocean had the fastest recovery, whereas reefs in the geographically isolated eastern Pacific Ocean were slowest to recover, reflecting regional differences in coral composition, fish functional diversity and geographic isolation. Disturbances that opened up large areas of benthic space recovered quickly, potentially because of nonlinear recovery where recruitment rates were high. The type of disturbance had a limited effect on subsequent rates of reef recovery, although recovery was faster following crown-of-thorns starfish outbreaks. This inconsequential role of disturbance type may be in part due to the role of unaltered structural complexity in maintaining key reef processes, such as recruitment and herbivory. Few studies explicitly recorded potential ecological determinants of recovery, such as recruitment rates, structural complexity of habitat and the functional composition of reef-associated fish. There was some evidence of slower recovery rates within protected areas compared with other management systems and fished areas, which may reflect the higher initial coral cover in protected areas rather than reflecting a management effect. A better understanding of the driving role of processes, structural complexity and diversity on recovery may enable more appropriate management actions that support coral-dominated ecosystems in our changing climate.
Johnstone, Victoria P A; Wright, David K; Wong, Kendrew; O'Brien, Terence J; Rajan, Ramesh; Shultz, Sandy R
2015-09-01
Traumatic brain injury (TBI) is a leading cause of death worldwide. In recent studies, we have shown that experimental TBI caused an immediate (24-h post) suppression of neuronal processing, especially in supragranular cortical layers. We now examine the long-term effects of experimental TBI on the sensory cortex and how these changes may contribute to a range of TBI morbidities. Adult male Sprague-Dawley rats received either a moderate lateral fluid percussion injury (n=14) or a sham surgery (n=12) and 12 weeks of recovery before behavioral assessment, magnetic resonance imaging, and electrophysiological recordings from the barrel cortex. TBI rats demonstrated sensorimotor deficits, cognitive impairments, and anxiety-like behavior, and this was associated with significant atrophy of the barrel cortex and other brain structures. Extracellular recordings from the ipsilateral barrel cortex revealed normal neuronal responsiveness, and diffusion tensor MRI showed increased fractional anisotropy, axial diffusivity, and tract density within this region. These findings suggest that long-term recovery of neuronal responsiveness is due to structural reorganization within this region. Therefore, it is likely that long-term structural and functional changes within the sensory cortex post-TBI may allow for recovery of neuronal responsiveness, but that this recovery does not remediate all behavioral deficits.
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
Valued social roles and measuring mental health recovery: examining the structure of the tapestry.
Hunt, Marcia G; Stein, Catherine H
2012-12-01
The complexity of the concept of mental health recovery often makes it difficult to systematically examine recovery processes and outcomes. The concept of social role is inherent within many acknowledged dimensions of recovery such as community integration, family relationships, and peer support and can deepen our understanding of these dimensions when social roles are operationalized in ways that directly relate to recovery research and practice. This paper reviews seminal social role theories and operationalizes aspects of social roles: role investment, role perception, role loss, and role gain. The paper provides a critical analysis of the ability of social role concepts to inform mental health recovery research and practice. PubMed and PsychInfo databases were used for the literature review. A more thorough examination of social role aspects allows for a richer picture of recovery domains that are structured by the concept social roles. Increasing understanding of consumers' investment and changes in particular roles, perceptions of consumers' role performance relative to peers, and consumers' hopes for the future with regards to the different roles that they occupy could generate tangible, pragmatic approaches in addressing complex recovery domains. This deeper understanding allows a more nuanced approach to recovery-related movements in mental health system transformation.
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
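Of the five categories, circular assignments are the most mechanical to detect: treat each transitive relationship as a directed graph and search for back edges. The sketch below does this for a toy part_of edge list; the edge data and function names are invented, not the FMA's API.

def find_cycles(edges):
    # Flag circular part_of chains in a (child, parent) edge list using
    # recursive depth-first search with three-color marking.
    graph = {}
    for child, parent in edges:
        graph.setdefault(child, []).append(parent)
    WHITE, GRAY, BLACK = 0, 1, 2
    color, cycles = {}, []

    def visit(node, path):
        color[node] = GRAY                      # on the current DFS path
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:   # back edge: cycle found
                cycles.append(path[path.index(nxt):] + [nxt])
            elif color.get(nxt, WHITE) == WHITE:
                visit(nxt, path + [nxt])
        color[node] = BLACK                     # fully explored

    for node in list(graph):
        if color.get(node, WHITE) == WHITE:
            visit(node, [node])
    return cycles

print(find_cycles([("hand", "upper limb"), ("upper limb", "body"),
                   ("body", "hand")]))  # reports the circular chain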
Application study on aircraft structures of CFRP laminates with embedded SMA foils
NASA Astrophysics Data System (ADS)
Ogisu, Toshimichi; Nomura, Masato; Ando, Norio; Takaki, Junji; Takeda, Nobuo
2002-07-01
This paper reports research results from an application study of smart materials and structures using shape memory alloy (SMA) foils. First, the authors measured the recovery strain of CFRP laminates generated by the recovery stress of the pre-strained SMA foils. Then, quasi-static load-unload tests were conducted using several kinds of quasi-isotropic CFRP laminates with embedded SMA foils. The micro-mechanics of damage behavior, including the effects of the recovery strain and the first transverse crack strain, were discussed. Improvements of up to 40 percent in the onset strain of transverse cracks and up to 60 percent in the onset strain of delamination were achieved for CFRP laminates with embedded pre-strained SMA foils compared with standard CFRP laminates. Furthermore, the authors conducted structural element tests for application to actual structures. The testing and manufacturing techniques for the structural element specimens were established.
Wisdom in Medicine: What Helps Physicians After a Medical Error?
Plews-Ogan, Margaret; May, Natalie; Owens, Justine; Ardelt, Monika; Shapiro, Jo; Bell, Sigall K
2016-02-01
Confronting medical error openly is critical to organizational learning, but less is known about what helps individual clinicians learn and adapt positively after making a harmful mistake. Understanding what factors help doctors gain wisdom can inform educational and peer support programs, and may facilitate the development of specific tools to assist doctors after harmful errors occur. Using "posttraumatic growth" as a model, the authors conducted semistructured interviews (2009-2011) with 61 physicians who had made a serious medical error. Interviews were recorded, professionally transcribed, and coded by two study team members (kappa 0.8) using principles of grounded theory and NVivo software. Coders also scored interviewees as wisdom exemplars or nonexemplars based on Ardelt's three-dimensional wisdom model. Of the 61 physicians interviewed, 33 (54%) were male, and on average, eight years had elapsed since the error. Wisdom exemplars were more likely to report disclosing the error to the patient/family (69%) than nonexemplars (38%); P < .03. Fewer than 10% of all participants reported receiving disclosure training. Investigators identified eight themes reflecting what helped physician wisdom exemplars cope positively: talking about it, disclosure and apology, forgiveness, a moral context, dealing with imperfection, learning/becoming an expert, preventing recurrences/improving teamwork, and helping others/teaching. The path forged by doctors who coped well with medical error highlights specific ways to help clinicians move through this difficult experience so that they avoid devastating professional outcomes and have the best chance of not just recovery but positive growth.
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
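In its simplest gridded form, the structural complexity metric at stake here, surface rugosity, is the ratio of true (triangulated) surface area to planar area; the sketch below computes it from a height grid (the cell size and test data are illustrative).

import numpy as np

def rugosity(z, cell=0.01):
    # Surface rugosity of a gridded height field z (metres): split each
    # grid square into two triangles and divide the total triangulated
    # surface area by the planar (projected) area. A perfectly flat
    # surface gives 1.0; rugosity grows with structural complexity.
    area = 0.0
    for i in range(z.shape[0] - 1):
        for j in range(z.shape[1] - 1):
            p00 = np.array([0.0, 0.0, z[i, j]])
            p10 = np.array([cell, 0.0, z[i + 1, j]])
            p01 = np.array([0.0, cell, z[i, j + 1]])
            p11 = np.array([cell, cell, z[i + 1, j + 1]])
            area += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            area += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    planar = cell ** 2 * (z.shape[0] - 1) * (z.shape[1] - 1)
    return area / planar

print(rugosity(np.zeros((5, 5))))  # flat grid -> 1.0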
NASA Astrophysics Data System (ADS)
Shi, Mingjie; Liu, Junjie; Zhao, Maosheng; Yu, Yifan; Saatchi, Sassan
2017-12-01
The long-term impact of Amazonian drought on canopy structure has been observed in ground and remote sensing measurements. However, it is still unclear whether it is caused by biotic (e.g., plant structure damage) or environmental (e.g., water deficiency) factors. We used the Community Land Model version 4.5 (CLM4.5) and radar backscatter observations from the SeaWinds Scatterometer on board the QuikSCAT (QSCAT) satellite to investigate the relative roles of biotic and environmental factors in controlling the forest canopy disturbance and recovery processes after the 2005 Amazonian drought. We validated the CLM4.5 simulation of the drought impact and the recovery of the leaf carbon (C) pool, an indicator of canopy structure, over southwestern Amazonia with QSCAT backscatter observations, which are sensitive to canopy structure change. We found that the leaf C pool simulated by CLM4.5 recovered to the 2000-2009 mean level (343 g C m-2) in 3 years after a sharp decrease in 2005, consistent with the slow recovery observed by QSCAT. Through sensitivity experiments, we found that the slow C recovery was primarily due to biotic factors represented by the canopy damage and reduction of plant C pools. The recovery of soil water and the coupling between water and C pools, an environmental factor, contributes only 24% to the leaf C recovery. The results showed (1) the strength of scatterometer backscatter measurements in capturing canopy damage over tropical forests and in validating C cycle models, and (2) that biotic factors play the dominant role in regulating the drought-induced disturbance and persistent canopy changes in CLM4.5.
NASA Technical Reports Server (NTRS)
Schutz, Bob E.; Baker, Gregory A.
1997-01-01
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory-resident and out-of-core parallel linear algebra techniques, along with data-parallel batch algorithms, form the foundation of the least squares application structure. A secondary topic includes the adoption of object-oriented programming techniques to enhance the modularity and reusability of code. Applications implementing the parallel and object-oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory-resident gradiometer application exhibits an overall performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
Design and Implementation of a new Autonomous Sensor Fish to Support Advanced Hydropower Development
Deng, Zhiqun; Lu, Jun; Myjak, Mitchell J.; ...
2014-11-04
Acceleration in the development of additional conventional hydropower requires tools and methods to perform laboratory and in-field validation of turbine performance and fish passage claims. The new-generation Sensor Fish has been developed with more capabilities to accommodate a wider range of users over a wider range of turbine designs and operating environments. It provides in situ measurements of three-dimensional (3D) accelerations, 3D rotational velocities, 3D orientation, pressure, and temperature at a sampling frequency of 2048 Hz. It also has an automatic flotation system and a built-in radio-frequency transmitter for recovery. The relative errors of the pressure, acceleration, and rotational velocity measurements were within ±2%, ±5%, and ±5%, respectively. The accuracy of orientation was within ±4°, and the accuracy of temperature was ±2 °C. It is being deployed to evaluate the biological effects of turbines and other hydraulic structures in several countries.
Simmons, B R; Chukwumerije, O; Stewart, J T
1997-11-01
13-Cis retinoic acid (Accutane) was extracted from cream, gel, capsule and beadlet dosage forms using supercritical carbon dioxide modified with 5% methanol as the mobile phase. The pump pressure and the extraction chamber and restrictor temperature were experimentally optimized at 325 atm and 45 degrees C, respectively. A 2.5-min static and 5-min dynamic extraction time were used. The supercritical fluid extraction (SFE) eluent was trapped in methanol, injected into the high-performance liquid chromatographic (HPLC) system, and quantitated by ultraviolet detection at 360 nm. Application of the SFE method to spiked placebo dosage forms gave 13-cis retinoic acid recoveries of 98.8, 98.9, 98.8 and 100% for the cream, gel, capsule and beadlet, respectively, with R.S.D.s in the range 0.6-0.9% (n = 4). Inter-day percent error and precision of the extraction were 1.1-2.0 and 0.2-2.4% (n = 3), respectively, and intra-day percent error and precision were 1.0-3.0 and 0.3-2.1% (n = 8), respectively. Percent error and precision data for spiked celite samples in the 0.05-1.0 microgram ml-1 range were 0.59-4.75 and 1.8-2.1% (n = 3), respectively. The extraction method was applied to commercial 13-cis retinoic acid dosage forms and the results compared to those for unextracted samples. Linear regression analysis of concentration versus peak height gave a correlation coefficient of 0.9991 with a slope of 7.468 and a y-intercept of 0.1923. The percent error and precision data were 1.3-5.3 and 0.2-1.5% (n = 4), respectively. The photoisomers of 13-cis retinoic acid were also extracted with the method, and recoveries of 90.4-92.4% with R.S.D.s of 1.5-3.4% were obtained (n = 4).
Accurately determining log and bark volumes of saw logs using high-resolution laser scan data
R. Edward Thomas; Neal D. Bennett
2014-01-01
Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...
JPRS Report, Science & Technology, China
1991-10-22
[Garbled scan fragment. Recoverable content: a contents entry, "Shanghai Scientist Develops State-of-the-Art Liquid-Crystal Light Valve" (ZHONGGUO KEXUE BAO, 30 Aug 91), and interleaved two-column text on satellite reentry and recovery noting that the angle of attack gradually decreases under the action of aerodynamic moments, that the recovery system is located inside the sealed reentry capsule, and that errors in the impulse and thrust-vector direction of the retro-rocket engine affect the direction of the final velocity vector of the satellite.]
Change over Time in First Graders' Strategic Use of Information at Point of Difficulty in Reading
ERIC Educational Resources Information Center
McGee, Lea M.; Kim, Hwewon; Nelson, Kathryn S.; Fried, Mary D.
2015-01-01
In this study, we describe young students' actions at point of difficulty in reading and examine changes in their strategic use of sources of information. We examined errors from running records of first graders who entered Reading Recovery (RR) in the fall and ended the year reading at the first-grade level compared with RR first graders who did…
The Role of Cortical Plasticity in Recovery of Function Following Allogeneic Hand Transplantation
2016-10-01
[Garbled report fragment. Recoverable content: the findings are difficult to understand given prevailing models in neuroscience that emphasize reinnervation errors; improvements are noted that may reflect central adaptations; posters were presented at the annual meeting of the Society for Neuroscience (Washington, D.C.) and at a neurorehabilitation meeting, including Peng H., Cirstea M.C., Valyear K.F., & Frey S.H. (November 2014), "Diffusion ..." (title truncated).]
ERIC Educational Resources Information Center
Lapierre, Laurent M.; Hammer, Leslie B.; Truxillo, Donald M.; Murphy, Lauren A.
2012-01-01
The first goal of this study was to test whether family interference with work (FIW) is positively related to increased workplace cognitive failure (WCF), which is defined as errors made at work that indicate lapses in memory (e.g., failing to recall work procedures), attention (e.g., not fully listening to instruction), and motor function (e.g.,…
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. On some occasions, however, certain information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
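As a toy illustration of unequal error protection (not one of the code constructions derived in the report), the sketch below protects the more significant of two message bits with a length-5 repetition code and the less significant bit with a length-3 repetition code, then estimates both bit error rates over a binary symmetric channel; all parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(msb, lsb):
    # the significant bit gets a 5x repetition code, the less
    # significant bit only a 3x repetition code
    return np.concatenate([np.repeat(msb, 5), np.repeat(lsb, 3)])

def decode(word):
    msb = int(word[:5].sum() > 2)  # majority vote, corrects 2 errors
    lsb = int(word[5:].sum() > 1)  # majority vote, corrects 1 error
    return msb, lsb

p, trials = 0.1, 50_000  # binary symmetric channel, crossover probability p
errs = np.zeros(2)
for _ in range(trials):
    msg = rng.integers(0, 2, 2)
    noisy = encode(msg[0], msg[1]) ^ (rng.random(8) < p)  # flip bits w.p. p
    dec = decode(noisy)
    errs += [dec[0] != msg[0], dec[1] != msg[1]]
print("bit error rates (protected, less protected):", errs / trials)
# the 5x-protected bit comes out with a markedly lower error rate
```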
Structural quality of an Oxisol in recovery for 18 years
NASA Astrophysics Data System (ADS)
dos Santos Batista Bonini, C.; Alves, M. C.; Marchini, D. C.; Garcia de Arruda, O.; Nilce Souto Filho, S.
2012-04-01
Incorrect soil use and large construction projects in rural areas are degrading soils, making them less productive and thus increasing the extent of degraded areas. Techniques aimed at the ecological restoration of degraded soils have been investigated. In this context, we investigated the positive changes in the structural quality of a soil that had been stripped of its topsoil and then subjected to human-intervention recovery techniques for 18 years, using green manures, gypsum, and pasture. The studied area is located in Mato Grosso do Sul, Brazil. The experimental design was completely randomized, with seven treatments and four replications. The treatments were: control (tilled soil without culture); Stizolobium aterrimum; Cajanus cajan; lime + S. aterrimum; lime + C. cajan; lime + gypsum + S. aterrimum; lime + gypsum + C. cajan. In 1994, all treatments with C. cajan were replaced by Canavalia ensiformis, and in 1999 Brachiaria decumbens was implanted in all treatments. Data from the vegetated treatments were compared with the bare-soil control and with native vegetation (savannah). We evaluated the distribution and stability of aggregates in water; soil samples were collected in 2010 at depths of 0.00-0.10, 0.10-0.20, and 0.20-0.40 m. The results were analyzed by analysis of variance, followed by the Scott-Knott test (5% probability) to compare means. At the 0.00-0.10 m depth, the bare-soil control and the savannah soil had the lowest and highest mean weight diameter (MWD), respectively, and all recovery treatments had MWD values greater than the bare-soil control. The treatments S. aterrimum, lime + gypsum + C. cajan, and lime + gypsum + S. aterrimum were similar to the savannah control at this depth, and all recovery treatments at the 0.00-0.10 m depth showed values close to those of the native savannah vegetation. At the 0.10-0.20 and 0.20-0.40 m depths, the MWD values obtained for the recovery treatments were similar to the bare soil, except for S. aterrimum and lime + gypsum + S. aterrimum, whose values were similar to the savannah control. This behavior shows that the recovery treatments were efficient only in the surface soil layer and that the structure at the other depths is still recovering. It is concluded that the recovery treatments positively influenced structural quality at the 0.00-0.10 m depth, and that the treatments with S. aterrimum and lime + gypsum + S. aterrimum were the most promising for recovering structural quality.
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2011-01-01
The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
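For concreteness, a minimal sketch of the pressure-to-altitude conversion follows, using the troposphere layer of the US Standard Atmosphere 1976 named above. The constants are the published standard-atmosphere values, but the function is an illustration, not the EFT-1 flight software:

```python
# US Standard Atmosphere 1976, troposphere layer (0-11 km geopotential)
P0 = 101325.0  # sea-level static pressure, Pa
T0 = 288.15    # sea-level temperature, K
L = 0.0065     # temperature lapse rate, K/m
g0 = 9.80665   # standard gravity, m/s^2
R = 287.053    # specific gas constant of dry air, J/(kg K)

def pressure_to_altitude(p_pa):
    """Geopotential altitude (m) from static pressure, valid below 11 km."""
    # inverts p = P0 * (1 - L*h/T0)**(g0/(R*L))
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / g0))

# e.g., a sensed pressure of about 35.6 kPa maps to roughly 8 km altitude
print(round(pressure_to_altitude(35600.0)), "m")
```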
Simulation Study of a Follow-on Gravity Mission to GRACE
NASA Technical Reports Server (NTRS)
Loomis, Bryant D.; Nerem, R. S.; Luthcke, Scott B.
2012-01-01
The gravity recovery and climate experiment (GRACE) has been providing monthly estimates of the Earth's time-variable gravity field since its launch in March 2002. The GRACE gravity estimates are used to study temporal mass variations on global and regional scales, which are largely caused by a redistribution of water mass in the Earth system. The accuracy of the GRACE gravity fields is primarily limited by the satellite-to-satellite range-rate measurement noise, accelerometer errors, attitude errors, orbit errors, and temporal aliasing caused by unmodeled high-frequency variations in the gravity signal. Recent work by Ball Aerospace and Technologies Corp., Boulder, CO has resulted in the successful development of an interferometric laser ranging system to specifically address the limitations of the K-band microwave ranging system that provides the satellite-to-satellite measurements for the GRACE mission. Full numerical simulations are performed for several possible configurations of a GRACE Follow-On (GFO) mission to determine if a future satellite gravity recovery mission equipped with a laser ranging system will provide better estimates of time-variable gravity, thus benefiting many areas of Earth systems research. The laser ranging system improves the range-rate measurement precision to approximately 0.6 nm/s, as compared to approximately 0.2 μm/s for the GRACE K-band microwave ranging instrument. Four different mission scenarios are simulated to investigate the effect of the better instrument at two different altitudes. The first pair of simulated missions is flown at GRACE altitude (approx. 480 km) assuming on-board accelerometers with the same noise characteristics as those currently used for GRACE. The second pair of missions is flown at an altitude of approx. 250 km, which requires a drag-free system to prevent satellite re-entry. In addition to allowing a lower satellite altitude, the drag-free system also reduces the errors associated with the accelerometer. All simulated mission scenarios assume a two-satellite co-orbiting pair similar to GRACE in a near-polar, near-circular orbit. A method for local time-variable gravity recovery through mass concentration blocks (mascons) is used to form simulated gravity estimates for Greenland and the Amazon region for three GFO configurations and GRACE. Simulation results show that the increased precision of the laser does not improve gravity estimation when flown with on-board accelerometers at the same altitude and spacecraft separation as GRACE, even when time-varying background models are not included. This study also shows that only modest improvement is realized for the best-case scenario (laser, low-altitude, drag-free) as compared to GRACE, due to temporal aliasing errors. These errors are caused by high-frequency variations in the hydrology signal and imperfections in the atmospheric, oceanographic, and tidal models which are used to remove unwanted signal. This work concludes that applying the updated technologies alone will not immediately advance the accuracy of the gravity estimates. If the scientific objectives of a GFO mission require more accurate gravity estimates, then future work should focus on improvements in the geophysical models and on ways in which the mission design or data processing could reduce the effects of temporal aliasing.
NASA Astrophysics Data System (ADS)
Laukkanen, Olli-Ville; Winter, H. Henning
2017-11-01
The creep-recovery (CR) test starts out with a period of shearing at constant stress (creep) and is followed by a period of zero-shear stress where some of the accumulated shear strain gets reversed. Linear viscoelasticity (LVE) allows one to predict the strain response to repeated creep-recovery (RCR) loading from measured small-amplitude oscillatory shear (SAOS) data. Only the relaxation and retardation time spectra of a material need to be known and these can be determined from SAOS data. In an application of the Boltzmann superposition principle (BSP), the strain response to RCR loading can be obtained as a linear superposition of the strain response to many single creep-recovery tests. SAOS and RCR data were collected for several unmodified and modified bituminous binders, and the measured and predicted RCR responses were compared. Generally good agreement was found between the measured and predicted strain accumulation under RCR loading. However, in the case of modified binders, the strain accumulation was slightly overestimated (≤20% relative error) due to the insufficient SAOS information at long relaxation times. Our analysis also demonstrates that the evolution in the strain response under RCR loading, caused by incomplete recovery, can be reasonably well predicted by the presented methodology. It was also shown that the outlined modeling framework can be used, as a first approximation, to estimate the rutting resistance of bituminous binders by predicting the values of the Multiple Stress Creep Recovery (MSCR) test parameters.
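A minimal sketch of this superposition construction is given below, using an invented creep compliance (glassy term, one Kelvin-Voigt retardation mode, and steady flow) in place of spectra fitted to SAOS data; the 1 s creep / 9 s recovery cycle mirrors the MSCR loading pattern mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy creep compliance: glassy term + one Kelvin-Voigt retardation mode
# + steady flow. In the paper J(t) comes from retardation spectra fitted
# to SAOS data; the values below are purely illustrative.
J0, J1, tau1, eta = 1e-6, 5e-6, 2.0, 1e5  # 1/Pa, 1/Pa, s, Pa s

def J(t):
    t = np.asarray(t, dtype=float)
    out = J0 + J1 * (1.0 - np.exp(-t / tau1)) + t / eta
    return np.where(t >= 0.0, out, 0.0)  # causality: J(t < 0) = 0

def rcr_strain(t, sigma0=100.0, t_creep=1.0, t_rec=9.0, n_cycles=10):
    """Boltzmann superposition: each creep-recovery cycle is a stress
    step on at its start plus an equal step off t_creep later."""
    gamma = np.zeros_like(t, dtype=float)
    for i in range(n_cycles):
        t_on = i * (t_creep + t_rec)
        gamma += sigma0 * (J(t - t_on) - J(t - t_on - t_creep))
    return gamma

t = np.linspace(0.0, 100.0, 2000)
plt.plot(t, rcr_strain(t))  # strain ratchets upward under incomplete recovery
plt.xlabel("time (s)"); plt.ylabel("shear strain (-)")
plt.show()
```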
MAGSAT data processing: A report for investigators
NASA Technical Reports Server (NTRS)
Langel, R. A.; Berbert, J.; Jennings, T.; Horner, R. (Principal Investigator)
1981-01-01
The in-flight attitude and vector magnetometer data bias recovery techniques and results are described. The attitude bias recoveries are based on comparisons with a magnetic field model and are thought to be accurate to 20 arcsec. The vector magnetometer bias recoveries are based on comparisons with the scalar magnetometer data and are thought to be accurate to 3 nT or better. The MAGSAT position accuracy goals of 60 m radially and 300 m horizontally were achieved for all but the last 3 weeks of MAGSAT lifetime. This claim is supported by ephemeris overlap statistics and by comparisons with ephemerides computed with an independent orbit program using data from an independent tracking network. MAGSAT time determination accuracy is estimated at 1 ms. Several errors in prelaunch assumptions regarding data time tags, which escaped detection in prelaunch data tests and were discovered and corrected postlaunch, are described. Data formats and products, especially the Investigator-B tapes, which contain auxiliary parameters in addition to the basic magnetometer and ephemeris data, are described.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Molecularly imprinted polymer for analysis of trace atrazine herbicide in water.
Kueseng, Pamornrat; Noir, Mathieu L; Mattiasson, Bo; Thavarungkul, Panote; Kanatharana, Proespichaya
2009-11-01
A molecularly imprinted polymer (MIP) for atrazine was synthesized by a non-covalent method. The binding capacity of the MIP was 1.00 mg g(-1) polymer. The selectivity and recovery were investigated with various pesticides commonly found in the environment, with chemical structures both similar to and different from that of atrazine. The competitive recognition between atrazine and structurally similar compounds was evaluated, and it was found that the system provided the highest recovery and selectivity for atrazine, while low recovery and selectivity were obtained for the other compounds. The MIP also gave higher recovery than a non-imprinted polymer (NIP), a commercial C(18) sorbent, and a granular activated carbon (GAC) sorbent. The method provided high recoveries, ranging from 94 to 99% at two spiked levels, with relative standard deviations of less than 2%. The lower detection limit of the method was 80 ng L(-1). The method was successfully applied to the analysis of environmental water samples.
Precise orbit determination and rapid orbit recovery supported by time synchronization
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhou, JianHua; Hu, XiaoGong; Liu, Li; Tang, Bo; Li, XiaoJie; Wu, Shan
2015-06-01
In order to maintain optimal signal coverage, GNSS satellites have to undergo orbital maneuvers. For China's COMPASS system, precise orbit determination (POD) as well as rapid orbit recovery after maneuvers contribute to the overall Positioning, Navigation and Timing (PNT) service performance in terms of accuracy and availability. However, strong statistical correlations between clock offsets and the radial component of a satellite's position require long data arcs for POD to converge. We propose here a new strategy that relies on time synchronization between ground tracking stations and in-orbit satellites. By fixing the satellite clock offsets measured by the satellite-station two-way synchronization (SSTS) system and the receiver clock offsets, POD and orbit recovery performance can be improved significantly. Using Satellite Laser Ranging (SLR) for orbit accuracy evaluation, we find that the 4-hr recovered orbit achieves a residual root mean square (RMS) error of about 0.71 m in fitting SLR data, and the recovery time is improved from 24 hr to 4 hr compared with conventional POD without time synchronization support. In addition, SLR evaluation shows that for 1-hr prediction, an accuracy of about 1.47 m is achieved with the newly proposed POD strategy.
3D shape recovery from image focus using Gabor features
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid
2018-04-01
Recovering an accurate and precise depth map from a set of acquired 2-D images of a target object, each carrying different focus information, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts the corresponding color-value information into focus information that is then utilized for recovering the depth map. This article introduces Gabor features as a focus measure approach for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system and is normally applied for texture representation. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is thoroughly investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
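The sketch below illustrates the general shape-from-focus pipeline with a hand-built Gabor filter bank (it is not the authors' implementation; the filter frequencies, window size, and grayscale float input format are assumptions): local Gabor energy is pooled as the focus measure, and each pixel's depth index is the frame that maximizes it.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real and imaginary parts of a 2D Gabor filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # carrier along theta
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def gabor_focus_measure(img, freqs=(0.15, 0.3), n_theta=4, window=9):
    """Sum of locally pooled Gabor energies; larger means better focused."""
    fm = np.zeros_like(img, dtype=float)
    for f in freqs:
        for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
            kr, ki = gabor_kernel(f, theta)
            energy = convolve(img, kr) ** 2 + convolve(img, ki) ** 2
            fm += uniform_filter(energy, window)  # pool over a local window
    return fm

def depth_from_focus(stack):
    """stack: (n_frames, H, W) grayscale float focal stack -> index map."""
    fms = np.stack([gabor_focus_measure(im) for im in stack])
    return np.argmax(fms, axis=0)  # per pixel, the best-focused frame
```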
2012-01-01
Background: Presented is the method "Detection and Outline Error Estimates" (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE), rater agreement in detecting the same regions to mark, and 2) Outline Error (OE), agreement of the raters in outlining of the same lesion. Methods: DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with the mean total area (MTA) of the raters' Regions of Interest (ROIs). Results: When correlated with MTA, neither DE (ρ = .056, p = .83) nor the ratio of OE to MTA (ρ = .23, p = .37), referred to as the Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p < .001). Furthermore, DE and OER values can be used to model the variation in SI with MTA. Conclusions: The DE and OER indices are proposed as a better method than SI for comparing rater agreement of ROIs, which also provide specific information for raters to improve their agreement. PMID:22812697
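A rough sketch of these quantities for two raters' binary lesion masks is shown below. It follows the paper's idea (DE from lesions only one rater marked, OE from disagreement on jointly detected lesions, SI as the overlap index), but the lesion-matching rule here is a simplified assumption, not the published algorithm:

```python
import numpy as np
from scipy import ndimage

def doee(mask_a, mask_b):
    """Simplified DE/OE/SI split for two raters' binary lesion masks
    (boolean arrays; both assumed non-empty)."""
    de = 0.0  # detection error: area of lesions only one rater marked
    oe = 0.0  # outline error: disagreement on jointly detected lesions
    for mask, other in ((mask_a, mask_b), (mask_b, mask_a)):
        labels, n = ndimage.label(mask)  # connected components = lesions
        for i in range(1, n + 1):
            region = labels == i
            if (region & other).any():
                oe += (region & ~other).sum()  # this rater's excess outline
            else:
                de += region.sum()             # missed by the other rater
    # Similarity Index (Dice-style overlap of the full masks)
    si = 2.0 * (mask_a & mask_b).sum() / (mask_a.sum() + mask_b.sum())
    return de, oe, si
```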
Attitude error response of structures to actuator/sensor noise
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1991-01-01
Explicit closed-form formulas are presented for the RMS attitude-error response to sensor and actuator noise for co-located actuators/sensors as a function of both control-gain parameters and structure parameters. The main point of departure is the use of continuum models. In particular the anisotropic Timoshenko model is used for lattice trusses typified by the NASA EPS Structure Model and the Evolutionary Model. One conclusion is that the maximum attainable improvement in the attitude error varying either structure parameters or control gains is 3 dB for the axial and torsion modes, the bending being essentially insensitive. The results are similar whether the Bernoulli model or the anisotropic Timoshenko model is used.
NASA Astrophysics Data System (ADS)
Savelieva, Tatiana A.; Loshchenov, Victor B.; Volkov, Vladimir V.; Linkov, Kirill G.; Goryainov, Sergey A.; Potapov, Alexander A.
2014-05-01
A method was developed for intraoperative analysis of tumor markers such as structural changes and the concentrations of 5-ALA-induced protoporphyrin IX and hemoglobin in the area of tissue resection. The device for performing this method is a neurosurgical aspiration cannula coupled with a fiber-optic probe. The configuration of fibers at the end of the cannula was developed according to the results of numerical modeling of light distribution in biological tissues. The optimal distance between the illuminating and receiving fibers was found for a biologically relevant interval of optical properties; at this distance, the detected diffuse reflectance depends almost linearly on the scattering coefficient. An array of optical phantoms containing hemoglobin, protoporphyrin IX, and fat emulsion (as the scattering medium) in various concentrations was prepared to verify the method. Recovery of hemoglobin and protoporphyrin IX concentrations in the scattering media with an error of less than 10% has been demonstrated; the error in estimating the fat emulsion concentration was less than 12%. The first clinical test was carried out during a glioblastoma multiforme resection at the Burdenko Neurosurgery Institute and confirmed that the sensitivity of this method is sufficient to detect the investigated tumor markers in vivo. This method will allow intraoperative analysis of structural and metabolic tumor markers directly in the zone of tumor tissue destruction, thereby increasing the degree of radical removal while preserving healthy tissue.
A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling
ERIC Educational Resources Information Center
Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang
2017-01-01
It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…
NASA Astrophysics Data System (ADS)
Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko
2017-07-01
The deformed shape is a consequence of loading the structure, and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected in the deformed shape. Excessive deformations cause user discomfort and damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of the error in the deformed-shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and to create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy of the curvature measurement is evaluated within the scope of this work, as is the error due to the numerical integration. The latter error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of the curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In the laboratory setting, the double integration is in excellent agreement with the beam-theory solution and falls within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure, Streicker Bridge on the Princeton University campus.
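For intuition, here is a minimal sketch (not the paper's method) of the double-integration step for a simply supported beam, with a linear correction enforcing zero deflection at both supports; the uniform-load test case and the 11 "sensor" locations are invented for the check:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def deflection_from_curvature(x, kappa):
    """Deflected shape from sampled curvature via double numerical
    integration (w'' = kappa), with a linear correction enforcing
    w = 0 at both supports of a simply supported beam."""
    slope = cumulative_trapezoid(kappa, x, initial=0.0)
    w = cumulative_trapezoid(slope, x, initial=0.0)
    w -= w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])
    return w

# check against the textbook solution for a uniform load q on span L
L, EI, q = 10.0, 1.0, 1.0
x = np.linspace(0.0, L, 11)                    # 11 "sensor" locations
kappa = q * x * (L - x) / (2.0 * EI)           # curvature M(x)/EI
w_exact = -q * x * (L**3 - 2*L*x**2 + x**3) / (24.0 * EI)
err = np.abs(deflection_from_curvature(x, kappa) - w_exact).max()
print("max integration error:", err)  # shrinks as the sensor grid densifies
```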
Strength conditions for the elastic structures with a stress error
NASA Astrophysics Data System (ADS)
Matveev, A. D.
2017-10-01
As is known, constraints (strength conditions) are established for the safety factor of elastic structures and design details of a particular class (e.g., aviation structures): the safety factor values of such structures must lie within a given range. These constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems posed for the structures. Developing analytical solutions for most structures, especially those of irregular shape, is associated with great difficulty. Approximate approaches to solving the elasticity problems, e.g., the technical theories of deformation of homogeneous and composite plates, beams, and shells, are widely used for a great number of structures. Technical theories based on simplifying hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static calculations of structural strength with a narrow specified range for the safety factor, applying technical (strength-of-materials) solutions is therefore difficult. However, numerical methods exist for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; the stress error estimate is taken into account in these conditions. It is shown that, to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, adjusted strength conditions for the structural safety factor corresponding to an approximate solution are required. The stress error estimate underlying the adjusted strength conditions is determined for the specified strength conditions, and adjusted strength conditions expressed in terms of allowable stresses are suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions that meet the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (strength-of-materials) solutions and strength conditions, as well as examples of stress conditions satisfied using approximate solutions with a small error.
The Work of Recovery on Two Assertive Community Treatment Teams
Salyers, Michelle P.; Stull, Laura G.; Rollins, Angela L.; Hopper, Kim
2011-01-01
The compatibility of recovery work with the Assertive Community Treatment (ACT) model has been debated, and little is known about how best to measure the work of recovery. Two ACT teams with high and low recovery orientation were identified by expert consensus and compared on a number of dimensions. Using an interpretive, qualitative approach to analyze interview and observation data, the teams differed in the extent to which the environment, team structure, staff attitudes, and processes of working with consumers supported principles of recovery orientation. We present a model of recovery work and discuss implications for research and practice. PMID:20839045
Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring
NASA Astrophysics Data System (ADS)
Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.
2017-03-01
This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.
Sliceable transponders for metro-access transmission links
NASA Astrophysics Data System (ADS)
Wagner, C.; Madsen, P.; Spolitis, S.; Vegas Olmos, J. J.; Tafur Monroy, I.
2015-01-01
This paper presents a solution for upgrading optical access networks while reusing existing electronic and optical equipment: sliceable transponders using signal-spectrum slicing and a stitching-back method after direct detection. This technique allows transmission of wide-bandwidth signals from the service provider (OLT, optical line terminal) to the end user (ONU, optical network unit) over an optical distribution network (ODN) via low-bandwidth equipment. We show simulation and experimental results for duobinary signaling of 1 Gbit/s and 10 Gbit/s waveforms. The number of slices is adjusted to match the lowest analog bandwidth of the electrical devices used, scaling from 2 slices to 10 slices. Experimental transmission results show error-free signal recovery using post-forward-error-correction with 7% overhead.
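An idealized sketch of the slice-and-stitch idea follows; it ignores the downconversion, detection, and channel impairments the real system must handle, and the signal length and slice count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4096
signal = rng.standard_normal(n)   # stand-in for a wideband waveform
spectrum = np.fft.rfft(signal)

# slice the spectrum into contiguous sub-bands, each narrow enough for
# the low-bandwidth equipment, then stitch them back after detection
n_slices = 10
edges = np.linspace(0, spectrum.size, n_slices + 1, dtype=int)
slices = []
for lo, hi in zip(edges[:-1], edges[1:]):
    s = np.zeros_like(spectrum)
    s[lo:hi] = spectrum[lo:hi]    # one narrow spectral slice
    slices.append(s)

recovered = np.fft.irfft(np.sum(slices, axis=0), n)
print("max stitching error:", np.abs(recovered - signal).max())  # ~1e-15
```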
ERIC Educational Resources Information Center
Donoghue, John R.
Monte Carlo studies investigated effects of within-group covariance structure on subgroup recovery by several widely used hierarchical clustering methods. In Study 1, subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. All clustering methods were strongly affected by…
Konkolÿ Thege, Barna; Ham, Elke; Ball, Laura C
2017-12-01
Recovery is understood as living a life with hope, purpose, autonomy, productivity, and community engagement despite a mental illness. The aim of this study was to provide further information on the psychometric properties of the Person-in-Recovery and Provider versions of the Revised Recovery Self-Assessment (RSA-R), a widely used measure of recovery orientation. Data from 654 individuals were analyzed, 519 of whom were treatment providers (63.6% female), while 135 were inpatients (10.4% female) of a Canadian tertiary-level psychiatric hospital. Confirmatory and exploratory techniques were used to investigate the factor structure of both versions of the instrument. Results of the confirmatory factor analyses showed that none of the four theoretically plausible models fit the data well. Principal component analyses could not replicate the structure obtained by the scale developers either and instead resulted in a five-component solution for the Provider and a four-component solution for the Person-in-Recovery version. When considering the results of a parallel analysis, the number of components to retain dropped to two for the Provider version and one for the Person-in-Recovery version. We can conclude that the RSA-R requires further revision to become a psychometrically sound instrument for assessing recovery-oriented practices in an inpatient mental health-care setting.
Characteristics of long recovery early VLF events observed by the North African AWESOME Network
NASA Astrophysics Data System (ADS)
Naitamor, S.; Cohen, M. B.; Cotts, B. R. T.; Ghalila, H.; Alabdoadaim, M. A.; Graf, K.
2013-08-01
Lightning strokes are capable of initiating disturbances in the lower ionosphere whose recoveries persist for many minutes. These events are remotely sensed by monitoring subionospherically propagating very low frequency (VLF) transmitter signals, which are perturbed as they pass through the region above the lightning stroke. In this paper we describe the properties and characteristics of early VLF signal perturbations that exhibit long recovery times, using subionospheric VLF transmitter data from three identical receivers located at Algiers (Algeria), Tunis (Tunisia), and Sebha (Libya). The results indicate that the observation of long recovery events depends strongly on the modal structure of the signal's electromagnetic field and on the distance between the disturbed region and the receiver or transmitter locations. Comparison of simultaneously collected data at the three sites indicates that the role of the causative lightning stroke properties (e.g., peak current and polarity), or that of transient luminous events, may be much less important. The dominant parameters determining the duration of the recovery time and the amplitude appear to be the modal structure of the subionospheric VLF probe signal at the ionospheric disturbance, where scattering occurs, and the subsequent modal structure that propagates to the receiver location.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of error functions represent the uncertainty of the calibrated model in predicting building's response (modal parameters here). The focus of this paper is to answer whether the quantified model uncertainties using dynamic measurement at building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.
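The sketch below is a minimal stand-in for this kind of updating (not the paper's ten-story FE model): a two-degree-of-freedom shear frame whose story-stiffness scale factor and Gaussian prediction-error standard deviation are sampled by random-walk Metropolis from "identified" natural frequencies; all masses, stiffnesses, noise levels, and tuning constants are invented:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF shear frame: calibrate a story-stiffness scale factor theta
# and the prediction-error std sigma from "identified" frequencies.
M = np.diag([1.0, 1.0])  # masses, kg
k0 = 1000.0              # nominal story stiffness, N/m

def frequencies(theta):
    k = theta * k0
    K = np.array([[2 * k, -k], [-k, k]])
    return np.sqrt(eigh(K, M, eigvals_only=True)) / (2 * np.pi)  # Hz

# synthetic "identified" modal data: true theta = 0.85, 1% noise
f_meas = frequencies(0.85) * (1 + 0.01 * np.random.default_rng(2).standard_normal(2))

def log_post(theta, log_sigma):
    if not 0.1 < theta < 2.0:  # flat prior on a plausible range
        return -np.inf
    sigma = np.exp(log_sigma)
    r = f_meas - frequencies(theta)  # Gaussian error functions
    return -r.size * np.log(sigma) - 0.5 * np.sum(r**2) / sigma**2

# random-walk Metropolis over (theta, log_sigma)
rng = np.random.default_rng(3)
x = np.array([1.0, np.log(0.1)])
lp, chain = log_post(*x), []
for _ in range(20_000):
    prop = x + rng.normal(0.0, [0.02, 0.1])
    lp_p = log_post(*prop)
    if np.log(rng.random()) < lp_p - lp:
        x, lp = prop, lp_p
    chain.append(x.copy())
theta_samples = np.array(chain[5_000:])[:, 0]
print("theta posterior mean/std:", theta_samples.mean(), theta_samples.std())
```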
Neuroplasticity and functional recovery in multiple sclerosis
Tomassini, Valentina; Matthews, Paul M.; Thompson, Alan J.; Fuglø, Daniel; Geurts, Jeroen J.; Johansen-Berg, Heidi; Jones, Derek K.; Rocca, Maria A.; Wise, Richard G.; Barkhof, Frederik; Palace, Jacqueline
2013-01-01
The development of therapeutic strategies that promote functional recovery is a major goal of multiple sclerosis (MS) research. Neuroscientific and methodological advances have improved our understanding of the brain’s recovery from damage, generating novel hypotheses for potential targets or modes of intervention and laying the foundation for the development of scientifically informed strategies promoting recovery in interventional studies. This Review aims to encourage the transition from characterization of recovery mechanisms to the development of strategies that promote recovery in MS. We discuss current evidence for functional reorganization that underlies recovery and its implications for development of new recovery-oriented strategies in MS. Promotion of functional recovery requires an improved understanding of recovery mechanisms modulated by interventions and the development of reliable measures of therapeutic effects. As imaging methods can be used to measure functional and structural alterations associated with recovery, this Review discusses their use as reliable markers to measure the effects of interventions. PMID:22986429
NASA Astrophysics Data System (ADS)
Ghosh, Soumyadeep
Surfactant-polymer (SP) floods have significant potential to recover waterflood residual oil in shallow oil reservoirs. A thorough understanding of surfactant-oil-brine phase behavior is critical to the design of chemical EOR floods. While considerable progress has been made in developing surfactants and polymers that increase the potential of a chemical enhanced oil recovery (EOR) project, very little progress has been made to predict phase behavior as a function of formulation variables such as pressure, temperature, and oil equivalent alkane carbon number (EACN). The empirical Hand's plot is still used today to model the microemulsion phase behavior with little predictive capability as these and other formulation variables change. Such models could lead to incorrect recovery predictions and improper flood designs. Reservoir crudes also contain acidic components (primarily naphthenic acids), which undergo neutralization to form soaps in the presence of alkali. The generated soaps perform synergistically with injected synthetic surfactants to mobilize waterflood residual oil in what is termed alkali-surfactant-polymer (ASP) flooding. The addition of alkali, however, complicates the measurement and prediction of the microemulsion phase behavior that forms with acidic crudes. In this dissertation, we account for pressure changes in the hydrophilic-lipophilic difference (HLD) equation. This new HLD equation is coupled with the net-average curvature (NAC) model to predict phase volumes, solubilization ratios, and microemulsion phase transitions (Winsor II-, III, and II+). This dissertation presents the first modified HLD-NAC model to predict microemulsion phase behavior for live crudes, including optimal solubilization ratio and the salinity width of the three-phase Winsor III region at different temperatures and pressures. This new equation-of-state-like model could significantly aid the design and forecast of chemical floods where key variables change dynamically, and in screening of potential candidate reservoirs for chemical EOR. The modified HLD-NAC model is also extended here for ASP flooding. We use an empirical equation to calculate the acid distribution coefficient from the molecular structure of the soap. Key HLD-NAC parameters like optimum salinities and optimum solubilization ratios are calculated from soap mole fraction weighted equations. The model is tuned to data from phase behavior experiments with real crudes to demonstrate the procedure. We also examine the ability of the new model to predict fish plots and activity charts that show the evolution of the three-phase region. The modified HLD-NAC equations are then made dimensionless to develop important microemulsion phase behavior relationships and for use in tuning the new model to measured data. Key dimensionless groups that govern phase behavior and their effects are identified and analyzed. A new correlation was developed to predict optimum solubilization ratios at different temperatures, pressures and oil EACN with an average relative error of 10.55%. The prediction of optimum salinities with the modified HLD approach resulted in average relative errors of 2.35%. We also present a robust method to precisely determine optimum salinities and optimum solubilization ratios from salinity scan data with average relative errors of 1.17% and 2.44% for the published data examined.
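To make the HLD idea concrete, here is a sketch of the classic Salager-type HLD expression for an ionic surfactant and the corresponding optimal salinity at HLD = 0. The coefficients are typical literature values, not the dissertation's fitted parameters, and the pressure term developed in the dissertation is omitted here:

```python
import numpy as np

def hld_ionic(S, eacn, T=25.0, Cc=0.0, K=0.17, alpha_T=0.01):
    """HLD = ln(S) - K*EACN - alpha_T*(T - 25) + Cc for an ionic
    surfactant; S is salinity, T is temperature in Celsius."""
    return np.log(S) - K * eacn - alpha_T * (T - 25.0) + Cc

def optimal_salinity(eacn, T=25.0, Cc=0.0, K=0.17, alpha_T=0.01):
    """Salinity at which HLD = 0 (middle of the Winsor III window)."""
    return np.exp(K * eacn + alpha_T * (T - 25.0) - Cc)

# e.g., a decane-like oil (EACN = 10) at 60 C with a slightly
# hydrophilic surfactant (Cc = -1)
S_star = optimal_salinity(10.0, T=60.0, Cc=-1.0)
print("optimal salinity:", round(S_star, 2))
print("HLD at S*:", hld_ionic(S_star, 10.0, T=60.0, Cc=-1.0))  # ~0
```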
Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.
2017-01-01
Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy. PMID:29200595
NASA Astrophysics Data System (ADS)
Keller, Thomas; Colombi, Tino; Ruiz, Siul; Grahm, Lina; Reiser, René; Rek, Jan; Oberholzer, Hans-Rudolf; Schymanski, Stanislaus; Walter, Achim; Or, Dani
2016-04-01
Soil compaction due to agricultural vehicular traffic alters the geometrical arrangement of soil constituents, thereby modifying mechanical properties and pore spaces that affect a range of soil hydro-ecological functions. The ecological and economic costs of soil compaction are dependent on the immediate impact on soil functions during the compaction event, and a function of the recovery time. In contrast to a wealth of soil compaction information, mechanisms and rates of soil structure recovery remain largely unknown. A long-term (>10-yr) soil structure observatory (SSO) was established in 2014 on a loamy soil in Zurich, Switzerland, to quantify rates and mechanisms of structure recovery of compacted arable soil under different post-compaction management treatments. We implemented three initial compaction treatments (using a two-axle agricultural vehicle with 8 Mg wheel load): compaction of the entire plot area (i.e. track-by-track), compaction in wheel tracks, and no compaction. After compaction, we implemented four post-compaction soil management systems: bare soil (BS), permanent grass (PG), crop rotation without mechanical loosening (NT), and crop rotation under conventional tillage (CT). BS and PG provide insights into uninterrupted natural processes of soil structure regeneration under reduced (BS) and normal biological activity (PG). The two cropping systems (NT and CT) enable insights into soil structure recovery under common agricultural practices with minimal (NT) and conventional mechanical soil disturbance (CT). Observations include periodic sampling and measurements of soil physical properties, earthworm abundance, crop measures, electrical resistivity and ground penetrating radar imaging, and continuous monitoring of state variables - soil moisture, temperature, CO2 and O2 concentrations, redox potential and oxygen diffusion rates - for which a network of sensors was installed at various depths (0-1 m). Initial compaction increased soil bulk density to about half a metre, decreased gas and water transport functions (air permeability, gas diffusivity, saturated hydraulic conductivity), and increased mechanical impedance. Water infiltration at the soil surface was initially reduced by three orders of magnitude, but significantly recovered within a year. However, within the soil profile, recovery of transport properties is much smaller. Air permeability tended to recover more than gas diffusivity, suggesting that initial post-compaction recovery is initiated by new macropores (e.g. biopores). Tillage recovered topsoil bulk density but not topsoil transport functions. Compaction changed grass species composition in PG, and significantly reduced grass biomass in PG and crop yields in NT and CT.
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results: Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions: Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
Recovery of the Antarctic Ozone Hole
NASA Technical Reports Server (NTRS)
Newman, Paul A.; Nash, Eric R.; Kawa, S. Randolph; Montzka, Steve; Schauffler, Sue; Stolarski, Richard S.; Douglass, Anne R.; Pawson, Steven; Nielsen, J. Eric
2006-01-01
The Antarctic ozone hole develops each year and culminates in early spring. Antarctic ozone values have been monitored since 1979 using satellite observations from the TOMS and OMI instruments. The severity of the hole has been assessed using the minimum total ozone value from the October monthly mean (depth of the hole), the average size during the September-October period, and the ozone mass deficit. Ozone is mainly destroyed by halogen catalytic cycles, and these losses are modulated by temperature variations in the collar of the polar lower stratospheric vortex. In this presentation, we show the relationships of halogens and temperature to both the size and depth of the hole. Because atmospheric halogen levels are responding to international agreements that limit or phase out production, the amount of halogens in the stratosphere should decrease over the next few decades. We use two methods to estimate ozone hole recovery. First, we use projections of halogen levels combined with age-of-air estimates in a parametric model. Second, we use a coupled chemistry climate model to assess recovery. We find that the ozone hole is recovering at an extremely slow rate and that large ozone holes will regularly recur over the next two decades. Furthermore, full recovery to 1980 levels will not occur until approximately 2068. We will also show some error estimates of these dates and the impact of climate change on the recovery.
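The first, parametric approach lends itself to a toy illustration: project an effective halogen loading forward in time and read off the year it returns to its 1980 baseline. The sketch below is a minimal Python version with invented constants (peak year, peak level, decay rate, recovery threshold); it is not the study's halogen projection or age-of-air parametrization.

```python
import numpy as np

# Illustrative effective-halogen trajectory: linear rise to a peak near 2000,
# then slow exponential relaxation back toward the 1980 baseline.
years = np.arange(1980, 2101)
peak_year, peak_level, baseline = 2000.0, 3.4, 2.0   # ppb-like units, assumed
decay = 0.039                                        # 1/yr, assumed

eesc = np.where(
    years <= peak_year,
    baseline + (peak_level - baseline) * (years - 1980) / (peak_year - 1980),
    baseline + (peak_level - baseline) * np.exp(-decay * (years - peak_year)),
)

# Toy recovery criterion: halogen loading within 0.1 of the 1980 level
recovered = years[eesc <= baseline + 0.1]
print("toy recovery year:", recovered[0] if recovered.size else "beyond 2100")
```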
Skylab S-193 radar altimeter experiment analyses and results
NASA Technical Reports Server (NTRS)
Brown, G. S. (Editor)
1977-01-01
The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing-angle estimates using average waveform data, and a correlation between tracking-loop bandwidth and the magnitude of pointing error is established. The impact of ocean currents and precipitation on the received power is shown to be a measurable effect. For large sea-state conditions, measurements of the backscatter coefficient σ0 indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15 deg) values of σ0 are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating the direction of pointing error. Pulse-compression techniques for estimating significant waveheight from waveform data are presented, and the resulting estimates are shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both modeling and computational problems. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. Trimming is easy to implement and its error bound easy to compute; hence, the method lends itself to inclusion in an automatic model generator.
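As a toy illustration of this modeling style (and of why reduced models need an error bound), the sketch below builds a small Markov reliability model of a reconfigurable triplex system, solves it with matrix exponentials, and compares it against a crude conservative reduction that collapses all states past the first fault's outcome into the failure state. All rates are invented, and this bracketing reduction is only a stand-in for the paper's trimming rule and its analytic bound.

```python
import numpy as np
from scipy.linalg import expm

lam, delta = 1e-4, 3.6e2   # per hour: fault arrival rate, reconfiguration rate (assumed)
T = 10.0                   # mission time, hours

def p_fail(Q, T):
    # p(t) = p(0) expm(Qt); the failure state is last, Q[i, j] = rate i -> j
    p0 = np.zeros(Q.shape[0]); p0[0] = 1.0
    return (p0 @ expm(Q * T))[-1]

# Full model: 0 = healthy triplex, 1 = fault active, 2 = reconfigured duplex, 3 = failed
Q_full = np.array([
    [-3*lam,            3*lam,    0.0,    0.0 ],
    [ 0.0, -(delta + 2*lam),    delta,  2*lam ],
    [ 0.0,              0.0,  -2*lam,   2*lam ],
    [ 0.0,              0.0,     0.0,    0.0 ]])

# Trimmed, conservative model: everything past the first fault's outcome counts as failed
Q_trim = np.array([
    [-3*lam,            3*lam,            0.0 ],
    [ 0.0, -(delta + 2*lam),   delta + 2*lam ],
    [ 0.0,              0.0,            0.0 ]])

pf, pt = p_fail(Q_full, T), p_fail(Q_trim, T)
print(f"full: {pf:.3e}   trimmed upper bound: {pt:.3e}   reduction error <= {pt - pf:.3e}")
```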
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method to study the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) were used as the objective function in training the neural network. Ionosonde observations were used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 1011 electrons/m3 is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
NASA Technical Reports Server (NTRS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two, as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight and maximize its contribution to fighter safety.
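The error-characterization step described here, comparing a compressed tile against the original and summarizing both the magnitude and the distribution of the error, is straightforward to prototype. A minimal sketch with synthetic stand-in terrain follows (a real analysis would load actual DTED tiles; the tail percentiles matter because collision-avoidance margins are driven by worst-case, not average, error):

```python
import numpy as np

def compression_error_report(original, compressed):
    """Summarize elevation error between an original terrain grid and its
    lossy-compressed reconstruction (both 2-D arrays of elevations, metres)."""
    err = compressed.astype(float) - original.astype(float)
    return {
        "bias_m": err.mean(),
        "rmse_m": np.sqrt((err ** 2).mean()),
        "max_abs_m": np.abs(err).max(),
        "p95_abs_m": np.percentile(np.abs(err), 95),   # distribution tails
        "p99_abs_m": np.percentile(np.abs(err), 99),
    }

# Demo with synthetic terrain and additive noise standing in for codec error
rng = np.random.default_rng(0)
truth = rng.normal(1500, 300, size=(256, 256)).cumsum(axis=1) / 50
recon = truth + rng.normal(0, 2.0, size=truth.shape)
print(compression_error_report(truth, recon))
```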
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
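For reference, the entropic quantity invoked above is the conditional mutual information, which for a tripartite state $\rho_{ABC}$ with von Neumann entropy $S(\cdot)$ reads

```latex
I(A\!:\!C\,|\,B) \;=\; S(AB) + S(BC) - S(B) - S(ABC).
```

Recovery-map results in the Fawzi-Renner spirit guarantee that when $I(A\!:\!C\,|\,B)$ is small, a map acting on $B$ alone can approximately reconstruct the erased system $C$ from $\rho_{AB}$, which is the handle on erasure decoding used above.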
NASA Technical Reports Server (NTRS)
1978-01-01
The theoretical background for a coherent demodulator for minimum shift keying signals generated by the advanced data collection/position locating system breadboard is presented along with a discussion of the design concept. Various tests and test results, obtained with the breadboard system described, include evaluation of bit-error rate performance, acquisition time, clock recovery, recycle time, frequency measurement accuracy, and mutual interference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalski, J. R.; Eppard, M. B.; Ploskey, Gene R.
2014-07-11
High survival through hydropower projects is an essential element in the recovery of salmonid populations in the Columbia River. It is also a regulatory requirement under the 2008 Federal Columbia River Power System (FCRPS) Biological Opinion (BiOp) established under the Endangered Species Act, which requires dam passage survival to be ≥0.96 and ≥0.93 for spring and summer outmigrating juvenile salmonids, respectively, estimated with a standard error ≤0.015. An innovative virtual/paired-release design was used to estimate dam passage survival, defined as survival from the face of a dam to the tailrace mixing zone. A coordinated four-dam study was conducted during the 2012 summer outmigration using 14,026 run-of-river subyearling Chinook salmon surgically implanted with acoustic micro-transmitter (AMT) tags, released at 9 different locations and monitored on 14 different detection arrays. Each of the four estimates of dam passage survival exceeded BiOp requirements, with values ranging from 0.9414 to 0.9747 and standard errors from 0.0031 to 0.0114. Two consecutive years of survival estimates must meet BiOp standards in order for a hydropower project to be in compliance with recovery requirements for a fish stock.
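The virtual/paired-release estimator combines reach-level survival estimates into a ratio. As a minimal, hedged sketch of how such a point estimate and its standard error can be propagated (first-order delta method for a ratio of independent estimates; the study's actual estimator is a fuller release-recapture likelihood), with hypothetical inputs rather than the study's values:

```python
def ratio_survival(s_virtual, se_virtual, s_paired, se_paired):
    """Dam-passage survival as a ratio of two independent survival estimates,
    with a first-order (delta-method) standard error."""
    s = s_virtual / s_paired
    rel_var = (se_virtual / s_virtual) ** 2 + (se_paired / s_paired) ** 2
    return s, s * rel_var ** 0.5

# Hypothetical reach estimates (not values from the study)
s, se = ratio_survival(0.93, 0.006, 0.97, 0.005)
print(f"dam passage survival = {s:.4f} (SE {se:.4f}); BiOp requires SE <= 0.015")
```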
Determination of the carmine content based on spectrum fluorescence spectral and PSO-SVM
NASA Astrophysics Data System (ADS)
Wang, Shu-tao; Peng, Tao; Cheng, Qi; Wang, Gui-chuan; Kong, De-ming; Wang, Yu-tian
2018-03-01
Carmine is a pigment widely used in food and beverage additives, and excessive consumption of synthetic pigments can seriously harm the body. Because foods generally contain a variety of colors, we simulated the coexistence of several food pigments and combined fluorescence spectroscopy with the PSO-SVM algorithm to establish a method for determining the carmine content in mixed solution. Analysis of the PSO-SVM predictions gave an average carmine recovery rate of 100.84%, a root mean square error of prediction (RMSEP) of 1.03e-04, and a correlation coefficient of 0.999 between the model output and the true values. Compared with back-propagation (BP) prediction, the correlation coefficient of PSO-SVM was 2.7% higher, the average recovery rate 0.6% higher, and the root mean square error nearly one order of magnitude lower. These results show that combining the fluorescence spectrum technique with PSO-SVM can effectively avoid the interference caused by coexisting pigments and accurately determine the carmine content in mixed solution, with performance better than that of BP.
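A hedged sketch of the PSO-SVM idea follows: a minimal global-best particle swarm searches SVR hyperparameters (log10 C, log10 gamma) against cross-validated RMSE. The "spectra" and concentrations are synthetic stand-ins, and the swarm constants are conventional textbook choices, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-in for fluorescence spectra (rows) and carmine concentrations (target)
X = rng.normal(size=(80, 50))
y = X[:, :5].sum(axis=1) + 0.05 * rng.normal(size=80)

def cv_rmse(log_c, log_g):
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

# Minimal global-best PSO over (log10 C, log10 gamma)
n_particles, n_iter = 12, 20
pos = rng.uniform([-1.0, -4.0], [3.0, 0.0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([cv_rmse(*p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cv_rmse(*p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (log10 C, log10 gamma):", gbest, " CV RMSE:", pbest_f.min())
```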
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of them) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (now ensemble transform). While the first incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme that does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for assessing fast-growing error modes in mesoscale limited-area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To avoid all ensemble perturbations converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
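The breeding cycle described above is compact enough to sketch end to end. The toy below substitutes the three-variable Lorenz-63 system for the mesoscale model and a QR factorization for the ensemble-transform orthogonalization; the ensemble size, cycle length, and rescaling norm are illustrative assumptions.

```python
import numpy as np

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]), x[0] * (r - x[2]) - x[1], x[0] * x[1] - b * x[2]])

def integrate(x, n, dt=0.01):
    for _ in range(n):                    # classical RK4 steps
        k1 = lorenz(x); k2 = lorenz(x + 0.5 * dt * k1)
        k3 = lorenz(x + 0.5 * dt * k2); k4 = lorenz(x + dt * k3)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

rng = np.random.default_rng(2)
x0 = integrate(np.array([1.0, 1.0, 1.0]), 2000)   # spin up onto the attractor

k, eps, steps = 2, 1e-3, 50        # ensemble size, rescaling norm, cycle length
P = eps * rng.normal(size=(3, k))  # initial perturbations as columns

for _ in range(100):               # the self-breeding cycle
    xc = integrate(x0, steps)      # control run over one cycle
    Pn = np.stack([integrate(x0 + P[:, j], steps) - xc for j in range(k)], axis=1)
    Q, _ = np.linalg.qr(Pn)        # orthogonalize within the ensemble subspace
    P = eps * Q[:, :k]             # rescale to the chosen norm, re-add to the state
    x0 = xc

print("bred (locally fastest-growing) directions:\n", P / eps)
```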
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis
NASA Technical Reports Server (NTRS)
Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.
2004-01-01
This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors arising from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.
NASA Astrophysics Data System (ADS)
Altunina, L. K.; Kuvshinov, I. V.; Kuvshinov, V. A.; Kozlov, V. V.; Stasyeva, L. A.
2017-12-01
This work presents the results of laboratory and field tests of the thermotropic composition MEGA, which contains two simultaneously acting gelling components, one polymeric and one inorganic. The composition is intended for improving oil recovery and water shut-off at oilfields developed by thermal flooding and at cyclic-steam-stimulated oil production wells. The composition forms an in-situ "gel-in-gel" system with improved structural-mechanical properties, using reservoir or carrier-fluid heat for gelling. The gel blocks water breakthrough into producing wells and redistributes fluid flows, thus increasing the oil recovery factor.
Tobin, R S; Dutka, B J
1977-01-01
A comparative study was made of nine commonly used membrane filters from five manufacturers, all recommended for enumeration of coliform bacteria. Bacterial recoveries and flow rates were examined from three types of water and were found to correlate with the surface pore structure determined by scanning electron microscopy. The sorption of metals was also determined. The results of these studies indicate that the five best membranes for fecal coliform recovery could be placed in two groups: Millipore HC and Gelman, followed by Johns-Manville SG and AG and Sartorius 13806. PMID:329763
Review of "The Louisiana Recovery School District: Lessons for the Buckeye State"
ERIC Educational Resources Information Center
Buras, Kristen L.
2012-01-01
In "The Louisiana Recovery School District: Lessons for the Buckeye State," the Thomas B. Fordham Institute criticizes local urban governance structures and presents the decentralized, charter-school-driven Recovery School District (RSD) in New Orleans as a successful model for fiscal and academic performance. Absent from the review is…
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. This is the first time that approximation errors have been explicitly considered in a GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
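The convergence-to-a-neighborhood property has a standard special case that is easy to check numerically: value iteration on a finite MDP with a bounded error injected at each sweep settles within eps/(1-gamma) of the optimal value function in the sup norm. The sketch below is that special case with randomly generated MDP data; it is not the paper's GPI algorithm, only the simplest member of the family it generalizes.

```python
import numpy as np

rng = np.random.default_rng(3)
nS, nA, gamma, eps = 20, 4, 0.9, 1e-2   # states, actions, discount, error bound

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.random((nS, nA))

def bellman(V):
    return (R + gamma * P @ V).max(axis=1)

V_star = np.zeros(nS)        # exact value iteration for reference
for _ in range(2000):
    V_star = bellman(V_star)

V = np.zeros(nS)             # value iteration with bounded approximation error
for _ in range(2000):
    V = bellman(V) + rng.uniform(-eps, eps, nS)

gap = np.abs(V - V_star).max()
print(f"||V - V*||_inf = {gap:.4f}  (bound eps/(1-gamma) = {eps / (1 - gamma):.4f})")
```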
O'Donnell, Katherine; Messerman, Arianne F; Barichivich, William J.; Semlitsch, Raymond D.; Gorman, Thomas A.; Mitchell, Harold G; Allan, Nathan; Fenolio, Dante B.; Green, Adam; Johnson, Fred A.; Keever, Allison; Mandica, Mark; Martin, Julien; Mott, Jana; Peacock, Terry; Reinman, Joseph; Romañach, Stephanie; Titus, Greg; McGowan, Conor P.; Walls, Susan
2017-01-01
At least one-third of all amphibian species face the threat of extinction, and current amphibian extinction rates are four orders of magnitude greater than background rates. Preventing extirpation often requires both ex situ (i.e., conservation breeding programs) and in situ strategies (i.e., protecting natural habitats). Flatwoods salamanders (Ambystoma bishopi and A. cingulatum) are protected under the U.S. Endangered Species Act. The two species have decreased from 476 historical locations to 63 recently extant locations (86.8% loss). We suggest that recovery efforts are needed to increase populations and prevent extinction, but uncertainty regarding optimal actions in both ex situ and in situ realms hinders recovery planning. We used structured decision making (SDM) to address key uncertainties regarding both captive breeding and habitat restoration, and we developed short-, medium-, and long-term goals to achieve recovery objectives. By promoting a transparent, logical approach, SDM has proven vital to recovery plan development for flatwoods salamanders. The SDM approach has clear advantages over other previous approaches to recovery efforts, and we suggest that it should be considered for other complex decisions regarding endangered species.
Katsakou, Christina; Pistrang, Nancy; Barnicot, Kirsten; White, Hayley; Priebe, Stefan
2017-07-04
Recovery processes in borderline personality disorder (BPD) are poorly understood. This study explored how recovery in BPD occurs through routine or specialist treatment, as perceived by service users (SUs) and therapists. SUs were recruited from two specialist BPD services, three community mental health teams, and one psychological therapies service. Semi-structured interviews were conducted with 48 SUs and 15 therapists. The "framework" approach was used to analyse the data. The findings were organized into two domains of themes. The first domain described three parallel processes that constituted SUs' recovery journey: fighting ambivalence and committing to taking action; moving from shame to self-acceptance and compassion; and moving from distrust and defensiveness to opening up to others. The second domain described four therapeutic challenges that needed to be addressed to support this journey: balancing self-exploration and finding solutions; balancing structure and flexibility; confronting interpersonal difficulties and practicing new ways of relating; and balancing support and independence. Therapies facilitating the identified processes may promote recovery. The recovery processes and therapeutic challenges identified in this study could provide a framework to guide future research.
Brehm, Laurel; Goldrick, Matthew
2017-10-01
The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
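The cline-versus-classes question can be cast as a small model-comparison exercise: fit a continuous linear model and a two-level (discrete class) model to error rates ordered by VPC transparency and compare an information criterion. The sketch below uses synthetic data and a brute-force breakpoint search; it is a simplification of the paper's preregistered piecewise-regression analysis, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(6)
transparency = np.linspace(0, 1, 40)      # synthetic VPC transparency scores
errors = 0.30 - 0.18 * transparency + rng.normal(0, 0.03, 40)   # cline-like rates

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

# Cline model: one continuous linear fit
A = np.column_stack([np.ones_like(transparency), transparency])
coef, rss_lin, *_ = np.linalg.lstsq(A, errors, rcond=None)
bic_cline = bic(rss_lin[0], len(errors), k=2)

# Discrete-class model: two flat levels split at the best breakpoint
best_rss = np.inf
for cut in transparency[5:-5]:
    lo, hi = errors[transparency <= cut], errors[transparency > cut]
    rss = ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()
    best_rss = min(best_rss, rss)
bic_classes = bic(best_rss, len(errors), k=3)   # two means plus a breakpoint

print(f"BIC cline {bic_cline:.1f} vs. discrete classes {bic_classes:.1f} (lower wins)")
```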
Bodkin, James L.; Ballachey, Brenda E.; Esslinger, George G.
2011-01-01
Sea otters in western Prince William Sound (WPWS) and elsewhere in the Gulf of Alaska suffered widespread mortality as a result of oiling following the 1989 T/V Exxon Valdez oil spill. Following the spill, extensive efforts have been directed toward identifying and understanding long-term consequences of the spill and the process of recovery. We conducted annual aerial surveys of sea otter abundance from 1993 to 2009 (except for 2001 and 2006) in WPWS. We observed an increasing trend in population abundance at the scale of WPWS through 2000 at an average annual rate of 4 percent; however, at northern Knight Island, where oiling was heaviest and sea otter mortality highest, no increase in abundance was evident by 2000. We continued to see a significant increase in abundance at the scale of WPWS between 2001 and 2009, with an average annual rate of increase from 1993 to 2009 of 2.6 percent. We estimated the 2009 population size of WPWS to be 3,958 animals (standard error=653), nearly 2,000 animals more than the first post-spill estimate in 1993. Surveys since 2003 also have identified a significant increasing trend at the heavily oiled site at northern Knight Island, averaging about 25 percent annually and resulting in a 2009 estimated population size of 116 animals (standard error=19). Although the 2009 estimate for northern Knight Island remains about 30 percent less than the pre-spill estimate of 165 animals, we interpret this trend as strong evidence of a trajectory toward recovery of spill-affected sea otter populations in WPWS.
NASA Astrophysics Data System (ADS)
Watford, M.; DeCusatis, C.
2005-09-01
With the advent of new regulations governing the protection and recovery of sensitive business data, including the Sarbanes-Oxley Act, there has been a renewed interest in business continuity and disaster recovery applications for metropolitan area networks. Specifically, there has been a need for more efficient bandwidth utilization and lower cost per channel to facilitate mirroring of multi-terabit data bases. These applications have further blurred the boundary between metropolitan and wide area networks, with synchronous disaster recovery applications running up to 100 km and asynchronous solutions extending to 300 km or more. In this paper, we discuss recent enhancements in the Nortel Optical Metro 5200 Dense Wavelength Division Multiplexing (DWDM) platform, including features recently qualified for data communication applications such as Metro Mirror, Global Mirror, and Geographically Distributed Parallel Sysplex (GDPS). Using a 10 Gigabit/second (Gbit/s) backbone, this solution transports significantly more Fibre Channel protocol traffic with up to five times greater hardware density in the same physical package. This is also among the first platforms to utilize forward error correction (FEC) on the aggregate signals to improve bit error rate (BER) performance beyond industry standards. When combined with encapsulation into wide area network protocols, the use of FEC can compensate for impairments in BER across a service provider infrastructure without impacting application level performance. Design and implementation of these features will be discussed, including results from experimental test beds which validate these solutions for a number of applications. Future extensions of this environment will also be considered, including ways to provide configurable bandwidth on demand, mitigate Fibre Channel buffer credit management issues, and support for other GDPS protocols.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false-positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
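The key insight is language-independent, so it can be illustrated without the compiler machinery. In the Python stand-in below (invented names; the real PRESAGE is a compiler-level transformation), array addresses are computed through a recurrence, so a simulated bit-flip in the index register propagates to every later access and to the final index, where one cheap loop-exit detector catches it:

```python
def sum_strided(a, start, stride, n, flip_at=None):
    """Sum n strided elements of a, computing indices by a recurrence so that
    an index corruption flows forward instead of vanishing after one access."""
    idx, total = start, 0.0
    for i in range(n):
        if i == flip_at:
            idx ^= 1 << 3                 # simulated soft error in address generation
        total += a[idx % len(a)]          # modulo only keeps the demo in range
        idx += stride                     # recurrence: a corrupted idx stays corrupted
    if idx != start + n * stride:         # single detector at the loop exit
        raise RuntimeError("address-generation fault detected at loop exit")
    return total

data = [float(i) for i in range(64)]
print(sum_strided(data, 0, 2, 16))            # clean run: 240.0
try:
    sum_strided(data, 0, 2, 16, flip_at=5)    # corrupted run is caught at exit
except RuntimeError as exc:
    print(exc)
```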
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. An error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe that are commonly present when using intensity-discrete patterns, and it provides robustness against severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light, as well as in safety-critical applications such as the monitoring of deformations of components in nuclear power plants, where high reliability must be ensured even during short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
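One standard way to realize such pixel-wise correction is to encode each pixel's temporal bit sequence with a short block code. The sketch below uses a Hamming(7,4) code as a plausible stand-in (the paper's specific code may differ): each pixel's 7-frame grey-level bit sequence carries a 4-bit fringe order and survives any single corrupted frame at that pixel.

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 0, 1, 1],     # Hamming(7,4) generator, systematic form
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],     # matching parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def encode(bits4):
    return (bits4 @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                    # locate and flip the single bad bit
        bad = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy(); word7[bad] ^= 1
    return word7[:4]                      # systematic code: data bits come first

order = np.array([1, 0, 1, 1])            # a 4-bit fringe order
tx = encode(order)
rx = tx.copy(); rx[2] ^= 1                # one frame misread at this pixel
assert (decode(rx) == order).all()
print("recovered fringe-order bits:", decode(rx))
```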
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing-approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo
2011-03-01
Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on the verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median 8,000,000 Japanese yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. This closed claims analysis based on the verdicts' judgments showed that cognitive errors were common in cases of medical injury, with errors in judgment being the most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. © 2010 Society of Hospital Medicine.
Public sector refraction and spectacle dispensing in low-resource countries of the Western Pacific.
Ramke, Jacqueline; du Toit, Rènée; Palagyi, Anna; Williams, Carmel; Brian, Garry
2008-05-01
Given that uncorrected refractive error is a frequent cause of vision impairment, and that there is a high unmet need for spectacles, an appraisal of public sector arrangements for the correction of refractive error was conducted in eight Pacific Island countries. Mixed methods (questionnaire and semi-structured interviews) were used to collect information from eye care personnel (from Fiji, Papua New Guinea, Solomon Islands, Vanuatu, Cook Islands, Samoa, Tonga and Tuvalu) attending a regional eye health workshop in 2005. Fiji, Tonga and Vanuatu had Vision 2020 eye care plans that included refraction services, but not spectacle provision. There was wide variation in public sector spectacle dispensing services, but, except in Samoa, ready-made spectacles and a full cost recovery pricing strategy were the mainstay. There were no systems for the registration of personnel, nor guidelines for clinical or systems management. The refraction staff to population ratio varied considerably. Solomon Islands, Tuvalu and Vanuatu had the best coverage by services, either fixed or outreach. Most services had little promotional activity or community engagement. To be successful, it would seem that public sector refraction services should answer a real and perceived need, fit within prevailing policy and legislation, value, train, retain and equip employees, be well managed, be accessible and affordable, be responsive to consumers, and provide ongoing good quality outcomes. To this end, a checklist to aid the initiation and maintenance of refraction and spectacle systems in low-resource countries has been constructed.
Quantitative charge-tags for sterol and oxysterol analysis.
Crick, Peter J; William Bentley, T; Abdel-Khalik, Jonas; Matthews, Ian; Clayton, Peter T; Morris, Andrew A; Bigger, Brian W; Zerbinati, Chiara; Tritapepe, Luigi; Iuliano, Luigi; Wang, Yuqin; Griffiths, William J
2015-02-01
Global sterol analysis is challenging owing to the extreme diversity of sterol natural products, the tendency of cholesterol to dominate in abundance over all other sterols, and the structural lack of a strong chromophore or readily ionized functional group. We developed a method to overcome these challenges by using different isotope-labeled versions of the Girard P reagent (GP) as quantitative charge-tags for the LC-MS analysis of sterols including oxysterols. Sterols/oxysterols in plasma were extracted in ethanol containing deuterated internal standards, separated by C18 solid-phase extraction, and derivatized with GP, with or without prior oxidation of 3β-hydroxy to 3-oxo groups. By use of different isotope-labeled GPs, it was possible to analyze in a single LC-MS analysis both sterols/oxysterols that naturally possess a 3-oxo group and those with a 3β-hydroxy group. Intra- and interassay CVs were <15%, and recoveries for representative oxysterols and cholestenoic acids were 85%-108%. By adopting a multiplex approach to isotope labeling, we analyzed up to 4 different samples in a single run. Using plasma samples, we could demonstrate the diagnosis of inborn errors of metabolism and also the export of oxysterols from brain via the jugular vein. This method allows the profiling of the widest range of sterols/oxysterols in a single analytical run and can be used to identify inborn errors of cholesterol synthesis and metabolism. © 2014 American Association for Clinical Chemistry.
Image Restoration in Cryo-electron Microscopy
Penczek, Pawel A.
2011-01-01
Image restoration techniques are used to obtain, given experimental measurements, the best possible approximation of the original object within the limits imposed by instrumental conditions and noise level in the data. In molecular electron microscopy, we are mainly interested in linear methods that preserve the respective relationships between mass densities within the restored map. Here, we describe the methodology of image restoration in structural electron microscopy, and more specifically, we will focus on the problem of the optimum recovery of Fourier amplitudes given electron microscope data collected under various defocus settings. We discuss in detail two classes of commonly used linear methods, the first of which consists of methods based on pseudoinverse restoration, and which is further subdivided into mean-square error, chi-square error, and constrained based restorations, where the methods in the latter two subclasses explicitly incorporates non-white distribution of noise in the data. The second class of methods is based on the Wiener filtration approach. We show that the Wiener filter-based methodology can be used to obtain a solution to the problem of amplitude correction (or “sharpening”) of the electron microscopy map that makes it visually comparable to maps determined by X-ray crystallography, and thus amenable to comparable interpretation. Finally, we present a semi-heuristic Wiener filter-based solution to the problem of image restoration given sets of heterogeneous solutions. We conclude the chapter with a discussion of image restoration protocols implemented in commonly used single particle software packages. PMID:20888957
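For reference, the Wiener-filter restoration discussed above, for images $G_i$ collected under different defocus settings with contrast transfer functions $\mathrm{CTF}_i$, is commonly written as the following estimator of the object's Fourier transform (notation is generic, not necessarily the chapter's):

```latex
\hat{F}(\mathbf{s}) \;=\;
\frac{\sum_i \mathrm{CTF}_i^{*}(\mathbf{s})\, G_i(\mathbf{s})}
     {\sum_i \left|\mathrm{CTF}_i(\mathbf{s})\right|^{2} \;+\; 1/\mathrm{SSNR}(\mathbf{s})}
```

Here $\mathrm{SSNR}(\mathbf{s})$ is the spectral signal-to-noise ratio; letting it grow without bound recovers the pseudoinverse (least-squares) restoration, which connects the two classes of linear methods described above.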
Asymmetric anode and cathode extraction structure fast recovery diode
NASA Astrophysics Data System (ADS)
Xie, Jiaqiang; Ma, Li; Gao, Yong
2018-05-01
This paper presents a fast and soft recovery diode with an asymmetric anode structure and cathode extraction. The device anode is partly heavily doped and partly lightly doped, and a P+ region is introduced into the cathode. The characteristics of the diode were first simulated and analyzed; the diode was then fabricated and its characteristics tested. The experimental results are in good agreement with the simulation results. The results show that, compared with the P–i–N diode, although the forward conduction characteristic of the diode is degraded, the reverse recovery peak current is reduced by 47%, the reverse recovery time is shortened by 20%, and the softness factor is doubled. In addition, the breakdown voltage is increased by 10%. Project supported by the National Natural Science Foundation of China (No. 51177133).
Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin
2016-09-03
While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. First, based on a projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green, and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the equivalent errors caused by the LCA of the camera and the projector. During measurements, LCA error values are calculated from the 3D error map by tri-linear interpolation and compensated for in the projector image coordinates. Finally, 3D coordinates with higher accuracy are re-calculated from the compensated image coordinates. The effectiveness of the proposed method is verified experimentally. PMID:27598174
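A minimal sketch of the compensation step: a 3-D error map sampled on a regular grid over projector coordinates and depth, queried by tri-linear interpolation and subtracted from measured coordinates. The grid extents and error values below are invented placeholders for a calibrated map.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 3-D LCA error map over projector image coordinates (u, v) and depth z
u = np.linspace(0, 1024, 9)
v = np.linspace(0, 768, 7)
z = np.linspace(400, 900, 6)                       # mm, assumed measurement volume
rng = np.random.default_rng(4)
err_u = rng.normal(0, 0.3, size=(9, 7, 6))         # du error at each grid sample
err_v = rng.normal(0, 0.3, size=(9, 7, 6))         # dv error at each grid sample

# RegularGridInterpolator's default 'linear' method is tri-linear interpolation
interp_u = RegularGridInterpolator((u, v, z), err_u)
interp_v = RegularGridInterpolator((u, v, z), err_v)

def compensate(pts_uvz):
    """Subtract interpolated LCA error from measured projector coordinates."""
    pts = np.atleast_2d(pts_uvz).astype(float)
    duv = np.stack([interp_u(pts), interp_v(pts)], axis=1)
    out = pts.copy()
    out[:, :2] -= duv
    return out

print(compensate([512.3, 384.8, 650.0]))           # corrected (u, v, z)
```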
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
Ground Taxi Navigation Problems and Training Solutions
NASA Technical Reports Server (NTRS)
Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark (Technical Monitor)
1997-01-01
Adverse weather conditions can put considerable strain on the National Airspace System. Even small decreases in visibility on the airport surface can create delays, hinder safe movement, and lead to errors. Studies of Aviation Safety Reporting System (ASRS) surface movement incidents support the need for technologies and procedures to improve ground operations in low-visibility conditions. This study examined 139 ASRS reports of low-visibility surface movement incidents at 10 major U.S. airports. Errors were characterized in terms of incident type, contributing factors, and consequences. The incidents in the present sample comprised runway transgressions, taxiway excursions, and ground conflicts. The primary contributing factors were Airport Layout and Markings, Communication, and Distraction. In half the incidents the controller issued a new clearance or the flight crew took evasive action; in the remaining half, no recovery attempt was made because the error was detected after the fact. By gaining a better understanding of the factors that affect crew navigation in low visibility and the types of errors that are likely to occur, it will be possible to develop more robust technologies to aid pilots in the ground taxi task. Implications for crew training and procedure development for low-visibility ground taxi are also discussed.
Otte, Willem M; van der Marel, Kajo; van Meer, Maurits P A; van Rijen, Peter C; Gosselaar, Peter H; Braun, Kees P J; Dijkhuizen, Rick M
2015-08-01
Hemispherectomy is often followed by remarkable recovery of cognitive and motor functions. This reflects plastic capacities of the remaining hemisphere, involving large-scale structural and functional adaptations. Better understanding of these adaptations may (1) provide new insights in the neuronal configuration and rewiring that underlies sensorimotor outcome restoration, and (2) guide development of rehabilitation strategies to enhance recovery after hemispheric lesioning. We assessed brain structure and function in a hemispherectomy model. With MRI we mapped changes in white matter structural integrity and gray matter functional connectivity in eight hemispherectomized rats, compared with 12 controls. Behavioral testing involved sensorimotor performance scoring. Diffusion tensor imaging and resting-state functional magnetic resonance imaging were acquired 7 and 49 days post surgery. Hemispherectomy caused significant sensorimotor deficits that largely recovered within 2 weeks. During the recovery period, fractional anisotropy was maintained and white matter volume and axial diffusivity increased in the contralateral cerebral peduncle, suggestive of preserved or improved white matter integrity despite overall reduced white matter volume. This was accompanied by functional adaptations in the contralateral sensorimotor network. The observed white matter modifications and reorganization of functional network regions may provide handles for rehabilitation strategies improving functional recovery following large lesions.
Experimental sulfate amendment alters peatland bacterial community structure.
Strickman, R J S; Fulthorpe, R R; Coleman Wasik, J K; Engstrom, D R; Mitchell, C P J
2016-10-01
As part of a long-term, peatland-scale sulfate addition experiment, the impact of varying sulfate deposition on bacterial community responses was assessed using 16S tag-encoded pyrosequencing. In three separate areas of the peatland, the sulfate manipulations included an eight-year quadrupling of atmospheric sulfate deposition (experimental), a 3-year recovery to background deposition following 5 years of elevated deposition (recovery), and a control area. Peat concentrations of methylmercury (MeHg), a bioaccumulative neurotoxin, were measured; the production of MeHg is attributable to a growing list of microorganisms, including many sulfate-reducing Deltaproteobacteria. The total bacterial and Deltaproteobacterial community structures in the experimental treatment differed significantly from those in the control and recovery treatments, which were either indistinguishable or very similar to one another. Notably, the relatively rapid return (within three years) of bacterial community structure in the recovery treatment to a state similar to the control demonstrates significant resilience of the peatland bacterial community to changes in atmospheric sulfate deposition. Changes in MeHg accumulation between sulfate treatments correlated with changes in the Deltaproteobacterial community, suggesting that sulfate may affect MeHg production through changes in the community structure of this group. Copyright © 2016 Elsevier B.V. All rights reserved.
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
GOCE Precise Science Orbits for the Entire Mission and their Use for Gravity Field Recovery
NASA Astrophysics Data System (ADS)
Jäggi, Adrian; Bock, Heike; Meyer, Ulrich; Weigelt, Matthias
The Gravity field and steady-state Ocean Circulation Explorer (GOCE), ESA's first Earth Explorer Core Mission, was launched on March 17, 2009 into a sun-synchronous dusk-dawn orbit and re-entered the Earth's atmosphere on November 11, 2013. It was equipped with a three-axis gravity gradiometer for high-resolution recovery of the Earth's gravity field, as well as with a 12-channel, dual-frequency Global Positioning System (GPS) receiver for precise orbit determination (POD), instrument time-tagging, and the determination of the long-wavelength part of the Earth's gravity field. A precise science orbit (PSO) product was provided during the entire mission by the GOCE High-level Processing Facility (HPF) from the GPS high-low Satellite-to-Satellite Tracking (hl-SST) data. We present the reduced-dynamic and kinematic PSO results for the entire mission period. Orbit comparisons and validations with independent Satellite Laser Ranging (SLR) measurements demonstrate the high quality of both orbit products, which is close to 2 cm 1-D RMS, but also reveal a correlation between solar activity, GPS data availability, and the quality of the orbits. We use the 1-sec kinematic positions of the GOCE PSO product for gravity field determination and present GPS-only solutions covering the entire mission period. The generated gravity field solutions reveal severe systematic errors centered along the geomagnetic equator, which may be traced back to the GPS carrier phase observations used for the kinematic orbit determination. The nature of the systematic errors is further investigated, and reprocessed orbits free of systematic errors along the geomagnetic equator are derived. Finally, the potential of recovering time-variable signals from GOCE kinematic positions is assessed.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal for communicating with ACTS in various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. It allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software; configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide that details the current implementation of the software from a technical perspective, including an overview of the software, computer algorithms, format representations, and the computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software, including command transmission, telemetry reception, error detection, and error recovery procedures.
Against Structural Constraints in Subject-Verb Agreement Production
ERIC Educational Resources Information Center
Gillespie, Maureen; Pearlmutter, Neal J.
2013-01-01
Syntactic structure has been considered an integral component of agreement computation in language production. In agreement error studies, clause-boundedness (Bock & Cutting, 1992) and hierarchical feature-passing (Franck, Vigliocco, & Nicol, 2002) predict that local nouns within clausal modifiers should produce fewer errors than do those within…
Fowler, David; Hodgekins, Jo; French, Paul; Marshall, Max; Freemantle, Nick; McCrone, Paul; Everard, Linda; Lavis, Anna; Jones, Peter B; Amos, Tim; Singh, Swaran; Sharma, Vimal; Birchwood, Max
2018-01-01
Provision of early intervention services has increased the rate of social recovery in patients with first-episode psychosis; however, many individuals have continuing severe and persistent problems with social functioning. We aimed to assess the efficacy of early intervention services augmented with social recovery therapy in patients with first-episode psychosis. The primary hypothesis was that social recovery therapy plus early intervention services would lead to improvements in social recovery. We did this single-blind, phase 2, randomised controlled trial (SUPEREDEN3) at four specialist early intervention services in the UK. We included participants who were aged 16-35 years, had non-affective psychosis, had been clients of early intervention services for 12-30 months, and had persistent and severe social disability, defined as engagement in less than 30 h per week of structured activity. Participants were randomly assigned (1:1), via computer-generated randomisation with permuted blocks (sizes of four to six), to receive social recovery therapy plus early intervention services or early intervention services alone. Randomisation was stratified by sex and recruitment centre (Norfolk, Birmingham, Lancashire, and Sussex). By necessity, participants were not masked to group allocation, but allocation was concealed from outcome assessors. The primary outcome was time spent in structured activity at 9 months, as measured by the Time Use Survey. Analysis was by intention to treat. This trial is registered with ISRCTN, number ISRCTN61621571. Between Oct 1, 2012, and June 20, 2014, we randomly assigned 155 participants to receive social recovery therapy plus early intervention services (n=76) or early intervention services alone (n=79); the intention-to-treat population comprised 154 patients. At 9 months, 143 (93%) participants had data for the primary outcome. Social recovery therapy plus early intervention services was associated with an increase in structured activity of 8·1 h (95% CI 2·5-13·6; p=0·0050) compared with early intervention services alone. No adverse events were deemed attributable to study therapy. Our findings show a clinically important benefit of enhanced social recovery on structured activity in patients with first-episode psychosis who received social recovery therapy plus early intervention services. Social recovery therapy might be useful in improving functional outcomes in people with first-episode psychosis, particularly in individuals not motivated to engage in existing psychosocial interventions targeting functioning, or who have comorbid difficulties preventing them from doing so. National Institute for Health Research. Copyright © 2017 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
Neural-network system error correction is more precise than the least-squares and spherical-harmonic-function system error correction methods. The accuracy of a neural-network correction method depends mainly on the architecture of the network. Analysis and simulation show that both the BP and RBF neural-network correction methods achieve high correction accuracy; for small training samples, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.
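To make the comparison concrete, here is a minimal numpy sketch of the RBF-regression idea the abstract describes: a smooth system error is learned from calibration samples and then predicted at new coordinates so it can be subtracted out. The grid of centers, the Gaussian width, and the synthetic error curve are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Toy system-error correction: learn the residual error as a smooth
# function of the measured coordinate via Gaussian RBF regression.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)                                      # measured coordinate
err = 0.05 * np.sin(6 * x) + 0.01 * rng.standard_normal(x.size)    # observed system error

centers = np.linspace(0.0, 1.0, 10)   # RBF centers (assumed grid)
width = 0.1                           # Gaussian width (assumption)

def phi(u):
    # Gaussian RBF design matrix: one column per center
    return np.exp(-((u[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Linear output weights by least squares (the RBF net's output layer)
w, *_ = np.linalg.lstsq(phi(x), err, rcond=None)

x_new = np.array([0.33, 0.71])
correction = phi(x_new) @ w           # predicted error to subtract
print(correction)
```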
Tolerant compressed sensing with partially coherent sensing matrices
NASA Astrophysics Data System (ADS)
Birnbaum, Tobias; Eldar, Yonina C.; Needell, Deanna
2017-08-01
Most compressed sensing (CS) theory to date focuses on incoherent sensing, that is, on sensing matrices whose columns are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly; instead, small errors in the support and signal are tolerable. Despite the abundance of work utilizing incoherent sensing matrices, for this type of tolerant recovery we suggest that coherence is actually beneficial. We promote the use of coherent sampling when tolerant support recovery is acceptable, and demonstrate its advantages empirically. In addition, we provide a first step towards theoretical analysis by considering a specific reconstruction method for selected signal classes.
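As a concrete illustration of tolerant support recovery with a coherent matrix, the sketch below builds a dictionary of heavily overlapping Gaussian pulses, runs plain orthogonal matching pursuit, and scores an estimated atom as a hit if it lands within a small index tolerance of a true atom. The dictionary, the tolerance, and the use of OMP are illustrative assumptions; the paper's own reconstruction method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 256, 4

# Coherent dictionary: heavily overlapping Gaussian pulses, so
# neighbouring columns are highly correlated (a stand-in for the
# correlated sensing systems discussed above).
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - np.linspace(0, n, m)[None, :]) / 3.0) ** 2)
A /= np.linalg.norm(A, axis=0)

true_supp = rng.choice(m, size=k, replace=False)
x = np.zeros(m)
x[true_supp] = rng.standard_normal(k)
y = A @ x

def omp(A, y, k):
    # Plain orthogonal matching pursuit
    r, supp = y.copy(), []
    for _ in range(k):
        supp.append(int(np.argmax(np.abs(A.T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
        r = y - A[:, supp] @ xs
    return supp

est = omp(A, y, k)
# Tolerant criterion: an estimated atom within +/- d indices of a true
# atom counts as a hit (d is an illustrative tolerance).
d = 2
hits = sum(any(abs(e - s) <= d for s in true_supp) for e in est)
print(f"{hits}/{k} atoms recovered within tolerance {d}")
```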
Performance of optimum detector structures for noisy intersymbol interference channels
NASA Technical Reports Server (NTRS)
Womer, J. D.; Fritchman, B. D.; Kanal, L. N.
1971-01-01
The errors that arise in transmitting digital information over radio or wireline systems, caused by additive noise and by successively transmitted signals interfering with one another, are described. The probability of error and the performance of optimum detector structures are examined. A comparative study of the performance of certain detector structures and approximations to them, and of the performance of a transversal equalizer, is included.
Effects of syllable structure in aphasic errors: implications for a new model of speech production.
Romani, Cristina; Galluzzi, Claudia; Bureca, Ivana; Olson, Andrew
2011-03-01
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices. Copyright © 2010 Elsevier Inc. All rights reserved.
The relationship between wisdom and abstinence behaviors in women in recovery from substance abuse.
Digangi, Julia A; Jason, Leonard A; Mendoza, Leslie; Miller, Steve A; Contreras, Richard
2013-01-01
Wisdom is theorized to be an important construct in recovery from substance abuse. In order to explore the role of wisdom in substance abuse recovery behaviors, the present study had two goals. First, it sought to examine the factor structure of a wisdom scale, the Foundational Value Scale (FVS) in a community sample of women in recovery from substance abuse. Second, the study examined how wisdom predicted the women's beliefs about their ability to abstain from future substance use. 116 women in recovery from substance abuse disorders were recruited from self-run recovery homes and a substance abuse recovery convention. Results from an exploratory factor analysis indicated that a modified version of the FVS has good internal consistency reliability and is composed of three wisdom-related dimensions. The three factors were then used to create a higher-order wisdom factor in a structural equation model (SEM) that was used to predict abstinence self-efficacy behaviors. Results from the SEM showed that the wisdom factor was predictive of greater abstinence self-efficacy behaviors. The FVS was found to be a reliable measure with women in recovery from substance abuse. In addition, wisdom predicted beliefs about self-efficacy such that those who reported higher levels of wisdom felt more confident in their abilities to abstain from alcohol. The results of this study indicate that wisdom is an important construct in the abstinence behaviors of women who are in recovery from substance abuse disorders.
Bartel, Thomas W.; Yaniv, Simone L.
1997-01-01
The 60 min creep data from National Type Evaluation Procedure (NTEP) tests performed at the National Institute of Standards and Technology (NIST) on 65 load cells have been analyzed in order to compare their creep and creep recovery responses, and to compare the 60 min creep with creep over shorter time periods. To facilitate this comparison the data were fitted to a multiple-term exponential equation, which adequately describes the creep and creep recovery responses of load cells. The use of such a curve fit reduces the effect of the random error in the indicator readings on the calculated values of the load cell creep. Examination of the fitted curves shows that the creep recovery responses, after inversion by a change in sign, are generally similar in shape to the creep response, but smaller in magnitude. The average ratio of the absolute value of the maximum creep recovery to the maximum creep is 0.86; however, no reliable correlation between creep and creep recovery can be drawn from the data. The fitted curves were also used to compare the 60 min creep of the NTEP analysis with the 30 min creep and other parameters calculated according to the Organisation Internationale de Métrologie Légale (OIML) R 60 analysis. The average ratio of the 30 min creep value to the 60 min value is 0.84. The OIML class C creep tolerance is less than 0.5 of the NTEP tolerance for classes III and III L. PMID:27805151
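A minimal sketch of fitting a multiple-term exponential creep model of the general kind described above, using scipy's curve_fit; the two-term form, the synthetic parameter values, and the noise level are assumptions for illustration, not the NIST data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-term exponential creep model (number of terms and parameters are
# illustrative assumptions).
def creep(t, c0, a1, tau1, a2, tau2):
    return c0 + a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

t = np.linspace(0, 60, 121)                    # minutes, 0.5 min steps
y = creep(t, 0.0, 8e-3, 2.0, 4e-3, 25.0)       # synthetic "true" response
y += 2e-4 * np.random.default_rng(2).standard_normal(t.size)  # indicator noise

# Fitting smooths out the random indicator error before creep values are read off
p, _ = curve_fit(creep, t, y, p0=[0, 5e-3, 1.0, 5e-3, 20.0])
fit = creep(t, *p)
print("30 min creep / 60 min creep =", fit[t == 30][0] / fit[-1])
```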
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
ERIC Educational Resources Information Center
Chen, Yu-Chuan
2015-01-01
This study aims to investigate the direction and strength of the relationships among service recovery, relationship quality, and brand image in higher education industries. This research provides a framework for school managers to understand service recovery from an operations perspective. Structural equation models were used to test the proposed…
Multiplicity Control in Structural Equation Modeling
ERIC Educational Resources Information Center
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
The Zero Product Principle Error.
ERIC Educational Resources Information Center
Padula, Janice
1996-01-01
Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)
Errors of Inference in Structural Equation Modeling
ERIC Educational Resources Information Center
McCoach, D. Betsy; Black, Anne C.; O'Connell, Ann A.
2007-01-01
Although structural equation modeling (SEM) is one of the most comprehensive and flexible approaches to data analysis currently available, it is nonetheless prone to researcher misuse and misconceptions. This article offers a brief overview of the unique capabilities of SEM and discusses common sources of user error in drawing conclusions from…
The error structure of the SMAP single and dual channel soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
Knowledge of the temporal error structure for remotely-sensed surface soil moisture retrievals can improve our ability to exploit them for hydrology and climate studies. This study employs a triple collocation type analysis to investigate both the total variance and temporal auto-correlation of erro...
Quantifying Adventitious Error in a Covariance Structure as a Random Effect
Wu, Hao; Browne, Michael W.
2017-01-01
We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the estimated dispersion parameter of this distribution gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463
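Schematically, and hedging on the exact parameterization, the random-effect formulation can be written as follows; the inverse-Wishart form is one common choice for a random covariance perturbation, and the precise scaling constants belong to the authors' own development rather than this sketch:

\[
\Sigma \mid \theta \;\sim\; \mathrm{IW}_p\!\big(m\,\Omega(\theta),\; m\big),
\qquad \text{misspecification} \;\propto\; \frac{1}{m},
\]

so that a larger dispersion-precision \(m\) concentrates the population covariance \(\Sigma\) around the structured model \(\Omega(\theta)\), and the estimated dispersion plays the role of a misspecification measure akin to the RMSEA.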
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly part-to-part variation in the coefficient of thermal expansion, are more significant than the actual values.
The forecast for RAC extrapolation: mostly cloudy.
Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie
2011-09-01
The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.
ERIC Educational Resources Information Center
Vogel, Ronald J.; And Others
The types and ranges of errors made on applications to the Basic Educational Opportunity Grant (BEOG) program were studied, along with procedures used in recovering overpayments. The objective was to assess the scope and nature of misreporting and misuse of the BEOG program. A 1975-1976 study reviewed cases referred to the U.S. Office of Education…
High performance interconnection between high data rate networks
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.
1992-01-01
The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification also enables resequencing and the handling of differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit, which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.
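The resequencing function described above can be illustrated with a minimal in-order release buffer; this Python sketch is a software stand-in for the paper's hardware design, with explicit sequence numbers as an assumed mechanism.

```python
import heapq

class Resequencer:
    """Minimal in-order release buffer: packets may arrive scrambled,
    and are delivered only once every earlier sequence number has
    arrived (illustrative, not the paper's hardware design)."""
    def __init__(self):
        self.next_seq = 0
        self.heap = []                    # out-of-order packets, keyed by seq

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            out.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return out                        # packets now deliverable in order

r = Resequencer()
for seq, data in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    print(r.push(seq, data))
# prints: [] ['a', 'b'] [] ['c', 'd']
```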
Mak, Winnie W S; Chan, Randolph C H; Yau, Sania S W
2018-05-29
Considering the lack of existing measures on attitudes toward personal recovery and the need to acknowledge the cultural milieu in recovery attitude assessment, the present study developed and validated the Attitudes towards Recovery Questionnaire (ARQ) in a sample of people in recovery from mental illness, family carers, and mental health service providers in Hong Kong. The ARQ was developed based on existing literature and measures of recovery, and focus group discussions with various stakeholders. Findings of the multi-sample confirmatory factor analyses supported a five-factor structure: (1) resilience as a person in recovery, (2) self-appreciation and development, (3) self-direction, (4) family involvement, and (5) social ties and integration. The ARQ was positively correlated with recovery outcomes, empowerment, recovery knowledge, and recovery orientation of mental health services. As a tool for examining recovery attitudes, the ARQ informs us of the mindset across stakeholders and areas that need enhancement to facilitate the recovery process. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied for pharmaceutical formulation analysis, showing no interference from excipients.
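The two figures of merit named above, percentage recovery and root mean square error of calibration, are simple to compute from predicted versus known concentrations; the numbers below are synthetic and purely illustrative.

```python
import numpy as np

# Figures of merit used above: percentage recovery per sample and the
# root mean square error of calibration (RMSEC); values are synthetic.
c_true = np.array([10.0, 20.0, 30.0, 40.0])   # known concentrations
c_pred = np.array([10.2, 19.6, 30.5, 39.7])   # multivariate model predictions

recovery = 100 * c_pred / c_true              # % recovery per sample
rmsec = np.sqrt(np.mean((c_pred - c_true) ** 2))
print(recovery, rmsec)
```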
Improving the satellite communication efficiency of the accumulative acknowledgement strategies
NASA Astrophysics Data System (ADS)
Duarte, Otto Carlos M. B.; de Lima, Heliomar Medeiros
The performances of two finite-buffer error recovery strategies are analyzed. In both strategies the retransmission request decision between selective repeat and continuous retransmission is based on an imminent buffer overflow condition. Both are accumulative acknowledgment schemes, but in the second strategy the selective-repeat control frame is solely an individual negative acknowledgment. The two strategies take advantage of the availability of a greater buffer capacity, making the most of selective repeat and postponing the sending of a continuous retransmission request. Numerical results show performance very close to the ideal, although the scheme does not fully conform to high-level data link control (HDLC) procedures. It is shown that these strategies are well suited for high-speed data transfer in the high-error-rate satellite environment.
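The core decision rule of these strategies, staying with selective repeat until buffer overflow is imminent and only then requesting continuous retransmission, can be sketched in a few lines; the 90% occupancy threshold below is an illustrative assumption, not a value from the paper.

```python
def retransmission_mode(buffer_used, buffer_size, threshold=0.9):
    """Choose the retransmission request type as in the strategies above:
    selective repeat while buffer space remains, falling back to a
    continuous (go-back-N style) request when overflow is imminent.
    The 90% occupancy threshold is an illustrative assumption."""
    if buffer_used / buffer_size >= threshold:
        return "continuous"   # request retransmission of all outstanding frames
    return "selective"        # negative-acknowledge only the missing frame

print(retransmission_mode(450, 512))   # selective
print(retransmission_mode(500, 512))   # continuous
```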
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation
NASA Astrophysics Data System (ADS)
Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi
2017-12-01
Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.
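A simplified stand-in for the joint-support idea behind SIF-OMP: atoms are selected greedily by correlation energy summed across all frequency bins, so every bin contributes to a single shared support estimate. The dictionaries and dimensions are illustrative assumptions, and this sketch omits the paper's system-error analysis and binary-SIV refinements.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, n_bins = 32, 96, 3, 8

# One dictionary per frequency bin; the sources share a common support,
# mirroring the joint use of all frequency bins described above.
A = rng.standard_normal((n_bins, n, m))
supp = rng.choice(m, k, replace=False)
Y = np.stack([A[f][:, supp] @ rng.standard_normal(k) for f in range(n_bins)])

# Greedy fusion: pick the atom whose correlation energy, summed over all
# bins, is largest; then re-fit and update every bin's residual.
est, R = [], Y.copy()
for _ in range(k):
    score = sum(np.abs(A[f].T @ R[f]) ** 2 for f in range(n_bins))
    est.append(int(np.argmax(score)))
    for f in range(n_bins):
        xs, *_ = np.linalg.lstsq(A[f][:, est], Y[f], rcond=None)
        R[f] = Y[f] - A[f][:, est] @ xs

print(sorted(est), sorted(supp))   # estimated vs. true common support
```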
Havens: Explicit Reliable Memory Regions for HPC Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
2016-01-01
Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
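A toy illustration of software-based parity protection over a memory region, in the spirit of havens; this Python stand-in (the real mechanism would live in systems code) shows incremental parity maintenance on writes and detection, though not location, of a corrupted byte. The single-parity-byte scheme is an assumption made for brevity, not the paper's design.

```python
class Haven:
    """Toy parity-protected memory region: detects (but cannot locate)
    a corrupted byte. Illustrative only; a single XOR parity byte is an
    assumption for brevity."""
    def __init__(self, nbytes):
        self.data = bytearray(nbytes)
        self.parity = 0                        # XOR of all bytes, all-zero region

    def write(self, i, value):
        self.parity ^= self.data[i] ^ value    # update parity incrementally
        self.data[i] = value

    def check(self):
        live = 0
        for b in self.data:
            live ^= b
        return live == self.parity             # False => a bit flip occurred

h = Haven(16)
h.write(3, 0xAB)
h.data[3] ^= 0x04                              # simulate a transient bit flip
print(h.check())                               # False: corruption detected
```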
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
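A minimal simulation of the scenario studied: monoexponential (one-compartment, IV bolus) kinetics with a dosing interval shorter than the terminal half-life, where the dosing times the patient reports differ from the actual ones. All parameter values, the 1 h timing-error scale, and the sampling times are illustrative assumptions.

```python
import numpy as np

# Concentration under repeated IV bolus dosing with monoexponential
# elimination (one-compartment model).
def conc(t_obs, dose_times, dose, V, k):
    c = np.zeros_like(t_obs, dtype=float)
    for td in dose_times:
        dt = t_obs - td
        c += np.where(dt >= 0, (dose / V) * np.exp(-k * dt), 0.0)
    return c

k = np.log(2) / 24.0                     # 24 h terminal half-life (assumed)
tau = 12.0                               # 12 h dosing interval (< half-life)
reported = np.arange(0, 5 * tau, tau)    # dosing times as reported
rng = np.random.default_rng(4)
actual = reported + rng.normal(0, 1.0, reported.size)   # 1 h timing error

t_obs = np.array([49.0, 58.0])           # sparse sampling times (assumed)
bias = conc(t_obs, reported, 100, 50, k) - conc(t_obs, actual, 100, 50, k)
print(bias)   # concentration discrepancy induced by the timing error
```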
NASA Technical Reports Server (NTRS)
Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2016-01-01
When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder), the UAS must rely on the radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54000 pairwise encounters between a UAS and a non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well-clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high-performance UAS (2000 fpm climb and descent rate) was allowed to maneuver vertically and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truong, Samson S.
2014-01-01
Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test Bed, X-56A, aircraft is the flight demonstration of active flutter suppression; this study therefore identifies the primary and secondary modes for structural model tuning based on the flutter analysis of the X-56A. A structural dynamic finite element model of the X-56A, validated against ground vibration test data, is created in this study and improved using a model tuning tool. Two different weight configurations of the X-56A have been improved in a single optimization run.
Considerations in the design of large space structures
NASA Technical Reports Server (NTRS)
Hedgepeth, J. M.; Macneal, R. H.; Knapp, K.; Macgillivray, C. S.
1981-01-01
Several analytical studies of topics relevant to the design of large space structures are presented. Topics covered are: the types and quantitative evaluation of the disturbances to which large Earth-oriented microwave reflectors would be subjected, and the resulting attitude errors of such spacecraft; the influence of errors in the structural geometry on the performance of radiofrequency antennas; the effect of creasing on the flatness of a tensioned reflector membrane surface; and an analysis of the statistics of damage to truss-type structures due to meteoroids.
Sohn, J H; Smith, R; Yoong, E; Hudson, N; Kim, T I
2004-01-01
A novel laboratory wind tunnel, with the capability to control factors such as air flow-rate, was developed to measure the kinetics of odour emissions from liquid effluent. The tunnel allows the emission of odours and other volatiles under an atmospheric transport system similar to ambient conditions. Sensors for wind speed, temperature and humidity were installed and calibrated. To calibrate the wind tunnel, trials were performed to determine the gas recovery efficiency under different air flow-rates (ranging from 0.001 to 0.028 m3/s) and gas supply rates (ranging from 2.5 to 10.0 L/min) using a standard CO gas mixture. The results have shown gas recovery efficiencies ranging from 61.7 to 106.8%, while the average result from the trials was 81.14%. From statistical analysis, it was observed that the highest, most reliable gas recovery efficiency of the tunnel was 88.9%. The values of air flow-rate and gas supply rate corresponding to the highest gas recovery efficiency were 0.028 m3/s and 10.0 L/min respectively. This study suggested that the wind tunnel would provide precise estimates of odour emission rate. However, the wind tunnel needs to be calibrated to compensate for errors caused by different air flow-rates.
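The recovery-efficiency computation itself is a simple ratio of tracer recovered to tracer supplied; the numbers below are synthetic, chosen only so the result matches the best-case 88.9% figure quoted above, and the units are assumed.

```python
# Gas recovery efficiency as used in the wind-tunnel calibration above:
# tracer recovered at the outlet over tracer supplied (synthetic numbers).
supplied = 10.0 * 5.0     # L: 10.0 L/min CO supplied for 5 min (assumed duration)
recovered = 44.45         # L measured at the tunnel outlet (illustrative)

efficiency = 100 * recovered / supplied
print(f"recovery efficiency = {efficiency:.1f} %")   # 88.9 %
```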
NASA Astrophysics Data System (ADS)
Liang, Xuecheng
Dynamic hardness (Pd) of 22 different pure metals and alloys having a wide range of elastic modulus, static hardness, and crystal structure was measured in a gas pulse system. The indentation contact diameter with an indenting sphere and the radius (r2) of curvature of the indentation were determined by curve fitting of the indentation profile data. r2 measured by the profilometer was compared with that calculated from the Hertz equation under both dynamic and static conditions. The results indicated that the curvature change due to elastic recovery after unloading is approximately proportional to the parameters predicted by the Hertz equation. However, r2 is less than the radius of the indenting sphere in many cases, which contradicts the Hertz analysis. This discrepancy is believed to be due to the difference between the Hertzian and actual stress distributions underneath the indentation. Factors which influence indentation elastic recovery were also discussed. It was found that the Tabor dynamic hardness formula always gives a lower value than that obtained directly from the dynamic hardness definition ΔE/V, because of errors arising mainly from Tabor's rebound equation and from the assumption, made in deriving Tabor's formula, that the dynamic hardness at the beginning of the rebound process (Pr) is equal to the kinetic energy change of the impacting sphere over the formed crater volume (Pd). Experimental results also suggested that the dynamic-to-static hardness ratio of a material is primarily determined by its crystal structure and static hardness. The effects of strain rate and temperature rise on this ratio were discussed. A vacuum rotating-arm apparatus was built to measure Pd at 70, 127, and 381 μm sphere sizes; these results showed that Pd is highly dependent on the sphere size due to strain rate effects. Pd was also used as a substitute for static hardness to correlate with the abrasion and erosion resistance of metals and alloys. The particle size effects observed in erosion were also explained in terms of the change in Pd caused by the change in sphere size.
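The definitional point at issue can be stated compactly (this only restates quantities named in the abstract):

\[
P_d \;=\; \frac{\Delta E}{V},
\]

where \(\Delta E\) is the kinetic energy lost by the impacting sphere and \(V\) the volume of the crater it forms. Tabor's formula instead infers hardness from the rebound, under the assumption \(P_r = P_d\), i.e., that the hardness at the start of the rebound equals this energy-per-volume value, which is precisely the assumption the abstract identifies as a source of its systematic underestimate.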
Simões, Nathália Soares; de Oliveira, Hanna Leijoto; da Silva, Ricky Cássio Santos; Teixeira, Leila Suleimara; Sales, Thaís Lorenna Souza; de Castro, Whocely Victor; de Paiva, Maria José Nunes; Sanches, Cristina; Borges, Keyller Bastos
2018-05-17
In this work a hollow mesoporous structured molecularly imprinted polymer was synthesized and used as adsorbent in pipette-tip solid-phase extraction for the determination of lamivudine (3TC), zidovudine (AZT) and efavirenz (EFZ) from plasma of human immunodeficiency virus (HIV) infected patients by high-performance liquid chromatography (HPLC). All parameters that influence the recovery of the pipette tip based on hollow mesoporous molecularly imprinted polymer solid-phase extraction (PT-HM-MIP-SPE) method were systematically studied and discussed in detail. The adsorbent material was prepared using methacrylic acid and 4-vinylpyridine as functional monomers, ethylene glycol dimethacrylate as crosslinker, acetonitrile as solvent, 4,4'-azobis(4-cyanovaleric acid) as radical initiator, benzalkonium chloride as surfactant, and 3TC and AZT as templates. The simultaneous separation of 3TC, AZT and EFZ by HPLC-UV was performed using a Gemini C18 Phenomenex column (250 mm × 4.6 mm, 5 μm) and a mobile phase consisting of acetonitrile:water pH 3.2 (68:32, v/v), at a flow rate of 1.0 mL min-1 and λ = 260 nm. The method was linear over the concentration range from 0.25 to 10 μg mL-1 for 3TC and EFZ, and 0.05 to 2.0 μg mL-1 for AZT, with correlation coefficients larger than 0.99 for all analytes. Recoveries ± relative standard deviations (RSDs %) were 41.99±2.38 %, 82.29±1.63 %, and 83.72±7.52 % for 3TC, AZT, and EFZ, respectively. The RSDs and relative errors (REs) were lower than 15 % for intra- and interday assays. The method has been successfully applied for monitoring HIV-infected patients outside the therapeutic dosage. This article is protected by copyright. All rights reserved.
Goldstein-Piekarski, Andrea N.; Greer, Stephanie M.; Stark, Shauna; Stark, Craig E.
2016-01-01
Sleep deprivation impairs the formation of new memories. However, marked interindividual variability exists in the degree to which sleep loss compromises learning, the mechanistic reasons for which are unclear. Furthermore, which physiological sleep processes restore learning ability following sleep deprivation are similarly unknown. Here, we demonstrate that the structural morphology of human hippocampal subfields represents one factor determining vulnerability (and conversely, resilience) to the impact of sleep deprivation on memory formation. Moreover, this same measure of brain morphology was further associated with the quality of nonrapid eye movement slow wave oscillations during recovery sleep, and by way of such activity, determined the success of memory restoration. Such findings provide a novel human biomarker of cognitive susceptibility to, and recovery from, sleep deprivation. Moreover, this metric may be of special predictive utility for professions in which memory function is paramount yet insufficient sleep is pervasive (e.g., aviation, military, and medicine). SIGNIFICANCE STATEMENT Sleep deprivation does not impact all people equally. Some individuals show cognitive resilience to the effects of sleep loss, whereas others express striking vulnerability, the reasons for which remain largely unknown. Here, we demonstrate that structural features of the human brain, specifically those within the hippocampus, accurately predict which individuals are susceptible (or conversely, resilient) to memory impairments caused by sleep deprivation. Moreover, this same structural feature determines the success of memory restoration following subsequent recovery sleep. Therefore, structural properties of the human brain represent a novel biomarker predicting individual vulnerability to (and recovery from) the effects of sleep loss, one with occupational relevance in professions where insufficient sleep is pervasive yet memory function is paramount. PMID:26911684