Sample records for filter restart computer

  1. A Thick-Restart Lanczos Algorithm with Polynomial Filtering for Hermitian Eigenvalue Problems

    DOE PAGES

    Li, Ruipeng; Xi, Yuanzhe; Vecharynski, Eugene; ...

    2016-08-16

Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a thick-restart version of the Lanczos algorithm with deflation (“locking”) and a new type of polynomial filter obtained from a least-squares technique. Furthermore, the resulting algorithm can be utilized in a “spectrum-slicing” approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different subintervals independently from one another.
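
A minimal sketch of the filtering idea, assuming NumPy/SciPy and not reproducing the paper's algorithm: a least-squares Chebyshev fit to the indicator function of a target interval [a, b] gives a polynomial filter p, and an off-the-shelf restarted Lanczos solver (scipy.sparse.linalg.eigsh) applied to p(A) then favors the eigenpairs inside the interval. The matrix, interval, and degree below are invented for illustration.

    # Illustrative only: filter a symmetric matrix with a least-squares
    # Chebyshev polynomial, then run restarted Lanczos on the filtered operator.
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.sparse.linalg import LinearOperator, eigsh

    rng = np.random.default_rng(0)
    n = 400
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # toy Hermitian matrix
    lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]          # spectrum bounds (toy shortcut)
    a, b = -1.0, 1.0                                     # target interval

    # Least-squares fit of a degree-d Chebyshev series to the indicator of [a, b].
    d = 40
    t = np.cos(np.pi * (np.arange(2000) + 0.5) / 2000)   # nodes in [-1, 1]
    x = 0.5 * (lmax - lmin) * t + 0.5 * (lmax + lmin)
    coef = C.chebfit(t, ((x >= a) & (x <= b)).astype(float), d)

    def scaled_mv(v):                                    # maps spectrum of A into [-1, 1]
        return (2 * (A @ v) - (lmax + lmin) * v) / (lmax - lmin)

    def filtered_mv(v):                                  # p(A) v via the 3-term recurrence
        Tkm1, Tk = v, scaled_mv(v)
        out = coef[0] * Tkm1 + coef[1] * Tk
        for k in range(2, d + 1):
            Tkm1, Tk = Tk, 2 * scaled_mv(Tk) - Tkm1
            out += coef[k] * Tk
        return out

    # eigsh uses implicitly restarted Lanczos; the largest eigenpairs of p(A)
    # approximate eigenvectors of A whose eigenvalues lie inside [a, b].
    vals, vecs = eigsh(LinearOperator((n, n), matvec=filtered_mv), k=10, which='LA')
    print(np.sort(np.sum(vecs * (A @ vecs), axis=0)))    # Rayleigh quotients in [a, b]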

  2. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable in strongly nonlinear and high-dimensional problems.
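
A hedged toy illustration of the "restart" idea above (re-running the model from time zero after each update so parameters and states stay consistent), using a plain ensemble Kalman update on a one-parameter surrogate model rather than the paper's PCKF; the model, noise levels, and ensemble size are all invented.

    # Toy "restart" data assimilation with a plain EnKF (not the paper's PCKF):
    # after each update, every ensemble member is re-simulated from time zero
    # with its current parameter, keeping states consistent with parameters.
    import numpy as np

    rng = np.random.default_rng(1)

    def model(theta, n_steps):
        # Stand-in for an unsaturated-flow solver: decay toward a source term.
        h, out = 1.0, []
        for _ in range(n_steps):
            h = h * np.exp(-theta) + 0.05
            out.append(h)
        return np.array(out)

    theta_true = 0.3
    obs_times = [4, 9, 14]
    y_obs = model(theta_true, 15)[obs_times] + rng.normal(0, 0.01, 3)
    R = 0.01 ** 2                                     # observation error variance

    ens = rng.normal(0.5, 0.2, size=200)              # prior parameter ensemble
    for i, t in enumerate(obs_times):
        pred = np.array([model(th, t + 1)[-1] for th in ens])   # "restart" runs
        K = np.cov(ens, pred)[0, 1] / (np.var(pred) + R)        # scalar Kalman gain
        ens = ens + K * (y_obs[i] + rng.normal(0, 0.01, 200) - pred)

    print("posterior mean:", ens.mean(), " truth:", theta_true)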

  3. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the Polynomial Chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable in strongly nonlinear and high-dimensional problems.

  4. Quantification of the fungal fraction released from various preloaded fibrous filters during a simulated ventilation restart.

    PubMed

    Morisseau, K; Joubert, A; Le Coq, L; Andres, Y

    2017-05-01

This study aimed to demonstrate that particles, especially those associated with fungi, could be released from fibrous filters used in the air-handling unit (AHU) of heating, ventilation and air-conditioning (HVAC) systems during ventilation restarts. Measurements of the water retention capacity and SEM images of the filters were used to show the potential for fungal proliferation in unused or preloaded filters. Five fibrous filters with various particle collection efficiencies were studied: classes G4, M5, M6, F7, and combined F7 according to European standard EN779:2012. Filters were clogged with micronized rice particles containing the fungus Penicillium chrysogenum and then incubated for three weeks at 25°C and 90% relative humidity. The results indicated that the five clogged tested filters had various fungal growth capacities depending on their water retention capacity. Preloaded filters were subjected to a simulated ventilation restart in a controlled filtration device to quantify the fraction of particles released: around 1% for the G4, 0.1% for the M5 and the M6, and 0.001% for the F7 and the combined F7 filter. The results indicate that the likelihood of fungal particle release by low efficiency filters is significantly higher than by high efficiency filters. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. The performance of biological anaerobic filters packed with sludge-fly ash ceramic particles (SFCP) and commercial ceramic particles (CCP) during the restart period: effect of the C/N ratios and filter media.

    PubMed

    Yue, Qinyan; Han, Shuxin; Yue, Min; Gao, Baoyu; Li, Qian; Yu, Hui; Zhao, Yaqin; Qi, Yuanfeng

    2009-11-01

Two lab-scale upflow biological anaerobic filters (BAF) packed with sludge-fly ash ceramic particles (SFCP) and commercial ceramic particles (CCP) were employed to investigate the effects of the C/N ratios and filter media on BAF performance during the restart period. The results indicated that the BAF could be restarted normally after a one-month cessation. A C/N ratio of 4.0 was the threshold for nitrate removal and nitrite accumulation. TN removal and phosphate uptake both reached their maximum values at a C/N ratio of 5.5. Ammonia formation was also observed and exerted a negative influence on TN removal, especially at higher C/N ratios. Nutrients were mainly degraded within 25 cm of the bottom. In addition, SFCP, a novel filter medium manufactured from dewatered sludge and fly ash, showed better potential than CCP for inhibiting nitrite accumulation and for TN removal and phosphate uptake, owing to their special characteristics.

  6. Electrically heated particulate filter restart strategy

    DOEpatents

    Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI

    2011-07-12

    A control system that controls regeneration of a particulate filter is provided. The system generally includes a propagation module that estimates a propagation status of combustion of particulate matter in the particulate filter. A regeneration module controls current to the particulate filter to re-initiate regeneration based on the propagation status.

  7. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.

  8. Digital-computer normal shock position and restart control of a Mach 2.5 axisymmetric mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Neiner, G. H.; Cole, G. L.; Arpasi, D. J.

    1972-01-01

    Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
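
For a flavor of how digital control algorithms are derived from z-transform or finite-difference methods (the actual inlet control laws live in the report and are not reproduced here), the hedged sketch below discretizes a generic PI controller into a difference equation and steps it at the two sample rates mentioned in the abstract; the gains and error input are invented.

    # Illustrative, not the NASA inlet control law: a continuous PI controller
    # C(s) = Kp + Ki/s becomes a difference equation via the backward-difference
    # substitution s -> (1 - z^-1)/T, where T is the sample period.
    Kp, Ki = 2.0, 40.0                       # invented gains

    def make_pi(sample_rate_hz):
        T = 1.0 / sample_rate_hz
        state = {"u": 0.0, "e_prev": 0.0}
        def step(e):
            # u[k] = u[k-1] + Kp*(e[k] - e[k-1]) + Ki*T*e[k]
            u = state["u"] + Kp * (e - state["e_prev"]) + Ki * T * e
            state["u"], state["e_prev"] = u, e
            return u
        return step

    for rate in (1000, 100):                 # the two sample rates in the abstract
        pi = make_pi(rate)
        outputs = [round(pi(0.1), 4) for _ in range(3)]   # constant shock-position error
        print(f"{rate:4d} samples/s:", outputs)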

  9. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  10. Checkpointing for a hybrid computing node

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.

  11. McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression

    DOE PAGES

    Islam, Tanzima Zerin; Mohror, Kathryn; Bagchi, Saurabh; ...

    2013-01-01

High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
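
The aggregation idea lends itself to a small sketch: group same-named variables from many process checkpoints into one stream before compressing, instead of compressing each process's checkpoint separately. The toy below is plain Python (mcrEngine itself exploits HDF5/netCDF data semantics); variable names and data are invented, and real gains depend entirely on the data.

    # Toy sketch of data-aware aggregation (not mcrEngine's implementation):
    # merge like-named variables across per-process checkpoints, then compress,
    # and compare against compressing each checkpoint on its own.
    import json
    import zlib
    import numpy as np

    rng = np.random.default_rng(2)
    names = ("temperature", "pressure")

    def checkpoint():
        # Same-named variables have similar statistics across processes.
        return {"temperature": (300 + rng.normal(0, 0.1, 512)).round(3).tolist(),
                "pressure": (101325 + rng.normal(0, 0.5, 512)).round(3).tolist()}

    ckpts = [checkpoint() for _ in range(8)]

    per_process = sum(len(zlib.compress(json.dumps(c).encode(), 9)) for c in ckpts)

    grouped = b"".join(json.dumps([c[n] for c in ckpts]).encode() for n in names)
    aware = len(zlib.compress(grouped, 9))

    print("per-process compression:", per_process, "bytes")
    print("data-aware aggregation :", aware, "bytes")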

  12. NICMOS Filter Wheel Test

    NASA Astrophysics Data System (ADS)

    Malhotra, Sangeeta

    2003-07-01

This is an engineering test to verify the aliveness, functionality, operability, and electro-mechanical calibration of the NICMOS filter wheel motors and assembly after the NCS restart in August 2003. This test has been designed to obviate concerns over possible deformation or breakage of the filter wheel "soda-straw" shafts due to excess rotational drag torque and/or bending moments which may be imparted by changes in the dewar metrology from warm-up/cool-down. This test should be executed after the NCS (and filter wheel housing) has reached and approximately equilibrated to its nominal Cycle 11 operating temperature.

  13. On the Convergence of an Implicitly Restarted Arnoldi Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.

    We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.

  14. Checkpointing Shared Memory Programs at the Application-level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Schulz, M; Szwed, P

    2004-09-08

Trends in high-performance computing are making it necessary for long-running applications to tolerate hardware faults. The most commonly used approach is checkpoint and restart (CPR): the state of the computation is saved periodically on disk, and when a failure occurs, the computation is restarted from the last saved state. At present, it is the responsibility of the programmer to instrument applications for CPR. Our group is investigating the use of compiler technology to instrument codes to make them self-checkpointing and self-restarting, thereby providing an automatic solution to the problem of making long-running scientific applications resilient to hardware faults. Our previous work focused on message-passing programs. In this paper, we describe such a system for shared-memory programs running on symmetric multiprocessors. The system has two components: (i) a pre-compiler for source-to-source modification of applications, and (ii) a runtime system that implements a protocol for coordinating CPR among the threads of the parallel application. For the sake of concreteness, we focus on a non-trivial subset of OpenMP that includes barriers and locks. One of the advantages of this approach is that the ability to tolerate faults becomes embedded within the application itself, so applications become self-checkpointing and self-restarting on any platform. We demonstrate this by showing that our transformed benchmarks can checkpoint and restart on three different platforms (Windows/x86, Linux/x86, and Tru64/Alpha). Our experiments show that the overhead introduced by this approach is usually quite small; they also suggest ways in which the current implementation can be tuned to reduce overheads further.

  15. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process will induce instability in the iterative inversion regardless of whether it uses a robust limited-memory BFGS (L-BFGS) algorithm. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
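
A hedged sketch of the restart pattern on a toy least-squares problem rather than FWI: each segment draws a fresh random ±1 shot encoding, runs a few L-BFGS iterations on the resulting "super shot" misfit, and restarts. (The paper keeps the encoding invariant for the first iterations of each segment; the sketch simplifies to one encoding per segment.)

    # Toy restarted L-BFGS with random shot re-encoding (illustrative objective).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n_shots, n_params = 32, 10
    A = rng.standard_normal((n_shots, n_params))
    x_true = rng.standard_normal(n_params)
    d = A @ x_true                                        # per-shot "data"

    def encoded_misfit(x, codes):
        # One super-shot: a random +/-1 encoding sums all shots into one residual.
        r = codes @ (A @ x - d)
        return 0.5 * r ** 2, (codes @ A) * r              # value, gradient

    x = np.zeros(n_params)
    for segment in range(20):                             # restart every segment
        codes = rng.choice([-1.0, 1.0], size=n_shots)     # fresh random encoding
        res = minimize(encoded_misfit, x, args=(codes,),
                       jac=True, method="L-BFGS-B",
                       options={"maxiter": 5})            # few iterations per segment
        x = res.x

    print("parameter error:", np.linalg.norm(x - x_true))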

  16. Checkpoint triggering in a computer system

    DOEpatents

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
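
An illustrative reading of this mechanism, as a sketch rather than the patented method: a loop periodically reads a monitored metric, derives a threshold from its value, and writes a checkpoint of the task state when the metric crosses that threshold. The metric source, threshold rule, and file naming are invented.

    # Sketch: trigger checkpoints from a monitored metric crossing a threshold.
    import pickle
    import random

    def run_task(n_steps, read_period=10):
        state = {"step": 0, "acc": 0.0}
        threshold = None
        for step in range(n_steps):
            state["step"] = step
            state["acc"] += random.random()          # the task's real work
            if step % read_period == 0:              # time to read the monitor?
                metric = random.random()             # e.g., an error-rate counter
                if threshold is None:
                    threshold = 2 * metric           # derive threshold from the metric
                elif metric > threshold:
                    with open(f"ckpt_{step}.pkl", "wb") as f:
                        pickle.dump(state, f)        # state data enables restart
                    threshold = 2 * metric           # re-derive after triggering

    run_task(200)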

  17. Comparison of different filter methods for data assimilation in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa

    2016-04-01

The unsaturated zone is an important compartment, which plays a role in the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: processes can have long memory, and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly non-linear. For these reasons, the use of the popular Ensemble Kalman filter as a data assimilation method to estimate state and parameters in unsaturated zone models could be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013), as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014), to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We also consider test cases with a layering structure, as layering is often found in natural soils. We assume that observations are water content, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found. Therefore, we compare the computational costs of the different methods as well as the quality of open-loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.

  18. Attitude Sensor and Gyro Calibration for Messenger

    NASA Technical Reports Server (NTRS)

    O'Shaughnessy, Daniel; Pittelkau, Mark E.

    2007-01-01

The Redundant Inertial Measurement Unit Attitude Determination/Calibration (RADICAL(TM)) filter was used to estimate star tracker and gyro calibration parameters using MESSENGER telemetry data from three calibration events. We present an overview of the MESSENGER attitude sensors and their configuration, describe the calibration maneuvers, compare the results with previous calibrations, and examine variations and trends in the estimated calibration parameters. The warm restart and covariance bump features of the RADICAL(TM) filter were used to estimate calibration parameters from two disjoint telemetry streams. Results show that the calibration parameters converge faster, with much less transient variation during convergence, than when the filter is cold-started at the start of each telemetry stream.

  19. Efficient Parallel Algorithms on Restartable Fail-Stop Processors

    DTIC Science & Technology

    1991-01-01

(Abstract garbled in the source scan; only fragments are recoverable. They concern a model of parallel computation with restartable fail-stop processors and shared memory, fault-tolerance arguments in deterministic settings, and the accounting of failure and restart errors at processor-level granularity.)

  20. NASA automatic system for computer program documentation, volume 2

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1972-01-01

The DYNASOR 2 program is used for the dynamic nonlinear analysis of shells of revolution. The equations of motion of the shell are solved using Houbolt's numerical procedure. The displacements and stress resultants are determined for both symmetrical and asymmetrical loading conditions. Asymmetrical dynamic buckling can be investigated. Solutions can be obtained for highly nonlinear problems utilizing as many as five of the harmonics generated by the SAMMSOR program. A restart capability allows the user to restart the program at a specified time. For Vol. 1, see N73-22129.

  1. Promoting Recruitment using Information Management Efficiently (PRIME): study protocol for a stepped-wedge cluster randomised controlled trial within the REstart or STop Antithrombotics Randomised Trial (RESTART).

    PubMed

    Maxwell, Amy E; Dennis, Martin; Rudd, Anthony; Weir, Christopher J; Parker, Richard A; Al-Shahi Salman, Rustam

    2017-03-01

    Research into methods to boost recruitment has been identified as the highest priority for randomised controlled trial (RCT) methodological research in the United Kingdom. Slow recruitment delays the delivery of research and inflates costs. Using electronic patient records has been shown to boost recruitment to ongoing RCTs in primary care by identifying potentially eligible participants, but this approach remains relatively unexplored in secondary care, and for stroke in particular. The REstart or STop Antithrombotics Randomised Trial (RESTART; ISRCTN71907627) is an ongoing RCT of secondary prevention after stroke due to intracerebral haemorrhage. Promoting Recruitment using Information Management Efficiently (PRIME) is a stepped-wedge cluster randomised trial of a complex intervention to help RESTART sites increase their recruitment and attain their own target numbers of participants. Seventy-two hospital sites that were located in England, Wales or Scotland and were active in RESTART in June 2015 opted into PRIME. Sites were randomly allocated (using a computer-generated block randomisation algorithm, stratified by hospital location in Scotland vs. England/Wales) to one of 12 months in which the intervention would be delivered. All sites began in the control state. The intervention was delivered by a recruitment co-ordinator via a teleconference with each site. The intervention involved discussing recruitment strategies, providing software for each site to extract from their own stroke audit data lists of patients who were potentially eligible for RESTART, and a second teleconference to review progress 6 months later. The recruitment co-ordinator was blinded to the timing of the intervention until 2 months before it was due at a site. Staff at RESTART sites were blinded to the nature and timing of the intervention. The primary outcome is the total number of patients randomised into RESTART per month per site and will be analysed in a negative binomial generalised linear mixed model. PRIME began in September 2015. The last intervention was delivered in August 2016. Six-month follow-up will be complete in February 2017. The final results of PRIME will be analysed and disseminated in 2017. The PRIME study was registered in the Northern Ireland Hub for Trials Methodology Research Studies Within a Trial (SWAT) repository (SWAT22) on 23 December 2015.

  2. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
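
SCR itself is a C library for MPI codes; as a language-neutral sketch of the caching strategy described above (explicitly not SCR's API), the toy below writes every checkpoint to fast node-local storage and only occasionally flushes one to the slower shared file system. The paths and flush policy are invented.

    # Not the SCR API: a toy illustration of its caching strategy.
    import pathlib
    import pickle
    import shutil

    LOCAL = pathlib.Path("/tmp/ckpt_cache")   # stands in for node-local disk/RAM disk
    SHARED = pathlib.Path("shared_pfs")       # stands in for the parallel file system
    LOCAL.mkdir(exist_ok=True); SHARED.mkdir(exist_ok=True)

    def checkpoint(step, state, flush_every=10):
        path = LOCAL / f"ckpt_{step}.pkl"
        with open(path, "wb") as f:
            pickle.dump(state, f)             # fast, node-dedicated write
        for old in LOCAL.glob("ckpt_*.pkl"):  # keep only the newest local copy
            if old != path:
                old.unlink()
        if step % flush_every == 0:
            shutil.copy(path, SHARED / path.name)   # rare, expensive PFS write

    state = {"t": 0.0}
    for step in range(25):
        state["t"] += 0.1
        checkpoint(step, state)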

  3. Preventing messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A; Chen, Dong; Gooding, Thomas; Heidelberger, Philip; Parker, Jeff

    2014-01-14

Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
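
A toy model of one variant described above (plain Python, nothing like real DMA hardware): when an inject finds the descriptor queue full, a stand-in "interrupt handler" enlarges the queue before the inject proceeds. Names and sizes are invented.

    # Sketch of the "enlarge the original queue" variant of the patent.
    from collections import deque

    class MessagingQueue:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = deque()

        def inject(self, descriptor):
            if len(self.items) >= self.capacity:
                self._interrupt()                 # queue full: raise "interrupt"
            self.items.append(descriptor)

        def _interrupt(self):
            # Handler: stop DMA (implicit here), enlarge the queue, restart DMA.
            self.capacity *= 2
            print(f"queue full; enlarged to {self.capacity}")

    q = MessagingQueue(capacity=4)
    for d in range(10):
        q.inject({"desc": d})
    print(len(q.items), "descriptors queued")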

  4. Computer program documentation user information for the RSO-tape print program (RSOPRNT)

    NASA Technical Reports Server (NTRS)

    Gibbs, P. M. (Principal Investigator)

    1980-01-01

A user's guide for RSOPRNT, a TRASYS Master Restart Output Tape (RSO) reader, is presented. Background information and sample runstreams, as well as references, input requirements, and options are included.

  5. Orbit Determination and Maneuver Detection Using Event Representation with Thrust-Fourier-Coefficients

    NASA Astrophysics Data System (ADS)

    Lubey, D.; Ko, H.; Scheeres, D.

The classical orbit determination (OD) method of dealing with unknown maneuvers is to restart the OD process with post-maneuver observations. However, it is also possible to continue the OD process through such unknown maneuvers by representing those maneuvers with an appropriate event representation. It has been shown in previous work (Ko & Scheeres, JGCD 2014) that any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver connecting those two states using Thrust-Fourier-Coefficients (TFCs). Event representation using TFCs rigorously provides a unique control law that can generate the desired secular behavior for a given unknown maneuver. This paper presents applications of this representation approach to the orbit prediction and maneuver detection problem across unknown maneuvers. The TFCs are appended to a sequential filter as an adjoint state to compensate for unknown perturbing accelerations, and the modified filter estimates the satellite state and thrust coefficients by processing OD across the time of an unknown maneuver. This modified sequential filter with TFCs is capable of fitting tracking data and maintaining an OD solution in the presence of unknown maneuvers. The modified filter is also effective in detecting a sudden change in TFC values, which indicates a maneuver. In order to illustrate that the event representation approach with TFCs is robust and sufficiently general to be easily adjustable, different types of measurement data are processed with the filter in a realistic LEO setting. Further, cases with mis-modeling of non-gravitational forces are included in our study to verify the versatility and efficiency of the presented algorithm. Simulation results show that the modified sequential filter with TFCs can detect and estimate the orbit and thrust parameters in the presence of unknown maneuvers, with or without measurement data during maneuvers. With no measurement data during maneuvers, the modified filter with TFCs uses an existing pre-maneuver orbit solution to compute a post-maneuver orbit solution by forcing the TFCs to compensate for the unknown maneuver. With observation data available during maneuvers, the maneuver start and stop times are determined.

  6. Pervasive Restart In MOOSE-based Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derek Gaston; Cody Permann; David Andrs

Multiphysics applications are inherently complicated. Solving for multiple, interacting physical phenomena involves the solution of multiple equations, and each equation has its own data dependencies. Feeding the correct data to these equations at exactly the right time requires extensive effort in software design. In an ideal world, multiphysics applications always run to completion and produce correct answers. Unfortunately, in reality, there can be many reasons why a simulation might fail: power outage, system failure, exceeding a runtime allotment on a supercomputer, failure of the solver to converge, etc. A failure after many hours spent computing can be a significant setback for a project. Therefore, the ability to “continue” a solve from the point of failure, rather than starting again from scratch, is an essential component of any high-quality simulation tool. This process of “continuation” is commonly termed “restart” in the computational community. While the concept of restarting an application sounds ideal, the aforementioned complexities and data dependencies present in multiphysics applications make its implementation decidedly non-trivial. A running multiphysics calculation accumulates an enormous amount of “state”: current time, solution history, material properties, status of mechanical contact, etc. This “state” data comes in many different forms, including scalar, tensor, vector, and arbitrary, application-specific data types. To be able to restart an application, you must be able to both store and retrieve this data, effectively recreating the state of the application before the failure. When utilizing the Multiphysics Object Oriented Simulation Environment (MOOSE) framework developed at Idaho National Laboratory, this state data is stored both internally within the framework itself (such as solution vectors and the current time) and within the applications that use the framework. In order to implement restart in MOOSE-based applications, the total state of the system (both within the framework and without) must be stored and retrieved. To this end, the MOOSE team has implemented a “pervasive” restart capability which allows any object within MOOSE (or within a MOOSE-based application) to be declared as “state” data, and handles the storage and retrieval of said data.

  7. Theoretical and software considerations for general dynamic analysis using multilevel substructured models

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1985-01-01

The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.

  8. Advanced transportation system studies technical area 3: Alternate propulsion subsystem concepts, volume 2

    NASA Technical Reports Server (NTRS)

    Levak, Daniel

    1993-01-01

    The Alternate Propulsion Subsystem Concepts contract had five tasks defined for the first year. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, Space Shuttle Main Engine (SSME) Upper Stage Use, and CER's for Liquid Propellant Rocket Engines. The detailed study results, with the data to support the conclusions from various analyses, are being reported as a series of five separate Final Task Reports. Consequently, this volume only reports the required programmatic information concerning Computer Aided Design Documentation, and New Technology Reports. A detailed Executive Summary, covering all the tasks, is also available as Volume 1.

  9. Restart: The Resurgence of Computer Science in UK Schools

    ERIC Educational Resources Information Center

    Brown, Neil C. C.; Sentance, Sue; Crick, Tom; Humphreys, Simon

    2014-01-01

    Computer science in UK schools is undergoing a remarkable transformation. While the changes are not consistent across each of the four devolved nations of the UK (England, Scotland, Wales and Northern Ireland), there are developments in each that are moving the subject to become mandatory for all pupils from age 5 onwards. In this article, we…

  10. Increasing available FIFO space to prevent messaging queue deadlocks in a DMA environment

    DOEpatents

    Blocksome, Michael A [Rochester, MN; Chen, Dong [Croton On Hudson, NY; Gooding, Thomas [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Parker, Jeff [Rochester, MN

    2012-02-07

    Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.

  11. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are performed to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, with an improvement of better than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
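
A hedged scalar sketch of a Sage-Husa-style adaptive update (the paper pairs a modified SHAKF with an improved AR(3) model; this toy uses a random-walk state instead and adapts only the measurement-noise estimate): the innovation sequence drives a fading-memory estimate of R.

    # Scalar Sage-Husa-style adaptive Kalman filter on an invented drift signal.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    truth = np.cumsum(rng.normal(0, 0.01, n))            # slow "drift"
    z = truth + rng.normal(0, 0.1, n)                    # noisy FOG-like signal

    x, P = 0.0, 1.0                                      # state estimate, variance
    Q, R = 1e-4, 1.0                                     # R deliberately mis-specified
    b = 0.98                                             # fading factor
    est = []
    for k in range(n):
        dk = (1 - b) / (1 - b ** (k + 1))                # Sage-Husa weight
        P = P + Q                                        # predict (identity model)
        innov = z[k] - x
        R = (1 - dk) * R + dk * max(innov ** 2 - P, 1e-6)  # adapt R from innovation
        K = P / (P + R)
        x = x + K * innov                                # update
        P = (1 - K) * P
        est.append(x)

    print("raw rmse:", np.sqrt(np.mean((z - truth) ** 2)))
    print("kf  rmse:", np.sqrt(np.mean((np.array(est) - truth) ** 2)))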

  12. A simple strategy for varying the restart parameter in GMRES(m)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Jessup, E R; Kolev, T V

    2007-10-02

    When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
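
The paper's selection strategy is not reproduced here, but the mechanics of varying the restart parameter are easy to sketch with SciPy's restarted GMRES: run one outer cycle at a time and warm-start the next cycle from the current iterate. The matrix and the alternating schedule are invented stand-ins.

    # Vary the GMRES restart parameter m from cycle to cycle.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    n = 500
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x = np.zeros(n)
    schedule = [10, 30, 10, 30, 10, 30]          # hypothetical alternating schedule
    for m in schedule:
        # maxiter=1 outer iteration = one restart cycle of length m
        x, info = gmres(A, b, x0=x, restart=m, maxiter=1)
        print(f"m={m:3d}  residual={np.linalg.norm(b - A @ x):.3e}")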

  13. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.

  14. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    DOE PAGES

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance, reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.

  15. Launching large computing applications on a disk-less cluster

    NASA Astrophysics Data System (ADS)

    Schwemmer, Rainer; Caicedo Carvajal, Juan Manuel; Neufeld, Niko

    2011-12-01

The LHCb Event Filter Farm system is based on a cluster of on the order of 1,500 disk-less Linux nodes. Each node runs one instance of the filtering application per core. The number of cores in our current production environment is 8 per machine for the old cluster and 12 per machine on the extension of the cluster. Each instance has to load about 1,000 shared libraries, weighing 200 MB, from several directory locations in a central repository. The repository is currently hosted on a SAN and exported via NFS. The libraries are all available in the local file system cache on every node. Loading a library still causes a huge number of requests to the server, though, because the loader will try to probe every available path. Measurements show there are between 100,000 and 200,000 calls per application instance start-up. Multiplied by the number of cores in the farm, this translates into a veritable DDoS attack on the servers, which lasts several minutes. Since the application is being restarted frequently, a better solution had to be found. Rolling out the software to the nodes is out of the question, because they have no disks and the software in its entirety is too large to put into a RAM disk. To solve this problem we developed a FUSE-based file system which acts as a permanent, controllable cache that keeps the essential files in stock.

  16. A technique for accelerating the convergence of restarted GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Jessup, E R; Manteuffel, T

    2004-03-09

    We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.

  17. Enhanced spatial resolution in fluorescence molecular tomography using restarted L1-regularized nonlinear conjugate gradient algorithm.

    PubMed

    Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing

    2014-04-01

    Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
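
A hedged sketch of the inner/outer structure described above, on a toy sparse-recovery problem rather than FMT: the inner loop runs nonlinear conjugate gradient on a smoothed L1-regularized least-squares objective, and the outer loop restarts the search direction. The system matrix, smoothing, weight, and line search are illustrative choices.

    # Toy restarted L1-regularized nonlinear conjugate gradient (re-L1-NCG style).
    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 60, 100
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = 1.0
    y = A @ x_true

    lam, eps = 0.05, 1e-6                         # L1 weight, smoothing of |x|

    def obj(x):
        return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.sqrt(x ** 2 + eps))

    def grad(x):
        return A.T @ (A @ x - y) + lam * x / np.sqrt(x ** 2 + eps)

    x = np.zeros(n)
    for outer in range(10):                       # restart: reset the CG direction
        g = grad(x); d = -g
        for inner in range(20):                   # inner L1-NCG iterations
            t, f0 = 1.0, obj(x)                   # simple backtracking line search
            while obj(x + t * d) >= f0 and t > 1e-12:
                t *= 0.5
            x = x + t * d
            g_new = grad(x)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
            d = -g_new + beta * d
            g = g_new

    print("support recovered:", np.sort(np.argsort(np.abs(x))[-5:]),
          " truth:", np.sort(np.where(x_true)[0]))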

  18. Restarts in Conversation and Literature.

    ERIC Educational Resources Information Center

    Person, Raymond F., Jr.

    1996-01-01

    Analyzes restarts, a common feature of conversation, in literary discourse. The term "restart" refers to the repetition of a word or words within an utterance by the same speaker. Restarts in literary discourse are of two types: (1) those produced by the characters in their "real" narrative world and (2) those produced by the narrators themselves.…

  19. Stability of lanthanum oxide-based H 2S sorbents in realistic fuel processor/fuel cell operation

    NASA Astrophysics Data System (ADS)

    Valsamakis, Ioannis; Si, Rui; Flytzani-Stephanopoulos, Maria

We report that lanthana-based sulfur sorbents are an excellent choice as once-through chemical filters for the removal of trace amounts of H2S and COS from any fuel gas at temperatures matching those of solid oxide fuel cells. We have examined sorbents based on lanthana and Pr-doped lanthana with up to 30 at.% praseodymium, having high desulfurization efficiency, as measured by their ability to remove H2S from simulated reformate gas streams to below 50 ppbv with a corresponding sulfur capacity exceeding 50 mg S per g of sorbent at 800 °C. Intermittent sorbent operation with air-rich boiler exhaust-type gas mixtures and with frequent shutdowns and restarts is possible without formation of lanthanide oxycarbonate phases. Upon restart, desulfurization continues from where it left off at the end of the previous cycle. These findings are important for practical applications of these sorbents as sulfur polishing units of fuel gases in the presence of small or large amounts of water vapor, and with the regular shutdown/start-up operation practiced in fuel processor/fuel cell systems, both stationary and mobile, and of any size/scale.

  20. CODEM User Manual

    DTIC Science & Technology

    2010-09-01

If the registration succeeds, then at computer restart, ".cdm" files will be associated with a new icon, the CODEM logo. (The remaining abstract text is garbled in the source scan; recoverable fragments include an FAQ answer suggesting that user extensions be added in a separate Jar file, at the cost of no longer being able to use the executable file.)

  1. Affinity-aware checkpoint restart

    DOE PAGES

    Saini, Ajay; Rezaei, Arash; Mueller, Frank; ...

    2014-12-08

Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, versus up to nearly four times longer execution times without affinity-aware restarts on 16 cores.
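
A Linux-only sketch of the affinity bookkeeping (not BLCR or the authors' mechanism): record the task's CPU-affinity set in the checkpoint and re-pin before resuming, so the restarted task lands back on its original cores and hence its original NUMA node. The file name and state layout are invented.

    # Sketch: preserve CPU affinity across a checkpoint/restart (Linux only).
    import os
    import pickle

    CKPT = "task_ckpt.pkl"

    def save_checkpoint(state):
        state["affinity"] = sorted(os.sched_getaffinity(0))   # task-to-core map
        with open(CKPT, "wb") as f:
            pickle.dump(state, f)

    def restart():
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
        os.sched_setaffinity(0, state["affinity"])   # restore pinning before work
        return state

    save_checkpoint({"step": 42})
    state = restart()
    print("resumed at step", state["step"], "on cores", sorted(os.sched_getaffinity(0)))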

  2. Affinity-aware checkpoint restart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saini, Ajay; Rezaei, Arash; Mueller, Frank

Current checkpointing techniques employed to overcome faults for HPC applications result in inferior application performance after restart from a checkpoint for a number of applications. This is due to a lack of page and core affinity awareness in the checkpoint/restart (C/R) mechanism, i.e., application tasks originally pinned to cores may be restarted on different cores, and in the case of non-uniform memory architectures (NUMA), quite common today, memory pages associated with tasks on a NUMA node may be associated with a different NUMA node after restart. Here, this work contributes a novel design technique for C/R mechanisms to preserve task-to-core maps and NUMA-node-specific page affinities across restarts. Experimental results with BLCR, a C/R mechanism enhanced with affinity awareness, demonstrate significant performance benefits of 37%-73% for the NAS Parallel Benchmark codes and 6-12% for NAMD, with negligible overheads, versus up to nearly four times longer execution times without affinity-aware restarts on 16 cores.

  3. Dynaflow User’s Guide

    DTIC Science & Technology

    1988-11-01

(Table-of-contents fragments from the source scan: Analysis Restart; 1.0 Title Card; 2.0 Control Cards.) ...stress soil model will provide a tool for such analysis of waterfront structures. To understand the significance of liquefaction, it is important to note... Implementing this effective stress soil model into a finite element computer program would allow analysis of soil and structure together.

  4. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.

  5. Dynamic Restarting Schemes for Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Simon, Horst D.

    1999-03-10

In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent in improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.

  6. A pressure flux-split technique for computation of inlet flow behavior

    NASA Technical Reports Server (NTRS)

    Pordal, H. S.; Khosla, P. K.; Rubin, S. G.

    1991-01-01

    A method for calculating the flow field in aircraft engine inlets is presented. The phenomena of inlet unstart and restart are investigated. Solutions of the reduced Navier-Stokes (RNS) equations are obtained with a time consistent direct sparse matrix solver that computes the transient flow field both internal and external to the inlet. Time varying shocks and time varying recirculation regions can be efficiently analyzed. The code is quite general and is suitable for the computation of flow for a wide variety of geometries and over a wide range of Mach and Reynolds numbers.

  7. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

F77NNS (FORTRAN 77 Neural Network Simulator) is a computer program that simulates the popular back-error-propagation neural network. It is designed to take advantage of vectorization when run on computers having this capability, but it can also be used on any computer equipped with an ANSI-77 FORTRAN compiler. Problems involving matching of patterns or mathematical modeling of systems fit the class of problems F77NNS is designed to solve. The program has a restart capability, so a neural network can be solved in stages suitable to the user's resources and desires. It enables the user to customize the patterns of connections between layers of the network. The size of the neural network to which F77NNS can be applied is limited only by the amount of random-access memory available to the user.

  8. First Passage under Restart

    NASA Astrophysics Data System (ADS)

    Pal, Arnab; Reuveni, Shlomi

    2017-01-01

    First passage under restart has recently emerged as a conceptual framework suitable for the description of a wide range of phenomena, but the endless variety of ways in which restart mechanisms and first passage processes mix and match hindered the identification of unifying principles and general truths. Hope that these exist came from a recently discovered universality displayed by processes under optimal, constant rate, restart—but extensions and generalizations proved challenging as they marry arbitrarily complex processes and restart mechanisms. To address this challenge, we develop a generic approach to first passage under restart. Key features of diffusion under restart—the ultimate poster boy for this wide and diverse class of problems—are then shown to be completely universal.
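
The universality results are the paper's; the basic setup, though, is easy to simulate. The hedged sketch below estimates the mean first-passage time of one-dimensional diffusion to an absorbing target under constant-rate (Poissonian) restart; without restart this mean diverges, and sweeping the invented rates hints at an interior optimum.

    # Monte Carlo sketch: diffusion from x0 toward an absorbing wall at 0,
    # reset to x0 at constant rate r. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(6)

    def mean_fpt(r, x0=1.0, D=0.5, dt=2e-3, n_traj=300):
        total = 0.0
        for _ in range(n_traj):
            x, t = x0, 0.0
            while x > 0:
                if rng.random() < r * dt:                  # Poissonian restart event
                    x = x0
                x += rng.normal(0.0, np.sqrt(2 * D * dt))  # diffusion step
                t += dt
            total += t
        return total / n_traj

    for r in (0.5, 2.5, 10.0):                             # invented restart rates
        print(f"restart rate {r:4.1f}: mean first-passage time ~ {mean_fpt(r):.2f}")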

  9. Growth Restart/Recovery Lines involving the vertebral body: a rare, incidental finding and diagnostic challenge in two patients

    PubMed Central

    Sajko, Sandy; Stuber, Kent; Wessely, Michelle

    2011-01-01

    Objective To present the phenomenon of growth restart lines and create awareness of the possible differential diagnoses. Clinical Features Two case reports outlining the presentation of growth restart lines found in the vertebrae of trampolinists. Emphasis in each case is placed on correlating the patient history with radiographic findings. Intervention and Outcome In both cases a conservative chiropractic treatment plan was initiated once the differential diagnoses could be ruled out. Conclusion Although the range of etiologies of growth restart lines is extensive, these case reports illustrate the importance of a comprehensive case history when presented with the radiographic finding of growth restart lines. PMID:22131568

  10. Nonlinear Meshfree Analysis Program (NMAP) Version 1.0 (User’s Manual)

    DTIC Science & Technology

    2012-12-01

    divided by the number of time increments used in the analysis. In addition to prescribing total nodal displacements in the neutral file, users are...conditions, the user must define material properties, initial conditions, and a variety of control parameters for the NMAP analysis. These data are provided...a script file. Restart: A restart function is provided in the NMAP code, where the user may restart an analysis using a set of restart files. In

  11. Conjoin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sjaardema, Gregory

    2010-08-06

    Conjoin is a code for joining, sequentially in time, multiple exodusII database files. It is used to create a single results or restart file from multiple results or restart files, which typically arise as the result of multiple restarted analyses. The resulting output file is the union of the input files, with a status variable indicating the status of each element at the various time planes. Typical uses are combining multiple exodusII files arising from a restarted analysis, or combining multiple exodusII files arising from a finite element analysis with dynamic topology changes.
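
    A language-agnostic sketch of the joining idea (plain Python dictionaries standing in for the exodusII API, which is not reproduced here): take the result files in run order and keep each time plane from the latest file that contains it.

      # Illustrative sketch: union of time planes across restarted runs,
      # with later restarts overriding overlapping planes.
      def conjoin(runs):
          """runs: list of {time: result} dicts, one per (re)started analysis."""
          merged = {}
          for run in runs:
              merged.update(run)  # later restarts override overlaps
          return dict(sorted(merged.items()))

      base    = {0.0: "a", 1.0: "b", 2.0: "c"}
      restart = {2.0: "c2", 3.0: "d"}  # restarted from the t=2.0 plane
      print(conjoin([base, restart]))  # {0.0:'a', 1.0:'b', 2.0:'c2', 3.0:'d'}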

  12. Restarting antiplatelet therapy after spontaneous intracerebral hemorrhage: Functional outcomes.

    PubMed

    Chen, Ching-Jen; Ding, Dale; Buell, Thomas J; Testai, Fernando D; Koch, Sebastian; Woo, Daniel; Worrall, Bradford B

    2018-05-30

    To compare the functional outcomes and health-related quality of life metrics of restarting vs not restarting antiplatelet therapy (APT) in patients presenting with intracerebral hemorrhage (ICH) in the ERICH (Ethnic/Racial Variations of Intracerebral Hemorrhage) study. Adult patients aged 18 years and older who were on APT before ICH and were alive at hospital discharge were included. Patients were dichotomized based on whether or not APT was restarted after hospital discharge. The primary outcome was a modified Rankin Scale score of 0-2 at 90 days. Secondary outcomes were excellent outcome (modified Rankin Scale score 0-1), mortality, Barthel Index, and health status (EuroQol-5 dimensions [EQ-5D] and EQ-5D visual analog scale scores) at 90 days. The APT and no APT cohorts comprised 127 and 732 patients, respectively. Restarting APT was associated with lower rates of good functional outcome (36.5% vs 40.8%; p = 0.021) and lower Barthel Index scores at 90 days (p = 0.041). The 2 cohorts were then matched in a 1:1 ratio, and the matched cohorts each comprised 107 patients. No difference in primary outcome was observed between restarting vs not restarting APT (35.5% vs 43.9%; p = 0.105). There were also no differences between the secondary outcomes of the 2 cohorts. Restarting APT in patients with ICH of mild to moderate severity after acute hospitalization is not associated with worse functional outcomes or health-related quality of life at 90 days. In patients with significant cardiovascular risk factors who experience an ICH, restarting APT remains the decision of the treating practitioner. © 2018 American Academy of Neurology.

  13. 46 CFR 58.25-30 - Automatic restart.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... auxiliary steering gear and each power actuating system must restart automatically when electrical power is... 46 Shipping 2 2014-10-01 2014-10-01 false Automatic restart. 58.25-30 Section 58.25-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY...

  14. 46 CFR 58.25-30 - Automatic restart.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... auxiliary steering gear and each power actuating system must restart automatically when electrical power is... 46 Shipping 2 2013-10-01 2013-10-01 false Automatic restart. 58.25-30 Section 58.25-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY...

  15. 46 CFR 58.25-30 - Automatic restart.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... auxiliary steering gear and each power actuating system must restart automatically when electrical power is... 46 Shipping 2 2012-10-01 2012-10-01 false Automatic restart. 58.25-30 Section 58.25-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY...

  16. 46 CFR 58.25-30 - Automatic restart.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... auxiliary steering gear and each power actuating system must restart automatically when electrical power is... 46 Shipping 2 2011-10-01 2011-10-01 false Automatic restart. 58.25-30 Section 58.25-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY...

  17. 46 CFR 58.25-30 - Automatic restart.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... auxiliary steering gear and each power actuating system must restart automatically when electrical power is... 46 Shipping 2 2010-10-01 2010-10-01 false Automatic restart. 58.25-30 Section 58.25-30 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY...

  18. Extending Clause Learning of SAT Solvers with Boolean Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Zengler, Christoph; Küchlin, Wolfgang

    We extend clause learning as performed by most modern SAT Solvers by integrating the computation of Boolean Gröbner bases into the conflict learning process. Instead of learning only one clause per conflict, we compute and learn additional binary clauses from a Gröbner basis of the current conflict. We used the Gröbner basis engine of the logic package Redlog contained in the computer algebra system Reduce to extend the SAT solver MiniSAT with Gröbner basis learning. Our approach shows a significant reduction of conflicts and a reduction of restarts and computation time on many hard problems from the SAT 2009 competition.

  19. 40 CFR 86.1236-85 - Engine starting and restarting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Engine starting and restarting. 86... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED...-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1236-85 Engine starting and restarting. (a) Starting...

  20. Field Study on the Efficacy of the New Restart Provision for Hours of Service

    DOT National Transportation Integrated Search

    2014-01-01

    The objective of this research project was to examine the efficacy of the new restart rule promulgated as part of the Hours of Service of Drivers Final Rule, published on December 27, 2011, with a compliance date of July 1, 2013. Under the new restart...

  1. 40 CFR 86.136-90 - Engine starting and restarting.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Engine starting and restarting. 86.136... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... Complete Heavy-Duty Vehicles; Test Procedures § 86.136-90 Engine starting and restarting. (a) Otto-cycle...

  2. 40 CFR 1066.425 - Engine starting and restarting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Engine starting and restarting. 1066... POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.425 Engine starting and restarting. (a) Start the vehicle's engine as follows: (1) At the beginning of the test cycle...

  3. 40 CFR 86.136-90 - Engine starting and restarting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine starting and restarting. 86.136... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... Complete Heavy-Duty Vehicles; Test Procedures § 86.136-90 Engine starting and restarting. (a) Otto-cycle...

  4. 40 CFR 86.136-90 - Engine starting and restarting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine starting and restarting. 86.136... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... Complete Heavy-Duty Vehicles; Test Procedures § 86.136-90 Engine starting and restarting. (a) Otto-cycle...

  5. 40 CFR 86.1236-85 - Engine starting and restarting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Engine starting and restarting. 86... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED...-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1236-85 Engine starting and restarting. (a) Starting...

  6. 40 CFR 86.136-90 - Engine starting and restarting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine starting and restarting. 86.136... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... Complete Heavy-Duty Vehicles; Test Procedures § 86.136-90 Engine starting and restarting. (a) Otto-cycle...

  7. 40 CFR 86.1236-85 - Engine starting and restarting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Engine starting and restarting. 86... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED...-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1236-85 Engine starting and restarting. (a) Starting...

  8. 40 CFR 86.1236-85 - Engine starting and restarting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Engine starting and restarting. 86... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED...-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1236-85 Engine starting and restarting. (a) Starting...

  9. 40 CFR 1066.425 - Engine starting and restarting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Engine starting and restarting. 1066... POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.425 Engine starting and restarting. (a) Start the vehicle's engine as follows: (1) At the beginning of the test cycle...

  10. SERT 2 hollow cathode multiple restarts in space

    NASA Technical Reports Server (NTRS)

    Kerslake, W. R.; Finke, R. C.

    1973-01-01

    Future missions, both station keeping and primary electric propulsion, will require multiple thrust restarts after periods of inactivity from a few hours to over one year. Although not a part of the original SERT 2 (Space Electric Rocket Test) flight objective, the opportunity to demonstrate multiple cathode restarts in space became available following completion of thruster running. Both neutralizer and main cathodes of each flight thruster were restarted repeatedly following storage periods up to 490 days. No deterioration of cathode heaters was noted nor was any change required in starting voltages or currents.

  11. Mental health effects of the Three Mile Island nuclear reactor restart.

    PubMed

    Dew, M A; Bromet, E J; Schulberg, H C; Dunn, L O; Parkinson, D K

    1987-08-01

    Controversy over potential mental health effects of the Three Mile Island Unit-1 restart led the authors to examine prospectively the pattern of psychiatric symptoms in a sample of Three Mile Island area mothers of young children. Symptom levels after restart were elevated over previous levels; a sizable subcohort of the sample reported relatively serious degrees of postrestart distress. History of diagnosable major depression and generalized anxiety following the Three Mile Island accident, plus symptoms and beliefs about personal risk prior to the restart, best predicted postrestart symptoms.

  12. Terminal shock position and restart control of a Mach 2.7, two-dimensional, twin duct mixed compression inlet

    NASA Technical Reports Server (NTRS)

    Cole, G. L.; Neiner, G. H.; Baumbick, R. J.

    1973-01-01

    Experimental results of terminal shock and restart control system tests of a two-dimensional, twin-duct mixed compression inlet are presented. High-response (110-Hz bandwidth) overboard bypass doors were used, both as the variable to control shock position and as the means of disturbing the inlet airflow. An inherent instability in inlet shock position resulted in noisy feedback signals and thus restricted the terminal shock position control performance that was achieved. Proportional-plus-integral type controllers using either throat exit static pressure or shock position sensor feedback gave adequate low-frequency control. The inlet restart control system kept the terminal shock control loop closed throughout the unstart-restart transient. The capability to restart the inlet was not limited by the inlet instability.

  13. Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.

    2017-08-01

    The single-event upset (SEU) vulnerability of common first- and second-order all-digital phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.

  14. Local rollback for fault-tolerance in parallel computing systems

    DOEpatents

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
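
    A schematic Python sketch of the control flow described above (an interpretation for illustration, not the patented logic): checkpoint at the start of each rollback interval, re-run the interval on a recoverable error, and escalate only when the error is unrecoverable.

      # Checkpoint/rollback skeleton: each interval is retried from its
      # checkpoint; unrecoverable conditions force a full restart instead.
      import copy

      class RecoverableError(Exception): pass
      class UnrecoverableError(Exception): pass

      def run_with_local_rollback(state, intervals, max_retries=3):
          for work in intervals:                 # each interval maps state -> state
              checkpoint = copy.deepcopy(state)  # saved at interval start
              for _ in range(max_retries):
                  try:
                      state = work(state)
                      break                      # interval completed cleanly
                  except RecoverableError:
                      state = copy.deepcopy(checkpoint)  # local rollback
              else:
                  raise UnrecoverableError("interval kept failing; full restart needed")
          return state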

  15. Analysis of False Starts in Spontaneous Speech.

    ERIC Educational Resources Information Center

    O'Shaughnessy, Douglas

    A primary difference between spontaneous speech and read speech concerns the use of false starts, where a speaker interrupts the flow of speech to restart his or her utterance. A study examined the acoustic aspects of such restarts in a widely-used speech database, examining approximately 1000 utterances, about 10% of which contained a restart.…

  16. 40 CFR 86.536-78 - Engine starting and restarting.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Engine starting and restarting. 86.536... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.536-78 Engine starting and restarting. (a...

  17. 40 CFR 86.536-78 - Engine starting and restarting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine starting and restarting. 86.536... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.536-78 Engine starting and restarting. (a...

  18. 40 CFR 86.536-78 - Engine starting and restarting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine starting and restarting. 86.536... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.536-78 Engine starting and restarting. (a...

  19. 40 CFR 86.536-78 - Engine starting and restarting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine starting and restarting. 86.536... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.536-78 Engine starting and restarting. (a...

  20. An all-FORTRAN version of NASTRAN for the VAX

    NASA Technical Reports Server (NTRS)

    Purves, L.

    1981-01-01

    An all-FORTRAN version of the NASA structural analysis program NASTRAN is implemented on DEC VAX-series computers. Applications of NASTRAN extend to almost every type of linear structure and construction. Two special features are available in the VAX version: the program is executed from a terminal in a manner permitting use of the VAX interactive debugger, and links are interactively restarted when desired by first making a copy of all NASTRAN work files.

  1. Branching Search

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-12-01

    Search processes play key roles in various scientific fields. A widespread and effective search-process scheme, which we term Restart Search, is based on the following restart algorithm: i) set a timer and initiate a search task; ii) if the task was completed before the timer expired, then stop; iii) if the timer expired before the task was completed, then go back to the first step and restart the search process anew. In this paper a branching feature is added to the restart algorithm: at every transition from the algorithm's third step to its first step branching takes place, thus multiplying the search effort. This branching feature yields a search-process scheme which we term Branching Search. The running time of Branching Search is analyzed, closed-form results are established, and these results are compared to the corresponding running-time results of Restart Search.
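
    The three-step restart algorithm above translates directly into a short simulation. In the hedged sketch below, the task-time distribution (lognormal) and timer values are arbitrary illustrative choices; for a heavy-tailed task, a finite timer reduces the mean running time relative to never restarting (tau = inf).

      # Simulate Restart Search: draw a completion time, cap it with a
      # timer tau, and restart until the task finishes within one window.
      import numpy as np

      rng = np.random.default_rng(1)

      def restart_search_time(tau, draw=lambda: rng.lognormal(0.0, 2.0)):
          total = 0.0
          while True:
              t = draw()       # i) set a timer and initiate the task
              if t <= tau:     # ii) task completed before the timer expired
                  return total + t
              total += tau     # iii) timer expired: restart anew

      for tau in (0.5, 2.0, 8.0, np.inf):
          runs = [restart_search_time(tau) for _ in range(20000)]
          print(f"tau={tau}: mean running time {np.mean(runs):.2f}")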

  2. 40 CFR 85.2210 - Engine restart 2500 rpm/idle test-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine restart 2500 rpm/idle test-EPA... Warranty Short Tests § 85.2210 Engine restart 2500 rpm/idle test—EPA 81. (a)(1) General calendar year... engines. (ii) In a state for which the Administrator has approved a State Implementation Plan revision...

  3. Reasons Why Children and Adolescents With Attention-Deficit/Hyperactivity Disorder Stop and Restart Taking Medicine.

    PubMed

    Brinkman, William B; Simon, John O; Epstein, Jeffery N

    2018-04-01

    To describe the prevalence of reasons why children and adolescents stop and restart attention-deficit/hyperactivity disorder (ADHD) medicine and whether functional impairment is present after stopping medicine. We used the prospective longitudinal cohort from the Multimodal Treatment Study of Children With ADHD. At the 12-year follow-up, when participants were a mean of 21.1 years old, 372 participants (76% male, 64% white) reported ever taking ADHD medicine. Participants reported the age when they last stopped and/or restarted ADHD medicine and also endorsed reasons for stopping and restarting. Seventy-seven percent (286 of 372) reported stopping medicine for a month or longer at some time during childhood or adolescence. Participants were a mean of 13.3 years old when they last stopped medicine. The most commonly endorsed reasons for stopping medication related to 1) medicine not needed/helping, 2) adverse effects, 3) logistical barriers of getting or taking medication, and 4) social concerns or stigma. Seventeen percent (64 of 372) reported restarting medicine after stopping for a month or longer. Commonly endorsed reasons for restarting related to medicine being needed or medicine helping, and resolution of logistical barriers to getting or taking medicine. For both stopping and restarting, the proportion endorsing some reasons differed by age range, with the overall pattern suggesting that parental involvement in decisions decreased with age. Nearly all participants had impairment at the assessment after stopping, regardless of whether medication was resumed. Different reasons for stopping and/or restarting medicine are relevant at different times for different teens. Tailored strategies may help engage adolescents as full partners in their treatment plan. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  4. Fission Chain Restart Theory

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2017-07-31

    Fast nanosecond-timescale neutron and gamma-ray counting can be performed with a (liquid) scintillator array. Fission chains in metal evolve over a timescale of tens of nanoseconds. If the metal is surrounded by moderator, neutrons leaking from the metal can thermalize and diffuse in the moderator. With finite probability, the diffusing neutrons can return to the metal and restart the fast fission chain. The timescale for this restart process is microseconds. A theory describing time-evolving fission chains for metal surrounded by moderator, including this restart process, is presented. This theory is sufficiently simple to be implemented for real-time analysis.

  5. Guidance system operations plan for manned CM earth orbital missions using program Skylark 1. Section 2: Data links

    NASA Technical Reports Server (NTRS)

    Hamilton, M. H.

    1972-01-01

    A computer program to define the digital uplink and downlink for use in manned command module orbital missions is presented. The subjects discussed are: (1) digital uplink to command module, (2) CMC digital downlink, (3) downlist formats, (4) description of telemetered quantities, (5) flagbits, and (6) effects of Fresh Start (V36) and Hardware Restart on flagword and channel bits.

  6. Convolutional Dictionary Learning: Acceleration and Convergence

    NASA Astrophysics Data System (ADS)

    Chun, Il Yong; Fessler, Jeffrey A.

    2018-04-01

    Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
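
    One of the acceleration ingredients mentioned above, momentum combined with a restarting technique, can be sketched generically. The snippet below applies a Nesterov-type momentum step with a function-value restart heuristic to an arbitrary smooth quadratic; it illustrates only the restart idea and is not the paper's BPG-M algorithm or its CDL objective.

      # Accelerated gradient descent with an adaptive (function-value)
      # momentum restart: reset the momentum whenever the objective rises.
      import numpy as np

      def accel_gd_restart(grad, f, x0, step, iters=300):
          x, y, t = x0.copy(), x0.copy(), 1.0
          f_prev = f(x)
          for _ in range(iters):
              x_new = y - step * grad(y)                     # gradient step
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
              if f(x_new) > f_prev:                          # objective rose:
                  y, t_new = x_new.copy(), 1.0               # restart momentum
              x, t, f_prev = x_new, t_new, f(x_new)
          return x

      A = np.diag([1.0, 100.0])            # ill-conditioned test quadratic
      f = lambda x: 0.5 * x @ (A @ x)
      grad = lambda x: A @ x
      print(f(accel_gd_restart(grad, f, np.array([1.0, 1.0]), step=1e-2)))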

  7. Stuck fermentation: development of a synthetic stuck wine and study of a restart procedure.

    PubMed

    Maisonnave, Pierre; Sanchez, Isabelle; Moine, Virginie; Dequin, Sylvie; Galeote, Virginie

    2013-05-15

    Stuck fermentation is a major problem in winemaking, resulting in large losses in the wine industry. Specific starter yeasts are used to restart stuck fermentations in conditions determined essentially on the basis of empirical know-how. We have developed a model synthetic stuck wine and an industrial process-based procedure for restarting fermentations, for studies of the conditions required to restart stuck fermentations. We used a basic medium containing 13.5% v/v ethanol and 16 g/L fructose, pH 3.3, to test the effect of various nutrients (vitamins, amino acids, minerals, oligoelements), with the aim of developing a representative and discriminative stuck fermentation model. Cell growth appeared to be a key factor for the efficient restarting of stuck fermentations. Micronutrients, such as vitamins, also strongly affected the efficiency of the restart procedure. For the validation of this medium, we compared the performances of three wine yeast strains in the synthetic stuck fermentation and three naturally stuck wine fermentations. Strain performance rankings were similar in the synthetic medium and in the "Malbec" and "Sauvignon" natural stuck wines. However, two strains were ranked differently in the "Gros Manseng" stuck wine. Nutrient content seemed to be a crucial factor in fermentation restart conditions, generating differences between yeast strains. However, the specific sensitivity of yeast strains to the composition of the wine may also have had an effect. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Characterization of fine particle and gaseous emissions during school bus idling.

    PubMed

    Kinsey, J S; Williams, D C; Dong, Y; Logan, R

    2007-07-15

    The particulate matter (PM) and gaseous emissions from six diesel school buses were determined over a simulated waiting period typical of schools in the northeastern U.S. Testing was conducted for both continuous idle and hot restart conditions using a suite of on-line particle and gas analyzers installed in the U.S. Environmental Protection Agency's Diesel Emissions Aerosol Laboratory. The specific pollutants measured encompassed total PM-2.5 mass (PM ≤ 2.5 μm in aerodynamic diameter), PM-2.5 number concentration, particle size distribution, particle-surface polycyclic aromatic hydrocarbons (PAHs), and a tracer gas (1,1,1,2,3,3,3-heptafluoropropane) in the diluted sample stream. Carbon monoxide (CO), carbon dioxide, nitrogen oxides (NO(x)), total hydrocarbons (THC), oxygen, formaldehyde, and the tracer gas were also measured in the raw exhaust. Results of the study showed little difference in the measured emissions between a 10 min post-restart idle and a 10 min continuous idle with the exception of THC and formaldehyde. However, an emissions pulse was observed during engine restart. A predictive equation was developed from the experimental data, which allows a comparison between continuous idle and hot restart for NO(x), CO, PM2.5, and PAHs and which considers factors such as the restart emissions pulse and periods when the engine is not running. This equation indicates that restart is the preferred operating scenario as long as there is no extended idling after the engine is restarted.

  9. 40 CFR 60.1640 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... close my municipal waste combustion unit and not restart it? 60.1640 Section 60.1640 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... do if I plan to permanently close my municipal waste combustion unit and not restart it? (a) If you...

  10. 40 CFR 62.15090 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 62.15090 Section 62.15090... Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15090 What must I do if I close my municipal waste combustion unit and then restart...

  11. 40 CFR 60.1635 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 60.1635 Section 60.1635... Combustion Units Constructed on or Before August 30, 1999 Model Rule-Increments of Progress § 60.1635 What must I do if I close my municipal waste combustion unit and then restart my municipal waste combustion...

  12. 40 CFR 62.15090 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 62.15090 Section 62.15090... Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15090 What must I do if I close my municipal waste combustion unit and then restart...

  13. 40 CFR 60.1635 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 60.1635 Section 60.1635... Combustion Units Constructed on or Before August 30, 1999 Model Rule-Increments of Progress § 60.1635 What must I do if I close my municipal waste combustion unit and then restart my municipal waste combustion...

  14. 40 CFR 60.1640 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... close my municipal waste combustion unit and not restart it? 60.1640 Section 60.1640 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... do if I plan to permanently close my municipal waste combustion unit and not restart it? (a) If you...

  15. 40 CFR 60.1640 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... close my municipal waste combustion unit and not restart it? 60.1640 Section 60.1640 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... do if I plan to permanently close my municipal waste combustion unit and not restart it? (a) If you...

  16. 40 CFR 60.1640 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... close my municipal waste combustion unit and not restart it? 60.1640 Section 60.1640 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... do if I plan to permanently close my municipal waste combustion unit and not restart it? (a) If you...

  17. 40 CFR 62.15090 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 62.15090 Section 62.15090... Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15090 What must I do if I close my municipal waste combustion unit and then restart...

  18. 40 CFR 60.1635 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 60.1635 Section 60.1635... Combustion Units Constructed on or Before August 30, 1999 Model Rule-Increments of Progress § 60.1635 What must I do if I close my municipal waste combustion unit and then restart my municipal waste combustion...

  19. 40 CFR 60.1640 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... close my municipal waste combustion unit and not restart it? 60.1640 Section 60.1640 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... do if I plan to permanently close my municipal waste combustion unit and not restart it? (a) If you...

  20. 40 CFR 60.1635 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 60.1635 Section 60.1635... Combustion Units Constructed on or Before August 30, 1999 Model Rule-Increments of Progress § 60.1635 What must I do if I close my municipal waste combustion unit and then restart my municipal waste combustion...

  1. 40 CFR 62.15090 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 62.15090 Section 62.15090... Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15090 What must I do if I close my municipal waste combustion unit and then restart...

  2. 40 CFR 62.15090 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 62.15090 Section 62.15090... Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15090 What must I do if I close my municipal waste combustion unit and then restart...

  3. 40 CFR 60.1635 - What must I do if I close my municipal waste combustion unit and then restart my municipal waste...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... waste combustion unit and then restart my municipal waste combustion unit? 60.1635 Section 60.1635... Combustion Units Constructed on or Before August 30, 1999 Model Rule-Increments of Progress § 60.1635 What must I do if I close my municipal waste combustion unit and then restart my municipal waste combustion...

  4. Walking on a user similarity network towards personalized recommendations.

    PubMed

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.
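
    The random walk with restart (RWR) at the core of the method above has a compact fixed-point form: p = (1 - alpha) P p + alpha e, where P is the transition matrix of the similarity network, e is the indicator of the target user, and alpha is the restart probability. A minimal sketch follows; the toy matrix and alpha value are illustrative, not the paper's tuned settings.

      # Random walk with restart (personalized-PageRank style) on a
      # user similarity network.
      import numpy as np

      def rwr(W, seed, alpha=0.15, tol=1e-10):
          """W: nonnegative similarity matrix; seed: target user index."""
          P = W / W.sum(axis=0, keepdims=True)  # column-stochastic transitions
          e = np.zeros(W.shape[0]); e[seed] = 1.0
          p = e.copy()
          while True:
              p_new = (1 - alpha) * (P @ p) + alpha * e  # walk or restart
              if np.abs(p_new - p).sum() < tol:
                  return p_new
              p = p_new

      W = np.array([[0., 3., 1., 0.],
                    [3., 0., 2., 1.],
                    [1., 2., 0., 4.],
                    [0., 1., 4., 0.]])
      print(rwr(W, seed=0))  # proximity scores for recommendation weighting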

  5. 40 CFR 62.15095 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... close my municipal waste combustion unit and not restart it? 62.15095 Section 62.15095 Protection of... Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15095 What must I do if I plan to permanently close my municipal waste combustion unit and not restart...

  6. 40 CFR 62.15095 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... close my municipal waste combustion unit and not restart it? 62.15095 Section 62.15095 Protection of... Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15095 What must I do if I plan to permanently close my municipal waste combustion unit and not restart...

  7. 40 CFR 62.15095 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... close my municipal waste combustion unit and not restart it? 62.15095 Section 62.15095 Protection of... Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15095 What must I do if I plan to permanently close my municipal waste combustion unit and not restart...

  8. 40 CFR 62.15095 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... close my municipal waste combustion unit and not restart it? 62.15095 Section 62.15095 Protection of... Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15095 What must I do if I plan to permanently close my municipal waste combustion unit and not restart...

  9. 40 CFR 62.15095 - What must I do if I plan to permanently close my municipal waste combustion unit and not restart it?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... close my municipal waste combustion unit and not restart it? 62.15095 Section 62.15095 Protection of... Combustion Units Constructed on or Before August 30, 1999 Compliance Schedule and Increments of Progress § 62.15095 What must I do if I plan to permanently close my municipal waste combustion unit and not restart...

  10. Multibus Avionic Architecture Design Study (MAADS).

    DTIC Science & Technology

    1983-10-01

    [OCR-garbled tasking-profile table omitted.] ...System Restart o Pilot Interface o Pilot Initiated Restart o Recovery Restart - Power Transient (EMP, Nuclear) o In-Flight Reload o Normal System

  11. ESP – Data from Restarted Life Tests of Various Silicon Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Jim

    2010-10-06

    Current funding has allowed the restart of testing of various silicone materials placed in Life Tests or Aging Studies from past efforts. Some of these materials have been in test since 1982, with no testing for approximately 10 years, until funding allowed the restart in FY97. Charts for the various materials at different thickness, compression, and temperature combinations illustrate trends for the load-bearing properties of the materials.

  12. The efficacy of a restart break for recycling with optimal performance depends critically on circadian timing.

    PubMed

    Van Dongen, Hans P A; Belenky, Gregory; Vila, Bryan J

    2011-07-01

    Under simulated shift-work conditions, we investigated the efficacy of a restart break for maintaining neurobehavioral functioning across consecutive duty cycles, as a function of the circadian timing of the duty periods. As part of a 14-day experiment, subjects underwent two cycles of five simulated daytime or nighttime duty days, separated by a 34-hour restart break. Cognitive functioning and high-fidelity driving simulator performance were tested 4 times per day during the two duty cycles. Lapses on a psychomotor vigilance test (PVT) served as the primary outcome variable. Selected sleep periods were recorded polysomnographically. The experiment was conducted under standardized, controlled laboratory conditions with continuous monitoring. Twenty-seven healthy adults (13 men, 14 women; aged 22-39 years) participated in the study. Subjects were randomly assigned to a nighttime duty (experimental) condition or a daytime duty (control) condition. The efficacy of the 34-hour restart break for maintaining neurobehavioral functioning from the pre-restart duty cycle to the post-restart duty cycle was compared between these two conditions. Relative to the daytime duty condition, the nighttime duty condition was associated with reduced amounts of sleep, whereas sleep latencies were shortened and slow-wave sleep appeared to be conserved. Neurobehavioral performance measures ranging from lapses of attention on the PVT to calculated fuel consumption on the driving simulators remained optimal across time of day in the daytime duty schedule, but degraded across time of night in the nighttime duty schedule. The 34-hour restart break was efficacious for maintaining PVT performance and other objective neurobehavioral functioning profiles from one duty cycle to the next in the daytime duty condition, but not in the nighttime duty condition. Subjective sleepiness did not reliably track objective neurobehavioral deficits. The 34-hour restart break was adequate for maintaining performance in the case of optimal circadian placement of sleep and duty periods (control condition) but was inadequate (and perhaps even detrimental) for maintaining performance in a simulated nighttime duty schedule (experimental condition). Current US transportation hours-of-service regulations mandate time off duty but do not consider the circadian aspects of shift scheduling. Reinforcing a recent trend of applying sleep science to inform policymaking for duty and rest times, our findings indicate that restart provisions in hours-of-service regulations could be improved by taking the circadian timing of the duty schedules into account.

  13. Promoting Recruitment using Information Management Efficiently (PRIME): a stepped-wedge, cluster randomised trial of a complex recruitment intervention embedded within the REstart or Stop Antithrombotics Randomised Trial.

    PubMed

    Maxwell, Amy E; Parker, Richard A; Drever, Jonathan; Rudd, Anthony; Dennis, Martin S; Weir, Christopher J; Al-Shahi Salman, Rustam

    2017-12-28

    Few interventions are proven to increase recruitment in clinical trials. Recruitment to RESTART, a randomised controlled trial of secondary prevention after stroke due to intracerebral haemorrhage, has been slower than expected. Therefore, we sought to investigate an intervention to boost recruitment to RESTART. We conducted a stepped-wedge, cluster randomised trial of a complex intervention to increase recruitment, embedded within the RESTART trial. The primary objective was to investigate if the PRIME complex intervention (a recruitment co-ordinator who conducts a recruitment review, provides access to bespoke stroke audit data exports, and conducts a follow-up review after 6 months) increases the recruitment rate to RESTART. We included 72 hospital sites located in England, Wales, or Scotland that were active in RESTART in June 2015. All sites began in the control state and were allocated using block randomisation stratified by hospital location (Scotland versus England/Wales) to start the complex intervention in one of 12 different months. The primary outcome was the number of patients randomised into RESTART per month per site. We quantified the effect of the complex intervention on the primary outcome using a negative binomial, mixed model adjusting for site, December/January months, site location, and background time trends in recruitment rate. We recruited and randomised 72 sites and recorded their monthly recruitment to RESTART over 24 months (March 2015 to February 2017 inclusive), providing 1728 site-months of observations for the primary analysis. The adjusted rate ratio for the number of patients randomised per month after allocation to the PRIME complex intervention versus control time before allocation to the PRIME complex intervention was 1.06 (95% confidence interval 0.55 to 2.03, p = 0.87). Although two thirds of respondents to the 6-month follow-up questionnaire agreed that the audit reports were useful, only six patients were reported to have been randomised using the audit reports. Respondents frequently reported resource and time pressures as being key barriers to running the audit reports. The PRIME complex intervention did not significantly improve the recruitment rate to RESTART. Further research is needed to establish if PRIME might be beneficial at an earlier stage in a prevention trial or for prevention dilemmas that arise more often in clinical practice.

  14. Analysis of factors associated with hesitation to restart farming after depopulation of animals due to 2010 foot-and-mouth disease epidemic in Japan.

    PubMed

    Kadowaki, Hazumu; Kayano, Taishi; Tobinaga, Takaharu; Tsutsumi, Atsuro; Watari, Michiko; Makita, Kohei

    2016-09-01

    An outbreak of foot-and-mouth disease (FMD) occurred in Miyazaki Prefecture, Japan, in 2010. This epidemic was controlled with culling and vaccination, and resulted in the death of nearly 290,000 animals. This paper describes the factors associated with hesitation to restart farming after the epidemic. A questionnaire survey was conducted to assess the mental health of farmers one year after the end of the FMD epidemic in affected areas, and univariate and multivariable analyses were performed. Of 773 farms that answered the question about restarting farming, 55.4% (428/773) had resumed or were planning to resume operation. The farms that hesitated to restart were characterized by small scale (P=0.06) and having multiple sources of income (P<0.01). Personal attributes associated with hesitation to restart were advanced age of the owner (P<0.01), living with someone in poor physical condition (P=0.04), and small family size (P<0.01). Factors related to disease control during the epidemic that were associated with hesitation to restart were vaccination of animals (P<0.01), not assisting with culling on other farms (P<0.01), and higher satisfaction with information provided by the government (P=0.02). We found that farmers hesitated to resume farming because they had a limited labor force, had an alternative business, or were mentally distressed during disease control.

  15. Rad53 regulates replication fork restart after DNA damage in Saccharomyces cerevisiae

    PubMed Central

    Szyjka, Shawn J.; Aparicio, Jennifer G.; Viggiani, Christopher J.; Knott, Simon; Xu, Weihong; Tavaré, Simon; Aparicio, Oscar M.

    2008-01-01

    Replication fork stalling at a DNA lesion generates a damage signal that activates the Rad53 kinase, which plays a vital role in survival by stabilizing stalled replication forks. However, evidence that Rad53 directly modulates the activity of replication forks has been lacking, and the nature of fork stabilization has remained unclear. Recently, cells lacking the Psy2–Pph3 phosphatase were shown to be defective in dephosphorylation of Rad53 as well as replication fork restart after DNA damage, suggesting a mechanistic link between Rad53 deactivation and fork restart. To test this possibility we examined the progression of replication forks in methyl-methanesulfonate (MMS)-damaged cells, under different conditions of Rad53 activity. Hyperactivity of Rad53 in pph3Δ cells slows fork progression in MMS, whereas deactivation of Rad53, through expression of dominant-negative Rad53-KD, is sufficient to allow fork restart during recovery. Furthermore, combined deletion of PPH3 and PTC2, a second, unrelated Rad53 phosphatase, results in complete replication fork arrest and lethality in MMS, demonstrating that Rad53 deactivation is a key mechanism controlling fork restart. We propose a model for regulation of replication fork progression through damaged DNA involving a cycle of Rad53 activation and deactivation that coordinates replication restart with DNA repair. PMID:18628397

  16. Ground Software Maintenance Facility (GSMF) user's manual

    NASA Technical Reports Server (NTRS)

    Aquila, V.; Derrig, D.; Griffith, G.

    1986-01-01

    Instructions are provided for the Ground Software Maintenance Facility (GSMF) system user to operate the GSMF in all modes. The GSMF provides the resources for Automatic Test Equipment (ATE) computer program maintenance (GCOS and GOAL). Applicable reference documents are listed. An operational overview and descriptions of the modes in terms of operator interface, options, equipment, material utilization, and operational procedures are included. Test restart procedures are described. The GSMF documentation tree is presented, including the user manual.

  17. Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System

    DTIC Science & Technology

    2017-08-01

    Comparison of runs 6–9 with the corresponding simulations from the stop time study (Tables 22 and 23) shows that the restart series produces...

  18. The Efficacy of a Restart Break for Recycling with Optimal Performance Depends Critically on Circadian Timing

    PubMed Central

    Van Dongen, Hans P.A.; Belenky, Gregory; Vila, Bryan J.

    2011-01-01

    Objectives: Under simulated shift-work conditions, we investigated the efficacy of a restart break for maintaining neurobehavioral functioning across consecutive duty cycles, as a function of the circadian timing of the duty periods. Design: As part of a 14-day experiment, subjects underwent two cycles of five simulated daytime or nighttime duty days, separated by a 34-hour restart break. Cognitive functioning and high-fidelity driving simulator performance were tested 4 times per day during the two duty cycles. Lapses on a psychomotor vigilance test (PVT) served as the primary outcome variable. Selected sleep periods were recorded polysomnographically. Setting: The experiment was conducted under standardized, controlled laboratory conditions with continuous monitoring. Participants: Twenty-seven healthy adults (13 men, 14 women; aged 22–39 years) participated in the study. Interventions: Subjects were randomly assigned to a nighttime duty (experimental) condition or a daytime duty (control) condition. The efficacy of the 34-hour restart break for maintaining neurobehavioral functioning from the pre-restart duty cycle to the post-restart duty cycle was compared between these two conditions. Results: Relative to the daytime duty condition, the nighttime duty condition was associated with reduced amounts of sleep, whereas sleep latencies were shortened and slow-wave sleep appeared to be conserved. Neurobehavioral performance measures ranging from lapses of attention on the PVT to calculated fuel consumption on the driving simulators remained optimal across time of day in the daytime duty schedule, but degraded across time of night in the nighttime duty schedule. The 34-hour restart break was efficacious for maintaining PVT performance and other objective neurobehavioral functioning profiles from one duty cycle to the next in the daytime duty condition, but not in the nighttime duty condition. Subjective sleepiness did not reliably track objective neurobehavioral deficits. Conclusions: The 34-hour restart break was adequate for maintaining performance in the case of optimal circadian placement of sleep and duty periods (control condition) but was inadequate (and perhaps even detrimental) for maintaining performance in a simulated nighttime duty schedule (experimental condition). Current US transportation hours-of-service regulations mandate time off duty but do not consider the circadian aspects of shift scheduling. Reinforcing a recent trend of applying sleep science to inform policymaking for duty and rest times, our findings indicate that restart provisions in hours-of-service regulations could be improved by taking the circadian timing of the duty schedules into account. Citation: Van Dongen HPA; Belenky G; Vila BJ. The efficacy of a restart break for recycling with optimal performance depends critically on circadian timing. SLEEP 2011;34(7):917-929. PMID:21731142

  19. Psychosocial effects of restarting a TMI reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-01-01

    ORNL is studying human responses to hazardous environmental phenomena. This study attempts to understand the human behavior associated with the restart of TMI-1 Reactor after a nuclear event occurred at TMI-2.

  20. Analysis of factors associated with hesitation to restart farming after depopulation of animals due to 2010 foot-and-mouth disease epidemic in Japan

    PubMed Central

    KADOWAKI, Hazumu; KAYANO, Taishi; TOBINAGA, Takaharu; TSUTSUMI, Atsuro; WATARI, Michiko; MAKITA, Kohei

    2016-01-01

    An outbreak of foot-and-mouth disease (FMD) occurred in Miyazaki Prefecture, Japan, in 2010. This epidemic was controlled with culling and vaccination, and resulted in the death of nearly 290,000 animals. This paper describes the factors associated with hesitation to restart farming after the epidemic. A questionnaire survey was conducted to assess the mental health of farmers one year after the end of the FMD epidemic in affected areas, and univariate and multivariable analyses were performed. Of 773 farms that answered the question about restarting farming, 55.4% (428/773) had resumed or were planning to resume operation. The farms that hesitated to restart were characterized by small scale (P=0.06) and having multiple sources of income (P<0.01). Personal attributes associated with hesitation to restart were advanced age of the owner (P<0.01), living with someone in poor physical condition (P=0.04), and small family size (P<0.01). Factors related to disease control during the epidemic that were associated with hesitation to restart were vaccination of animals (P<0.01), not assisting with culling on other farms (P<0.01), and higher satisfaction with information provided by the government (P=0.02). We found that farmers hesitated to resume farming because they had a limited labor force, had an alternative business, or were mentally distressed during disease control. PMID:27149890

  1. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.

  2. Structural analysis of cylindrical thrust chambers, volume 3

    NASA Technical Reports Server (NTRS)

    Pearson, M. L.

    1981-01-01

    A system of three computer programs is described for use in conjunction with the BOPACE finite element program. The programs are demonstrated by analyzing cumulative plastic deformation in a regeneratively cooled rocket thrust chamber. The codes provide the capability to predict geometric and material nonlinear behavior of cyclically loaded structures without performing a cycle-by-cycle analysis over the life of the structure. The program set consists of a BOPACE restart tape reader routine, an extrapolation program, and a plot package.

  3. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noises are presented. Decomposed linear recursive filter and Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that a decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher order systems, this filter has computational advantages (i.e., fewer integration and roundoff errors) over a Kalman filter.

  4. Increased capability gas generator for Space Shuttle APU. Development/hot restart test report

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The design, fabrication, and testing of an increased capability gas generator for use in space shuttles are described. Results show an unlimited hot restart capability in the range of feed pressures from 400 psi to 80 psi. Effects of vacuum on hot restart were not addressed, and only beginning-of-life bed conditions were tested. No starts with bubbles were performed. A minimum expected life of 35 hours or more is projected, and the design will maintain a surface temperature of 350 F or more.

  5. A Job Pause Service under LAM/MPI+BLCR for Transparent Fault Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Mueller, Frank; Engelmann, Christian

    2007-01-01

    Checkpoint/restart (C/R) has become a requirement for long-running jobs in large-scale clusters due to a mean-time-to-failure (MTTF) on the order of hours. After a failure, C/R mechanisms generally require a complete restart of an MPI job from the last checkpoint. A complete restart, however, is unnecessary since all but one node are typically still alive. Furthermore, a restart may result in lengthy job requeuing even though the original job had not exceeded its time quantum. In this paper, we overcome these shortcomings. Instead of job restart, we have developed a transparent mechanism for job pause within LAM/MPI+BLCR. This mechanism allows live nodes to remain active and roll back to the last checkpoint while failed nodes are dynamically replaced by spares before resuming from the last checkpoint. Our methodology includes LAM/MPI enhancements in support of scalable group communication with a fluctuating number of nodes, reuse of network connections, transparent coordinated checkpoint scheduling, and a BLCR enhancement for job pause. Experiments in a cluster with the NAS Parallel Benchmark suite show that our overhead for job pause is comparable to that of a complete job restart. A minimal overhead of 5.6% is incurred only when migration takes place, while the regular checkpoint overhead remains unchanged. Yet, our approach alleviates the need to reboot the LAM run-time environment, which accounts for considerable overhead, resulting in net savings of our scheme in the experiments. Our solution further provides full transparency and automation with the additional benefit of reusing existing resources. Execution continues after failures within the scheduled job, i.e., the application staging overhead is not incurred again, in contrast to a restart. Our scheme offers additional potential for savings through incremental checkpointing and proactive diskless live migration, which we are currently working on.
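
    The job-pause mechanism itself lives inside LAM/MPI and BLCR, but the checkpoint/restart pattern it builds on is easy to illustrate. The following is a minimal, generic Python sketch, not part of LAM/MPI or BLCR; the checkpoint file name and loop body are hypothetical placeholders. Work resumes from the last saved state instead of from the beginning.

        import os
        import pickle

        CKPT = "state.ckpt"  # hypothetical checkpoint file name

        def restore_or_init():
            # Resume from the last checkpoint if one exists, else start fresh.
            if os.path.exists(CKPT):
                with open(CKPT, "rb") as f:
                    return pickle.load(f)
            return {"step": 0, "total": 0.0}

        def checkpoint(state):
            # Write to a temp file and rename so a crash cannot corrupt it.
            tmp = CKPT + ".tmp"
            with open(tmp, "wb") as f:
                pickle.dump(state, f)
            os.replace(tmp, CKPT)

        state = restore_or_init()
        for step in range(state["step"], 1000):
            state["total"] += step * step  # stand-in for real work
            state["step"] = step + 1
            if step % 100 == 0:
                checkpoint(state)  # periodic checkpoint
        checkpoint(state)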

  6. Vectorization of linear discrete filtering algorithms

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1977-01-01

    Linear filters, including the conventional Kalman filter and versions of square root filters devised by Potter and Carlson, are studied for potential application on streaming computers. The square root filters are known to maintain a positive definite covariance matrix in cases in which the Kalman filter diverges due to ill-conditioning of the matrix. Vectorization of the filters is discussed, and comparisons are made of the number of operations and storage locations required by each filter. The Carlson filter is shown to be the most efficient of the filters on the Control Data STAR-100 computer.

  7. SERT 2 thruster space restart, 1974

    NASA Technical Reports Server (NTRS)

    Kerslake, W. R.; Finke, R. C.

    1975-01-01

    The results of testing the flight thrusters on the SERT spacecraft during the 1974 test period are presented. The most notable result was the clearing of the high voltage short from thruster 2 and the successful stable operation of its ion beam. Test periods were limited to 70 minutes or less by earth eclipse of the spacecraft solar array and by ground station coverage limitations. Thruster 2 was restarted 26 times with an ion beam produced 21 times. The high voltage short remains in thruster 1, but the cathodes were restarted 12 times to demonstrate continued restart capability. The propellant feed systems, power processors, and spacecraft ancillary equipment were demonstrated to be functional after 4 1/2 years in space. In addition to the thruster tests, a neutralizer cathode was operated separately to demonstrate that the potential level of a spacecraft could be controlled by the neutralizer alone.

  8. Solution structure of the N-terminal domain of a replication restart primosome factor, PriC, in Escherichia coli

    PubMed Central

    Aramaki, Takahiko; Abe, Yoshito; Katayama, Tsutomu; Ueda, Tadashi

    2013-01-01

    In eubacterial organisms, the oriC-independent primosome plays an essential role in replication restart after the dissociation of the replication DNA-protein complex by DNA damage. PriC is a key protein component in the replication restart primosome. Our recent study suggested that PriC is divided into two domains: an N-terminal and a C-terminal domain. In the present study, we determined the solution structure of the N-terminal domain, whose structure and function have remained unknown until now. The revealed structure was composed of three helices and one extended loop. We also observed chemical shift changes in the heteronuclear NMR spectrum and oligomerization in the presence of ssDNA. These abilities may contribute to the PriC-ssDNA complex, which is important for the replication restart primosome. PMID:23868391

  9. Mechanisms of bacterial DNA replication restart

    PubMed Central

    Windgassen, Tricia A; Wessel, Sarah R; Bhattacharyya, Basudeb

    2018-01-01

    Abstract Multi-protein DNA replication complexes called replisomes perform the essential process of copying cellular genetic information prior to cell division. Under ideal conditions, replisomes dissociate only after the entire genome has been duplicated. However, DNA replication rarely occurs without interruptions that can dislodge replisomes from DNA. Such events produce incompletely replicated chromosomes that, if left unrepaired, prevent the segregation of full genomes to daughter cells. To mitigate this threat, cells have evolved ‘DNA replication restart’ pathways that have been best defined in bacteria. Replication restart requires recognition and remodeling of abandoned replication forks by DNA replication restart proteins followed by reloading of the replicative DNA helicase, which subsequently directs assembly of the remaining replisome subunits. This review summarizes our current understanding of the mechanisms underlying replication restart and the proteins that drive the process in Escherichia coli (PriA, PriB, PriC and DnaT). PMID:29202195

  10. Walking on a User Similarity Network towards Personalized Recommendations

    PubMed Central

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the World Wide Web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both the recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance. PMID:25489942
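
    As a rough illustration of the random-walk-with-restart scoring step described above (the paper's similarity-network construction, normalization, and parameter choices are not reproduced here), a minimal sketch:

        import numpy as np

        def random_walk_with_restart(W, seed, restart_prob=0.15, tol=1e-8):
            # Column-normalize the similarity matrix into transition matrix P.
            P = W / W.sum(axis=0, keepdims=True)
            n = W.shape[0]
            r = np.full(n, 1.0 / n)          # initial score vector
            e = np.zeros(n); e[seed] = 1.0   # restart at the seed user
            while True:
                r_new = (1 - restart_prob) * (P @ r) + restart_prob * e
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        # Tiny 4-user similarity network; scores rank users relative to user 0.
        W = np.array([[0, 3, 1, 0],
                      [3, 0, 2, 1],
                      [1, 2, 0, 2],
                      [0, 1, 2, 0]], dtype=float)
        print(random_walk_with_restart(W, seed=0))

    The restart probability keeps the walker biased toward the seed user, which is what makes the stationary scores personalized.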

  11. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement, in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  12. COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS

    EPA Science Inventory

    The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...

  13. Summary of Part 75 Administrative Processes: Table 6

    EPA Pesticide Factsheets

    Learn how to submit information for a new unit, new stack or new FGD; unit shutdown and restart, long term cold storage (LTCS), expected restart date, postponement of Appendix E testing, backup fuel used, and notice of combustion of emergency fuel.

  14. Checkpoint and restart procedures for single and multi-stage structural model analysis in NASTRAN/COSMIC on a CDC 176

    NASA Technical Reports Server (NTRS)

    Camp, George H.; Fallon, Dennis J.

    1987-01-01

    The Underwater Explosions Research Division (UERD) of the David Taylor Naval Ship Research and Development Center makes extensive use of NASTRAN/COSMIC on a CDC 176 to evaluate the structural response of ship structures subjected to underwater explosion shock loadings in the time domain. As relatively new users, UERD engineers have experienced difficulties with the checkpoint/restart feature because of the vague instructions in the user manual. Working procedures for the application of the checkpoint/restart feature to the transient analysis using NASTRAN/COSMIC are illustrated.

  15. The crystal structure of Neisseria gonorrhoeae PriB reveals mechanistic differences among bacterial DNA replication restart pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Jinlan; George, Nicholas P.; Duckett, Katrina L.

    2010-05-25

    Reactivation of repaired DNA replication forks is essential for complete duplication of bacterial genomes. However, not all bacteria encode homologs of the well-studied Escherichia coli DNA replication restart primosome proteins, suggesting that there might be distinct mechanistic differences among DNA replication restart pathways in diverse bacteria. Since reactivation of repaired DNA replication forks requires coordinated DNA and protein binding by DNA replication restart primosome proteins, we determined the crystal structure of Neisseria gonorrhoeae PriB at 2.7 Å resolution and investigated its ability to physically interact with DNA and PriA helicase. Comparison of the crystal structures of PriB from N. gonorrhoeae and E. coli reveals a well-conserved homodimeric structure consisting of two oligosaccharide/oligonucleotide-binding (OB) folds. In spite of their overall structural similarity, there is significant species variation in the type and distribution of surface amino acid residues. This correlates with striking differences in the affinity with which each PriB homolog binds single-stranded DNA and PriA helicase. These results provide evidence that mechanisms of DNA replication restart are not identical across diverse species and that these pathways have likely become specialized to meet the needs of individual organisms.

  16. Requirements for Kalman filtering on the GE-701 whole word computer

    NASA Technical Reports Server (NTRS)

    Pines, S.; Schmidt, S. F.

    1978-01-01

    The results of a study to determine scaling, storage, and word length requirements for programming the Kalman filter on the GE-701 Whole Word Computer are reported. Simulation tests are presented which indicate that the Kalman filter, using a square root formulation with process noise added, utilizing MLS, radar altimeters, and airspeed as navigation aids, may be programmed for the GE-701 computer to successfully navigate and control the Boeing B737-100 during landing approach, landing rollout, and turnoff. The report contains flow charts, equations, computer storage, scaling, and word length recommendations for the Kalman filter on the GE-701 Whole Word computer.

  17. Commercial Motor Vehicle (CMV) Driver Restart Study: Final Report

    DOT National Transportation Integrated Search

    2017-03-01

    A congressionally-mandated naturalistic study was conducted to evaluate the operational, safety, fatigue, and health impacts of the restart provisions in Sections 395.3(c) and 395.3(d) of Title 49, Code of Federal Regulations. A total of 235 commerci...

  18. Replication restart in UV-irradiated Escherichia coli involving pols II, III, V, PriA, RecA and RecFOR proteins.

    PubMed

    Rangarajan, Savithri; Woodgate, Roger; Goodman, Myron F

    2002-02-01

    In Escherichia coli, UV-irradiated cells resume DNA synthesis after a transient inhibition by a process called replication restart. To elucidate the role of several key proteins involved in this process, we have analysed the time dependence of replication restart in strains carrying a combination of mutations in lexA, recA, polB (pol II), umuDC (pol V), priA, dnaC, recF, recO or recR. We find that both pol II and the origin-independent primosome-assembling function of PriA are essential for the immediate recovery of DNA synthesis after UV irradiation. In their absence, translesion replication or 'replication readthrough' occurs approximately 50 min after UV and is pol V-dependent. In a wild-type, lexA+ background, mutations in recF, recO or recR block both pathways. Similar results were obtained with a lexA(Def) recF strain. However, lexA(Def) recO or lexA(Def) recR strains, although unable to facilitate PriA-pol II-dependent restart, were able to perform pol V-dependent readthrough. The defects in restart attributed to mutations in recF, recO or recR were suppressed in a recA730 lexA(Def) strain expressing constitutively activated RecA (RecA*). Our data suggest that in a wild-type background, RecF, O and R are important for the induction of the SOS response and the formation of RecA*-dependent recombination intermediates necessary for PriA/Pol II-dependent replication restart. In contrast, only RecF is required for the activation of RecA that leads to the formation of pol V (UmuD'2C) and facilitates replication readthrough.

  19. The REstart or STop Antithrombotics Randomised Trial (RESTART) after stroke due to intracerebral haemorrhage: study protocol for a randomised controlled trial.

    PubMed

    Al-Shahi Salman, Rustam; Dennis, Martin S; Murray, Gordon D; Innes, Karen; Drever, Jonathan; Dinsmore, Lynn; Williams, Carol; White, Philip M; Whiteley, William N; Sandercock, Peter A G; Sudlow, Cathie L M; Newby, David E; Sprigg, Nikola; Werring, David J

    2018-03-05

    For adults surviving stroke due to spontaneous (non-traumatic) intracerebral haemorrhage (ICH) who had taken an antithrombotic (i.e. anticoagulant or antiplatelet) drug for the prevention of vaso-occlusive disease before the ICH, it is unclear whether starting antiplatelet drugs results in an increase in the risk of recurrent ICH or a beneficial net reduction of all serious vascular events compared to avoiding antiplatelet drugs. The REstart or STop Antithrombotics Randomised Trial (RESTART) is an investigator-led, randomised, open, assessor-blind, parallel-group trial comparing starting versus avoiding antiplatelet drugs for adults surviving antithrombotic-associated ICH at 122 hospital sites in the United Kingdom. RESTART uses a central, web-based randomisation system using a minimisation algorithm, with 1:1 treatment allocation to which central research staff are masked. Central follow-up includes annual postal or telephone questionnaires to participants and their general (family) practitioners, with local provision of information about adverse events and outcome events. The primary outcome is recurrent symptomatic ICH. The secondary outcomes are: symptomatic haemorrhagic events; symptomatic vaso-occlusive events; symptomatic stroke of uncertain type; other fatal events; modified Rankin Scale score; adherence to antiplatelet drug(s). The magnetic resonance imaging (MRI) sub-study involves the conduct of brain MRI according to a standardised imaging protocol before randomisation to investigate heterogeneity of treatment effect according to the presence of brain microbleeds. Recruitment began on 22 May 2013. The target sample size is at least 720 participants in the main trial (at least 550 in the MRI sub-study). Final results of RESTART will be analysed and disseminated in 2019. ISRCTN71907627 (www.isrctn.com/ISRCTN71907627). Prospectively registered on 25 April 2013.

  20. Reasons for non-recruitment of eligible patients to a randomised controlled trial of secondary prevention after intracerebral haemorrhage: observational study.

    PubMed

    Maxwell, Amy E; MacLeod, Mary Joan; Joyson, Anu; Johnson, Sharon; Ramadan, Hawraman; Bellfield, Ruth; Byrne, Anthony; McGhee, Caroline; Rudd, Anthony; Price, Fiona; Vasileiadis, Evangelos; Holden, Melinda; Hewitt, Jonathan; Carpenter, Michael; Needle, Ann; Valentine, Stacey; Patel, Farzana; Harrington, Frances; Mudd, Paul; Emsley, Hedley; Gregary, Bindu; Kane, Ingrid; Muir, Keith; Tiwari, Divya; Owusu-Agyei, Peter; Temple, Natalie; Sekaran, Lakshmanan; Ragab, Suzanne; England, Timothy; Hedstrom, Amanda; Jones, Phil; Jones, Sarah; Doherty, Mandy; McCarron, Mark O; Cohen, David L; Tysoe, Sharon; Al-Shahi Salman, Rustam

    2017-04-05

    Recruitment to randomised prevention trials is challenging, not least for intracerebral haemorrhage (ICH) associated with antithrombotic drug use. We investigated reasons for not recruiting apparently eligible patients at hospital sites that keep screening logs in the ongoing REstart or STop Antithrombotics Randomised Trial (RESTART), which seeks to determine whether to start antiplatelet drugs after ICH. By the end of May 2015, 158 participants had been recruited at 108 active sites in RESTART. The trial coordinating centre invited all sites that kept screening logs to submit screening log data, followed by one reminder. We checked the integrity of data, focused on the completeness of data about potentially eligible patients and categorised the reasons they were not randomised. Of 108 active sites, 39 (36%) provided usable screening log data over a median of ten (interquartile range = 5-13) months of recruitment per site. During this time, sites screened 633 potentially eligible patients and randomised 53 (8%) of them. The main reasons why 580 patients were not randomised were: 43 (7%) patients started anticoagulation, 51 (9%) patients declined, 148 (26%) patients' stroke physicians were not uncertain about using antiplatelet drugs, 162 (28%) patients were too unwell and 176 (30%) patients were not randomised due to other reasons. RESTART recruited ~8% of eligible patients. If more physicians were uncertain about the therapeutic dilemma that RESTART is addressing, RESTART could have recruited up to four times as many participants. The trial coordinating centre continues to engage with physicians about their uncertainty. EU Clinical Trials, EudraCT 2012-003190-26 . Registered on 3 July 2012.

  1. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von llmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage of a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach and allows two replicas of the same process to run on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts the two replica processes if an inconsistency in their computation is detected. This approach is very cost efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.

  2. Checkpoint-dependent RNR induction promotes fork restart after replicative stress.

    PubMed

    Morafraile, Esther C; Diffley, John F X; Tercero, José Antonio; Segurado, Mónica

    2015-01-20

    The checkpoint kinase Rad53 is crucial to regulate DNA replication in the presence of replicative stress. Under conditions that interfere with the progression of replication forks, Rad53 prevents Exo1-dependent fork degradation. However, although EXO1 deletion avoids fork degradation in rad53 mutants, it does not suppress their sensitivity to the ribonucleotide reductase (RNR) inhibitor hydroxyurea (HU). In this case, the inability to restart stalled forks is likely to account for the lethality of rad53 mutant cells after replication blocks. Here we show that Rad53 regulates replication restart through the checkpoint-dependent transcriptional response, and more specifically, through RNR induction. Thus, in addition to preventing fork degradation, Rad53 prevents cell death in the presence of HU by regulating RNR expression and localization. When RNR is induced in the absence of Exo1 and RNR negative regulators, cell viability of rad53 mutants treated with HU is increased and the ability of replication forks to restart after replicative stress is restored.

  3. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
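
    The paper's algorithms are tailored to RMALB/S, but the restart idea in simulated annealing, resetting the temperature so the search can escape a stagnated region while retaining the best solution found, can be sketched generically. In the sketch below, the swap-based neighborhood move and the toy sequencing objective are illustrative assumptions, not the paper's model:

        import math
        import random

        def restarted_sa(cost, x0, t0=10.0, alpha=0.95, t_min=1e-3, restarts=3):
            best, best_c = x0[:], cost(x0)
            x, c = x0[:], best_c
            for _ in range(restarts):       # restart: reset the temperature
                t = t0
                while t > t_min:
                    y = x[:]                # neighbor: swap two positions
                    i, j = random.sample(range(len(y)), 2)
                    y[i], y[j] = y[j], y[i]
                    cy = cost(y)
                    if cy < c or random.random() < math.exp((c - cy) / t):
                        x, c = y, cy
                        if c < best_c:
                            best, best_c = x[:], c
                    t *= alpha              # geometric cooling
            return best, best_c

        # Toy objective: penalize adjacent identical "models" in the sequence.
        seq = [0, 0, 1, 1, 2, 2]
        cost = lambda s: sum(a == b for a, b in zip(s, s[1:]))
        print(restarted_sa(cost, seq))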

  4. ReSTART: A Novel Framework for Resource-Based Triage in Mass-Casualty Events.

    PubMed

    Mills, Alex F; Argon, Nilay T; Ziya, Serhan; Hiestand, Brian; Winslow, James

    2014-01-01

    Current guidelines for mass-casualty triage do not explicitly use information about resource availability. Even though this limitation has been widely recognized, how it should be addressed remains largely unexplored. The authors present a novel framework developed using operations research methods to account for resource limitations when determining priorities for transportation of critically injured patients. To illustrate how this framework can be used, they also develop two specific example methods, named ReSTART and Simple-ReSTART, both of which extend the widely adopted triage protocol Simple Triage and Rapid Treatment (START) by using a simple calculation to determine priorities based on the relative scarcity of transportation resources. The framework is supported by three techniques from operations research: mathematical analysis, optimization, and discrete-event simulation. The authors' algorithms were developed using mathematical analysis and optimization and then extensively tested using 9,000 discrete-event simulations on three distributions of patient severity (representing low, random, and high acuity). For each incident, the expected number of survivors was calculated under START, ReSTART, and Simple-ReSTART. A web-based decision support tool was constructed to help providers make prioritization decisions in the aftermath of mass-casualty incidents based on ReSTART. In simulations, ReSTART resulted in significantly lower mortality than START regardless of which severity distribution was used (paired t test, p<.01). Mean decrease in critical mortality, the percentage of immediate and delayed patients who die, was 8.5% for the low-acuity distribution (range −2.2% to 21.1%), 9.3% for the random distribution (range −0.2% to 21.2%), and 9.1% for the high-acuity distribution (range −0.7% to 21.1%). Although the critical mortality improvement due to ReSTART was different for each of the three severity distributions, the variation was less than 1 percentage point, indicating that the ReSTART policy is relatively robust to different severity distributions. Taking resource limitations into account in mass-casualty triage has the potential to increase the expected number of survivors. Further validation is required before field implementation; however, the framework proposed here can serve as the foundation for future work in this area.

  5. Reconfigurable Analog PDE computation for Baseband and RFComputation

    DTIC Science & Technology

    2017-03-01

    waveguiding PDEs. One-dimensional ladder topologies enable linear delays, linear-phase analog filters, as well as analog beamforming, potentially at RF... performance. This discussion focuses on ODE/PDE analog computation available in SoC FPAA structures. One such computation is a ladder filter (Fig... Implementation of a one-dimensional ladder filter for computing inductor (L) and capacitor (C) lines. These components can be implemented in CABs or as

  6. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    NASA Astrophysics Data System (ADS)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of mathematical filter selection for the balancing of a wheeled robot under conditions of limited computational resources. A solution based on a complementary filter is proposed.

  7. Implementation of a nonlinear concrete cracking algorithm in NASTRAN

    NASA Technical Reports Server (NTRS)

    Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.

    1976-01-01

    A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step to permit restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.

  8. pcircle - A Suite of Scalable Parallel File System Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANG, FEIYI

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  9. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
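
    The implicitly restarted Arnoldi method described above is what the ARPACK library implements, and it is exposed in SciPy through scipy.sparse.linalg.eigs, so a minimal usage example (the test matrix and parameters below are arbitrary choices) looks like:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs

        # A sparse nonsymmetric test matrix; eigs calls ARPACK's implicitly
        # restarted Arnoldi iteration under the hood.
        A = sp.random(2000, 2000, density=1e-3, format="csr", random_state=0) \
            + sp.eye(2000)
        vals, vecs = eigs(A, k=5, which="LM")  # 5 largest-magnitude eigenpairs
        print(np.sort(np.abs(vals))[::-1])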

  10. 40 CFR 1065.930 - Engine starting, restarting, and shutdown.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... cranking time as normal. (c) Respond to engine stalling with the following steps: (1) If the engine stalls... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Engine starting, restarting, and...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Field Testing and Portable Emission Measurement...

  11. Filters | CTIO

    Science.gov Websites

    Filter listings at CTIO: MOSAIC Filters, Hydra Filters, IR Filters, ANDICAM Filters, Y4KCam filters, CTIO Various Filters, and Filters for 5.75x5.75-inch.

  12. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  13. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.
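
    As a concrete instance of the filter categories surveyed, the sketch below designs a simple IIR (recursive) digital filter and applies it as a difference equation; the order, cutoff, and test signal are arbitrary choices for illustration:

        import numpy as np
        from scipy.signal import butter, lfilter

        # 4th-order Butterworth low-pass IIR design: 50 Hz cutoff, 1 kHz rate.
        b, a = butter(4, 50, btype="low", fs=1000)

        # Test signal: a 10 Hz component to keep plus 200 Hz to reject.
        t = np.arange(0, 1, 1e-3)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
        y = lfilter(b, a, x)  # direct-form difference-equation realization
        print(np.round(y[:5], 4))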

  14. 40 CFR 60.5120 - What must I do if I close my SSI unit and then restart it?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Model Rule-Increments of Progress... standards, and operating limits on the date your unit restarts operation. ...

  15. 40 CFR 86.236-94 - Engine starting and restarting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine starting and restarting. 86.236... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.236-94 Engine starting and...

  16. 40 CFR 1065.526 - Repeating void modes or test intervals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... or test intervals in any circumstances that would be inconsistent with good engineering judgment. For... that include hybrid energy storage features or emission controls that involve physical or chemical... shut down, restart the engine. (2) Use good engineering judgment to restart the test sequence using the...

  17. 40 CFR 1065.526 - Repeating void modes or test intervals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... or test intervals in any circumstances that would be inconsistent with good engineering judgment. For... that include hybrid energy storage features or emission controls that involve physical or chemical... shut down, restart the engine. (2) Use good engineering judgment to restart the test sequence using the...

  18. 40 CFR 86.236-94 - Engine starting and restarting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine starting and restarting. 86.236... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.236-94 Engine starting and...

  19. 40 CFR 86.236-94 - Engine starting and restarting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine starting and restarting. 86.236... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.236-94 Engine starting and...

  20. 40 CFR 86.236-94 - Engine starting and restarting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine starting and restarting. 86.236... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.236-94 Engine starting and...

  1. 40 CFR 85.2211 - Engine restart idle test-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine restart idle test-EPA 81. 85.2211 Section 85.2211 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short...

  2. A computational fluid dynamics simulation of the hypersonic flight of the Pegasus(TM) vehicle using an artificial viscosity model and a nonlinear filtering method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mendoza, John Cadiz

    1995-01-01

    The computational fluid dynamics code, PARC3D, is tested to see if its use of non-physical artificial dissipation affects the accuracy of its results. This is accomplished by simulating a shock-laminar boundary layer interaction and several hypersonic flight conditions of the Pegasus(TM) launch vehicle using full artificial dissipation, low artificial dissipation, and the Engquist filter. Before the filter is applied to the PARC3D code, it is validated in one-dimensional and two-dimensional form in a MacCormack scheme against the Riemann problem and the convergent duct problem. For this explicit scheme, the filter shows great improvements in accuracy and computational time over the nonfiltered solutions. However, for the implicit PARC3D code it is found that the best estimate of the Pegasus experimental heat fluxes and surface pressures is the simulation utilizing low artificial dissipation and no filter. The filter does improve accuracy over the artificially dissipative case, but at a computational expense greater than that of the low artificial dissipation case, which has no computational time penalty and shows better results. For the shock-boundary layer simulation, the filter does well in terms of accuracy for a strong impingement shock but not as well for weaker shock strengths. Furthermore, for the latter problem the filter reduces the required computational time to convergence by 18.7 percent.

  3. A real-time recursive filter for the attitude determination of the Spacelab instrument pointing subsystem

    NASA Technical Reports Server (NTRS)

    West, M. E.

    1992-01-01

    A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters are compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown, that for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with less computational requirements than the two real-time nonlinear estimation filters.
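
    The report does not reproduce the IPS filter equations here, but for reference, the discrete linear Kalman filter recursion that such an implementation is built around can be sketched as follows (the constant-velocity model and noise levels are illustrative assumptions, not IPS values):

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update
            S = H @ P_pred @ H.T + R             # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # 1-D constant-velocity example: state is [position, velocity].
        F = np.array([[1.0, 1.0], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q, R = 1e-4 * np.eye(2), np.array([[0.25]])
        x, P = np.zeros(2), np.eye(2)
        for z in [1.1, 2.0, 2.9, 4.2]:
            x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
        print(x)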

  4. An efficient implementation of a high-order filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-03-01

    A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of GLLIP (order of the filter), and the linear system is solved in only O (Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: The filter equation is applied to multiple cells, and then the central cell was only used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
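
    A one-dimensional analogue conveys the structure of such a filter: discretize the Laplacian, form a high-order Helmholtz-type operator, and apply the filter as a single sparse linear solve. The sketch below only illustrates that pattern on a periodic 1-D grid; the coefficients and discretization are not those of the cubed-sphere spectral element model:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        n, p, c = 200, 2, 0.5  # grid points, half-order, filter strength
        # Periodic second-difference operator; -L has a nonnegative symbol.
        L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tolil()
        L[0, -1] = L[-1, 0] = 1.0  # periodic wrap-around
        L = L.tocsr()
        # High-order Helmholtz-type filter operator: I + c * (-L)^p.
        A = (sp.eye(n) + c * ((-L) ** p)).tocsr()

        x = np.arange(n)
        g = np.sin(2 * np.pi * 3 * x / n) \
            + 0.3 * np.random.default_rng(1).standard_normal(n)
        f = spsolve(A, g)  # one sparse solve: small scales damped, wave kept
        print(np.round(f[:5], 3))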

  5. Project Re-Start. A Program for Homeless Adults.

    ERIC Educational Resources Information Center

    Pelzer, Dagmar F.; And Others

    Project Re-Start, of the Dade County Public Schools in Florida, was funded under the Adult Education Act and the Stewart B. McKinney Homeless Assistance Act. Classes in literacy skills, General Educational Development (GED) preparation, English for speakers of other languages, employability skills, and life coping skills were conducted at most of…

  6. 77 FR 65419 - Virginia Electric and Power Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ... restart for North Anna 1 and 2, after the earthquake of August 23, 2011, Virginia Electric and Power... reevaluates the plant's design basis for earthquakes and for associated necessary retrofits. (2) Prior to the approval of restart for North Anna 1 and 2, after the earthquake of August 23, 2011, the licensee should be...

  7. Restartable solid motor stage for shuttle applications

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, D. J.

    1973-01-01

    The application of restartable solid motor stages to shuttle missions has been shown to provide a viable supplement to the shuttle program. Restartable solid motors in the 3000 pound class provide a small expendable transfer stage that reduces the demand on the shuttle for the lower energy missions. Shuttle operational requirements and preliminary performance data provided an input for defining design features required for restartable solid motor applications. These data provided a basis for a configuration definition that is compatible with shuttle operations. Mission by mission analysis showed the impact on a NASA supplied mission model. The results showed a 15% reduction in the number of shuttle flights required. In addition the amount of shuttle capability used to complete the mission objectives was significantly reduced. For example, in the 1979 missions there was a 62% reduction in shuttle capability used. The study also showed that the solid motor could provide a supplement to the TUG that would allow TUGS to be used in a recoverable rather than an expendable mode. The study shows a 71% reduction in the number of TUGs that would be expended.

  8. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.

  9. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
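
    The central kernel of Chebyshev filter diagonalization is applying a filter polynomial p(A) to vectors through the three-term Chebyshev recurrence, with no linear solves. A bare-bones sketch follows; it uses undamped indicator-window coefficients (the damping kernels discussed in the paper are omitted), and the diagonal test matrix serves only to make the filter's effect visible:

        import numpy as np

        def cheb_filter(A, v, lo, hi, a, b, deg=80):
            # Map the spectral interval [lo, hi] to [-1, 1].
            e, d = (hi + lo) / 2.0, (hi - lo) / 2.0
            At = lambda x: (A @ x - e * x) / d
            ta, tb = np.arccos((a - e) / d), np.arccos((b - e) / d)  # ta > tb
            # Chebyshev coefficients of the indicator function of [a, b].
            w = ((ta - tb) / np.pi) * v
            t_prev, t_cur = v, At(v)
            for j in range(1, deg + 1):
                cj = 2.0 * (np.sin(j * ta) - np.sin(j * tb)) / (j * np.pi)
                w = w + cj * t_cur
                t_prev, t_cur = t_cur, 2.0 * At(t_cur) - t_prev  # recurrence
            return w

        # Diagonal test matrix with known spectrum in [0, 2]: the ratio
        # w[i]/v[i] equals the filter polynomial at the i-th eigenvalue.
        n = 500
        A = np.diag(np.linspace(0.0, 2.0, n))
        v = np.random.default_rng(0).standard_normal(n)
        w = cheb_filter(A, v, lo=0.0, hi=2.0, a=0.9, b=1.1)
        print(np.abs(w / v)[[0, 125, 250, 375, 499]].round(3))  # peak near 1.0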

  10. Discovery of the Kalman filter as a practical tool for aerospace and industry

    NASA Technical Reports Server (NTRS)

    Mcgee, L. A.; Schmidt, S. F.

    1985-01-01

    The sequence of events which led the researchers at Ames Research Center to the early discovery of the Kalman filter shortly after its introduction into the literature is recounted. The scientific breakthroughs and reformulations that were necessary to transform Kalman's work into a useful tool for a specific aerospace application are described. The resulting extended Kalman filter, as it is now known, is often still referred to simply as the Kalman filter. As the filter's use gained in popularity in the scientific community, the problems of implementation on small spaceborne and airborne computers led to a square-root formulation of the filter to overcome numerical difficulties associated with computer word length. The work that led to this new formulation is also discussed, including the first airborne computer implementation and flight test. Since then the applications of the extended and square-root formulations of the Kalman filter have grown rapidly throughout the aerospace industry.

  11. Real-valued composite filters for correlation-based optical pattern recognition

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Balendra, Anushia

    1992-01-01

    Advances in the technology of optical devices such as spatial light modulators (SLMs) have influenced the research and growth of optical pattern recognition. In the research leading to this report, the design of real-valued composite filters that can be implemented using currently available SLMs for optical pattern recognition and classification was investigated. The design of real-valued minimum average correlation energy (RMACE) filter was investigated. Proper selection of the phase of the output response was shown to reduce the correlation energy. The performance of the filter was evaluated using computer simulations and compared with the complex filters. It was found that the performance degraded only slightly. Continuing the above investigation, the design of a real filter that minimizes the output correlation energy and the output variance due to noise was developed. Simulation studies showed that this filter had better tolerance to distortion and noise compared to that of the RMACE filter. Finally, the space domain design of RMACE filter was developed and implemented on the computer. It was found that the sharpness of the correlation peak was slightly reduced but the filter design was more computationally efficient than the complex filter.

  12. Reduction of noise and image artifacts in computed tomography by nonlinear filtration of projection images

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer

    2001-07-01

    This study investigates the efficacy of filtering two-dimensional (2D) projection images of Computed Tomography (CT) by nonlinear diffusion filtration in removing statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections, as well as the original noisy projections, were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
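
    Nonlinear anisotropic diffusion filters of this kind are in the Perona-Malik family; a minimal 2-D sketch of one such scheme follows (the conductance function, parameters, and toy projection image are generic choices, not the study's):

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # Differences toward the four nearest neighbors.
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Edge-stopping conductance: small where gradients are large,
                # so edges are preserved while flat regions are smoothed.
                g = lambda d: np.exp(-((d / kappa) ** 2))
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u

        # Toy "projection": a bright square plus Gaussian noise.
        rng = np.random.default_rng(0)
        proj = np.zeros((64, 64)); proj[20:44, 20:44] = 100.0
        noisy = proj + rng.normal(0.0, 5.0, proj.shape)
        print(anisotropic_diffusion(noisy)[32, 30:35].round(1))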

  13. Comparison of Nonlinear Filtering Techniques for Lunar Surface Roving Navigation

    NASA Technical Reports Server (NTRS)

    Kimber, Lemon; Welch, Bryan W.

    2008-01-01

    Leading up to the Apollo missions, the Extended Kalman Filter, a modified version of the Kalman Filter, was developed to estimate the state of a nonlinear system. Throughout the Apollo missions, Potter's Square Root Filter was used for lunar navigation. Now that NASA is returning to the Moon, the filters used during the Apollo missions must be compared to the filters that have been developed since that time: the Bierman-Thornton Filter (UD) and the Unscented Kalman Filter (UKF). The UD Filter involves factoring the covariance matrix into UDU^T and has similar accuracy to the Square Root Filter; however, it requires less computation time. Conversely, the UKF, which uses sigma points, is much more computationally intensive than any of the other filters; however, it produces the most accurate results. The Extended Kalman Filter, Potter's Square Root Filter, the Bierman-Thornton UD Filter, and the Unscented Kalman Filter each prove to be the most accurate filter depending on the specific conditions of the navigation system.
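
    Of the filters compared, the UKF is the one with no Apollo-era counterpart; its core step is the unscented transform, which propagates deterministically chosen sigma points through the nonlinearity instead of linearizing it. A textbook-style sketch (the scaling parameters and test function are arbitrary):

        import numpy as np

        def unscented_transform(f, x, P, alpha=1e-1, beta=2.0, kappa=0.0):
            n = len(x)
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * P)   # matrix square root
            sigma = np.vstack([x, x + S.T, x - S.T])  # 2n+1 sigma points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = wm[0] + (1 - alpha**2 + beta)
            Y = np.array([f(s) for s in sigma])     # propagate nonlinearly
            y = wm @ Y                              # transformed mean
            Py = (wc[:, None] * (Y - y)).T @ (Y - y)  # transformed covariance
            return y, Py

        # Propagate a Gaussian state through a mild nonlinearity.
        f = lambda s: np.array([np.sin(s[0]), s[0] * s[1]])
        y, Py = unscented_transform(f, np.array([0.3, 1.0]), 0.01 * np.eye(2))
        print(y, Py, sep="\n")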

  14. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    DTIC Science & Technology

    2017-05-01

    contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter, and Teager Kaiser... Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz... suggest that computer automated determination using high-pass filtering is a potential objective alternative to visual determination in human

  15. Modeling Flow Past a Tilted Vena Cava Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Wang, S L

    Inferior vena cava filters are medical devices used to prevent pulmonary embolism (PE) from deep vein thrombosis. In particular, retrievable filters are well-suited for patients who are unresponsive to anticoagulation therapy and whose risk of PE decreases with time. The goal of this work is to use computational fluid dynamics to evaluate the flow past an unoccluded and partially occluded Celect inferior vena cava filter. In particular, the hemodynamic response to thrombus volume and filter tilt is examined, and the results are compared with flow conditions that are known to be thrombogenic. A computer model of the filter inside a model vena cava is constructed using high resolution digital photographs and methods of computer aided design. The models are parameterized using the Overture software framework, and a collection of overlapping grids is constructed to discretize the flow domain. The incompressible Navier-Stokes equations are solved, and the characteristics of the flow (i.e., velocity contours and wall shear stresses) are computed. The volume of stagnant and recirculating flow increases with thrombus volume. In addition, as the filter increases tilt, the cava wall adjacent to the tilted filter is subjected to low velocity flow that gives rise to regions of low wall shear stress. The results demonstrate the ease of IVC filter modeling with the Overture software framework. Flow conditions caused by the tilted Celect filter may elevate the risk of intrafilter thrombosis and facilitate vascular remodeling. This latter condition also increases the risk of penetration and potential incorporation of the hook of the filter into the vena caval wall, thereby complicating filter retrieval. Consequently, severe tilt at the time of filter deployment may warrant early clinical intervention.

  16. Computational Modeling of Blood Flow in the TrapEase Inferior Vena Cava Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, M A; Henshaw, W D; Wang, S L

    To evaluate the flow hemodynamics of the TrapEase vena cava filter using three dimensional computational fluid dynamics, including simulated thrombi of multiple shapes, sizes, and trapping positions. The study was performed to identify potential areas of recirculation and stagnation and areas in which trapped thrombi may influence intrafilter thrombosis. Computer models of the TrapEase filter, thrombi (volumes ranging from 0.25 mL to 2 mL, 3 different shapes), and a 23 mm diameter cava were constructed. The hemodynamics of steady-state flow at Reynolds number 600 was examined for the unoccluded and partially occluded filter. Axial velocity contours and wall shear stresses were computed. Flow in the unoccluded TrapEase filter experienced minimal disruption, except near the superior and inferior tips where low velocity flow was observed. For spherical thrombi in the superior trapping position, stagnant and recirculating flow was observed downstream of the thrombus; the volume of stagnant flow and the peak wall shear stress increased monotonically with thrombus volume. For inferiorly trapped spherical thrombi, marked disruption to the flow was observed along the cava wall ipsilateral to the thrombus and in the interior of the filter. Spherically shaped thrombus produced a lower peak wall shear stress than conically shaped thrombus and a larger peak stress than ellipsoidal thrombus. We have designed and constructed a computer model of the flow hemodynamics of the TrapEase IVC filter with varying shapes, sizes, and positions of thrombi. The computer model offers several advantages over in vitro techniques including: improved resolution, ease of evaluating different thrombus sizes and shapes, and easy adaptation for new filter designs and flow parameters. Results from the model also support a previously reported finding from photochromic experiments that suggest the inferior trapping position of the TrapEase IVC filter leads to an intra-filter region of recirculating/stagnant flow with very low shear stress that may be thrombogenic.

  17. 40 CFR 86.1336-84 - Engine starting, restarting, and shutdown.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (4) If a failure to start occurs during the hot start portion of the test and is caused by engine... stalling. (1) If the engine stalls during the initial idle period of either the cold or hot start test, the engine shall be restarted immediately using the appropriate cold or hot starting procedure and the test...

  18. 40 CFR 86.1336-84 - Engine starting, restarting, and shutdown.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (4) If a failure to start occurs during the hot start portion of the test and is caused by engine... stalling. (1) If the engine stalls during the initial idle period of either the cold or hot start test, the engine shall be restarted immediately using the appropriate cold or hot starting procedure and the test...

  19. 40 CFR 86.1336-84 - Engine starting, restarting, and shutdown.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (4) If a failure to start occurs during the hot start portion of the test and is caused by engine... stalling. (1) If the engine stalls during the initial idle period of either the cold or hot start test, the engine shall be restarted immediately using the appropriate cold or hot starting procedure and the test...

  20. 78 FR 46395 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-31

    ... after logon 10:00:020--CAS receives a message from Client Application --Counter re-starts 10:00:070--No... receives a message from Client Application --Counter restarts (2) 10:00:000--Heartbeat Request sent to Client Application within login 10:00:020--CAS receives a message from Client Application --Counter re...

  1. Clinical application of computed tomography for the diagnosis of feline hepatic lipidosis.

    PubMed

    Nakamura, Momoko; Chen, Hui-Min; Momoi, Yasuyuki; Iwasaki, Toshiroh

    2005-11-01

    The usefulness of computed tomography (CT) for the diagnosis of feline hepatic lipidosis (FHL) was evaluated. Liver CT number was 54.7+/-5.6 HU (mean+/-SD) in 26 healthy cats. We fasted 6 healthy cats for 72 hr to induce FHL experimentally, and the cats were assessed by CT and serum biochemical analysis. Liver CT number of the six cats was 53.8+/-3.0 HU before fasting, 46.8+/-2.4 HU after fasting, and 50.2+/-3.6 HU two weeks after feeding was restarted. The decreased CT number was associated with the elevation of serum non-esterified fatty acid (NEFA) and beta-hydroxybutyrate levels. These results indicate that measurement of the CT number of the liver is an effective procedure for the diagnosis of FHL.

  2. Fluid behavior in microgravity environment

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Lee, C. C.; Tsao, Y. D.

    1990-01-01

    The instability of the liquid-gas interface can be induced by the presence of longitudinal and lateral accelerations, vehicle vibration, and rotational fields of spacecraft in a microgravity environment. In spacecraft design, the requirements for settled propellant are different for tank pressurization, engine restart, venting, or propellant transfer. In this paper, simulations of the dynamical behavior of liquid propellant, fluid reorientation, and propellant resettling have been carried out on a CRAY X-MP supercomputer to model fluid management in a microgravity environment. Characteristics of slosh waves excited by the restoring force field of gravity jitters have also been investigated.

  3. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient based optical flow implementation, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is used at the initial level on original input images and afterwards, the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade algorithm, we have identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation value of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
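
    As a minimal sketch of this pre-filtering idea, the snippet below (assuming OpenCV's pyramidal Lucas-Kanade; the fixed sigma is a placeholder for the paper's intensity-based estimate of the Gaussian standard deviation) smooths both frames before tracking:

      import cv2

      def lk_flow_with_prefilter(prev_gray, next_gray, sigma=1.5):
          # Gaussian pre-filtering before pyramidal Lucas-Kanade; the kernel
          # size (0, 0) tells OpenCV to derive it from sigma.
          prev_s = cv2.GaussianBlur(prev_gray, (0, 0), sigma)
          next_s = cv2.GaussianBlur(next_gray, (0, 0), sigma)
          pts = cv2.goodFeaturesToTrack(prev_s, maxCorners=200,
                                        qualityLevel=0.01, minDistance=7)
          new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_s, next_s,
                                                          pts, None)
          good = status.ravel() == 1
          return pts[good], new_pts[good]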

  4. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    PubMed

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.

  5. Mouse embryonic stem cells have increased capacity for replication fork restart driven by the specific Filia-Floped protein complex.

    PubMed

    Zhao, Bo; Zhang, Weidao; Cun, Yixian; Li, Jingzheng; Liu, Yan; Gao, Jing; Zhu, Hongwen; Zhou, Hu; Zhang, Rugang; Zheng, Ping

    2018-01-01

    Pluripotent stem cells (PSCs) harbor constitutive DNA replication stress during their rapid proliferation and the consequent genome instability hampers their applications in regenerative medicine. It is therefore important to understand the regulatory mechanisms of replication stress response in PSCs. Here, we report that mouse embryonic stem cells (ESCs) are superior to differentiated cells in resolving replication stress. Specifically, ESCs utilize a unique Filia-Floped protein complex-dependent mechanism to efficiently promote the restart of stalled replication forks, therefore maintaining genomic stability. The ESC-specific Filia-Floped complex resides on replication forks under normal conditions. Replication stress stimulates their recruitment to stalling forks and the serine 151 residue of Filia is phosphorylated in an ATR-dependent manner. This modification enables the Filia-Floped complex to act as a functional scaffold, which then promotes the stalling fork restart through a dual mechanism: both enhancing recruitment of the replication fork restart protein, Blm, and stimulating ATR kinase activation. In the Blm pathway, the scaffolds recruit the E3 ubiquitin ligase, Trim25, to the stalled replication forks, and in turn Trim25 tethers and concentrates Blm at stalled replication forks through ubiquitination. In differentiated cells, the recruitment of the Trim25-Blm complex to replication forks and the activation of ATR signaling are much less robust due to lack of the ESC-specific Filia-Floped scaffold. Thus, our study reveals that ESCs utilize an additional and unique regulatory layer to efficiently promote the stalled fork restart and maintain genomic stability.

  6. PriC-mediated DNA replication restart requires PriC complex formation with the single-stranded DNA-binding protein.

    PubMed

    Wessel, Sarah R; Marceau, Aimee H; Massoni, Shawn C; Zhou, Ruobo; Ha, Taekjip; Sandler, Steven J; Keck, James L

    2013-06-14

    Frequent collisions between cellular DNA replication complexes (replisomes) and obstacles such as damaged DNA or frozen protein complexes make DNA replication fork progression surprisingly sporadic. These collisions can lead to the ejection of replisomes prior to completion of replication, which, if left unrepaired, results in bacterial cell death. As such, bacteria have evolved DNA replication restart mechanisms that function to reload replisomes onto abandoned DNA replication forks. Here, we define a direct interaction between PriC, a key Escherichia coli DNA replication restart protein, and the single-stranded DNA-binding protein (SSB), a protein that is ubiquitously associated with DNA replication forks. PriC/SSB complex formation requires evolutionarily conserved residues from both proteins, including a pair of Arg residues from PriC and the C terminus of SSB. In vitro, disruption of the PriC/SSB interface by sequence changes in either protein blocks the first step of DNA replication restart, reloading of the replicative DnaB helicase onto an abandoned replication fork. Consistent with the critical role of PriC/SSB complex formation in DNA replication restart, PriC variants that cannot bind SSB are non-functional in vivo. Single-molecule experiments demonstrate that PriC binding to SSB alters SSB/DNA complexes, exposing single-stranded DNA and creating a platform for other proteins to bind. These data lead to a model in which PriC interaction with SSB remodels SSB/DNA structures at abandoned DNA replication forks to create a DNA structure that is competent for DnaB loading.

  7. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  8. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing using GPU. We adopted CUDA, an application programming interface for GPU by NVIDIA, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  9. Understanding Victims of Technological Disaster: Beliefs and Worries of Three Mile Island.

    ERIC Educational Resources Information Center

    Prince-Embury, Sandra; Rooney, James

    The primary purpose of the present study was to examine how prevalent were concerns about restarting Three Mile Island nuclear reactor Unit I among people within a five-mile radius of the plant four years after the accident involving reactor Unit II. Also explored were concerns related to expectations about the restart of Unit I, perception of…

  10. 77 FR 18874 - Virginia Electric and Power Company; Receipt of Request for Action

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    ... follows: (1) Prior to the approval of restart for North Anna 1 and 2, after the earthquake of August 23... reevaluates the plant's design basis for earthquakes and for associated retrofits. (2) Prior to the approval of restart for North Anna 1 and 2, after the earthquake of August 23, 2011, the licensee should be...

  11. Nature of radio feature formed by re-started jet activity in 3C 84 and its relation with γ-ray emissions

    NASA Astrophysics Data System (ADS)

    Nagai, H.; Chida, H.; Kino, M.; Orienti, M.; D'Ammando, F.; Giovannini, G.; Hiura, K.

    2016-02-01

    Re-started jet activity occurred in the bright nearby radio source 3C 84 in about 2005. The re-started jet is forming a prominent component (namely C3) at the tip of the jet. The component has shown an increase in radio flux density for more than 7 years while the radio spectrum remains optically thin. This suggests that the component is the head of a radio lobe including a hotspot where particle acceleration occurs. Thus, 3C 84 is a unique laboratory to study the physical properties at the very early stage of radio source evolution. Another important aspect is that high energy and very high energy γ-ray emissions are detected from this source. The quest for the site of γ-ray emission is quite important to obtain a better understanding of γ-ray emission mechanisms in radio galaxies. In this paper, we review the observational results from very long baseline interferometry (VLBI) monitoring of 3C 84 reported in a series of our previous papers. We discuss the nature of the re-started jet/radio lobe and its relation with high-energy emission.

  12. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three channel color images, a widely used technique is the vector median filter, in which color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors by distance. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
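
    A CPU reference sketch of the brute-force computation described above (Python with NumPy; the function name is ours) makes the cost structure explicit — every pixel in a window is compared against every other — which is exactly what motivates the GPU port:

      import numpy as np

      def vector_median_filter(img, w=3):
          # Brute-force vector median: in each w x w window keep the pixel
          # whose summed L2 distance to all other window pixels is smallest.
          pad = w // 2
          padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)),
                          mode='edge')
          out = np.empty(img.shape)
          for y in range(img.shape[0]):
              for x in range(img.shape[1]):
                  win = padded[y:y + w, x:x + w].reshape(-1, 3)
                  d = np.linalg.norm(win[:, None, :] - win[None, :, :],
                                     axis=2).sum(axis=1)
                  out[y, x] = win[np.argmin(d)]
          return out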

  13. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k^2 − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds (120 frames per second) at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
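
    A direct 1-D sketch of the van Herk/Gil-Werman decomposition the architecture builds on (Python; the helper name is ours): per-segment prefix and suffix maxima are merged per output sample, costing roughly three comparisons per sample independent of the window size k.

      import numpy as np

      def hgw_running_max(x, k):
          # Centered running max of window length k via the HGW algorithm.
          n = len(x)
          m = ((n + k - 1) // k + 1) * k      # pad so every window fits
          xp = np.full(m, -np.inf)
          xp[:n] = x
          R = np.empty(m)                     # prefix max within each segment
          L = np.empty(m)                     # suffix max within each segment
          for s in range(0, m, k):
              R[s] = xp[s]
              for i in range(s + 1, s + k):
                  R[i] = max(R[i - 1], xp[i])
              L[s + k - 1] = xp[s + k - 1]
              for i in range(s + k - 2, s - 1, -1):
                  L[i] = max(L[i + 1], xp[i])
          half = k // 2
          out = np.empty(n)
          for i in range(n):
              lo, hi = i - half, i - half + k - 1   # spans at most 2 segments
              left = L[lo] if lo >= 0 else -np.inf
              out[i] = max(left, R[hi])
          return out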

  14. Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System.

    PubMed

    Zhang, Hong; Kong, Vic; Huang, Ke; Jin, Jian-Yue

    2017-02-01

    To present our experiences in understanding and minimizing bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a clinical cone beam computed tomography system. Bowtie-filter position and profile variations during gantry rotation were studied. Two previously proposed strategies (A and B) were applied to the clinical cone beam computed tomography system to correct bowtie-filter crescent artifacts. Physical calibration and analytical approaches were used to minimize the norm phantom misalignment and to correct for bowtie-filter normalization artifacts. A combined procedure to reduce bowtie-filter crescent artifacts and bowtie-filter normalization artifacts was proposed, tested on a norm phantom, a CatPhan phantom, and a patient, and evaluated using the standard deviation of Hounsfield units along a sampling line. The bowtie-filter exhibited not only a translational shift but also an amplitude variation in its projection profile during gantry rotation. Strategy B was slightly better than strategy A in minimizing bowtie-filter crescent artifacts, possibly because it corrected the amplitude variation, suggesting that the amplitude variation plays a role in bowtie-filter crescent artifacts. The physical calibration largely reduced the misalignment-induced bowtie-filter normalization artifacts, and the analytical approach further reduced them. The combined procedure minimized both artifact types, with Hounsfield unit standard deviations of 63.2, 45.0, 35.0, and 18.8 HU for no correction, correction of bowtie-filter crescent artifacts only, correction of bowtie-filter normalization artifacts only, and correction of both, respectively. The combined procedure also reduced bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a CatPhan phantom and a patient. We have developed a step-by-step procedure that can be directly used in clinical cone beam computed tomography systems to minimize both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts.

  15. An audit of manufacturers' implementation of reconstruction filters in single-photon emission computed tomography.

    PubMed

    Lawson, Richard S; White, Duncan; Cade, Sarah C; Hall, David O; Kenny, Bob; Knight, Andy; Livieratos, Lefteris; Nijran, Kuldip

    2013-08-01

    The Nuclear Medicine Software Quality Group of the Institute of Physics and Engineering in Medicine has conducted an audit to compare the ways in which different manufacturers implement the filters used in single-photon emission computed tomography. The aim of the audit was to identify differences between manufacturers' implementations of the same filter and to find means for converting parameters between systems. Computer-generated data representing projection images of an ideal test object were processed using seven different commercial nuclear medicine systems. Images were reconstructed using filtered back projection and a Butterworth filter with three different cutoff frequencies and three different orders. The audit found large variations between the frequency-response curves of what were ostensibly the same filters on different systems. The differences were greater than could be explained simply by different Butterworth formulae. Measured cutoff frequencies varied between 40 and 180% of that expected. There was also occasional confusion with respect to frequency units. The audit concluded that the practical implementation of filtering, such as the size of the kernel, has a profound effect on the results, producing large differences between systems. Nevertheless, this work shows how users can quantify the frequency response of their own systems so that it will be possible to compare two systems in order to find filter parameters on each that produce equivalent results. These findings will also make it easier for users to replicate filters similar to other published results, even if they are using a different computer system.
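
    The comparison the audit recommends can be done numerically. The sketch below (Python with NumPy; names are ours) uses one common Butterworth convention — vendors differ in the exact formula and in frequency units, which is precisely the audit's point — and reads off the frequency where the response falls to 1/sqrt(2):

      import numpy as np

      def butterworth_response(f, fc=0.15, order=5):
          # One common SPECT convention: B(f) = 1 / sqrt(1 + (f/fc)**(2n)).
          # Other systems omit the square root or scale fc differently.
          return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

      f = np.linspace(0.0, 0.5, 501)          # cycles/pixel, up to Nyquist
      resp = butterworth_response(f)
      measured_fc = f[np.argmin(np.abs(resp - 1.0 / np.sqrt(2.0)))]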

  16. Computer-aided design of nano-filter construction using DNA self-assembly

    NASA Astrophysics Data System (ADS)

    Mohammadzadegan, Reza; Mohabatkar, Hassan

    2007-01-01

    Computer-aided design plays a fundamental role in both top-down and bottom-up nano-system fabrication. This paper presents a bottom-up nano-filter patterning process based on DNA self-assembly. In this study we designed a new method to construct fully designed nano-filters with pores between 5 nm and 9 nm in diameter. Our calculations illustrated that by constructing such a nano-filter we would be able to separate many molecules.

  17. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.

  18. LLSURE: local linear SURE-based edge-preserving image filtering.

    PubMed

    Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin

    2013-01-01

    In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
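
    A minimal sketch of the local linear model underlying such a filter (Python with SciPy; a fixed regularizer eps stands in for the per-pixel SURE-selected one, and the names are ours). Because everything reduces to box filters, the cost is independent of the kernel size, matching the linear-time property claimed above:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_linear_filter(img, radius=4, eps=0.02):
          # Edge-preserving filtering from a per-window linear model
          # q = a * img + b, computed with O(1)-per-pixel box filters;
          # img is a float grayscale array, e.g. scaled to [0, 1].
          size = 2 * radius + 1
          mean_i = uniform_filter(img, size)
          var_i = uniform_filter(img * img, size) - mean_i * mean_i
          a = var_i / (var_i + eps)           # slope: ~1 at edges, ~0 in flats
          b = (1.0 - a) * mean_i              # intercept
          return uniform_filter(a, size) * img + uniform_filter(b, size)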

  19. On the application of under-decimated filter banks

    NASA Technical Reports Server (NTRS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-01-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.

  20. On the application of under-decimated filter banks

    NASA Astrophysics Data System (ADS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-11-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.

  1. Range image registration based on hash map and moth-flame optimization

    NASA Astrophysics Data System (ADS)

    Zou, Li; Ge, Baozhen; Chen, Lei

    2018-03-01

    Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.

  2. Re-starting smoking in the postpartum period after receiving a smoking cessation intervention: a systematic review.

    PubMed

    Jones, Matthew; Lewis, Sarah; Parrott, Steve; Wormall, Stephen; Coleman, Tim

    2016-06-01

    In pregnant smoking cessation trial participants, to estimate (1) among women abstinent at the end of pregnancy, the proportion who re-start smoking at time-points afterwards (primary analysis) and (2) among all trial participants, the proportion smoking at the end of pregnancy and at selected time-points during the postpartum period (secondary analysis). Trials identified from two Cochrane reviews plus searches of Medline and EMBASE. Twenty-seven trials were included. The included trials were randomized or quasi-randomized trials of within-pregnancy cessation interventions given to smokers who reported abstinence both at end of pregnancy and at one or more defined time-points after birth. Outcomes were validated biochemically and self-reported continuous abstinence from smoking and 7-day point prevalence abstinence. The primary random-effects meta-analysis used longitudinal data to estimate mean pooled proportions of re-starting smoking; a secondary analysis used cross-sectional data to estimate the mean proportions smoking at different postpartum time-points. Subgroup analyses were performed on biochemically validated abstinence. The pooled mean proportion re-starting at 6 months postpartum was 43% [95% confidence interval (CI) = 16-72%, I² = 96.7%] (11 trials, 571 abstinent women). The pooled mean proportion smoking at the end of pregnancy was 87% (95% CI = 84-90%, I² = 93.2%) and 94% (95% CI = 92-96%, I² = 88%) at 6 months postpartum (23 trials, 9262 trial participants). Findings were similar when using biochemically validated abstinence. In clinical trials of smoking cessation interventions during pregnancy, only 13% are abstinent at term. Of these, 43% re-start by 6 months postpartum. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  3. Are lapsed donors willing to resume blood donation, and what determines their motivation to do so?

    PubMed

    van Dongen, Anne; Abraham, Charles; Ruiter, Robert A C; Schaalma, Herman P; de Kort, Wim L A M; Dijkstra, J Anneke; Veldhuizen, Ingrid J T

    2012-06-01

    This study investigated the possibility of rerecruiting lapsed blood donors. Reasons for donation cessation, motivation to restart donation, and modifiable components of donation motivation were examined. We distinguished between lapsed donors who had passively withdrawn by merely not responding to donation invitations and donors who had contacted the blood bank to actively withdraw. A cross-sectional survey was sent to 400 actively lapsed donors and to 400 passively lapsed donors, measuring intention to restart donation and psychological correlates of restart intention. The data were analyzed using multiple regression analyses. The response rate among actively lapsed donors was higher than among passively lapsed donors (37% vs. 25%). Actively lapsed donors typically ceased donating because of physical reactions, while passively lapsed donors quit because of a busy lifestyle. Nonetheless, 51% of actively lapsed responders and 80% of passively lapsed responders were willing to restart donations. Multiple regression analysis showed that, for passively lapsed donors, cognitive attitude was the strongest correlate of intention to donate in the future (β=0.605, p<0.001), with affective attitude (β=0.239, p<0.05) and self-efficacy (β=0.266, p<0.001) explaining useful proportions of the variance as well. For actively lapsed donors, cognitive attitude was also the strongest correlate of intention (β=0.601, p<0.001), with affective attitude (β=0.345, p<0.001) and moral norm (β=-0.118, p<0.05) explaining smaller proportions of the variance. The majority of lapsed donors indicated a moderate to high intention to restart donations. Interventions focusing on boosting cognitive and affective attitudes and self-efficacy could further raise such intentions. © 2011 American Association of Blood Banks.

  4. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN.

    PubMed

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute.

  5. EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN*

    PubMed Central

    Nguyen, Dang-Manh; Peters, Jörg

    2017-01-01

    Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP0 filters outperform the more complex known boundary filters. NP0 filters typically reduce the L∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute. PMID:29081643

  6. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach for high accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
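
    A toy sketch of the fixed-point question the paper explores (Python with NumPy; the Q-format choice and all names are illustrative only): quantize the arithmetic to a given number of fractional bits and measure the deviation from the floating-point result to judge whether a candidate bit-width is sufficient.

      import numpy as np

      FRAC_BITS = 16                          # illustrative Q-format choice
      SCALE = 1 << FRAC_BITS

      def to_fixed(x):
          return np.round(np.asarray(x) * SCALE).astype(np.int64)

      def fixed_mul(a, b):
          return (a * b) >> FRAC_BITS         # rescale after integer multiply

      # One gain-times-innovation style dot product, float vs. fixed point.
      w = np.array([0.25, -0.5, 0.125])
      v = np.array([0.3, 0.7, -0.2])
      float_result = np.dot(w, v)
      fixed_result = fixed_mul(to_fixed(w), to_fixed(v)).sum() / SCALE
      print(abs(float_result - fixed_result)) # grows as FRAC_BITS shrinks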

  7. Commentary: restarting NTD programme activities after the Ebola outbreak in Liberia.

    PubMed

    Thomas, Brent C; Kollie, Karsor; Koudou, Benjamin; Mackenzie, Charles

    2017-05-01

    It is widely known that the recent Ebola Virus Disease (EVD) outbreak in West Africa caused a serious disruption to the national health system, with many ongoing disease-focused programmes, such as mass drug administration (MDA) for onchocerciasis (ONC), lymphatic filariasis (LF) and schistosomiasis (SCH), being suspended or scaled down. As these MDA programmes attempt to restart post-EVD, it is important to understand the challenges that may be encountered. This commentary addresses the opinions of the major health sectors involved, as well as those of community members, regarding logistic needs and challenges faced as these important public health programmes consider restarting. There appears to be a strong desire by the communities to resume NTD programme activities, although it is clear that some important challenges remain, the most prominent being those resulting from the severe loss of trained staff.

  8. A resolved two-way coupled CFD/6-DOF approach for predicting embolus transport and the embolus-trapping efficiency of IVC filters.

    PubMed

    Aycock, Kenneth I; Campbell, Robert L; Manning, Keefe B; Craven, Brent A

    2017-06-01

    Inferior vena cava (IVC) filters are medical devices designed to provide a mechanical barrier to the passage of emboli from the deep veins of the legs to the heart and lungs. Despite decades of development and clinical use, IVC filters still fail to prevent the passage of all hazardous emboli. The objective of this study is to (1) develop a resolved two-way computational model of embolus transport, (2) provide verification and validation evidence for the model, and (3) demonstrate the ability of the model to predict the embolus-trapping efficiency of an IVC filter. Our model couples computational fluid dynamics simulations of blood flow to six-degree-of-freedom simulations of embolus transport and resolves the interactions between rigid, spherical emboli and the blood flow using an immersed boundary method. Following model development and numerical verification and validation of the computational approach against benchmark data from the literature, embolus transport simulations are performed in an idealized IVC geometry. Centered and tilted filter orientations are considered using a nonlinear finite element-based virtual filter placement procedure. A total of 2048 coupled CFD/6-DOF simulations are performed to predict the embolus-trapping statistics of the filter. The simulations predict that the embolus-trapping efficiency of the IVC filter increases with increasing embolus diameter and increasing embolus-to-blood density ratio. Tilted filter placement is found to decrease the embolus-trapping efficiency compared with centered filter placement. Multiple embolus-trapping locations are predicted for the IVC filter, and the trapping locations are predicted to shift upstream and toward the vessel wall with increasing embolus diameter. Simulations of the injection of successive emboli into the IVC are also performed and reveal that the embolus-trapping efficiency decreases with increasing thrombus load in the IVC filter. In future work, the computational tool could be used to investigate IVC filter design improvements, the effect of patient anatomy on embolus transport and IVC filter embolus-trapping efficiency, and, with further development and validation, optimal filter selection and placement on a patient-specific basis.

  9. Pulse cleaning flow models and numerical computation of candle ceramic filters.

    PubMed

    Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang

    2002-04-01

    Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. A standard turbulence model is shown to be suitable for the design computation of the reverse pulse cleaning system, based on experimental and one-dimensional computational results. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the optimum Venturi geometry.

  10. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  11. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm

    PubMed Central

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-01-01

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions. PMID:28925979

  12. Computational Simulations of Inferior Vena Cava (IVC) Filter Placement and Hemodynamics in Patient-Specific Geometries

    NASA Astrophysics Data System (ADS)

    Aycock, Kenneth; Sastry, Shankar; Kim, Jibum; Shontz, Suzanne; Campbell, Robert; Manning, Keefe; Lynch, Frank; Craven, Brent

    2013-11-01

    A computational methodology for simulating inferior vena cava (IVC) filter placement and IVC hemodynamics was developed and tested on two patient-specific IVC geometries: a left-sided IVC, and an IVC with a retroaortic left renal vein. Virtual IVC filter placement was performed with finite element analysis (FEA) using non-linear material models and contact modeling, yielding maximum vein displacements of approximately 10% of the IVC diameters. Blood flow was then simulated using computational fluid dynamics (CFD) with four cases for each patient IVC: 1) an IVC only, 2) an IVC with a placed filter, 3) an IVC with a placed filter and a model embolus, all at resting flow conditions, and 4) an IVC with a placed filter and a model embolus at exercise flow conditions. Significant hemodynamic differences were observed between the two patient IVCs, with the development of a right-sided jet (all cases) and a larger stagnation region (cases 3-4) in the left-sided IVC. These results support further investigation of the effects of IVC filter placement on a patient-specific basis.

  13. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing for shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  14. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing for shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
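
    A minimal sketch of spectral filtering in this spirit (Python with NumPy; the fixed cutoff fraction is a placeholder for the automatically estimated optimum level described above), assuming a 1-D signal contaminated by additive white noise:

      import numpy as np

      def spectral_lowpass(y, keep_frac=0.1):
          # Zero all Fourier modes above a cutoff; with additive white noise
          # this removes the flat noise floor while keeping the smooth
          # low-frequency content of the multiscale data.
          Y = np.fft.rfft(y)
          cutoff = max(1, int(len(Y) * keep_frac))
          Y[cutoff:] = 0.0
          return np.fft.irfft(Y, n=len(y))

      t = np.linspace(0.0, 1.0, 512, endpoint=False)
      noisy = (np.sin(2 * np.pi * 3 * t)
               + 0.3 * np.random.default_rng(0).normal(size=t.size))
      smoothed = spectral_lowpass(noisy, keep_frac=0.05)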

  15. A Low Cost Structurally Optimized Design for Diverse Filter Types

    PubMed Central

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploys two dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource constrained environment; thus it calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters for effectually reducing their computational cost, along with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry based subtypes and also its special asymmetric filter case. The two-fold optimized framework thus reduces filter computational cost by up to 75% as compared to the conventional approach, while its versatility not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image processing applications, especially in a constrained environment. PMID:27832133

  16. Effects of periods of nonuse and fluctuating ammonia concentration on biofilter performance.

    PubMed

    Chen, Ying-Xu; Yin, Jun; Wang, Kai-Xiong; Fang, Shi

    2004-01-01

    A systematic study of the transient behavior of odor treatment using biofilters is described. The biofilters were exposed to variations in contaminant loading and periods of nonuse. Two bench-scale biofilters with different filter media were used. Mixtures of compost/perlite (5:1) and dry sludge/granular activated carbon (5:1) were used as filter media. Ammonia (NH3), one of the main malodorous gases, was used as the target compound. The response of each biofilter to variations in contaminant mass loading, periodic nonuse, water content, and inlet concentration pulses was studied. The nonuse period comprised two stages: the "idle phase" when no air was passing through the biofilters, and the "no-contaminant-loading phase" when only humidified air was passing through the biofilters. A concentration spike was applied to study the effects of shock loading on biofilter performance. Biofilters responded effectively to NH3 concentration variations and shock loading by rapidly recovering to the original removal rates within 6-12 h. The results indicated that re-acclimation times ranged from several hours to longer than a day. A longer idle phase produced longer re-acclimation periods than periods of no contaminant loading. When the media dried out during the biofiltration process, elimination capacity dropped accordingly for both biofilters. After 24 h of drying, the biofilter experiment could be restarted and run for a few days to recover.

  17. An Operating Environment for the Jellybean Machine

    DTIC Science & Technology

    1988-05-01

    MODEL 48 5.4.4 Restarting a Context: The operating system provides one primitive message (RESTART-CONTEXT) and two system calls (XFERID and XFER.ADDR) to... efficient, powerful services is required to support this system. To provide this supportive operating environment, I developed an operating system kernel that... serves many of the initial needs of our machine. This Jellybean Operating System Software provides an object-based storage model, where typed...

  18. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
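
    To make the truncation idea concrete, the following minimal sketch (not the paper's turbofan implementation) computes a constrained scalar estimate as the mean of the Gaussian posterior truncated to known bounds; all numbers are illustrative:

        # Sketch of PDF truncation for a scalar state: truncate the Gaussian
        # posterior N(xhat, P) to the constraint interval [lo, hi] and take
        # the mean of the truncated PDF as the constrained estimate.
        import numpy as np
        from scipy.stats import truncnorm

        def truncated_estimate(xhat, P, lo, hi):
            """Mean and variance of N(xhat, P) truncated to [lo, hi]."""
            sigma = np.sqrt(P)
            a, b = (lo - xhat) / sigma, (hi - xhat) / sigma  # standardized bounds
            dist = truncnorm(a, b, loc=xhat, scale=sigma)
            return dist.mean(), dist.var()

        # Unconstrained Kalman estimate -0.3 with variance 0.25, but the
        # state is physically known to lie in [0, 2].
        x_c, P_c = truncated_estimate(-0.3, 0.25, 0.0, 2.0)
        print(x_c, P_c)  # constrained estimate lies inside [0, 2]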

  19. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    PubMed

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory. It is necessary to know the object power spectrum and the noise power spectrum; the former is estimated from the power spectrum of a projection, where the high-frequency power spectrum of a projection is adequately approximated by a polynomial-exponential expression. A study of restoration with the filter based on a projection power spectrum was conducted and compared with the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel) and "Wiener" filtering (with a constant signal-to-noise power spectrum ratio). Normalized mean-squared errors (NMSE) were computed for a phantom consisting of two line sources located in a 99mTc-filled cylinder. The NMSE of the "Butterworth" filter, the "Wiener" filter, and the filtering based on a power spectrum were 0.77, 0.83, and 0.76, respectively. Clinically, brain SPECT images utilizing this new restoration filter showed improved contrast. Thus, this filter may be useful in diagnosis using SPECT images.
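
    A minimal frequency-domain sketch of the least-squares restoration idea follows; the MTF and the object and noise power spectra below are illustrative assumptions, not the spectra estimated in the paper:

        # Least-squares ("Wiener-type") restoration of a 1-D projection:
        # G = H* S / (|H|^2 S + N), given system MTF H, object power
        # spectrum S, and noise power spectrum N.
        import numpy as np

        def restore(projection, H, S, N):
            """Apply the restoration filter to one projection."""
            F = np.fft.rfft(projection)
            G = np.conj(H) * S / (np.abs(H)**2 * S + N)
            return np.fft.irfft(G * F, n=projection.size)

        n = 128
        f = np.fft.rfftfreq(n)                # frequency in cycles/pixel
        H = np.exp(-(f / 0.2)**2)             # assumed Gaussian MTF
        S = 1.0 / (1.0 + (f / 0.05)**2)       # assumed object spectrum (low-pass)
        N = np.full_like(f, 1e-2)             # assumed flat noise spectrum

        rng = np.random.default_rng(0)
        proj = np.sin(2 * np.pi * np.arange(n) / n) + rng.normal(0, 0.1, n)
        print(restore(proj, H, S, N).shape)   # restored projection, length 128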

  20. Extravehicular mobility unit thermal simulator

    NASA Technical Reports Server (NTRS)

    Hixon, C. W.; Phillips, M. A.

    1973-01-01

    The analytical methods, thermal model, and user's instructions for the SIM bay extravehicular mobility unit (EMU) routine are presented. This digital computer program was developed for detailed thermal performance predictions of the crewman performing a command module extravehicular activity during transearth coast. It accounts for conductive, convective, and radiative heat transfer as well as fluid flow and associated flow control components. The program is a derivative of the Apollo lunar surface EMU digital simulator. It has the operational flexibility to accept card or magnetic tape for both the input data and program logic. Output can be tabular and/or plotted and the mission simulation can be stopped and restarted at the discretion of the user. The program was developed for the NASA-JSC Univac 1108 computer system and several of the capabilities represent utilization of unique features of that system. Analytical methods used in the computer routine are based on finite difference approximations to differential heat and mass balance equations which account for temperature or time dependent thermo-physical properties.

  1. Modeling the internal combustion engine

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1985-01-01

    A flexible and computationally economical model of the internal combustion engine was developed for use on large digital computer systems. It is based on a system of ordinary differential equations for cylinder-averaged properties. The computer program is capable of multicycle calculations, with some parameters varying from cycle to cycle, and has restart capabilities. It can accommodate a broad spectrum of reactants, permits changes in physical properties, and offers a wide selection of alternative modeling functions without any reprogramming. It readily adapts to the amount of information available in a particular case because the model is in fact a hierarchy of five models. The models range from a simple model requiring only thermodynamic properties to a complex model demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. Among its many features the model includes heat transfer, valve timing, supercharging, motoring, finite burning rates, cycle-to-cycle variations in air-fuel ratio, humid air, residual and recirculated exhaust gas, and full combustion kinetics.

  2. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  3. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples show it to be a viable and effective method for highly transient cases. More complex cases will be applied to further test the method and its implementation.
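
    The two-timescale structure of IQS can be sketched as follows; the PRKE coefficients here are illustrative constants, whereas in IQS they are recomputed from the current shape at each macro step:

        # Micro/macro time stepping: between expensive shape solves at macro
        # steps, integrate the point reactor kinetics equations (PRKE) for
        # the amplitude p(t) at micro steps (one precursor group, explicit
        # Euler, illustrative values only).
        beta, Lam, lam = 0.0065, 1e-4, 0.08   # delayed fraction, gen. time, decay

        def prke_micro_steps(p, c, rho, dt, nsteps):
            """Micro-integration of the PRKE amplitude equations."""
            for _ in range(nsteps):
                dp = ((rho - beta) / Lam) * p + lam * c
                dc = (beta / Lam) * p - lam * c
                p, c = p + dt * dp, c + dt * dc
            return p, c

        p, c = 1.0, beta / (Lam * lam)        # equilibrium precursor level
        for macro in range(5):                # shape solve would happen here
            rho = 0.001                       # reactivity from current shape (assumed)
            p, c = prke_micro_steps(p, c, rho, dt=1e-4, nsteps=100)
            print(f"macro step {macro}: amplitude = {p:.4f}")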

  4. New Thomson scattering diagnostic on RFX-mod.

    PubMed

    Alfier, A; Pasqualotto, R

    2007-01-01

    This article describes the completely renovated Thomson scattering (TS) diagnostic employed in the modified Reversed Field eXperiment (RFX-mod) since it restarted operation in 2005. The system measures plasma electron temperature and density profiles along an equatorial diameter, measuring in 84 positions with 7 mm spatial resolution. The custom built Nd:YLF laser produces a burst of 10 pulses at 50 Hz with energy of 3 J, providing ten profile measurements in a plasma discharge of about 300 ms duration. An optical delay system accommodates three scattering volumes in each of the 28 interference filter spectrometers. Avalanche photodiodes detect the Thomson scattering signals and allow them to be recorded by means of waveform digitizers. Electron temperature is obtained using an alternative relative calibration method, based on the use of a supercontinuum light source. Rotational Raman scattering in nitrogen has supplied the absolute calibration for the electron density measurements. During RFX-mod experimental campaigns in 2005, the TS diagnostic has demonstrated its performance, routinely providing reliable high resolution profiles.

  5. New Parallel computing framework for radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, M.A.; Mokhov, N.V.

    A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.

  6. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  7. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming to address the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel-processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  8. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementations. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random access memories in place of shift registers, appears attractive. The memory addressing scheme involved in such implementations is discussed. A simple counter set-up to address the data memory in the realization of FIR filters is also described. The combination of a set of simple filters (a weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer based on microprocessors is discussed.
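
    The counter-based addressing idea can be sketched in software: a memory (standing in for the RAM) holds the last N samples, and a modulo counter replaces the shift register, so no data are physically moved. A minimal sketch:

        # FIR filter with a circular buffer: the address counter selects
        # which slot is overwritten, and the multiply-accumulate loop reads
        # the delay line backwards modulo N.
        class CircularFIR:
            def __init__(self, taps):
                self.taps = list(taps)
                self.buf = [0.0] * len(taps)   # "RAM" holding the delay line
                self.ptr = 0                   # address counter

            def step(self, x):
                self.buf[self.ptr] = x         # overwrite the oldest sample
                n = len(self.taps)
                acc = 0.0
                for k in range(n):             # multiply-accumulate loop
                    acc += self.taps[k] * self.buf[(self.ptr - k) % n]
                self.ptr = (self.ptr + 1) % n  # advance the address counter
                return acc

        fir = CircularFIR([0.25, 0.25, 0.25, 0.25])  # 4-tap moving average
        print([round(fir.step(x), 3) for x in [1, 1, 1, 1, 1]])
        # -> [0.25, 0.5, 0.75, 1.0, 1.0]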

  9. Reactor operation environmental information document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haselow, J.S.; Price, V.; Stephenson, D.E.

    1989-12-01

    The Savannah River Site (SRS) produces nuclear materials, primarily plutonium and tritium, to meet the requirements of the Department of Defense. These products have been formed in nuclear reactors that were built during 1950--1955 at the SRS. K, L, and P reactors are three of five reactors that have been used in the past to produce the nuclear materials. All three of these reactors discontinued operation in 1988. Currently, intense efforts are being expended to prepare these three reactors for restart in a manner that protects human health and the environment. To document that restarting the reactors will have minimal impacts on human health and the environment, a three-volume Reactor Operations Environmental Impact Document has been prepared. The document focuses on the impacts of restarting the K, L, and P reactors on both the SRS and surrounding areas. This volume discusses the geology, seismology, and subsurface hydrology. 195 refs., 101 figs., 16 tabs.

  10. Lithium overdose and delayed severe neurotoxicity: timing for renal replacement therapy and restarting of lithium.

    PubMed

    de Cates, Angharad N; Morlet, Julien; Antoun Reyad, Ayman; Tadros, George

    2017-10-25

    This is a case report of a man in his 60s who presented to an English hospital following a significant lithium overdose. He was monitored for 24 hours, and then renal replacement therapy was initiated after assessment by the renal team. As soon as the lithium level returned to normal therapeutic levels (from 4.7 mEq/L to 0.67 mEq/L), lithium was restarted by the medical team. At this point, the patient developed new slurred speech and later catatonia. In this case report, we discuss the factors that could determine which patients are at risk of neurotoxicity following lithium overdose and the appropriate decision regarding when and how to consider initiation of renal replacement therapy and restarting of lithium. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Design of coupled mace filters for optical pattern recognition using practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Khan, Ajmal

    1993-01-01

    Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of a SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.

  12. Bowtie filters for dedicated breast CT: Theory and computational implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontson, Kimberly, E-mail: Kimberly.Kontson@fda.hhs.gov; Jennings, Robert J.

    Purpose: To design bowtie filters with improved properties for dedicated breast CT to improve image quality and reduce dose to the patient. Methods: The authors present three different bowtie filters designed for a cylindrical 14-cm diameter phantom with a uniform composition of 40/60 breast tissue, which vary in their design objectives and performance improvements. Bowtie design #1 is based on single material spectral matching and produces nearly uniform spectral shape for radiation incident upon the detector. Bowtie design #2 uses the idea of basis material decomposition to produce the same spectral shape and intensity at the detector, using two different materials. Bowtie design #3 eliminates the beam hardening effect in the reconstructed image by adjusting the bowtie filter thickness so that the effective attenuation coefficient for every ray is the same. All three designs are obtained using analytical computational methods and linear attenuation coefficients. Thus, the designs do not take into account the effects of scatter. The authors considered this to be a reasonable approach to the filter design problem since the use of Monte Carlo methods would have been computationally intensive. The filter profiles for a cone-angle of 0° were used for the entire length of each filter because the differences between those profiles and the correct cone-beam profiles for the cone angles in our system are very small, and the constant profiles allowed construction of the filters with the facilities available to us. For evaluation of the filters, we used Monte Carlo simulation techniques and the full cone-beam geometry. Images were generated with and without each bowtie filter to analyze the effect on dose distribution, noise uniformity, and contrast-to-noise ratio (CNR) homogeneity. Line profiles through the reconstructed images generated from the simulated projection images were also used as validation for the filter designs. Results: Examples of the three designs are presented. Initial verification of performance of the designs was done using analytical computations of HVL, intensity, and effective attenuation coefficient behind the phantom as a function of fan-angle with a cone-angle of 0°. The performance of the designs depends only weakly on incident spectrum and tissue composition. For all designs, the dynamic range requirement on the detector was reduced compared to the no-bowtie-filter case. Further verification of the filter designs was achieved through analysis of reconstructed images from simulations. Simulation data also showed that the use of our bowtie filters can reduce peripheral dose to the breast by 61% and provide uniform noise and CNR distributions. The bowtie filter design concepts validated in this work were then used to create a computational realization of a 3D anthropomorphic bowtie filter capable of achieving a constant effective attenuation coefficient behind the entire field-of-view of an anthropomorphic breast phantom. Conclusions: Three different bowtie filter designs that vary in performance improvements were described and evaluated using computational and simulation techniques. Results indicate that the designs are robust against variations in breast diameter, breast composition, and tube voltage, and that the use of these filters can reduce patient dose and improve image quality compared to the no-bowtie-filter case.

  13. Simulation of synthetic discriminant function optical implementation

    NASA Astrophysics Data System (ADS)

    Riggins, J.; Butler, S.

    1984-12-01

    The optical implementation of geometrical shape and synthetic discriminant function matched filters is computer modeled. The filter implementation utilizes the Allebach-Keegan computer-generated hologram algorithm. Signal-to-noise and efficiency measurements were made on the resultant correlation planes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riesen, Rolf E.; Bridges, Patrick G.; Stearley, Jon R.

    Next-generation exascale systems, those capable of performing a quintillion (10^18) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
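
    A minimal sketch of hash-based incremental checkpointing, with an illustrative block size: the state is split into blocks, each block is hashed, and only blocks whose hash changed since the previous checkpoint are written:

        # Hash each fixed-size block of application state; only blocks whose
        # digest differs from the previous checkpoint need to be committed.
        import hashlib

        def checkpoint(state: bytes, prev_hashes, block=4096):
            """Return (dirty_blocks, new_hashes); only dirty blocks need I/O."""
            dirty, hashes = {}, []
            for i in range(0, len(state), block):
                chunk = state[i:i + block]
                h = hashlib.sha256(chunk).digest()
                hashes.append(h)
                idx = i // block
                if idx >= len(prev_hashes) or prev_hashes[idx] != h:
                    dirty[idx] = chunk      # changed since the last checkpoint
            return dirty, hashes

        state = bytearray(16 * 4096)
        _, hashes = checkpoint(bytes(state), [])   # first checkpoint: all dirty
        state[5 * 4096] = 1                        # touch exactly one block
        dirty, hashes = checkpoint(bytes(state), hashes)
        print(sorted(dirty))                       # -> [5]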

  15. Development of a Stiffness-Based Chemistry Load Balancing Scheme, and Optimization of Input/Output and Communication, to Enable Massively Parallel High-Fidelity Internal Combustion Engine Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh

    A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code, as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, "stiffness-based" algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
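
    The stiffness-based idea can be sketched as a weighted scheduling problem; the greedy longest-processing-time heuristic and the per-cell cost model below are illustrative, not the paper's actual scheme:

        # Assign cells to ranks using a per-cell stiffness estimate as the
        # cost (rather than cell count), with a greedy longest-processing-time
        # heuristic: always give the next most expensive cell to the
        # least-loaded rank.
        import heapq

        def balance(cell_costs, nranks):
            """Greedy LPT assignment; returns rank -> list of cell ids."""
            heap = [(0.0, r) for r in range(nranks)]   # (load, rank)
            heapq.heapify(heap)
            assign = {r: [] for r in range(nranks)}
            for cell in sorted(range(len(cell_costs)),
                               key=lambda c: -cell_costs[c]):
                load, r = heapq.heappop(heap)          # least-loaded rank
                assign[r].append(cell)
                heapq.heappush(heap, (load + cell_costs[cell], r))
            return assign

        # Stiff (igniting) cells cost far more than non-reacting ones.
        costs = [100.0, 90.0] + [1.0] * 30
        loads = {r: sum(costs[c] for c in cells)
                 for r, cells in balance(costs, 4).items()}
        print(loads)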

  16. Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems

    PubMed Central

    Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.

    2011-01-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
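
    A minimal sketch of the steady-state decoder idea: solve the discrete algebraic Riccati equation offline, form a fixed gain, and reuse it at every step. The model matrices are illustrative, not the clinical decoder's:

        # Steady-state Kalman filter: precompute the steady predicted
        # covariance P from the DARE, form a constant gain K, and run the
        # predict/update loop with no per-step Riccati recursion.
        import numpy as np
        from scipy.linalg import solve_discrete_are

        A = np.array([[1.0, 0.05], [0.0, 1.0]])   # kinematic state transition
        C = np.array([[1.0, 0.0]])                # observation matrix
        Q = 0.01 * np.eye(2)                      # process noise covariance
        R = np.array([[0.1]])                     # observation noise covariance

        P = solve_discrete_are(A.T, C.T, Q, R)    # steady predicted covariance
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # fixed steady-state gain

        x = np.zeros(2)
        for z in [0.1, 0.25, 0.33]:               # streaming measurements
            x = A @ x                             # predict
            x = x + K @ (np.atleast_1d(z) - C @ x)  # update with constant K
        print(x)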

  17. UDU(T) covariance factorization for Kalman filtering

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1980-01-01

    There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
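
    A minimal sketch of the U-D factorization itself, computed by the classic backward recursion (the filter time- and measurement-update algorithms built on it are not shown):

        # Factor a symmetric positive-definite covariance P as U D U^T with
        # U unit upper triangular and D diagonal.
        import numpy as np

        def udu(P):
            """Return (U, d) with P = U @ diag(d) @ U.T."""
            P = P.copy().astype(float)
            n = P.shape[0]
            U, d = np.eye(n), np.zeros(n)
            for j in range(n - 1, -1, -1):
                d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:]**2)
                for i in range(j):
                    U[i, j] = (P[i, j]
                               - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
            return U, d

        P = np.array([[4.0, 2.0, 0.6], [2.0, 2.0, 0.5], [0.6, 0.5, 1.0]])
        U, d = udu(P)
        print(np.allclose(U @ np.diag(d) @ U.T, P))  # -> True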

  18. Application of the actor model to large scale NDE data analysis

    NASA Astrophysics Data System (ADS)

    Coughlin, Chris

    2018-03-01

    The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
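
    A minimal sketch of the actor pattern described above, using threads and queues; the damage-scoring handler is a stand-in for the machine learning model, not Myriad's API:

        # Each actor owns a mailbox and interacts only by message passing,
        # so the same pipeline can run locally with threads or be
        # distributed without changing the actor code.
        import queue, threading

        class Actor:
            def __init__(self, handler, downstream=None):
                self.mailbox = queue.Queue()
                self.handler, self.downstream = handler, downstream
                threading.Thread(target=self._run, daemon=True).start()

            def send(self, msg):
                self.mailbox.put(msg)

            def _run(self):
                while True:
                    msg = self.mailbox.get()
                    out = self.handler(msg)      # process one message at a time
                    if self.downstream is not None:
                        self.downstream.send(out)

        results = queue.Queue()
        sink = Actor(lambda m: results.put(m))         # e.g. report damage calls
        scorer = Actor(lambda scan: ("damage", max(scan)), downstream=sink)
        scorer.send([0.1, 0.9, 0.3])                   # a C-scan slice, flattened
        print(results.get())                           # -> ('damage', 0.9)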

  19. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    DOEpatents

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.

  20. Superphenix: Restarting, with emphasis on research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-04-01

    French Prime Minister Edouard Balladur announced on February 22 that the Superphenix fast reactor at Creys-Malville will be allowed to restart. Along with his industry minister, Gerard Longuet, and the environment minister, Michel Barnier, Balladur has accepted the recommendation for reissuance of an operating license submitted to the ministers in January by the nuclear installations safety directorate (DSIN). In announcing their approval for restart, the ministers emphasized that the reactor was to be used as a research and demonstration facility rather than a power station. In particular, they said it should be used to investigate the possibility of using fast reactors as plutonium burners rather than breeders and for the incineration of other long-lived actinide wastes. This point was made also by DSIN Director Andre-Claude Lacoste in his recommendations, but appears to have been given more weight by the government ministers. The ministers also accepted the DSIN recommendation that restart be conditional on completion of improvements to fire protection systems for the secondary sodium coolant ducts. Balladur said that it would take several months to complete this work, but other sources have suggested that the reactor could be ready to operate this month. DSIN has also stipulated that the power level be limited for several months to around 50 percent of the plant's 1200-MWe potential in order to check out all the systems carefully.

  1. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
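
    A minimal sketch of subspace selection by generalized eigendecomposition: directions that maximize the ratio of projected state uncertainty to measurement noise carry the most information. All matrices below are illustrative assumptions:

        # Solve the generalized eigenproblem (H P H^T) v = lambda R v and
        # keep the top-k eigenvectors; projecting raw measurements onto them
        # gives a reduced, most informative measurement subspace.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(1)
        m, n, k = 50, 8, 4                      # measurements, states, kept dirs
        H = rng.normal(size=(m, n))             # observation operator
        P = np.eye(n)                           # forecast (ensemble) covariance
        R = np.diag(rng.uniform(0.5, 2.0, m))   # measurement noise covariance

        w, V = eigh(H @ P @ H.T, R)             # eigenvalues in ascending order
        T = V[:, -k:].T                         # top-k generalized eigenvectors
        z = rng.normal(size=m)                  # a raw measurement vector
        print(T @ z)                            # reduced measurement subspace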

  2. A high-order spatial filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-04-01

    A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is the high-order Helmholtz equation which corresponds to the implicit time-differencing of a diffusion equation employing the high-order Laplacian. The Laplacian operator is discretized within a cell, the building block of the cubed-sphere grid, which consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge and highly sparse matrix equation of size N*N, with N the total number of grid points on the globe. The number of nonzero entries is also in almost quadratic proportion to the filter order. Filtering is accomplished by solving the huge matrix equation. While requiring significant computing time, the solution of the global matrix provides a filtered field free of discontinuity along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation was obtained by accounting for only a finite number of adjacent cells; this is called a local-domain filter. It was shown that to remove the numerical noise near the grid scale, inclusion of 5*5 cells in the local-domain filter was sufficient, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using standard test cases including the baroclinic instability of the zonal flow. Results indicated that the filter performs better at removing grid-scale numerical noise than the explicit high-order viscosity. It was also shown that the filter can be easily implemented on distributed-memory parallel computers with desirable scalability.
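
    A minimal one-dimensional sketch of the implicit high-order filter equation (I + nu*(-L)^p) u_f = u, with a periodic Laplacian and illustrative coefficients; the cubed-sphere spectral-element discretization is not reproduced here:

        # High-order implicit filter in 1-D: grid-scale modes see a huge
        # (-L)^p eigenvalue and are strongly damped, while smooth modes pass
        # through nearly unchanged.
        import numpy as np

        n, p, nu = 64, 4, 1.0e2
        L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        L[0, -1] = L[-1, 0] = 1.0                 # periodic discrete Laplacian
        A = np.eye(n) + nu * np.linalg.matrix_power(-L, p)

        x = np.arange(n)
        u = np.sin(2 * np.pi * x / n) + 0.1 * (-1.0) ** x  # smooth + grid-scale noise
        u_f = np.linalg.solve(A, u)               # filtered field
        print(np.abs(u_f - np.sin(2 * np.pi * x / n)).max())  # noise strongly damped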

  3. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate-change ratios in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter done in polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling; in fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each thread completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, new threads are spawned at exactly the rate of N/M, where N is the total number of taps and M is the decimation factor; existing threads retire at the same rate. The implementation of an MRFIR is thus transformed into the problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the thread decomposition diagram, a table-like diagram with rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
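
    The arithmetic that both the polyphase and thread views exploit can be sketched directly: each decimated output is one finite convolution that can be computed on its own at the low rate. The sketch below illustrates this equivalence; it is not the TD-MRFIR scheduler itself:

        # Decimation by M: the naive full-rate FIR discards (M-1)/M of its
        # outputs; computing each kept output as its own convolution
        # ("thread") does only the useful 1/M of the work.
        import numpy as np

        def decimate_naive(x, h, M):
            return np.convolve(x, h)[: len(x)][::M]   # full-rate FIR, then discard

        def decimate_by_threads(x, h, M):
            """Compute only the kept outputs, one convolution per output."""
            out = []
            for n in range(0, len(x), M):             # one "thread" per output
                acc = sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
                out.append(acc)
            return np.array(out)

        x = np.random.default_rng(2).normal(size=32)
        h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])       # 5-tap FIR, M = 4
        print(np.allclose(decimate_naive(x, h, 4), decimate_by_threads(x, h, 4)))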

  4. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.

  5. Applications of charge-coupled device transversal filters to communication

    NASA Technical Reports Server (NTRS)

    Buss, D. D.; Bailey, W. H.; Brodersen, R. W.; Hewes, C. R.; Tasch, A. F., Jr.

    1975-01-01

    The paper discusses the computational power of state-of-the-art charged-coupled device (CCD) transversal filters in communications applications. Some of the performance limitations of CCD transversal filters are discussed, with attention given to time delay and bandwidth, imperfect charge transfer efficiency, weighting coefficient error, noise, and linearity. The application of CCD transversal filters to matched filtering, spectral filtering, and Fourier analysis is examined. Techniques for making programmable transversal filters are briefly outlined.

  6. Economic evaluation of strategies for restarting anticoagulation therapy after a first event of unprovoked venous thromboembolism.

    PubMed

    Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S

    2017-08-01

    Essentials: Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background: Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE, but at the expense of a higher risk of bleeding, which can also be fatal. Objective: An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods: A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results: Base-case model results suggest that treating patients with a predicted 1-year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality-adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty, and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate that variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion: This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. Better data are required to predict long-term bleeding risks during therapy in this patient group. © 2017 International Society on Thrombosis and Haemostasis.

  7. Target Information Processing: A Joint Decision and Estimation Approach

    DTIC Science & Technology

    2012-03-29

    ...ground targets (track-before-detect) using computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important...

  8. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This introduces an increased computational burden, but also a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which implies new architectures for optical processors. These filters exploit the circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern, whose orientation is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data-flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulty implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation, with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates; time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  9. Parallelization and checkpointing of GPU applications through program transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano-Quinde, Lizandro Damian

    2012-01-01

    GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running on multi-GPU systems. Furthermore, multi-GPU systems help to overcome the GPU memory limitation for applications with large application memory footprints. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. To achieve our goal, this work designs and implements a framework for enhancing a single-GPU OpenCL application through application transformation.

  10. A biased filter for linear discrete dynamic systems.

    NASA Technical Reports Server (NTRS)

    Chang, J. W.; Hoerl, A. E.; Leathrum, J. F.

    1972-01-01

    A recursive estimator, the ridge filter, was developed for the linear discrete dynamic estimation problem. Theorems were established to show that the ridge filter can be, on average, closer to the expected value of the system state than the Kalman filter. The Kalman filter, on the other hand, is on average closer to the instantaneous system state than the ridge filter. The ridge filter has been formulated in such a way that the computational features of the Kalman filter are preserved.
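
    The bias-variance tradeoff that motivates a ridge-type estimator can be sketched in a static linear-regression setting (the paper's dynamic recursion is not reproduced here):

        # Ridge vs. least squares: shrinking the estimate reduces variance
        # at the cost of bias; over repeated noisy measurements the ridge
        # estimator can have lower mean-squared error.
        import numpy as np

        rng = np.random.default_rng(3)
        H = rng.normal(size=(20, 5))             # measurement matrix
        x_true = rng.normal(size=5)
        k = 2.0                                  # ridge parameter (illustrative)

        ls_err, ridge_err = [], []
        for _ in range(2000):
            z = H @ x_true + rng.normal(0, 1.0, 20)
            x_ls = np.linalg.solve(H.T @ H, H.T @ z)
            x_rg = np.linalg.solve(H.T @ H + k * np.eye(5), H.T @ z)
            ls_err.append(np.sum((x_ls - x_true) ** 2))
            ridge_err.append(np.sum((x_rg - x_true) ** 2))
        print(np.mean(ls_err), np.mean(ridge_err))  # ridge often has lower MSE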

  11. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  12. Reconstruction for limited-projection fluorescence molecular tomography based on projected restarted conjugate gradient normal residual.

    PubMed

    Cao, Xu; Zhang, Bin; Liu, Fei; Wang, Xin; Bai, Jing

    2011-12-01

    Limited-projection fluorescence molecular tomography (FMT) can greatly reduce the acquisition time, which is suitable for resolving fast biology processes in vivo but suffers from severe ill-posedness because of the reconstruction using only limited projections. To overcome the severe ill-posedness, we report a reconstruction method based on the projected restarted conjugate gradient normal residual. The reconstruction results of two phantom experiments demonstrate that the proposed method is feasible for limited-projection FMT. © 2011 Optical Society of America
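
    A minimal sketch of a projected, restarted CGNR loop: a few CGNR iterations on the normal-residual system, projection onto the nonnegative orthant (fluorescence yield cannot be negative), then a restart from the projected point. Problem sizes are illustrative:

        # CGNR solves A^T A x = A^T b without forming A^T A; restarting
        # after each projection keeps the search directions consistent with
        # the constrained iterate.
        import numpy as np

        def cgnr(A, b, x, iters):
            r = b - A @ x
            s = A.T @ r                       # normal residual
            p = s.copy()
            for _ in range(iters):
                Ap = A @ p
                alpha = (s @ s) / (Ap @ Ap)
                x = x + alpha * p
                r = r - alpha * Ap
                s_new = A.T @ r
                p = s_new + ((s_new @ s_new) / (s @ s)) * p
                s = s_new
            return x

        def projected_restarted_cgnr(A, b, restarts=20, inner=10):
            x = np.zeros(A.shape[1])
            for _ in range(restarts):
                x = cgnr(A, b, x, inner)
                x = np.maximum(x, 0.0)        # projection onto x >= 0
            return x

        rng = np.random.default_rng(4)
        A = rng.normal(size=(60, 200))        # underdetermined, ill-posed system
        x_true = np.zeros(200); x_true[[20, 90]] = 1.0
        b = A @ x_true
        x = projected_restarted_cgnr(A, b)
        print(x.min() >= 0.0, np.linalg.norm(A @ x - b))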

  13. Zero energy-storage ballast for compact fluorescent lamps

    DOEpatents

    Schultz, W.N.; Thomas, R.J.

    1999-08-31

    A CFL ballast includes complementary-type switching devices connected in series with their gates connected together at a control node. The switching devices supply a resonant tank circuit which is tuned to a frequency near, but slightly lower than, the resonant frequency of a resonant control circuit. As a result, the tank circuit restarts oscillations immediately following each zero crossing of the bus voltage. Such rapid restarts avoid undesirable flickering while maintaining the operational advantages and high efficacy of the CFL ballast. 4 figs.

  14. Zero energy-storage ballast for compact fluorescent lamps

    DOEpatents

    Schultz, William Newell; Thomas, Robert James

    1999-01-01

    A CFL ballast includes complementary-type switching devices connected in series with their gates connected together at a control node. The switching devices supply a resonant tank circuit which is tuned to a frequency near, but slightly lower than, the resonant frequency of a resonant control circuit. As a result, the tank circuit restarts oscillations immediately following each zero crossing of the bus voltage. Such rapid restarts avoid undesirable flickering while maintaining the operational advantages and high efficacy of the CFL ballast.

  15. Logic design for dynamic and interactive recovery.

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Jessep, D. C.; Wadia, A. B.; Schneider, P. R.; Bouricius, W. G.

    1971-01-01

    Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel sets of concepts embodied in the hardware and software functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.

  16. Error recovery in shared memory multiprocessors using private caches

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.

  17. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    PubMed

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

    The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering, aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique, at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
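
    A minimal scalar EKF sketch showing the predict/update cycle with the Jacobians of the dynamics and measurement functions; the model is illustrative, not a reduced-order vocal fold model:

        # Scalar extended Kalman filter: linearize f and h about the current
        # estimate, propagate the covariance, then correct with the gain.
        import numpy as np

        def ekf_step(x, P, z, f, F, h, Hj, Q, R):
            # Predict
            x_pred = f(x)
            P_pred = F(x) * P * F(x) + Q
            # Update
            S = Hj(x_pred) * P_pred * Hj(x_pred) + R
            K = P_pred * Hj(x_pred) / S
            x_new = x_pred + K * (z - h(x_pred))
            P_new = (1.0 - K * Hj(x_pred)) * P_pred
            return x_new, P_new

        f  = lambda x: 0.9 * x + 0.1 * np.sin(x)   # nonlinear dynamics
        F  = lambda x: 0.9 + 0.1 * np.cos(x)       # df/dx
        h  = lambda x: x ** 2                      # nonlinear measurement
        Hj = lambda x: 2.0 * x                     # dh/dx

        x, P = 1.0, 1.0
        for z in [1.2, 0.9, 0.7]:                  # streaming measurements
            x, P = ekf_step(x, P, z, f, F, h, Hj, Q=0.01, R=0.1)
        print(x, P)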

  18. Correlation Filter Synthesis Using Neural Networks.

    DTIC Science & Technology

    1993-12-01

    trained neural networks may be understood as "smart" data interpolators, the stored filter and the filter synthesis approaches have much in common: in the former, new filters are found by searching a data bank consisting of the filters themselves; in the latter, filters are formed from a distributed data bank that contains neural network interaction strengths or weights. 1.2 Key Results and Outputs: Excellent computer simulation results were...

  19. RDTC [Restricted Data Transmission Controller] global variable definitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grambihler, A.J.; O'Callaghan, P.B.

    The purpose of the Restricted Data Transmission Controller (RDTC) is to demonstrate a methodology for transmitting data between computers which have different levels of classification. The RDTC does this by logically filtering the data being transmitted between the two computers. This prototype is set up to filter data from the classified computer so that only numeric data is passed to the unclassified computer. The RDTC allows all data from the unclassified computer to be sent to the classified computer. The classified system is referred to as LUA and the unclassified system is referred to as LUB. 9 tabs.
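
    A toy sketch of the one-way filtering rule described above; the regex is an assumed definition of "numeric data" (the report does not specify one), and message framing and transport between LUA and LUB are omitted.

    ```python
    import re

    # Assumed definition of "numeric data": digits, sign, decimal point,
    # exponent characters, and whitespace, with at least one digit.
    NUMERIC = re.compile(r"^[0-9.+\-eE\s]*[0-9][0-9.+\-eE\s]*$")

    def rdtc_pass(message: str, source: str) -> bool:
        if source == "LUB":                  # unclassified -> classified: always allowed
            return True
        return bool(NUMERIC.match(message))  # classified -> unclassified: numeric only
    ```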

  20. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theories, we attempt to factorize RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, the factorization speed is much slower than that of Pollard’s rho algorithm.
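
    For reference, a minimal Python sketch of Pollard's rho with Floyd cycle detection, the deterministic baseline in the comparison; the hill-climbing side is omitted because the abstract does not fix its neighborhood or objective function.

    ```python
    from math import gcd
    import random

    def pollard_rho(n):
        """Return a nontrivial factor of a composite n."""
        if n % 2 == 0:
            return 2
        while True:
            c = random.randrange(1, n)
            f = lambda x: (x * x + c) % n
            x = y = random.randrange(2, n)
            d = 1
            while d == 1:
                x = f(x)          # tortoise: one step
                y = f(f(y))       # hare: two steps
                d = gcd(abs(x - y), n)
            if d != n:            # cycle hit without a factor: retry with new c
                return d

    # e.g. pollard_rho(8051) returns 83 or 97
    ```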

  1. Visualization of the freeze/thaw characteristics of a copper/water heat pipe - Effects of non-condensible gas

    NASA Technical Reports Server (NTRS)

    Ochterbeck, J. M.; Peterson, G. P.

    1991-01-01

    The freeze/thaw characteristics of a copper/water heat pipe of rectangular cross section were investigated experimentally to determine the effect of variations in the amount of non-condensible gases (NCG) present. The transient internal temperature profiles in both the liquid and vapor channels are presented along with contours of the frozen fluid configuration obtained through visual observation. Several interesting phenomena were observed including total blockage of the vapor channel by a solid plug, evaporator dryout during restart, and freezing blowby. In addition, the restart characteristics are shown to be strongly dependent upon the shutdown procedure used prior to freezing, indicating that accurate prediction of the startup or restart characteristics requires a complete thermal history. Finally, the experimental results indicate that the freeze/thaw characteristics of room temperature heat pipes may be significantly different from those occurring in higher temperature, liquid metal heat pipes due to differences in the vapor pressures in the frozen condition.

  2. HLW system plan - revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-01-14

    The projected ability of the Tank Farm to support DWPF startup and continued operation has diminished somewhat since revision 1 of this Plan. The 13 month delay in DWPF startup, which actually helps the Tank Farm condition in the near term, was more than offset by the 9 month delay in ITP startup, the delay in the Evaporator startups and the reduction to Waste Removal funding. This Plan does, however, describe a viable operating strategy for the success of the HLW System and Mission, albeit with less contingency and operating flexibility than in the past. HLWM has focused resources from within the division on five near term programs: the three evaporator restarts, DWPF melter heatup, and completion of the ITP outage. The 1H Evaporator was restarted 12/28/93 after a 9 month shutdown for an extensive Conduct of Operations upgrade. The 2F and 2H Evaporators are scheduled to restart 3/94 and 4/94, respectively. The RHLWE startup remains 11/17/97.

  3. Substrate reactivity as a function of the extent of reaction in the enzymatic hydrolysis of lignocellulose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, S.G.; Converse, A.O.

    1997-12-20

    In an effort to better understand the role of the substrate in the rapid fall-off in the rate of enzymatic hydrolysis of cellulose with conversion, substrate reactivity was measured as a function of conversion. These measurements were made by interrupting the hydrolysis of pretreated wood at various degrees of conversion and, after boiling and washing, restarting the hydrolysis in fresh buffer with fresh enzyme. The comparison of the restart rate per enzyme adsorbed with the initial rate per enzyme adsorbed, both extrapolated back to zero conversion, provides a measurement of the substrate reactivity without the complications of product inhibition or cellulase inactivation. The results indicate that the substrate reactivity falls only modestly as conversion increases. However, the restart rate is still higher than the rate of the uninterrupted hydrolysis, particularly at high conversion. Hence the authors conclude that the loss of substrate reactivity is not the principal cause for the long residence time required for complete conversion.

  4. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data is available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order statistics filters, which is within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|² data is proportional to the variance of ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceeds a constant times the estimated variance.
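
    A sketch of the third (median-filter) detector, assuming a stream of |ΔV|² values derived from consecutive TLEs; the window length and threshold constant are illustrative choices, not the paper's values.

    ```python
    import numpy as np
    from scipy.signal import medfilt

    def detect_maneuvers(dv2, window=21, k=10.0):
        """Flag samples whose |dV|^2 exceeds k times a robust noise level.

        dv2    : 1-D array of squared velocity-change magnitudes
        window : odd kernel size for the running median
        """
        med = medfilt(dv2, kernel_size=window)  # robust running median
        # The median of |dV|^2 tracks the noise variance of dV, so a
        # constant multiple of it serves as a detection threshold that
        # outliers (maneuvers) barely perturb.
        return dv2 > k * med
    ```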

  5. Quantum neural network-based EEG filtering for a brain-computer interface.

    PubMed

    Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin

    2014-02-01

    A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.

  6. A filtering approach to edge preserving MAP estimation of images.

    PubMed

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining and refining the segmentation is described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.

  7. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by Adaboost and Naïve Bayes.

  8. Program CALIB. [for computing noise levels for helicopter version of S-191 filter wheel spectrometer

    NASA Technical Reports Server (NTRS)

    Mendlowitz, M. A.

    1973-01-01

    The program CALIB, which was written to compute noise levels and average signal levels of aperture radiance for the helicopter version of the S-191 filter wheel spectrometer, is described. The program functions and input description are included along with a compiled program listing.

  9. Attentional Selection in Object Recognition

    DTIC Science & Technology

    1993-02-01

    order. It also affects the choice of strategies in both the filtering and arbiter stages. The set...such processing. In Treisman's model this was hidden in the concept of the selection filter. Later computational models of attention tried to...This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for

  10. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  11. Virtual experiment of optical spatial filtering in Matlab environment

    NASA Astrophysics Data System (ADS)

    Ji, Yunjing; Wang, Chunyong; Song, Yang; Lai, Jiancheng; Wang, Qinghua; Qi, Jing; Shen, Zhonghua

    2017-08-01

    The principle of the optical spatial filtering experiment is introduced, and a computer simulation platform with a graphical user interface (GUI) has been developed in the Matlab environment. With it, various filtering processes for different input images or filtering purposes can be carried out accurately, and the filtering effect can be observed clearly while adjusting the experimental parameters. The physical nature of optical spatial filtering can thus be shown vividly, improving the effectiveness of experimental teaching.

  12. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the image and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multi-function, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  13. Frequency modulation television analysis: Distortion analysis

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.; Wong, W. H.

    1973-01-01

    Computer simulation is used to calculate the time-domain waveform of a standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled with frequency-symmetric Chebyshev (0.1-dB ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a Chebyshev predetection filter of four poles causes slightly more signal distortion than a corresponding Butterworth filter, and the signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion. Distortion also increases with the addition of preemphasis.
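
    A scipy sketch of the two low-pass prototypes being compared (four-pole Butterworth vs. 0.1-dB ripple Chebyshev); the sampling rate and cutoff frequency are illustrative, not values from the study.

    ```python
    import numpy as np
    from scipy import signal

    fs = 1e6      # sampling rate, Hz (assumed)
    fc = 50e3     # cutoff frequency, Hz (assumed)

    b_butt, a_butt = signal.butter(4, fc, fs=fs)        # 4-pole Butterworth
    b_cheb, a_cheb = signal.cheby1(4, 0.1, fc, fs=fs)   # 4-pole, 0.1 dB ripple

    w, h_butt = signal.freqz(b_butt, a_butt, fs=fs)
    w, h_cheb = signal.freqz(b_cheb, a_cheb, fs=fs)
    # Comparing 20*log10(abs(h_*)) shows the Chebyshev's passband ripple and
    # steeper rolloff, consistent with the slightly higher distortion reported.
    ```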

  14. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data before or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm makes it possible to reveal relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
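
    A minimal sketch of Perona-Malik diffusion, the non-linear filter applied around reconstruction above; the exponential conductance function, iteration count, step size, and periodic boundaries (via np.roll) are assumptions, not the paper's settings.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
        """Edge-preserving diffusion: smooths flat regions, keeps edges."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # finite differences toward the four neighbors
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # exponential conductance suppresses diffusion across strong edges
            u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        return u
    ```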

  15. Adaptive Laplacian filtering for sensorimotor rhythm-based brain-computer interfaces.

    PubMed

    Lu, Jun; McFarland, Dennis J; Wolpaw, Jonathan R

    2013-02-01

    Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as common average reference (CAR), Laplacian (LAP) filter or common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of EEG. Here, we test the hypothesis that a new filter design, called an 'adaptive Laplacian (ALAP) filter', can provide better performance for SMR-based BCIs. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performance of the ALAP filter to CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.
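
    For context, numpy sketches of two of the fixed spatial filters the ALAP filter is compared against (CAR and a small Laplacian); the ALAP filter's Gaussian-kernel weighting and ridge-parameter optimization are not reproduced here.

    ```python
    import numpy as np

    def car(eeg):
        """Common average reference: eeg is (channels, samples)."""
        return eeg - eeg.mean(axis=0, keepdims=True)

    def small_laplacian(eeg, center, neighbors):
        """Subtract the mean of the nearest-neighbor channels from one channel."""
        return eeg[center] - eeg[neighbors].mean(axis=0)
    ```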

  16. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  17. Optimum filters for narrow-band frequency modulation.

    NASA Technical Reports Server (NTRS)

    Shelton, R. D.

    1972-01-01

    The results of a computer search for the optimum type of bandpass filter for low-index angle-modulated signals are reported. The bandpass filters are discussed in terms of their low-pass prototypes. Only filter functions with constant numerators are considered. The pole locations for the optimum filters of several cases are shown in a table. The results are fairly independent of modulation index and bandwidth.

  18. Economical Implementation of a Filter Engine in an FPGA

    NASA Technical Reports Server (NTRS)

    Kowalski, James E.

    2009-01-01

    A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be advantageous to combine compact FPGA digital filter implementations with other application-specific logic implementations on single integrated-circuit chips. An FPGA could readily be tailored to implement a variety of filters because the filter coefficients would be loaded into memory at startup.
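
    The computational core being time-multiplexed is an ordinary state-space filter update; a plain reference version is sketched below (the shift-register operand access and hardware reuse are FPGA details not modeled here).

    ```python
    import numpy as np

    def state_space_step(x, u, A, B, C, D):
        """One update of a digital state-space filter:
        y = C x + D u, then x' = A x + B u (matrix-vector multiplications)."""
        y = C @ x + D @ u
        x_next = A @ x + B @ u
        return x_next, y
    ```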

  19. Technical Note: Image filtering to make computer-aided detection robust to image reconstruction kernel choice in lung cancer CT screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkubo, Masaki, E-mail: mook@clg.niigata-u.ac.jp

    Purpose: In lung cancer computed tomography (CT) screening, the performance of a computer-aided detection (CAD) system depends on the selection of the image reconstruction kernel. To reduce this dependence on reconstruction kernels, the authors propose a novel application of an image filtering method previously proposed by their group. Methods: The proposed filtering process uses the ratio of modulation transfer functions (MTFs) of two reconstruction kernels as a filtering function in the spatial-frequency domain. This method is referred to as MTF_ratio filtering. Test image data were obtained from CT screening scans of 67 subjects who each had one nodule. Images were reconstructed using two kernels: f_STD (for standard lung imaging) and f_SHARP (for sharp edge-enhancement lung imaging). The MTF_ratio filtering was implemented using the MTFs measured for those kernels and was applied to the reconstructed f_SHARP images to obtain images that were similar to the f_STD images. A mean filter and a median filter were applied (separately) for comparison. All reconstructed and filtered images were processed using their prototype CAD system. Results: The MTF_ratio filtered images showed excellent agreement with the f_STD images. The standard deviation for the difference between these images was very small, ∼6.0 Hounsfield units (HU). However, the mean and median filtered images showed larger differences of ∼48.1 and ∼57.9 HU from the f_STD images, respectively. The free-response receiver operating characteristic (FROC) curve for the f_SHARP images indicated poorer performance compared with the FROC curve for the f_STD images. The FROC curve for the MTF_ratio filtered images was equivalent to the curve for the f_STD images. However, this similarity was not achieved by using the mean filter or median filter. Conclusions: The accuracy of MTF_ratio image filtering was verified and the method was demonstrated to be effective for reducing the kernel dependence of CAD performance.
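
    A numpy sketch of the MTF-ratio idea: multiply the sharp-kernel image's 2-D spectrum by MTF_STD/MTF_SHARP. The MTF arrays are assumed measured, sampled on the image's frequency grid, and centered on zero frequency (hence the ifftshift); the epsilon guard is an added numerical safeguard.

    ```python
    import numpy as np

    def mtf_ratio_filter(img_sharp, mtf_std, mtf_sharp, eps=1e-6):
        """All arrays share the image's 2-D shape; MTFs are zero-frequency-centered."""
        H = mtf_std / np.maximum(mtf_sharp, eps)   # filtering function in frequency domain
        spec = np.fft.fft2(img_sharp)
        return np.real(np.fft.ifft2(spec * np.fft.ifftshift(H)))
    ```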

  20. Real-time optical correlator using computer-generated holographic filter on a liquid crystal light valve

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Yu, Jeffrey

    1990-01-01

    Limitations associated with the binary phase-only filter often used in optical correlators are presently circumvented in the writing of complex-valued data on a gray-scale spatial light modulator through the use of a computer-generated hologram (CGH) algorithm. The CGH encodes complex-valued data into nonnegative real CGH data in such a way that it may be encoded in any of the available gray-scale spatial light modulators. A CdS liquid-crystal light valve is used for the complex-valued CGH encoding; computer simulations and experimental results are compared, and the use of such a CGH filter as the synapse hologram in a holographic optical neural net is discussed.

  1. HST3D; a computer code for simulation of heat and solute transport in three-dimensional ground-water flow systems

    USGS Publications Warehouse

    Kipp, K.L.

    1987-01-01

    The Heat- and Solute-Transport Program (HST3D) simulates groundwater flow and associated heat and solute transport in three dimensions. The three governing equations are coupled through the interstitial pore velocity, the dependence of the fluid density on pressure, temperature, and the solute-mass fraction, and the dependence of the fluid viscosity on temperature and solute-mass fraction. The solute-transport equation is for only a single solute species with possible linear equilibrium sorption and linear decay. Finite difference techniques are used to discretize the governing equations using a point-distributed grid. The flow-, heat-, and solute-transport equations are solved, in turn, after a partial Gauss-reduction scheme is used to modify them. The modified equations are more tightly coupled and have better stability for the numerical solutions. The basic source-sink term represents wells. A complex well flow model may be used to simulate specified flow rate and pressure conditions at the land surface or within the aquifer, with or without pressure and flow rate constraints. Boundary condition types offered include specified value, specified flux, leakage, heat conduction, an approximate free surface, and two types of aquifer influence functions. All boundary conditions can be functions of time. Two techniques are available for solution of the finite difference matrix equations. One technique is a direct-elimination solver, using equations reordered by alternating diagonal planes. The other technique is an iterative solver, using two-line successive over-relaxation. A restart option is available for storing intermediate results and restarting the simulation at an intermediate time with modified boundary conditions. This feature also can be used as protection against computer system failure. Data input and output may be in metric (SI) units or inch-pound units. Output may include tables of dependent variables and parameters, zoned-contour maps, and plots of the dependent variables versus time. (Lantz-PTT)

  2. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
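
    A minimal rejection-ABC sketch of the likelihood-free idea astroABC implements in full SMC form; prior, simulate, and distance stand in for the user-supplied modules the abstract lists, and the fixed tolerance replaces astroABC's iteratively adapted tolerance schedule.

    ```python
    import numpy as np

    def abc_rejection(data, prior, simulate, distance, tol, n_accept):
        """Draw from the prior; keep parameters whose forward simulations
        land within tolerance of the observed data (no likelihood needed)."""
        accepted = []
        while len(accepted) < n_accept:
            theta = prior()                           # candidate parameters
            if distance(simulate(theta), data) < tol:
                accepted.append(theta)
        return np.array(accepted)
    ```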

  3. Perforation of the IVC: rule rather than exception after longer indwelling times for the Günther Tulip and Celect retrievable filters.

    PubMed

    Durack, Jeremy C; Westphalen, Antonio C; Kekulawela, Stephanie; Bhanu, Shiv B; Avrin, David E; Gordon, Roy L; Kerlan, Robert K

    2012-04-01

    This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Günther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Günther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Günther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Longer indwelling times usually result in vena caval perforation by retrievable Günther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  4. Perforation of the IVC: Rule Rather Than Exception After Longer Indwelling Times for the Guenther Tulip and Celect Retrievable Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durack, Jeremy C., E-mail: jeremy.durack@ucsf.edu; Westphalen, Antonio C.; Kekulawela, Stephanie

    Purpose: This study was designed to assess the incidence, magnitude, and impact upon retrievability of vena caval perforation by Guenther Tulip and Celect conical inferior vena cava (IVC) filters on computed tomographic (CT) imaging. Methods: Guenther Tulip and Celect IVC filters placed between July 2007 and May 2009 were identified from medical records. Of 272 IVC filters placed, 50 (23 Guenther Tulip, 46%; 27 Celect, 54%) were retrospectively assessed on follow-up abdominal CT scans performed for reasons unrelated to the filter. Computed tomography scans were examined for evidence of filter perforation through the vena caval wall, tilt, or pericaval tissue injury. Procedure records were reviewed to determine whether IVC filter retrieval was attempted and successful. Results: Perforation of at least one filter component through the IVC was observed in 43 of 50 (86%) filters on CT scans obtained between 1 and 880 days after filter placement. All filters imaged after 71 days showed some degree of vena caval perforation, often as a progressive process. Filter tilt was seen in 20 of 50 (40%) filters, and all tilted filters also demonstrated vena caval perforation. Transjugular removal was attempted in 12 of 50 (24%) filters and was successful in 11 of 12 (92%). Conclusions: Longer indwelling times usually result in vena caval perforation by retrievable Guenther Tulip and Celect IVC filters. Although infrequently reported in the literature, clinical sequelae from IVC filter components breaching the vena cava can be significant. We advocate filter retrieval as early as clinically indicated and increased attention to the appearance of IVC filters on all follow-up imaging studies.

  5. A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.

    PubMed

    Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H

    2017-01-01

    Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, its practical adoption in clinical use is not straightforward due to the high computational cost for solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple data (SIMD) computing approach that can be implemented on a graphical processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that the GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
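
    A CPU/numpy sketch of single-ensemble Hankel-SVD clutter filtering for one pixel's slow-time ensemble (the paper's contribution is mapping this computation onto GPU SIMD hardware, not reproduced here); the number of clutter components removed is an assumed tuning choice.

    ```python
    import numpy as np

    def hankel_svd_filter(x, n_clutter=1):
        """x: complex slow-time ensemble (1-D). Returns clutter-suppressed x."""
        n = len(x)
        L = n // 2 + 1
        # Hankel matrix: H[i, j] = x[i + j]
        H = np.array([x[i:i + n - L + 1] for i in range(L)])
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        s[:n_clutter] = 0.0                  # discard dominant (clutter) modes
        Hf = (U * s) @ Vh
        # Average anti-diagonals to map the filtered matrix back to a sequence
        y = np.array([np.mean(Hf[::-1, :].diagonal(k))
                      for k in range(-(L - 1), n - L + 1)])
        return y
    ```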

  6. Part 2 of a Computational Study of a Drop-Laden Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2004-01-01

    This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and that must instead be modeled. Two types of models are necessary: (1) those for the filtered source terms representing the effects of drops on the filtered flow field and (2) those for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.
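
    For reference, the constant-coefficient Smagorinsky closure named above, in its standard textbook form (tau_ij is the SGS stress, C_s the model constant, and the overbar denotes the spatial filter of width Delta):

    ```latex
    \tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk}
      = -2\,(C_s \bar{\Delta})^{2}\,\lvert \bar{S} \rvert\, \bar{S}_{ij},
    \qquad
    \bar{S}_{ij} = \tfrac{1}{2}\!\left( \partial_j \bar{u}_i + \partial_i \bar{u}_j \right),
    \qquad
    \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
    ```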

  7. Correlation Filters for Detection of Cellular Nuclei in Histopathology Images.

    PubMed

    Ahmad, Asif; Asif, Amina; Rajpoot, Nasir; Arif, Muhammad; Minhas, Fayyaz Ul Amir Afsar

    2017-11-21

    Nuclei detection in histology images is an essential part of computer aided diagnosis of cancers and tumors. It is a challenging task due to diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications but their strength has not been explored in the medical imaging domain until now. Our experimental results show that the proposed scheme gives state-of-the-art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient and do not require specialized or expensive computing hardware. A cloud based webserver of the proposed method and its python implementation can be accessed at the following URL: http://faculty.pieas.edu.pk/fayyaz/software.html#corehist.

  8. Novel Spectro-Temporal Codes and Computations for Auditory Signal Representation and Separation

    DTIC Science & Technology

    2013-02-01

    [Figure-caption fragment] Panel (c) shows the frequency responses of the tunable bandpass filter (BPF) triplets that adapt to the incoming...signal. One BPF triplet is associated with each fixed filter, such that coarse filtering by the fixed gammatone filters is followed by additional, finer...is achieved using a second layer of narrower bandpass filters (BPFs, Q=8) that emulate the filtering functions of outer hair cells (OHCs).

  9. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  10. Startup of a frozen heat pipe in one-g and micro-g environments - A proposed shuttle flight experiment

    NASA Technical Reports Server (NTRS)

    Ochterbeck, J. M.; Peterson, G. P.

    1991-01-01

    An attempt is made to determine how a heat pipe freezes under various low load and/or no load conditions in both one-g and micro-g environments. Also of interest are the mechanisms that can be used to restart the heat pipe after freezing has occurred. Particular attention is given to step function power reductions and the resulting distribution of the working fluid after freezing has occurred and the effect of noncondensible gases on the frozen configuration and the restart characteristics.

  11. A Microprocessor Development System for the ALTOS Series Microcomputers.

    DTIC Science & Technology

    1981-06-01

    location, and 3) routines for online user self-help and system use instructions. The primary consideration in the design of the HOST control program was... [remainder of excerpt: an OCR-garbled fragment of the system's assembly-language listing, whose comments reference RESTART TEST and SET ITEMIZE/TOTAL-ONLY routines]

  12. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, “particle filtering”, that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
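
    A bootstrap particle-filter sketch of the core idea: propagate many noisy copies of a model and resample by measurement likelihood. The model functions and noise levels are placeholders, and the paper's variance-based gain computation for the velocity-storage model is not reproduced.

    ```python
    import numpy as np

    def particle_filter(z_seq, f, h, n_particles=1000, q=0.1, r=0.1):
        """z_seq: measurements; f: state transition; h: measurement function."""
        rng = np.random.default_rng(0)
        particles = rng.normal(0.0, 1.0, n_particles)   # initial state cloud
        estimates = []
        for z in z_seq:
            particles = f(particles) + rng.normal(0.0, q, n_particles)
            w = np.exp(-0.5 * ((z - h(particles)) / r) ** 2)  # Gaussian likelihood
            w = (w + 1e-12) / (w + 1e-12).sum()               # normalize safely
            idx = rng.choice(n_particles, n_particles, p=w)   # resample
            particles = particles[idx]
            estimates.append(particles.mean())
        return np.array(estimates)
    ```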

  13. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
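
    The two-step structure described above, written generically; the paper's closed form specializes the conditional expectations for polynomial dynamics f and observations h, which is not reproduced here:

    ```latex
    \begin{aligned}
    \text{time update:}\quad
      & \hat{x}_{k|k-1} = \mathbb{E}\left[ f(x_{k-1}) \mid z_{1:k-1} \right],\\
      & P_{k|k-1} = \mathbb{E}\left[ (x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^{\top} \mid z_{1:k-1} \right],\\
    \text{measurement update:}\quad
      & \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - \mathbb{E}\left[ h(x_k) \mid z_{1:k-1} \right] \right).
    \end{aligned}
    ```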

  14. Spectral analysis and filtering techniques in digital spatial data processing

    USGS Publications Warehouse

    Pan, Jeng-Jong

    1989-01-01

    A filter toolbox has been developed at the EROS Data Center, US Geological Survey, for retrieving or removing specified frequency information from two-dimensional digital spatial data. This filter toolbox provides capabilities to compute the power spectrum of given data and to design various filters in the frequency domain. Three types of filters are available in the toolbox: point filter, line filter, and area filter. Both the point and line filters employ Gaussian-type notch filters, and the area filter includes the capabilities to perform high-pass, band-pass, low-pass, and wedge filtering techniques. These filters are applied to the analysis of satellite multispectral scanner data, airborne visible and infrared imaging spectrometer (AVIRIS) data, gravity data, and digital elevation model (DEM) data. -from Author
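
    A numpy sketch of a Gaussian-type notch ("point") filter of the kind the toolbox provides: attenuate a chosen spatial frequency and its symmetric mate in the centered 2-D spectrum. The notch width and frequency offsets are illustrative.

    ```python
    import numpy as np

    def gaussian_notch(shape, u0, v0, sigma=3.0):
        """Build a notch filter for frequency (u0, v0), offsets from DC."""
        rows, cols = shape
        u = np.arange(rows)[:, None] - rows // 2
        v = np.arange(cols)[None, :] - cols // 2
        d1 = (u - u0) ** 2 + (v - v0) ** 2      # distance to the notch
        d2 = (u + u0) ** 2 + (v + v0) ** 2      # ... and to its symmetric mate
        return (1 - np.exp(-d1 / (2 * sigma**2))) * (1 - np.exp(-d2 / (2 * sigma**2)))

    def apply_filter(img, H):
        spec = np.fft.fftshift(np.fft.fft2(img))
        return np.real(np.fft.ifft2(np.fft.ifftshift(spec * H)))
    ```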

  15. GRIM-Filter: Fast seed location filtering in DNA read mapping using processing-in-memory technologies.

    PubMed

    Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur

    2018-05-09

    Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. GRIM-Filter is a universal seed location filter that can be applied to any read mapper. We hope that our results provide inspiration for new works to design other bioinformatics algorithms that take advantage of emerging technologies and new processing paradigms, such as processing-in-memory using 3D-stacked memory devices.
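
    A toy Python sketch of the coarse-grained presence test GRIM-Filter performs (in hardware, inside 3D-stacked memory); Python sets stand in for the per-bin k-mer bitvectors, bin size, k, and the acceptance threshold are illustrative, and reads crossing bin boundaries are not handled.

    ```python
    def build_bins(reference, bin_size=100, k=5):
        """Split the reference into bins; record the k-mers each bin contains."""
        bins = []
        for start in range(0, len(reference), bin_size):
            seg = reference[start:start + bin_size + k - 1]  # overlap by k-1
            bins.append({seg[i:i + k] for i in range(len(seg) - k + 1)})
        return bins

    def passes_filter(read, location, bins, bin_size=100, k=5, threshold=0.8):
        """Keep a seed location only if enough of the read's k-mers occur
        in the bin covering that location (otherwise skip alignment)."""
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        binset = bins[location // bin_size]
        hits = sum(kmer in binset for kmer in kmers)
        return hits >= threshold * len(kmers)
    ```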

  16. Computational Investigations of Noise Suppression in Subsonic Round Jets

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    NASA Grant NAG1-1802, originally submitted in June 1996 as a two-year proposal, was awarded one year's funding by NASA LaRC for the period 5 Oct., 1996, through 4 Oct., 1997. Because of the unavailability (from IT at NASA ARC) of sufficient supercomputer time in fiscal 1998 to complete the computational goals of the second year of the original proposal (estimated to be at least 400 Cray C-90 CPU hours), those goals have been appropriately amended, and a new proposal has been submitted to LaRC as a follow-on to NAG1-1802. The current report documents the activities and accomplishments on NAG1-1802 during the one-year period from 5 Oct., 1996, through 4 Oct., 1997. NASA Grant NAG1-1802, and its predecessor, NAG1-1772, have been directed toward adapting the numerical tool of Large-Eddy Simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of SubGrid-Scale (SGS) models that incorporate time-domain filters. The author is unaware of any previous attempt at purely time-filtered LES; however, Aldama and Dakhoul and Bedford have considered approaches that combine both spatial and temporal filtering. In our view, filtering in both space and time is redundant, because removal of high frequencies effects the removal of small spatial scales and vice versa.

  17. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
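
    A scikit-learn sketch of one of the better-performing combinations reported above (PSD features with an SVM classifier); the genetic-algorithm feature selection is replaced here by a simple univariate selector, and the spatial-filtering stage (ICA/SLD) is omitted for brevity.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def psd_features(trials, fs=256):
        """trials: (n_trials, n_channels, n_samples) -> flattened PSD features."""
        feats = []
        for trial in trials:
            _, pxx = welch(trial, fs=fs, nperseg=128)  # per-channel power spectra
            feats.append(pxx.reshape(-1))
        return np.array(feats)

    # Usage (X_train: raw trials, y_train: left/right hand labels):
    # clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="rbf"))
    # clf.fit(psd_features(X_train), y_train)
    ```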

  18. Dense grid sibling frames with linear phase filters

    NASA Astrophysics Data System (ADS)

    Abdelnour, Farras

    2013-09-01

    We introduce new 5-band dyadic sibling frames with a dense time-frequency grid. Given a lowpass filter satisfying certain conditions, the remaining filters are obtained using spectral factorization. The analysis and synthesis filterbanks share the same lowpass and bandpass filters but have different and oversampled highpass filters. This leads to wavelets approximating shift-invariance. The filters are FIR, have linear phase, and the resulting wavelets have vanishing moments. The filters are designed using a spectral factorization method. The proposed method leads to smooth limit functions with higher approximation order, and computationally stable filterbanks.

  19. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed with the assumption that the SLM's have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLM's. Therefore it is necessary to incorporate the SLM constraints in the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach for the design of an SLM-constrained distortion-invariant filter in the presence of the input SLM is developed. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and the false-class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  20. QMRPF-UKF Master-Slave Filtering for the Attitude Determination of Micro-Nano Satellites Using Gyro and Magnetometer

    PubMed Central

    Cui, Peiling; Zhang, Huijuan

    2010-01-01

    In this paper, the problem of estimating the attitude of a micro-nano satellite, obtaining geomagnetic field measurements via a three-axis magnetometer and angular rate via a gyro, is considered. For this application, a QMRPF-UKF master-slave filtering method is proposed, which uses the QMRPF and UKF algorithms to estimate the rotation quaternion and the gyro bias parameters, respectively. The computational complexity related to the particle filtering technique is eliminated by introducing a multiresolution approach that permits a significant reduction in the number of particles. This renders the QMRPF-UKF master-slave filter computationally efficient and enables its implementation with a remarkably small number of particles. Simulation results using QMRPF-UKF are given, which demonstrate the validity of the QMRPF-UKF nonlinear filter. PMID:22163448

  1. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
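
    The "regularization-based linear algebra" step can be illustrated with a Tikhonov-regularized least-squares inversion. In the sketch below, the transfer matrix A is a random stand-in for the calibrated spectral-to-pixel code, and all sizes and the regularization weight are assumptions.

      # Hypothetical Tikhonov inversion of a calibrated spectral-to-pixel code.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((64, 30))        # assumed calibration: 30 bands -> 64 pixels
      b = A @ rng.random(30) + 0.01 * rng.standard_normal(64)   # noisy readout

      lam = 1e-2                      # regularization weight, tuned in practice
      # Solve min ||Ax - b||^2 + lam * ||x||^2 via an augmented least squares.
      A_aug = np.vstack([A, np.sqrt(lam) * np.eye(30)])
      b_aug = np.concatenate([b, np.zeros(30)])
      x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)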

  2. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane; signal to noise ratio, including the correlation detector noise as well as the colored additive input noise; peak to correlation energy, defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot; and peak to total energy, which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential; therefore, filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two-dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is found in "Numerical Recipes in C: The Art of Scientific Computing," available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.

  3. Discrete square root filtering - A survey of current techniques.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.

    1971-01-01

    Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed that of the conventional filter by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
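
    As one concrete example of the square-root idea (not necessarily one of the four implementations compared in the survey), Potter's measurement update propagates a square root S of the covariance, P = S S^T, for a scalar measurement; a minimal sketch:

      # Potter's square-root update for a scalar measurement z = h @ x + v,
      # v ~ N(0, r), with P = S @ S.T; illustrative sketch only.
      import numpy as np

      def potter_update(x, S, z, h, r):
          phi = S.T @ h
          a = 1.0 / (phi @ phi + r)
          gamma = a / (1.0 + np.sqrt(a * r))
          x_new = x + a * (S @ phi) * (z - h @ x)       # gain K = a * S @ phi
          S_new = S - gamma * np.outer(S @ phi, phi)    # updated square root
          return x_new, S_new

    Squaring S_new reproduces the conventional covariance update P - KHP, but the filter only ever stores S, which is the source of the roughly doubled effective precision noted above.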

  4. CIDR

    Science.gov Websites

    Calculate eigenvectors to adjust for population stratification in association analyses. SNP filters are developed, including filters for missing data, duplicate and Mendelian errors, minor allele frequency, and Hardy-Weinberg equilibrium; the filters are applied to genotype and phenotype datasets and checked by repeating the pre-computed QC results.

  5. ART EXPERIENCED PATIENTS FOR TACKLING ATTRITION FROM HIV CARE: A MULTI-SITE COHORT STUDY.

    PubMed

    Teklu, Alula Meressa; Yirdaw, Kesetebirhan Delele

    2016-10-01

    Retention of patients on anti-retroviral treatment in Ethiopia is a challenge. Using treatment-experienced patients to prepare patients and re-engage them when they miss follow-ups is recommended, but evidence on its effectiveness is limited. This study evaluated its effectiveness. A retrospective cohort study in 10 randomly selected health facilities was conducted to compare outcomes before and after initiation of the adherence supporters program in HIV care and treatment from September 2001 to August 2013. Data analysis involved Kaplan-Meier survival and log-rank test analysis on STATA statistical software version 12 to compare survival experiences. Of 18,835 records that were available, 938 (4.36%) records with missing values were excluded and data from the remaining 17,897 were analyzed. The incidence of first instance of loss to follow-up was 22.2 per 100 person-years (95% confidence interval 21.7-22.7). The risk of missing follow-ups after initiation of the program was high (hazard ratio 1.22, P < 0.001). The incidence of restarting after missed follow-ups was 23 per 100 PY (95% CI 22.2-24.0). The likelihood of restarting after missed follow-ups was four times higher during the period adherence supporters were present (P < 0.001). Patients who stayed longer in care before missing follow-ups were more likely to restart (5.7 times the chance of restarting treatment for those whose first loss to follow-up occurred at ≥12 months compared to <3 months, P < 0.001). Time to restarting treatment was shorter after the initiation of the adherence supporters program (median 37 vs. 115 days). The risk of recurrence of loss to follow-up in the presence of adherence supporters was significantly higher than when there were no adherence supporters: 38.8 (95% CI 36.3-41.6) per 100 PY vs. 26.1 (95% CI 19.8-34.4) per 100 PY, respectively. Adherence supporters were effective in improving re-engagement of patients in treatment and care after they were lost to follow-up. Yet, prevention of loss to follow-up has remained a challenge to the program.

  6. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full block frequency domain ANC algorithms have several disadvantages, such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time domain FXLMS algorithm.

  7. Reduced-Order Kalman Filtering for Processing Relative Measurements

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2008-01-01

    A study in Kalman-filter theory has led to a method of processing relative measurements to estimate the current state of a physical system, using less computation than has previously been thought necessary. As used here, relative measurements signifies measurements that yield information on the relationship between a later and an earlier state of the system. An important example of relative measurements arises in computer vision: information on relative motion is extracted by comparing images taken at two different times. Relative measurements do not directly fit into standard Kalman-filter theory, in which measurements are restricted to those indicative of only the current state of the system. One approach heretofore followed in utilizing relative measurements in Kalman filtering, denoted state augmentation, involves augmenting the state of the system at the earlier of two time instants and then propagating the state to the later time instant. While state augmentation is conceptually simple, it can also be computationally prohibitive because it doubles the number of states in the Kalman filter. When processing a relative measurement, if one were to follow the state-augmentation approach as practiced heretofore, one would find it necessary to propagate the full augmented-state Kalman filter from the earlier time to the later time and then select out the reduced-order components. The main result of the study reported here is proof of a property called reduced-order equivalence (ROE). The main consequence of ROE is that it is not necessary to augment with the full state, but, rather, only the portion of the state that is explicitly used in the partial relative measurement. In other words, it suffices to select the reduced-order components first and then propagate the partial augmented-state Kalman filter from the earlier time to the later time; the amount of computation needed to do this can be substantially less than that needed for propagating the full augmented Kalman state filter.

  8. Adaptive Laplacian filtering for sensorimotor rhythm-based brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Lu, Jun; McFarland, Dennis J.; Wolpaw, Jonathan R.

    2013-02-01

    Objective. Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as common average reference (CAR), Laplacian (LAP) filter or common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of EEG. Here, we test the hypothesis that a new filter design, called an ‘adaptive Laplacian (ALAP) filter’, can provide better performance for SMR-based BCIs. Approach. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Main results. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performances of ALAP filter to CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Significance. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.

  9. A Prospective Observational Study on the Predictive Value of Serum Cystatin C for Successful Weaning from Continuous Renal Replacement Therapy.

    PubMed

    Kim, Chang Seong; Bae, Eun Hui; Ma, Seong Kwon; Kim, Soo Wan

    2018-05-30

    There is a paucity of literature investigating biomarkers associated with successful renal recovery following continuous renal replacement therapy (CRRT). Our study aimed to identify potential renal biomarkers or clinical indicators that could predict successful weaning from CRRT. We conducted a prospective, observational study of 110 patients who had received CRRT and were weaned after renal recovery. Patients were considered to have successfully weaned from CRRT once there was no need for renal replacement therapy (RRT) for at least 14 days. Patients who had to restart dialysis within 14 days were considered unsuccessful. Of the 110 patients evaluated, 89 (80.9%) were successfully weaned from CRRT. These patients had lower serum cystatin C (CysC) levels and higher urine output than the group that restarted RRT at the time of CRRT cessation. However, the levels of serum creatinine and neutrophil gelatinase-associated lipocalin were not significantly lower in the successful group compared to the restart-RRT group. A multivariable logistic regression showed that serum CysC was an independent predictor of successful weaning from CRRT. Furthermore, in a multivariable Cox proportional hazards analysis, the group that was successfully weaned from CRRT had lower in-hospital mortality compared to the restart-RRT group. Serum CysC, at the time of CRRT cessation, is an independent predictor of successful weaning from CRRT in critically ill patients with acute kidney injury. © 2018 The Author(s). Published by S. Karger AG, Basel.

  10. Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark; Baker, Benjamin; Ortensi, Javier

    Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are presently neither mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between the modeling and simulation staff and the restart, operations, instrumentation, and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.

  11. Pulmonary Artery Pseudoaneurysm Secondary to Lung Inflammation.

    PubMed

    Ishimoto, Shinichirou; Sakurai, Hiroyuki; Higure, Ryouta; Kawachi, Riken; Shimamura, Mie

    2018-01-15

    Pulmonary artery aneurysms (PAA) and pseudoaneurysms (PAP) are caused by infections, vasculitis, trauma, pulmonary hypertension, congenital heart disease, and connective tissue disease. Most such aneurysms occur in the trunk or major branches of the pulmonary artery, while the peripheral type is less common. The treatment modalities are medical therapy, surgery, and percutaneous catheter embolization. The mortality rate associated with rupture is approximately 50%. We encountered a 53-year-old man with a pulmonary artery pseudoaneurysm secondary to pneumonia and cavity formation during chemotherapy for acute myeloid leukemia (AML). Contrast-enhanced chest computed tomography (CT) and pulmonary angiography were very useful in diagnosis. He was treated with right middle and lower lobectomy. After 1 month of follow-up, he could restart additional chemotherapy.

  12. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interfaces. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets, state-of-the-art spatial filtering techniques, and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with a lower Rayleigh quotient.
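
    A hedged sketch of the quantity in question: the Rayleigh quotient of a spatial filter w for class covariances C1 and C2 is (w' C1 w)/(w' C2 w), and the usual CSP construction extremizes it via a generalized eigendecomposition. The covariance matrices below are simulated stand-ins, not EEG data.

      # Rayleigh quotient of a spatial filter w for class covariances C1, C2.
      import numpy as np
      from scipy.linalg import eigh

      def rayleigh_quotient(w, C1, C2):
          return (w @ C1 @ w) / (w @ C2 @ w)

      rng = np.random.default_rng(1)
      X1 = rng.standard_normal((64, 2000)); C1 = X1 @ X1.T / 2000
      X2 = rng.standard_normal((64, 2000)); C2 = X2 @ X2.T / 2000

      # CSP-style filters: solve C1 w = lambda (C1 + C2) w; the extreme
      # eigenvalues correspond to the lowest/highest Rayleigh quotients.
      vals, vecs = eigh(C1, C1 + C2)
      w_low, w_high = vecs[:, 0], vecs[:, -1]
      print(rayleigh_quotient(w_low, C1, C2), rayleigh_quotient(w_high, C1, C2))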

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% over a 50 ps simulation.

  14. Hypersonic entry vehicle state estimation using nonlinearity-based adaptive cubature Kalman filters

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Xin, Ming

    2017-05-01

    Guidance, navigation, and control of a hypersonic vehicle landing on Mars rely on precise state feedback information, which is obtained from state estimation. The high uncertainty and nonlinearity of the entry dynamics make the estimation a very challenging problem. In this paper, a new adaptive cubature Kalman filter is proposed for state trajectory estimation of a hypersonic entry vehicle. This new adaptive estimation strategy is based on a measure of the nonlinearity of the stochastic system. According to the severity of nonlinearity along the trajectory, either the high-degree cubature rule or the conventional third-degree cubature rule is adaptively used in the cubature Kalman filter. This strategy has the benefit of attaining higher estimation accuracy only when necessary, without causing excessive computational load. The simulation results demonstrate that the proposed adaptive filter exhibits better performance than the conventional third-degree cubature Kalman filter while maintaining the same performance as the uniform high-degree cubature Kalman filter but with lower computational complexity.
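
    For reference, the conventional third-degree spherical-radial cubature rule mentioned above uses 2n equally weighted points placed at ±sqrt(n) along the columns of a covariance square root; a minimal NumPy sketch (not the paper's implementation):

      # Third-degree cubature points for state x, covariance P (P = S @ S.T).
      import numpy as np

      def cubature_points(x, P):
          n = x.size
          S = np.linalg.cholesky(P)                          # assumes P > 0
          xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
          return x[:, None] + S @ xi                         # (n, 2n) points

      # Each point carries weight 1/(2n); propagating the points through the
      # dynamics and averaging approximates the predicted mean and covariance.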

  15. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis, for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this article, adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
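
    A minimal example of the on-line coefficient update the review describes is the classic LMS algorithm; the tap count and step size below are arbitrary assumptions.

      # LMS adaptive filter: w converges toward the optimal coefficients in
      # the least-mean-square sense, with no prior statistics required.
      import numpy as np

      def lms(x, d, n_taps=8, mu=0.01):
          """x: reference input, d: desired signal; returns the error e = d - y."""
          w = np.zeros(n_taps)
          e = np.zeros(len(x))
          for n in range(n_taps, len(x)):
              u = x[n - n_taps:n][::-1]        # most recent samples first
              y = w @ u                        # filter output
              e[n] = d[n] - y
              w += 2 * mu * e[n] * u           # stochastic-gradient update
          return e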

  16. Neuromorphic Kalman filter implementation in IBM’s TrueNorth

    NASA Astrophysics Data System (ADS)

    Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.

    2017-10-01

    Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.

  17. Effect of hydraulic retention time on deterioration/restarting of sludge anaerobic digestion: Extracellular polymeric substances and microbial response.

    PubMed

    Wei, Liangliang; An, Xiaoyan; Wang, Sheng; Xue, Chonghua; Jiang, Junqiu; Zhao, Qingliang; Kabutey, Felix Tetteh; Wang, Kun

    2017-11-01

    In this study, the transformation of sludge-related extracellular polymeric substances (EPS) during mesophilic anaerobic digestion was characterized to assess the effect of hydraulic retention time (HRT) on reactor deterioration/restarting. Experimental HRT variations from 20 to 15 and 10 d were implemented for deterioration, and from 10 to 20 d for restarting. Long-term digestion at the lowest HRT (10 d) resulted in significant accumulation of hydrolyzed hydrophobic materials and volatile fatty acids in the supernatants. Moreover, hydrolysis of sludge EPS, especially of protein-related substances, was less efficient, which contributed to the deterioration of the digester. Aceticlastic species of Methanosaetaceae decreased from 36.3% to 27.6% with decreasing HRT (20 to 10 d), while hydrogenotrophic methanogens (Methanomicrobiales and Methanobacteriales) increased from 30.4% to 38.3%. Protein- and soluble microbial byproduct-related fluorophores in the feed sludge for the anaerobic digester changed insignificantly at high HRT, whereas the fluorescent intensity of fulvic acid-like components declined sharply once the digestion deteriorated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Investigation of Loop Heat Pipe Survival and Restart After Extreme Cold Environment Exposure

    NASA Technical Reports Server (NTRS)

    Golliher, Eric; Ku, Jentung; Licari, Anthony; Sanzi, James

    2010-01-01

    NASA plans human exploration near the South Pole of the Moon, and other locations where the environment is extremely cold. This paper reports on the heat transfer performance of a loop heat pipe (LHP) exposed to extreme cold under the simulated reduced gravitational environment of the Moon. A common method of spacecraft thermal control is to use a LHP with ammonia working fluid. Typically, a small amount of heat is provided either by electrical heaters or by environmental design, such that the LHP condenser temperature never drops below the freezing point of ammonia. The concern is that a liquid-filled, frozen condenser would not restart, or that a thawing condenser would damage the tubing due to the expansion of ammonia upon thawing. This paper reports the results of an experimental investigation of a novel approach to avoid these problems. The LHP compensation chamber (CC) is conditioned such that all the ammonia liquid is removed from the condenser and the LHP is nonoperating. The condenser temperature is then reduced to below that of the ammonia freezing point. The LHP is then successfully restarted.

  19. Steady, oscillatory, and unsteady subsonic Aerodynamics, production version 1.1 (SOUSSA-P1.1). Volume 2: User/programmer manual

    NASA Technical Reports Server (NTRS)

    Smolka, S. A.; Preuss, R. D.; Tseng, K.; Morino, L.

    1980-01-01

    A user/programmer manual for the computer program SOUSSA P 1.1 is presented. The program was designed to provide accurate and efficient evaluation of steady and unsteady loads on aircraft having arbitrary shapes and motions, including structural deformations. These design goals were in part achieved through the incorporation of the data handling capabilities of the SPAR finite element structural analysis computer program. As a further result, SOUSSA P possesses an extensive checkpoint/restart facility. The programmer's portion of this manual includes the overlay/subroutine hierarchy, logical flow of control, definitions of SOUSSA P 1.1 FORTRAN variables, and definitions of SOUSSA P 1.1 subroutines. The purpose of the SOUSSA P 1.1 modules, input data to the program, output of the program, hardware/software requirements, error detection and reporting capabilities, job control statements, a summary of the procedure for running the program, and two test cases including input, output, and listings are described in the user-oriented portion of the manual.

  20. HO2 rovibrational eigenvalue studies for nonzero angular momentum

    NASA Astrophysics Data System (ADS)

    Wu, Xudong T.; Hayes, Edward F.

    1997-08-01

    An efficient parallel algorithm is reported for determining all bound rovibrational energy levels for the HO2 molecule for nonzero angular momentum values, J=1, 2, and 3. Performance tests on the CRAY T3D indicate that the algorithm scales almost linearly when up to 128 processors are used. Sustained performance levels of up to 3.8 Gflops have been achieved using 128 processors for J=3. The algorithm uses a direct product discrete variable representation (DVR) basis and the implicitly restarted Lanczos method (IRLM) of Sorensen to compute the eigenvalues of the polyatomic Hamiltonian. Since the IRLM is an iterative method, it does not require storage of the full Hamiltonian matrix—it only requires the multiplication of the Hamiltonian matrix by a vector. When the IRLM is combined with a formulation such as DVR, which produces a very sparse matrix, both memory and computation times can be reduced dramatically. This algorithm has the potential to achieve even higher performance levels for larger values of the total angular momentum.
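
    The implicitly restarted Lanczos method needs only matrix-vector products, so a sparse DVR Hamiltonian never has to be stored densely. A serial analogue of this approach is available through ARPACK via SciPy's eigsh; the tridiagonal model matrix below is purely illustrative and is not the HO2 Hamiltonian.

      # IRLM via ARPACK (scipy.sparse.linalg.eigsh) on a sparse model matrix.
      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import eigsh

      n = 100_000
      H = diags([np.full(n - 1, -1.0), np.linspace(0.0, 10.0, n),
                 np.full(n - 1, -1.0)], offsets=[-1, 0, 1])
      # Six lowest eigenvalues; only H @ v products are ever formed.
      vals = eigsh(H, k=6, which='SA', return_eigenvectors=False)
      print(np.sort(vals))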

  1. Coevolution in management fashion: an agent-based model of consultant-driven innovation.

    PubMed

    Strang, David; David, Robert J; Akhlaghpour, Saeed

    2014-07-01

    The rise of management consultancy has been accompanied by increasingly marked faddish cycles in management techniques, but the mechanisms that underlie this relationship are not well understood. The authors develop a simple agent-based framework that models innovation adoption and abandonment on both the supply and demand sides. In opposition to conceptions of consultants as rhetorical wizards who engineer waves of management fashion, firms and consultants are treated as boundedly rational actors who chase the secrets of success by mimicking their highest-performing peers. Computational experiments demonstrate that consultant-driven versions of this dynamic in which the outcomes of firms are strongly conditioned by their choice of consultant are robustly faddish. The invasion of boom markets by low-quality consultants undercuts popular innovations while simultaneously restarting the fashion cycle by prompting the flight of high-quality consultants into less densely occupied niches. Computational experiments also indicate conditions involving consultant mobility, aspiration levels, mimic probabilities, and client-provider matching that attenuate faddishness.

  2. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, and based on analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's passband is flatter, the transition band becomes steeper, and the stopband attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
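
    For orientation, a plain (non-parallel) N-section CIC interpolator consists of comb sections at the low rate, zero-stuffing by the factor R, and integrator sections at the high rate. The sketch below assumes unit differential delay and is a reference structure, not the paper's 8× parallel algorithm.

      # Reference CIC interpolator: N combs (low rate), zero-stuff, N integrators.
      import numpy as np

      def cic_interpolate(x, R=8, N=3):
          y = x.astype(float)
          for _ in range(N):                     # comb sections, delay M = 1
              y = y - np.concatenate(([0.0], y[:-1]))
          up = np.zeros(len(y) * R)              # expand by the factor R
          up[::R] = y
          for _ in range(N):                     # integrator sections
              up = np.cumsum(up)
          return up / R ** (N - 1)               # normalize the DC gain R**(N-1)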

  3. Computational Fluid Dynamics of Choanoflagellate Filter-Feeding

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Seyed Saeed; Walther, Jens; Nielsen, Lasse Tore; Kiorboe, Thomas; Dolger, Julia; Andersen, Anders

    2017-11-01

    Choanoflagellates are unicellular aquatic organisms with a single flagellum that drives a feeding current through a funnel-shaped collar filter on which bacteria-sized prey are caught. Using computational fluid dynamics (CFD) we model the beating flagellum and the complex filter flow of the choanoflagellate Diaphanoeca grandis. Our CFD simulations based on the current understanding of the morphology underestimate the experimentally observed clearance rate by more than an order of magnitude: The beating flagellum is simply unable to draw enough water through the fine filter. Our observations motivate us to suggest a radically different filtration mechanism that requires a flagellar vane (sheet), and addition of a wide vane in our CFD model allows us to correctly predict the observed clearance rate.

  4. Effects of a combined Diesel particle filter-DeNOx system (DPN) on reactive nitrogen compounds emissions: a parameter study.

    PubMed

    Heeb, Norbert V; Haag, Regula; Seiler, Cornelia; Schmid, Peter; Zennegg, Markus; Wichser, Adrian; Ulrich, Andrea; Honegger, Peter; Zeyer, Kerstin; Emmenegger, Lukas; Zimmerli, Yan; Czerwinski, Jan; Kasper, Markus; Mayer, Andreas

    2012-12-18

    The impact of a combined diesel particle filter-deNO(x) system (DPN) on emissions of reactive nitrogen compounds (RNCs) was studied varying the urea feed factor (α), temperature, and residence time, which are key parameters of the deNO(x) process. The DPN consisted of a platinum-coated cordierite filter and a vanadia-based deNO(x) catalyst supporting selective catalytic reduction (SCR) chemistry. Ammonia (NH₃) is produced in situ from thermolysis of urea and hydrolysis of isocyanic acid (HNCO). HNCO and NH₃ are both toxic and highly reactive intermediates. The deNO(x) system was active only part of the time in the ISO 8178/4 C1 cycle. Urea injection was stopped and restarted twice. Mean NO and NO₂ conversion efficiencies were 80%, 95%, 97% and 43%, 87%, 99%, respectively, for α = 0.8, 1.0, and 1.2. HNCO emissions increased from 0.028 g/h engine-out to 0.18, 0.25, and 0.26 g/h at α = 0.8, 1.0, and 1.2, whereas NH₃ emissions increased from <0.045 to 0.12, 1.82, and 12.8 g/h, with maxima at the highest temperatures and shortest residence times. Most HNCO is released at intermediate residence times (0.2-0.3 s) and temperatures (300-400 °C). Total RNC efficiencies are highest at α = 1.0, when comparable amounts of reduced and oxidized compounds are released. The DPN represents the most advanced system studied so far under the VERT protocol, achieving high conversion efficiencies for particles, NO, NO₂, CO, and hydrocarbons. However, we observed a trade-off between deNO(x) efficiency and secondary emissions. Therefore, it is important to adapt such DPN technology to specific application conditions to take advantage of reduced NO(x) and particle emissions while avoiding NH₃ and HNCO slip.

  5. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    Crosstalk cancellation systems (CCS) play a vital role in spatial sound reproduction using multichannel loudspeakers. However, this technique has yet to see widespread practical use due to its heavy computational load. To reduce the computational load, a bandlimited CCS is presented in this paper on the basis of a subband filtering approach. A pseudoquadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room. The experiments include two parts: a source localization test and a sound quality test. Analysis of variance (ANOVA) is applied to process the data and assess the statistical significance of the subjective experiments. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, while the computational load was reduced by approximately eighty percent.

  6. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  7. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less intensive than existing algorithms such as the Hough transform.

  8. The effects of the Asselin time filter on numerical solutions to the linearized shallow-water wave equations

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.; Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    In the present investigation, a one-dimensional linearized analysis is used to determine the effect of Asselin's (1972) time filter on both the computational stability and the phase error of numerical solutions of the shallow water wave equations, in cases with diffusion but without rotation. An attempt has been made to establish the approximately optimal values of the filtering parameter nu for each of the 'lagged', Dufort-Frankel, and Crank-Nicholson diffusion schemes, suppressing the computational wave mode without materially altering the physical wave mode. It is determined that in the presence of diffusion, the optimal filter parameter depends on whether waves are undergoing significant propagation. When moderate propagation is present, with or without diffusion, the Asselin filter has little effect on the spatial phase lag of the physical mode for the leapfrog advection scheme combined with any of the three diffusion schemes considered.
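
    For concreteness, the Asselin filter nudges the middle leapfrog time level toward the average of its neighbours. A minimal sketch follows, with an assumed linear-decay tendency f standing in for the shallow-water dynamics:

      # Leapfrog integration with Asselin time filtering of the middle level.
      import numpy as np

      def f(u):                        # stand-in tendency (linear decay)
          return -0.1 * u

      dt, steps, nu = 0.1, 200, 0.05   # nu is the filtering parameter
      u = np.zeros(steps + 1)
      u[0] = 1.0
      u[1] = u[0] + dt * f(u[0])       # forward start-up step
      for n in range(1, steps):
          u[n + 1] = u[n - 1] + 2 * dt * f(u[n])               # leapfrog
          u[n] += nu * (u[n - 1] - 2 * u[n] + u[n + 1])        # Asselin filter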

  9. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
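
    As an example of the closed-form synthesis such programs perform, the element values (g-parameters) of a Butterworth low-pass prototype follow a standard formula; a small sketch (in Python for illustration, whereas the original programs are FORTRAN):

      # Normalized Butterworth low-pass prototype elements g1..gn, with unit
      # terminations g0 = g_{n+1} = 1 (standard filter-synthesis formula).
      import numpy as np

      def butterworth_prototype(n):
          k = np.arange(1, n + 1)
          g = 2.0 * np.sin((2 * k - 1) * np.pi / (2 * n))
          return np.concatenate(([1.0], g, [1.0]))

      print(butterworth_prototype(3))   # [1. 1. 2. 1. 1.]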

  10. Optical implementation of the synthetic discriminant function

    NASA Astrophysics Data System (ADS)

    Butler, S.; Riggins, J.

    1984-10-01

    Much attention is focused on the use of coherent optical pattern recognition (OPR) using matched spatial filters for robotics and intelligent systems. The OPR problem consists of three aspects -- information input, information processing, and information output. This paper discusses the information processing aspect which consists of choosing a filter to provide robust correlation with high efficiency. The filter should ideally be invariant to image shift, rotation and scale, provide a reasonable signal-to-noise (S/N) ratio and allow high throughput efficiency. The physical implementation of a spatial matched filter involves many choices. These include the use of conventional holograms or computer-generated holograms (CGH) and utilizing absorption or phase materials. Conventional holograms inherently modify the reference image by non-uniform emphasis of spatial frequencies. Proper use of film nonlinearity provides improved filter performance by emphasizing frequency ranges crucial to target discrimination. In the case of a CGH, the emphasis of the reference magnitude and phase can be controlled independently of the continuous tone or binary writing processes. This paper describes computer simulation and optical implementation of a geometrical shape and a Synthetic Discriminant Function (SDF) matched filter. The authors chose the binary Allebach-Keegan (AK) CGH algorithm to produce actual filters. The performances of these filters were measured to verify the simulation results. This paper provides a brief summary of the matched filter theory, the SDF, CGH algorithms, Phase-Only-Filtering, simulation procedures, and results.

  11. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation that determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal; for this reason, the proposed evolutionary optimization technique, which is oriented toward dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  12. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    PubMed

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is well known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise in the system state and measurement data.

  13. Final Report for Geometric Observers and Particle Filtering for Controlled Active Vision

    DTIC Science & Technology

    2016-12-15

    Final Report (15-12-2016), covering 01 Sep 2006 - 09 May 2011: Geometric Observers and Particle Filtering for Controlled Active Vision, by Allen R. Tannenbaum, School of Electrical and Computer Engineering, Georgia Institute of Technology (report 49414-NS.1). The table of contents includes sections on Conformal Area Minimizing Flows and Particle Filters.

  14. Novel Digital Signal Processing and Detection Techniques.

    DTIC Science & Technology

    1980-09-01

    Submitted by: Bede Liu, Department of Electrical Engineering and Computer Science, Princeton University. The report concerns the use of recursive filters for decimation and interpolation [11, 12]. A filter structure for realizing low-pass filters is developed [6, 7]; by employing decimation and interpolation, the filter uses only coefficients 0, +1, and -1.

  15. IGMtransmission: Transmission curve computation

    NASA Astrophysics Data System (ADS)

    Harrison, Christopher M.; Meiksin, Avery; Stock, David

    2015-04-01

    IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

  16. Full-color large-scaled computer-generated holograms using RGB color filters.

    PubMed

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  17. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In a collaborative filtering computation, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) presented an effective algorithm for computing the SVD toward a solution of an open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
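
    Webb's algorithm learns the low-rank factors by stochastic gradient steps over the observed ratings only; a CPU-side sketch of that inner update is below (learning rate, regularization, and initialization are assumed values), the kind of loop a GPU version would parallelize.

      # Funk-style incremental SVD: gradient steps on observed ratings only.
      import numpy as np

      def funk_svd(ratings, n_users, n_items, k=10, lr=0.005, reg=0.02, epochs=20):
          """ratings: iterable of (user, item, value) triples."""
          P = np.full((n_users, k), 0.1)       # user factors
          Q = np.full((n_items, k), 0.1)       # item factors
          for _ in range(epochs):
              for u, i, r in ratings:
                  pu = P[u].copy()
                  e = r - pu @ Q[i]            # prediction error
                  P[u] += lr * (e * Q[i] - reg * pu)
                  Q[i] += lr * (e * pu - reg * Q[i])
          return P, Q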

  18. Development of a New Arterial-Line Filter Design Using Computational Fluid Dynamics Analysis

    PubMed Central

    Herbst, Daniel P.; Najm, Hani K.

    2012-01-01

    Abstract: Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to effect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics software were used to inform decisions on model refinements and how to achieve initial design goals of ≤225 mL prime volume and ≤500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56° from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking. PMID:23198394

  19. Development of a new arterial-line filter design using computational fluid dynamics analysis.

    PubMed

    Herbst, Daniel P; Najm, Hani K

    2012-09-01

    Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to effect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics software were used to inform decisions on model refinements and how to achieve initial design goals of ≤225 mL prime volume and ≤500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56 degrees from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking.

  20. Efficient multichannel acoustic echo cancellation using constrained tap selection schemes in the subband domain

    NASA Astrophysics Data System (ADS)

    Desiraju, Naveen Kumar; Doclo, Simon; Wolff, Tobias

    2017-12-01

    Acoustic echo cancellation (AEC) is a key speech enhancement technology in speech communication and voice-enabled devices. AEC systems employ adaptive filters to estimate the acoustic echo paths between the loudspeakers and the microphone(s). In applications involving surround sound, the computational complexity of an AEC system may become demanding due to the multiple loudspeaker channels and the necessity of using long filters in reverberant environments. In order to reduce the computational complexity, the approach of partially updating the AEC filters is considered in this paper. In particular, we investigate tap selection schemes which exploit the sparsity present in the loudspeaker channels for partially updating subband AEC filters. The potential for exploiting signal sparsity across three dimensions, namely time, frequency, and channels, is analyzed. A thorough analysis of different state-of-the-art tap selection schemes is performed and insights about their limitations are gained. A novel tap selection scheme is proposed which overcomes these limitations by exploiting signal sparsity while not ignoring any filters for update in the different subbands and channels. Extensive simulation results using both artificial as well as real-world multichannel signals show that the proposed tap selection scheme outperforms state-of-the-art tap selection schemes in terms of echo cancellation performance. In addition, it yields almost identical echo cancellation performance as compared to updating all filter taps at a significantly reduced computational cost.
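
    One common family of tap selection schemes of the kind analyzed here is the M-max partial update, which adapts only the taps whose input samples are largest in magnitude. A time-domain NLMS sketch of the idea is below (the paper itself operates on subband filters); the step size and M are assumed values.

      # M-max partial-update NLMS step: adapt only M of the len(w) filter taps.
      import numpy as np

      def mmax_nlms_step(w, x_buf, d, mu=0.5, M=16, eps=1e-8):
          """x_buf: latest len(w) loudspeaker samples; d: microphone sample."""
          e = d - w @ x_buf                              # residual echo error
          idx = np.argpartition(np.abs(x_buf), -M)[-M:]  # taps chosen for update
          w[idx] += mu * e * x_buf[idx] / (x_buf @ x_buf + eps)
          return e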

  1. nu-TRLan User Guide Version 1.0: A High-Performance Software Package for Large-Scale Hermitian Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Ichitaro; Wu, Kesheng; Simon, Horst

    2008-10-27

    The original software package TRLan, [TRLan User Guide], page 24, implements the thick restart Lanczos method, [Wu and Simon 2001], page 24, for computing eigenvalues {lambda} and their corresponding eigenvectors v of a symmetric matrix A: Av = {lambda}v. Its effectiveness in computing the exterior eigenvalues of a large matrix has been demonstrated, [LBNL-42982], page 24. However, its performance strongly depends on the user-specified dimension of a projection subspace. If the dimension is too small, TRLan suffers from slow convergence. If it is too large, the computational and memory costs become expensive. Therefore, to balance the solution convergence and costs, users must select an appropriate subspace dimension for each eigenvalue problem at hand. To free users from this difficult task, nu-TRLan, [LBNL-1059E], page 23, adjusts the subspace dimension at every restart such that optimal performance in solving the eigenvalue problem is automatically obtained. This document provides a user guide to the nu-TRLan software package. The original TRLan software package was implemented in Fortran 90 to solve symmetric eigenvalue problems using static projection subspace dimensions. nu-TRLan was developed in C and extended to solve Hermitian eigenvalue problems. It can be invoked using either a static or an adaptive subspace dimension. In order to simplify its use for TRLan users, nu-TRLan has interfaces and features similar to those of TRLan: (1) Solver parameters are stored in a single data structure called trl-info, Chapter 4 [trl-info structure], page 7. (2) Most of the numerical computations are performed by BLAS, [BLAS], page 23, and LAPACK, [LAPACK], page 23, subroutines, which allow nu-TRLan to achieve optimized performance across a wide range of platforms. (3) To solve eigenvalue problems on distributed memory systems, the message passing interface (MPI), [MPI forum], page 23, is used. The rest of this document is organized as follows. In Chapter 2 [Installation], page 2, we provide an installation guide of the nu-TRLan software package. In Chapter 3 [Example], page 3, we present a simple nu-TRLan example program. In Chapter 4 [trl-info structure], page 7, and Chapter 5 [trlan subroutine], page 14, we describe the solver parameters and interfaces in detail. In Chapter 6 [Solver parameters], page 21, we discuss the selection of the user-specified parameters. In Chapter 7 [Contact information], page 22, we give the acknowledgements and contact information of the authors. In Chapter 8 [References], page 23, we list references to related works.
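
    For readers unfamiliar with the underlying iteration, the following is a minimal NumPy sketch of plain m-step Lanczos tridiagonalization, whose Ritz values approximate the exterior eigenvalues of a symmetric matrix. It deliberately omits what makes TRLan and nu-TRLan practical (thick restarting, reorthogonalization, locking, adaptive subspace dimension); the function name is hypothetical.

      import numpy as np

      def lanczos_ritz(A, m, rng=np.random.default_rng(0)):
          """Build an m x m tridiagonal T by plain Lanczos (no restarts or
          reorthogonalization) and return its eigenvalues (Ritz values)."""
          n = A.shape[0]
          q = rng.standard_normal(n)
          q /= np.linalg.norm(q)
          q_prev = np.zeros(n)
          alpha, beta = np.zeros(m), np.zeros(m - 1)
          for j in range(m):
              v = A @ q
              alpha[j] = q @ v
              v -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
              if j < m - 1:
                  beta[j] = np.linalg.norm(v)
                  q_prev, q = q, v / beta[j]
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          return np.linalg.eigvalsh(T)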

  2. CUDA-based acceleration and BPN-assisted automation of bilateral filtering for brain MR image restoration.

    PubMed

    Chang, Herng-Hua; Chang, Yu-Ning

    2017-04-01

    Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the deficiency of theoretical basis on the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures and not all the three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to emphasize thread usages and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are acquired based on the sequential forward floating selection (SFFS) scheme. Subsequently, the selected features are introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performances, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.

  3. The multimedia computer for low-literacy patient education: a pilot project of cancer risk perceptions.

    PubMed

    Wofford, J L; Currin, D; Michielutte, R; Wofford, M M

    2001-04-20

    Inadequate reading literacy is a major barrier to better educating patients. Despite its high prevalence, practical solutions for detecting and overcoming low literacy in a busy clinical setting remain elusive. In exploring the potential role for the multimedia computer in improving office-based patient education, we compared the accuracy of information captured from audio-computer interviewing of patients with that obtained from subsequent verbal questioning. The setting was an adult medicine clinic in an urban community health center; participants were a convenience sample of patients awaiting clinic appointments (n = 59). Exclusion criteria included obvious psychoneurologic impairment or a primary language other than English. The intervention was a multimedia computer presentation that used audio-computer interviewing with localized imagery and voices to elicit responses to 4 questions on prior computer use and cancer risk perceptions. Three patients refused or were unable to interact with the computer at all, and 3 patients required restarting the presentation from the beginning but ultimately completed the computerized survey. Of the 51 evaluable patients (72.5% African-American, 66.7% female, mean age 47.5 [+/- 18.1]), the mean time in the computer presentation was significantly longer with older age and with no prior computer use but did not differ by gender or race. Despite a high proportion of no prior computer use (60.8%), there was a high rate of agreement (88.7% overall) between audio-computer interviewing and subsequent verbal questioning. Audio-computer interviewing is feasible in this urban community health center. The computer offers a partial solution for overcoming literacy barriers inherent in written patient education materials and provides an efficient means of data collection that can be used to better target patients' educational needs.

  4. Hardware Implementation of a Bilateral Subtraction Filter

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9×9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
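
    The pipeline just described maps directly onto a few lines of NumPy. The sketch below is a software analogue under simplifying assumptions (grayscale input, illustrative sigma values, brute-force loops rather than pipelined hardware): a 9×9 bilateral smooth, a 3×3 box smooth, and the final subtraction.

      import numpy as np

      def bilateral_subtract(img, sigma_s=2.0, sigma_r=10.0):
          """9x9 bilateral smooth, 3x3 box smooth, then subtract the
          bilateral result from the box-smoothed image."""
          H, W = img.shape
          img = img.astype(float)
          pad = np.pad(img, 4, mode='edge')
          yy, xx = np.mgrid[-4:5, -4:5]
          g_spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
          bilateral = np.empty((H, W))
          for i in range(H):
              for j in range(W):
                  win = pad[i:i + 9, j:j + 9]
                  w = g_spatial * np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
                  bilateral[i, j] = (w * win).sum() / w.sum()
          pad3 = np.pad(img, 1, mode='edge')
          smooth3 = sum(pad3[a:a + H, b:b + W] for a in range(3) for b in range(3)) / 9.0
          return smooth3 - bilateral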

  5. ITER Simulations Using the PEDESTAL Module in the PTRANSP Code

    NASA Astrophysics Data System (ADS)

    Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.

    2006-10-01

    PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that the fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide the boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A.H. Kritz, Plasma Physics and Controlled Fusion 45, 1939 (2003). [2] T. Onjun, G. Bateman, A.H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).

  6. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    A parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.

  7. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a largescale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow visualization software developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extractions of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.

  8. High resolution vertical profiles of wind, temperature and humidity obtained by computer processing and digital filtering of radiosonde and radar tracking data from the ITCZ experiment of 1977

    NASA Technical Reports Server (NTRS)

    Danielson, E. F.; Hipskind, R. S.; Gaines, S. E.

    1980-01-01

    Results are presented from computer processing and digital filtering of radiosonde and radar tracking data obtained during the ITCZ experiment, when coordinated measurements were taken daily over a 16-day period across the Panama Canal Zone. The temperature, relative humidity, and wind velocity profiles are discussed.

  9. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the time delay accuracy of ultrasonic phased array focusing, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steep, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
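
    A basic CIC interpolator is compact enough to sketch directly. The NumPy function below, with hypothetical defaults mirroring the 8× case, cascades N comb sections at the low rate, zero-stuffs by R, cascades N integrators at the high rate, and normalizes the (R·M)^N / R passband gain; the parallel decomposition and the compensation filter of the paper are not shown.

      import numpy as np

      def cic_interpolate(x, R=8, N=3, M=1):
          """N-stage CIC interpolation by factor R with differential delay M."""
          y = np.asarray(x, dtype=float)
          for _ in range(N):                      # comb: y[n] - y[n-M]
              y = y - np.concatenate((np.zeros(M), y[:-M]))
          up = np.zeros(len(y) * R)
          up[::R] = y                             # zero-stuffing upsampler
          for _ in range(N):                      # integrator: running sum
              up = np.cumsum(up)
          return up * R / (R * M) ** N            # unity passband gain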

  10. Fast Inbound Top-K Query for Random Walk with Restart.

    PubMed

    Zhang, Chao; Jiang, Shan; Chen, Yucheng; Sun, Yidan; Han, Jiawei

    2015-09-01

    Random walk with restart (RWR) is widely recognized as one of the most important node proximity measures for graphs, as it captures the holistic graph structure and is robust to noise in the graph. In this paper, we study a novel query based on the RWR measure, called the inbound top-k (Ink) query. Given a query node q and a number k, the Ink query aims at retrieving the k nodes in the graph that have the largest weighted RWR scores to q. Ink queries can be highly useful for various applications such as traffic scheduling, disease treatment, and targeted advertising. Nevertheless, none of the existing RWR computation techniques can accurately and efficiently process the Ink query in large graphs. We propose two algorithms, namely Squeeze and Ripple, both of which can accurately answer the Ink query in a fast and incremental manner. To identify the top-k nodes, Squeeze iteratively performs matrix-vector multiplication and estimates the lower and upper bounds for all the nodes in the graph. Ripple employs a more aggressive strategy by only estimating the RWR scores for the nodes falling in the vicinity of q; the nodes outside the vicinity do not need to be evaluated because their RWR scores are propagated from the boundary of the vicinity and thus upper bounded. Ripple incrementally expands the vicinity until the top-k result set can be obtained. Our extensive experiments on real-life graph data sets show that Ink queries can retrieve interesting results, and the proposed algorithms are orders of magnitude faster than the state-of-the-art method.
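
    As background to both algorithms, the RWR vector itself can be computed by simple power iteration, and an inbound top-k answer can be brute-forced on a small graph. The sketch below assumes a column-stochastic transition matrix P and restart probability c; Squeeze and Ripple are, in essence, bounding and vicinity-expansion strategies layered on top of this basic computation.

      import numpy as np

      def rwr(P, q, c=0.15, tol=1e-10):
          """RWR scores from node q: fixed point of r = (1-c) P r + c e_q."""
          e = np.zeros(P.shape[0]); e[q] = 1.0
          r = e.copy()
          while True:
              r_new = (1 - c) * (P @ r) + c * e
              if np.abs(r_new - r).sum() < tol:
                  return r_new
              r = r_new

      def ink_topk(P, q, k, c=0.15):
          """Brute-force inbound top-k: rank nodes v by their RWR score to q."""
          scores = np.array([rwr(P, v, c)[q] for v in range(P.shape[0])])
          return np.argsort(scores)[::-1][:k]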

  11. Recursive time-varying filter banks for subband image coding

    NASA Technical Reports Server (NTRS)

    Smith, Mark J. T.; Chung, Wilson C.

    1992-01-01

    Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.

  12. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
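
    The moving-average method mentioned above is worth making concrete: with a running (cumulative) sum, a k-point boxcar costs the same per sample regardless of k. A minimal NumPy sketch of a separable 3-D boxcar follows; edge replication and the function names are illustrative assumptions, not the paper's exact implementation.

      import numpy as np

      def moving_average_1d(a, k, axis):
          """k-point moving average along one axis via a cumulative sum
          (k odd, edges replicated); cost per sample is independent of k."""
          r = k // 2
          a = np.moveaxis(np.asarray(a, dtype=float), axis, 0)
          pad = np.concatenate((np.repeat(a[:1], r, 0), a, np.repeat(a[-1:], r, 0)))
          c = np.cumsum(pad, axis=0)
          lead = np.concatenate((np.zeros((1,) + c.shape[1:]), c[:-k]))
          return np.moveaxis((c[k - 1:] - lead) / k, 0, axis)

      def boxcar3d(vol, k=3):
          """Separable 3-D boxcar: 1-D moving average along each axis."""
          for ax in range(3):
              vol = moving_average_1d(vol, k, ax)
          return vol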

  13. An auxiliary adaptive Gaussian mixture filter applied to flowrate allocation using real data from a multiphase producer

    NASA Astrophysics Data System (ADS)

    Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal

    2017-05-01

    Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and on their accuracy. In the application we show here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges, by introducing an auxiliary step, where the particle weights are recomputed (second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from a large computational time when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.

  14. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2005-04-01

    Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent with an analytical relationship between the sample mean and sample variance. Spatially-invariant low-pass linear filters, such as the Butterworth and Hanning filters, could not adequately handle the data noise; statistics-based nonlinear filters may be an alternative choice, in addition to other choices of minimizing cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filters chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlative information for an optimal regularized solution. Our previously-developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least square) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimization for an optimal regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for the low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS treated sinogram data prior to the backprojection operation (for image reconstruction). In both computer simulations and experimental low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain of post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.

  15. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
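
    For a zero-mean white Gaussian input, the first-order Wiener kernel reduces to a scaled input-output cross-correlation. The sketch below shows only that estimation step, under the stated whiteness assumption; fitting rational transfer functions to the kernel, as done in the paper, is a separate step not shown.

      import numpy as np

      def first_order_wiener_kernel(x, y, m):
          """Estimate h[k] ~ E[x[n-k] y[n]] / var(x) for lags k = 0..m-1,
          valid when x is zero-mean white noise."""
          n = len(x)
          var = np.var(x)
          return np.array([np.dot(x[:n - k], y[k:]) / ((n - k) * var)
                           for k in range(m)])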

  16. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A regression analysis on tabular aerodynamic data provided a representative aerodynamic model for coefficient estimation. It also reduced the storage requirements for the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.

  17. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    PubMed Central

    Azzopardi, George; Petkov, Nicolai

    2014-01-01

    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective in recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068

  18. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden which may be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering and a derivation is given for the computation.

  19. CSI computer system/remote interface unit acceptance test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space-flight-qualified computer and a flight data acquisition and filtering computer, developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open-loop excitation, closed-loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground-based systems in performing real-time control-structure experiments.

  20. Optical calculation of correlation filters for a robotic vision system

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1989-01-01

    A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of the filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation was presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
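
    The arithmetic behind the three-intensity measurement is simple: for fields F (object transform) and R (reference), |F + R|^2 - |F|^2 - |R|^2 = 2 Re{F R*}, and binarizing the sign of this cross term gives a binary phase-only filter. A hypothetical sketch of just that step:

      import numpy as np

      def binary_filter_from_intensities(i_obj, i_ref, i_sum):
          """Recover the interference cross term 2*Re{F conj(R)} from three
          measured intensities and binarize it into a 0/pi phase filter."""
          cross = i_sum - i_obj - i_ref
          return np.where(cross >= 0.0, 1.0, -1.0)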

  1. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    PubMed

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have a high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) for improving the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.
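
    The CSP stage can be expressed as a generalized eigendecomposition of the two class covariance matrices. Below is a generic CSP sketch in Python/SciPy, assuming lists of channels-by-samples trials; the wavelet filtering stage and the paper's specific pipeline are not reproduced.

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_pairs=2):
          """Generic CSP: solve Ca w = lambda (Ca + Cb) w and keep filters
          from both ends of the spectrum (most discriminative per class)."""
          def avg_cov(trials):
              return sum(t @ t.T / np.trace(t @ t.T) for t in trials) / len(trials)
          Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
          vals, vecs = eigh(Ca, Ca + Cb)
          order = np.argsort(vals)
          keep = np.r_[order[:n_pairs], order[-n_pairs:]]
          return vecs[:, keep].T          # rows are spatial filters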

  2. SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Floros, D

    Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
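
    At a single scale, the local statistics in question are cheap to obtain with box filters, using the identity var = E[x^2] - (E[x])^2. A minimal SciPy sketch, with hypothetical names, of one level of such a pyramid:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_mean_std(img, size):
          """Windowed local mean and standard deviation at one spatial scale;
          evaluating several sizes yields a multi-scale statistics pyramid."""
          img = img.astype(float)
          m = uniform_filter(img, size)
          m2 = uniform_filter(img * img, size)
          return m, np.sqrt(np.maximum(m2 - m * m, 0.0))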

  3. Reducing Conservatism of Analytic Transient Response Bounds via Shaping Filters

    NASA Technical Reports Server (NTRS)

    Kwan, Aiyueh; Bedrossian, Nazareth; Jan, Jiann-Woei; Grigoriadis, Karolos; Hua, Tuyen (Technical Monitor)

    1999-01-01

    Recent results show that the peak transient response of a linear system to bounded-energy inputs can be computed using the energy-to-peak gain of the system. However, the analytically computed peak response bound can be conservative for a class of bounded-energy signals, specifically pulse trains generated from jet firings encountered in space vehicles. In this paper, shaping filters are proposed as a methodology to reduce the conservatism of analytic peak response bounds. This methodology was applied to a realistic Space Station assembly operation subject to jet firings. The results indicate that shaping filters indeed reduce the predicted peak response bounds.

  4. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  5. Saturn Apollo Program

    NASA Image and Video Library

    1967-01-01

    This cutaway illustration shows the Saturn V S-IVB (third) stage with callouts of its major components. When the S-II (second) stage of the powerful Saturn V rocket burned out and was separated, the remaining units approached orbit around the Earth. Injection into the desired orbit was attained as the S-IVB (third) stage was ignited and burned. The S-IVB stage was powered by a single 200,000-pound-thrust J-2 engine and had a restart capability built in for its J-2 engine. The S-IVB restarted to speed the Apollo spacecraft to escape velocity, injecting it and the astronauts into a moon trajectory.

  6. Successful Retreatment of a Child with a Refractory Brainstem Ganglioglioma with Vemurafenib.

    PubMed

    Aguilera, Dolly; Janss, Anna; Mazewski, Claire; Castellino, Robert Craig; Schniederjan, Matthew; Hayes, Laura; Brahma, Barunashish; Fogelgren, Lauren; MacDonald, Tobey J

    2016-03-01

    A child with brainstem ganglioglioma underwent subtotal resection and focal radiation. Magnetic resonance imaging confirmed tumor progression 6 months later. Another partial resection revealed viable BRAF V600E-positive residual tumor. Vemurafenib (660 mg/m(2)/dose) was administered twice daily, resulting in >70% tumor reduction with sustained clinical improvement for 1 year. Vemurafenib was then terminated, but significant tumor progression occurred 3 months later. Vemurafenib was restarted, resulting in partial response. Toxicities included Grade I pruritus and Grade II rash. Vemurafenib was effectively crushed and administered in solution via nasogastric tube. We demonstrate benefit from restarting vemurafenib therapy. © 2015 Wiley Periodicals, Inc.

  7. IRWRLDA: improved random walk with restart for lncRNA-disease association prediction.

    PubMed

    Chen, Xing; You, Zhu-Hong; Yan, Gui-Ying; Gong, Dun-Wei

    2016-09-06

    In recent years, accumulating evidence has shown that the dysregulation of lncRNAs is associated with a wide range of human diseases. It is necessary and feasible to analyze known lncRNA-disease associations, predict potential lncRNA-disease associations, and provide the most probable lncRNA-disease pairs for experimental validation. Considering the limitations of the traditional Random Walk with Restart (RWR), the model of Improved Random Walk with Restart for LncRNA-Disease Association prediction (IRWRLDA) was developed to predict novel lncRNA-disease associations by integrating known lncRNA-disease associations, disease semantic similarity, and various lncRNA similarity measures. The novelty of IRWRLDA lies in the incorporation of lncRNA expression similarity and disease semantic similarity to set the initial probability vector of the RWR. Therefore, IRWRLDA can be applied to diseases without any known related lncRNAs. IRWRLDA significantly improved on previous classical models, with reliable AUCs of 0.7242 and 0.7872 in two known lncRNA-disease association datasets downloaded from the lncRNADisease database, respectively. Further case studies of colon cancer and leukemia were implemented for IRWRLDA, and 60% of lncRNAs in the top 10 prediction lists have been confirmed by recent experimental reports.

  8. Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns

    NASA Astrophysics Data System (ADS)

    Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.

    2014-02-01

    Dynamic laser speckle is an optical interference phenomenon produced when a surface undergoing change is illuminated with coherent light. The dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods; analysis in the frequency domain has also been an option, although it involves large computational requirements, which demands new approaches to filter the images in time. Principal component analysis (PCA) works with the statistical decorrelation of data and can be used as a data filter. In this context, the present work evaluated the PCA technique to filter the biospeckle image data in time, aiming to reduce computing time and improve the robustness of the filtering. Sixty-four biospeckle images observed in time on a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods of biospeckle analysis. The results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena and with the advantage of fast computational processing.
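
    PCA-as-filter amounts to truncating a singular value decomposition of the image stack. A generic NumPy sketch follows, assuming a 3-D array of frames (time, rows, cols) and keeping the n_keep leading components; the biospeckle-specific markers discussed above are not modeled.

      import numpy as np

      def pca_filter(frames, n_keep):
          """Reconstruct a time stack of images from its n_keep leading
          principal components (each frame is one observation/row)."""
          X = frames.reshape(frames.shape[0], -1).astype(float)
          mu = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
          Xf = (U[:, :n_keep] * s[:n_keep]) @ Vt[:n_keep] + mu
          return Xf.reshape(frames.shape)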

  9. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires primaries and multiples are orthogonal. To relax the orthogonality assumption the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated as a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST based multichannel predictive deconvolution without the limited supporting region of filters the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS based multichannel predictive deconvolution and FIST based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
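
    For reference, the FISTA-style iteration for an L1-penalized least-squares problem is short: a gradient step, soft thresholding, and a momentum update. The sketch below is the generic algorithm for min_x 0.5||Ax - b||^2 + lam||x||_1, not the paper's predictive-deconvolution solver; restricting the solve to a limited supporting region would simply constrain which coefficients are updated.

      import numpy as np

      def fista_l1(A, b, lam, n_iter=200):
          """Generic FISTA for min_x 0.5||Ax - b||^2 + lam ||x||_1."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz const. of gradient
          x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
          for _ in range(n_iter):
              z = y - A.T @ (A @ y - b) / L        # gradient step
              x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              y = x_new + ((t - 1) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x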

  10. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.

  11. Design and Implementation of Embedded Computer Vision Systems Based on Particle Filters

    DTIC Science & Technology

    2010-01-01

    ...methodology for hardware/software implementation of multi-dimensional particle filter applications, and we explore this in the third application, which is a 3D... and hence multiprocessor implementation of particle filters is an important option to examine. A significant body of work exists on optimizing generic...

  12. Discrete square root smoothing.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.

    1972-01-01

    The basic techniques applied in the square root least squares and square root filtering solutions are applied to the smoothing problem. Both conventional and square root solutions are obtained by computing the filtered solutions, then modifying the results to include the effect of all measurements. A comparison of computation requirements indicates that the square root information smoother (SRIS) is more efficient than conventional solutions in a large class of fixed interval smoothing problems.

  13. A Distributed Multiobject Tracking Algorithm for Passive Sensor Networks

    DTIC Science & Technology

    1980-06-23

    ...between the true acoustic azimuth and the filter estimate was computed for each track. An overall ... was computed for each filter. The results of...

  14. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
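
    The factorization itself is a short backward recursion. The NumPy sketch below factors a symmetric positive-definite P as P = U diag(d) U^T with U unit upper triangular; the time-propagation and measurement-update recursions of the U-D filter are beyond this illustration.

      import numpy as np

      def udu_factor(P):
          """Return (U, d) with P = U @ np.diag(d) @ U.T, U unit upper
          triangular; a sketch of the U-D covariance factorization."""
          P = P.astype(float).copy()
          n = P.shape[0]
          U, d = np.eye(n), np.zeros(n)
          for j in range(n - 1, -1, -1):
              d[j] = P[j, j]
              U[:j, j] = P[:j, j] / d[j]
              P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
          return U, d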

  15. A Computer Model of a Phase Lock Loop

    NASA Technical Reports Server (NTRS)

    Shelton, Ralph Paul

    1973-01-01

    A computer model of a PLL (phase-locked loop) preceded by a bandpass filter is reported; the model is valid when the bandwidth of the bandpass filter is of the same order of magnitude as the natural frequency of the PLL. New results for the PLL natural frequency equal to the bandpass filter bandwidth are presented for a second-order PLL operating with carrier plus noise as the input. It is shown that extensions to higher-order loops, and to the case of a modulated carrier, are straightforward. The new results give the cycle-skipping rate of the PLL as a function of the input carrier-to-noise ratio when the PLL natural frequency is equal to the bandpass filter bandwidth. Preliminary results showing the variation of the output noise power and cycle-skipping rates of the PLL as a function of the loop damping ratio for the PLL natural frequency equal to the bandpass filter bandwidth are also included.

  16. FILTSoft: A computational tool for microstrip planar filter design

    NASA Astrophysics Data System (ADS)

    Elsayed, M. H.; Abidin, Z. Z.; Dahlan, S. H.; Cholan N., A.; Ngu, Xavier T. I.; Majid, H. A.

    2017-09-01

    Filters are key components of any communication system, used to control spectrum and suppress interference. Designing a filter involves a long process as well as a good understanding of the basic hardware technology. Hence this paper introduces an automated design tool based on a Matlab GUI, called FILTSoft (acronym for Filter Design Software), to ease the process. FILTSoft is a user-friendly filter design tool to aid, guide and expedite calculations from the lumped-element level to the microstrip structure. Users just have to provide the required filter specifications as well as the material description. FILTSoft will calculate and display the lumped-element details, the planar filter structure, and the filter's expected response. An example of a lowpass filter design was calculated using FILTSoft and the results were validated through prototype measurement for comparison purposes.

  17. X-band preamplifier filter

    NASA Technical Reports Server (NTRS)

    Manshadi, F.

    1986-01-01

    A low-loss bandstop filter designed and developed for the Deep Space Network's 34-meter high-efficiency antennas is described. The filter is used for protection of the X-band traveling wave masers from the 20-kW transmitter signal. A combination of empirical and theoretical techniques was employed as well as computer simulation to verify the design before fabrication.

  18. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on a GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  19. Angle only tracking with particle flow filters

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2011-09-01

    We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.

  20. Intelligent vision system for autonomous vehicle operations

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1991-01-01

    A complex optical system consisting of a 4f optical correlator with programmatic filters under the control of a digital on-board computer that operates at video rates for filter generation, storage, and management is described.

  1. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the EKF's linearization assumption by performing N updates for each new measurement, where the number of recursions N is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
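
    As a rough sketch of the recursive-update idea (not the paper's exact algorithm), the snippet below splits one EKF measurement update into N partial updates with inflated measurement noise N*R, relinearizing the measurement model at each step; the function names and the toy range-measurement model are illustrative assumptions.

    ```python
    import numpy as np

    def recursive_update(x, P, z, h, H_jac, R, N=5):
        """Apply one measurement in N partial EKF updates (RUF-style sketch).

        Each step uses inflated noise N*R, so the combined information of
        the N partial updates matches a single full update in the linear
        case, while the measurement model is relinearized at every step.
        """
        for _ in range(N):
            H = H_jac(x)                          # relinearize at current estimate
            S = H @ P @ H.T + N * np.atleast_2d(R)
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - h(x))
            P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Toy example: estimate a 2-D position from a single range measurement
    h = lambda x: np.array([np.hypot(x[0], x[1])])
    H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
    x, P = np.array([1.0, 1.0]), np.eye(2)
    x, P = recursive_update(x, P, np.array([1.6]), h, H_jac, R=0.01)
    print(x)
    ```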

  2. Investigation on filter method for smoothing spiral phase plate

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    A spiral phase plate (SPP) for generating vortex hollow beams offers high efficiency in various applications. However, it is difficult to fabricate an ideal spiral phase plate because of its continuously varying helical phase and its discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate produced using filtering methods. Numerical simulations indicate that different filtering methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the characteristics of the optical vortex. The experimental results reveal that spatial Gaussian filtering for smoothing the SPP is well suited to the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
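
    Purely as an illustration of the spatial-domain smoothing step (not the paper's CCOS process model; the topological charge, grid size, and filter width are invented), the sketch below builds an ideal SPP height map and smooths it with a Gaussian filter:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Ideal SPP height map for topological charge m: a helical ramp with
    # one step discontinuity (height proportional to the azimuthal angle).
    n_pix, m = 256, 2
    y, x = np.mgrid[-1:1:n_pix * 1j, -1:1:n_pix * 1j]
    height = m * np.mod(np.arctan2(y, x), 2 * np.pi)  # arbitrary height units

    # Spatial-domain Gaussian filtering, standing in for the smoothing a
    # polishing run applies; sigma is in pixels and purely illustrative.
    smoothed = gaussian_filter(height, sigma=3.0)
    print(float(np.abs(smoothed - height).max()))     # rounding at the step edge
    ```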

  3. A statistical package for computing time and frequency domain analysis

    NASA Technical Reports Server (NTRS)

    Brownlow, J.

    1978-01-01

    The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
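
    A minimal sketch (not SPA itself; the function name and constants are illustrative) of two steps such a package automates: linear trend removal followed by a frequency-domain characterization via a one-sided periodogram.

    ```python
    import numpy as np

    def detrend_and_spectrum(x, dt):
        """Linear trend removal followed by a one-sided periodogram,
        two of the pre-analysis and analysis steps described above."""
        t = np.arange(len(x)) * dt
        slope, intercept = np.polyfit(t, x, 1)
        resid = x - (slope * t + intercept)            # trend removal
        spec = np.abs(np.fft.rfft(resid))**2 * dt / len(x)
        freqs = np.fft.rfftfreq(len(x), dt)
        return resid, freqs, spec

    rng = np.random.default_rng(4)
    n = np.arange(1024)
    x = 0.3 * n + np.sin(2 * np.pi * 5 * n / 256.0)    # trend + 5 Hz tone
    resid, freqs, spec = detrend_and_spectrum(
        x + 0.1 * rng.standard_normal(1024), dt=1 / 256.0)
    print(freqs[np.argmax(spec[1:]) + 1])              # ~5 Hz peak
    ```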

  4. Time-Filtered Navier-Stokes Approach and Emulation of Turbulence-Chemistry Interaction

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Wey, Thomas; Shih, Tsan-Hsing

    2013-01-01

    This paper describes the time-filtered Navier-Stokes approach capable of capturing unsteady flow structures important for turbulent mixing and an accompanying subgrid model directly accounting for the major processes in turbulence-chemistry interaction. They have been applied to the computation of two-phase turbulent combustion occurring in a single-element lean-direct-injection combustor. Some of the preliminary results from this computational effort are presented in this paper.

  5. High-performance analysis of filtered semantic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Fox, Armando; Gilbert, John R.

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.
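
    To make the filtered-versus-materialized distinction concrete, here is a plain-Python illustration (not KDT's actual API; the edge layout and predicate are invented for this example). The per-edge predicate call is exactly the kind of upcall that SEJITS-style specialization compiles away.

    ```python
    # Hypothetical attribute-filtered traversal: each edge carries a dict
    # of attributes, and the filter is an ordinary Python predicate.
    edges = [
        (0, 1, {"type": "cites", "year": 2011}),
        (1, 2, {"type": "funds", "year": 2009}),
        (2, 0, {"type": "cites", "year": 2014}),
    ]

    is_recent_citation = lambda attr: attr["type"] == "cites" and attr["year"] >= 2010

    # Filtered view: the predicate runs per edge during traversal (the
    # interpreter upcall that specialization would eliminate) ...
    filtered_view = (e for e in edges if is_recent_citation(e[2]))

    # ... versus materialization: build the subgraph once, then query it.
    subgraph = [e for e in edges if is_recent_citation(e[2])]
    print(list(filtered_view), subgraph)
    ```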

  6. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    The computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, because of the difficulty of combining them; no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques, kernel truncation and best N-term approximation as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon their respective strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing run-time efficiency.
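
    For reference, a minimal brute-force bilateral filter in NumPy (the function name and parameters are illustrative, not from the paper); its per-pixel cost grows with the kernel area, which is precisely what the constant-time techniques above eliminate.

    ```python
    import numpy as np

    def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
        """Brute-force bilateral filter on a 2-D float image in [0, 1].

        Cost grows with the (2*radius+1)^2 kernel, which is what
        constant-time schemes (box filtering, dimension promotion,
        shiftability) avoid.
        """
        pad = np.pad(img, radius, mode="reflect")
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                w = spatial * rng               # combined kernel weights
                out[i, j] = (w * patch).sum() / w.sum()
        return out

    print(bilateral_filter(np.random.rand(8, 8)).shape)
    ```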

  7. On the use of distributed sensing in control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave

    1990-01-01

    Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The computation of the distributed operators given the conventional Kalman filter and regulator is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.

  8. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) owing to their low computational cost, but they exhibit high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field programmable gate arrays, pipelined architectures can be used to enhance system performance. A pipelined architecture can improve the operating efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
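
    A generic variable step-size LMS sketch (not the paper's delayed-LMS VLSI architecture; the step-size rule, constants, and toy signals are illustrative assumptions): the step size grows with a smoothed error power, so adaptation is fast when the error is large and the misadjustment is small after convergence.

    ```python
    import numpy as np

    def vss_lms(x, d, taps=16, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=1e-3):
        """Variable step-size LMS noise canceller: x is the noise
        reference, d the corrupted primary signal; the output is the
        estimation error, i.e. the cleaned signal."""
        w = np.zeros(taps)
        p, y_out = 0.0, np.zeros(len(x))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]               # most recent input vector
            e = d[n] - w @ u
            p = alpha * p + (1 - alpha) * e * e   # smoothed error power
            mu = np.clip(gamma * p, mu_min, mu_max)
            w += mu * e * u
            y_out[n] = e                          # cleaned signal sample
        return y_out, w

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(2000)
    ecg = np.sin(2 * np.pi * np.arange(2000) / 100)   # toy "ECG"
    primary = ecg + 0.5 * np.convolve(noise, np.ones(4) / 4, "same")
    cleaned, _ = vss_lms(noise, primary)
    ```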

  9. Bistatic passive radar simulator with spatial filtering subsystem

    NASA Astrophysics Data System (ADS)

    Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian

    2009-06-01

    The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar implemented in the Matlab numerical computing environment and to present several alternative modes of its operation. The proposed solution is based on an analytic representation of the transmitted direct signals and the reflected echo signals. As the spatial filtering subsystem, a beamforming network in ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of computer simulations with the virtual simulator are provided and discussed.
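
    As a minimal sketch of the ULA spatial-filtering idea (not the paper's simulator; element count, spacing, and the test signal are invented), a delay-and-sum beamformer weights and sums the element signals so a plane wave from the steered angle adds coherently:

    ```python
    import numpy as np

    def ula_steering(n_elems, d_over_lambda, theta_rad):
        """Steering vector of a uniform linear array (ULA)."""
        k = np.arange(n_elems)
        return np.exp(-2j * np.pi * k * d_over_lambda * np.sin(theta_rad))

    def beamform(snapshots, theta_rad, d_over_lambda=0.5):
        """Delay-and-sum spatial filter: passes the echo direction while
        attenuating signals (e.g. the direct path) from other angles."""
        n = snapshots.shape[0]
        w = ula_steering(n, d_over_lambda, theta_rad) / n
        return w.conj() @ snapshots   # one output sample per snapshot column

    # 8-element array, signal arriving from 30 degrees
    n, t = 8, 1024
    s = np.exp(2j * np.pi * 0.1 * np.arange(t))
    x = np.outer(ula_steering(n, 0.5, np.radians(30)), s)
    print(abs(beamform(x, np.radians(30))).mean())   # ~1 (coherent gain)
    ```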

  10. SU-E-J-243: Possibility of Exposure Dose Reduction of Cone-Beam Computed Tomography in An Image Guided Patient Positioning System by Using Various Noise Suppression Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H

    Purpose: To investigate the possibility of exposure dose reduction of cone-beam computed tomography (CBCT) in an image guided patient positioning system by using 6 noise suppression filters. Methods: First, reference-dose (RD) and low-dose (LD) CBCT (X-ray volume imaging system, Elekta Co.) images were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed for estimating setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters, i.e., averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as a Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and the CTDIw values for LD-CBCT images processed by the noise suppression filters were measured at the same residual error as obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced from 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced from 8% (AF) to 61% (AMF), and the exposure dose in a lung cancer case could be reduced from 9% (AF) to 37% (AMF). Conclusion: Noise suppression filters, particularly the adaptive partial median filter, could feasibly decrease the additional exposure dose to patients in image guided patient positioning systems.

  11. Classifying EEG for Brain-Computer Interface: Learning Optimal Filters for Dynamical System Features

    PubMed Central

    Song, Le; Epps, Julien

    2007-01-01

    Classification of multichannel EEG recordings during motor imagination has been exploited successfully for brain-computer interfaces (BCI). In this paper, we consider EEG signals as the outputs of a networked dynamical system (the cortex), and exploit synchronization features from the dynamical system for classification. Herein, we also propose a new framework for learning optimal filters automatically from the data, by employing a Fisher ratio criterion. Experimental evaluations comparing the proposed dynamical system features with the CSP and the AR features reveal their competitive performance during classification. Results also show the benefits of employing the spatial and the temporal filters optimized using the proposed learning approach. PMID:18364986

  12. External Aiding Methods for IMU-Based Navigation

    DTIC Science & Technology

    2016-11-26

    Monte Carlo simulation and particle filtering. This approach allows for the utilization of highly complex systems in a black box configuration with minimal... alternative method, which has the advantage of being less computationally demanding, is to use a Kalman filtering-based approach. The particular... Kalman filtering-based approach used here is known as linear covariance analysis. In linear covariance analysis, the nonlinear systems describing the

  13. Fundamentals of digital filtering with applications in geophysical prospecting for oil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesko, A.

    This book is a comprehensive work bringing together the important mathematical foundations and computing techniques for numerical filtering methods. The first two parts of the book introduce the techniques, fundamental theory and applications, while the third part treats specific applications in geophysical prospecting. Discussion is limited to linear filters, but takes in related fields such as correlational and spectral analysis.

  14. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency over using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which achieves the time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.

  15. Evolution of an interfacial crack on the concrete-embankment boundary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Antoun, Tarabay; Kanarska, Yuliya

    2013-07-10

    Failure of a dam can have subtle beginnings. A small crack or dislocation at the interface of the concrete dam and the surrounding embankment soil, initiated by, for example, a seismic or an explosive event, can lead to a catastrophic failure of the dam. The dam may 'self-rehabilitate' if a properly designed granular filter is engineered around the embankment. Currently, the design criteria for such filters are based only on experimental studies. We demonstrate the numerical prediction of filter effectiveness at the soil grain scale. This joint LLNL-ERDC basic research project, funded by the Department of Homeland Security's Science and Technology Directorate (DHS S&T), consists of validating advanced high-performance computer simulations of soil erosion and transport, at the grain and dam scales, against detailed centrifuge and soil erosion tests. The validated computer predictions highlight that a resilient filter is consistent with the current design specifications for dam filters. These predictive simulations, unlike the design specifications, can be used to assess filter success or failure under different soil or loading conditions and can lead to meaningful estimates of the timing and nature of full-scale dam failure.

  16. Radar range data signal enhancement tracker

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design, fabrication, and performance characteristics are described of two digital data signal enhancement filters which are capable of being inserted between the Space Shuttle Navigation Sensor outputs and the guidance computer. Commonality of interfaces has been stressed so that the filters may be evaluated through operation with simulated sensors or with actual prototype sensor hardware. The filters will provide both a smoothed range and range rate output. Different conceptual approaches are utilized for each filter. The first filter is based on a combination low pass nonrecursive filter and a cascaded simple average smoother for range and range rate, respectively. Filter number two is a tracking filter which is capable of following transient data of the type encountered during burn periods. A test simulator was also designed which generates typical shuttle navigation sensor data.

  17. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective updating, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking against the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
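
    A simplified selective-update APA sketch (the paper's MSE-based state-decision rule is reduced here to a per-error threshold; names and constants are assumptions for this example):

    ```python
    import numpy as np

    def selective_apa_update(w, X, d, err_threshold, mu=0.5, delta=1e-6):
        """One selective affine projection update. Columns of X are the K
        most recent input vectors; only vectors whose a-priori error is
        still informative are used, and in steady state (all errors below
        the threshold) no update is performed at all."""
        e = d - X.T @ w                          # a-priori errors
        sel = np.abs(e) > err_threshold          # input-vector selection
        if not sel.any():                        # steady state: skip update
            return w, e
        Xs, es = X[:, sel], e[sel]
        G = Xs.T @ Xs + delta * np.eye(Xs.shape[1])   # regularized Gram matrix
        return w + mu * Xs @ np.linalg.solve(G, es), e

    # System identification demo: 8-tap unknown filter, 4 input vectors
    rng = np.random.default_rng(5)
    w_true, w = rng.standard_normal(8), np.zeros(8)
    for _ in range(200):
        X = rng.standard_normal((8, 4))
        w, e = selective_apa_update(w, X, X.T @ w_true, err_threshold=1e-8)
    print(np.allclose(w, w_true, atol=1e-3))     # True
    ```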

  18. Software for biomedical engineering signal processing laboratory experiments.

    PubMed

    Tompkins, Willis J; Wilson, J

    2009-01-01

    In the early 1990s we developed a special computer program called UW DigiScope to provide a mechanism for anyone interested in biomedical digital signal processing to study the field without requiring any instrument other than a personal computer. There are many digital filtering and pattern recognition algorithms used in processing biomedical signals. In general, students have very limited opportunity for hands-on access to the mechanisms of digital signal processing. In a typical course, filters are designed non-interactively, which gives the student little understanding of the design constraints of such filters or of their actual performance characteristics. UW DigiScope 3.0 is the first major update since version 2.0 was released in 1994. This paper provides details on how the new MATLAB-based version works with signals, including the filter design tool that is the programming interface between UW DigiScope and the processing algorithms.

  19. Large-Eddy Simulation of Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Sochacki, James S.

    1999-01-01

    This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999; by request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.

  20. Simulated annealing with restart strategy for the blood pickup routing problem

    NASA Astrophysics Data System (ADS)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
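
    A generic simulated-annealing-with-restart skeleton (not the paper's SA_RS; the cooling schedule, stall limit, and restart-to-incumbent policy are assumptions for illustration): when no improvement is seen for a while, the search reheats and restarts from the best solution found so far.

    ```python
    import math, random

    def sa_restart(cost, neighbor, x0, t0=1.0, cooling=0.995,
                   stall_limit=200, iters=20000):
        """Simulated annealing with a restart strategy (sketch)."""
        x = best = x0
        fx = fbest = cost(x0)
        t, stall = t0, 0
        for _ in range(iters):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis acceptance: always accept improvements, sometimes worse moves
            if fy < fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
                x, fx = y, fy
            if fx < fbest:
                best, fbest, stall = x, fx, 0
            else:
                stall += 1
            if stall >= stall_limit:          # restart from the incumbent, reheat
                x, fx, t, stall = best, fbest, t0, 0
            t *= cooling
        return best, fbest

    # Toy usage: minimize (v - 3)^2 over the reals
    best, fbest = sa_restart(lambda v: (v - 3) ** 2,
                             lambda v: v + random.uniform(-0.5, 0.5), x0=0.0)
    print(best, fbest)
    ```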

  1. Polynomial filter estimation of range and range rate for terminal rendezvous

    NASA Technical Reports Server (NTRS)

    Philips, R.

    1970-01-01

    A study was made of a polynomial filter for computing range rate information from CSM VHF range data. The filter's performance during the terminal phase of the rendezvous is discussed. Two modifications of the filter were also made and tested. A manual terminal rendezvous was simulated and desired accuracies were achieved for vehicles on an intercept trajectory, except for short periods following each braking maneuver when the estimated range rate was initially in error by the magnitude of the burn.
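
    As a generic sketch of the polynomial-filter idea (not the flight filter; window length, order, and names are invented), fit a short sliding window of range samples with a low-order polynomial and differentiate it at the newest time to estimate range rate:

    ```python
    import numpy as np

    def range_rate(times, ranges, order=2, window=8):
        """Sliding least-squares polynomial fit: fit the last `window`
        range samples and evaluate the fitted polynomial's derivative at
        the newest time to estimate range rate."""
        t = np.asarray(times[-window:], dtype=float)
        r = np.asarray(ranges[-window:], dtype=float)
        coeffs = np.polyfit(t - t[-1], r, order)  # shift time for conditioning
        return np.polyval(np.polyder(coeffs), 0.0)

    t = np.linspace(0.0, 7.0, 8)
    r = 1000.0 - 12.0 * t + 0.3 * t**2            # toy closing trajectory
    print(range_rate(t, r))                       # ~ -7.8 (= -12 + 0.6 * 7)
    ```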

  2. Image restoration by Wiener filtering in the presence of signal-dependent noise.

    PubMed

    Kondo, K; Ichioka, Y; Suzuki, T

    1977-09-01

    An optimum filter to restore the degraded image due to blurring and the signal-dependent noise is obtained on the basis of the theory of Wiener filtering. Computer simulations of image restoration using signal-dependent noise models are carried out. It becomes clear that the optimum filter, which makes use of a priori information on the signal-dependent nature of the noise and the spectral density of the signal and the noise showing significant spatial correlation, is potentially advantageous.
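
    For orientation, a classical signal-independent Wiener deconvolution sketch (the paper's filter additionally models signal-dependent noise, which this sketch does not; the flat spectral densities and all names are assumptions):

    ```python
    import numpy as np

    def wiener_restore(blurred, psf, noise_power, signal_power):
        """Frequency-domain Wiener deconvolution, assuming a known blur
        PSF and flat signal and noise spectral densities."""
        H = np.fft.fft2(psf, blurred.shape)
        W = np.conj(H) / (np.abs(H)**2 + noise_power / signal_power)
        return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

    # Toy example: 3x3 box blur plus additive noise
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    psf = np.ones((3, 3)) / 9.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))
    restored = wiener_restore(blurred + 0.01 * rng.standard_normal(img.shape),
                              psf, noise_power=1e-4, signal_power=1.0)
    ```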

  3. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
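
    The threshold-decomposition principle is easy to verify digitally. A minimal sketch (a software analogue of the optical pipeline, with a box filter as the linear space-invariant step; the function name and sizes are invented): slice the image into binary threshold components, filter each linearly, re-binarize at 1/2, and stack the slices back up to recover the median.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def median_by_threshold_decomposition(img, radius=1):
        """Median filter via threshold decomposition: linear (box)
        filtering of each binary slice followed by a point threshold."""
        pad = np.pad(img, radius, mode="edge")
        windows = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
        out = np.zeros(img.shape, dtype=np.int64)
        for t in range(1, int(img.max()) + 1):
            box = (windows >= t).mean(axis=(2, 3))  # linear filtering step
            out += box >= 0.5                       # point threshold step
        return out

    img = np.random.default_rng(2).integers(0, 256, (16, 16))
    win = sliding_window_view(np.pad(img, 1, mode="edge"), (3, 3)).reshape(16, 16, 9)
    print(np.array_equal(median_by_threshold_decomposition(img),
                         np.median(win, axis=-1).astype(np.int64)))  # True
    ```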

  4. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for those matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.

  5. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.

  6. A Novel Centrality Measure for Network-wide Cyber Vulnerability Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathanur, Arun V.; Haglin, David J.

    In this work we propose a novel formulation that models the attack and compromise on a cyber network as a combination of two parts: direct compromise of a host, and compromise occurring through the spread of the attack on the network from an already compromised host. The model parameters for the nodes are a concise representation of the host profiles, which can include the risky behaviors of the associated human users, while the model parameters for the edges are based on the existence of vulnerabilities between each pair of connected hosts. The edge models relate to summary representations of the corresponding attack graphs. This results in a formulation based on Random Walk with Restart (RWR), and the resulting centrality metric can be solved for efficiently through the use of sparse linear solvers. The formulation thus goes beyond mere topological considerations in centrality computation by summarizing the host profiles and the attack graphs into the model parameters. The computational efficiency of the method also allows us to quantify the uncertainty in the centrality measure through Monte Carlo analysis.
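
    A minimal RWR sketch under the stated reading (not the authors' code; the 4-host adjacency, restart vector, and alpha are invented): the scores solve r = alpha * P^T r + (1 - alpha) * p, where P is the row-stochastic transition matrix and p encodes per-host restart mass, e.g. direct-compromise likelihood from the host profile.

    ```python
    import numpy as np

    def rwr_centrality(A, restart_probs, alpha=0.85):
        """Random-walk-with-restart scores via a direct linear solve of
        (I - alpha * P^T) r = (1 - alpha) * p; sparse solvers handle the
        same system at scale."""
        P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transitions
        p = restart_probs / restart_probs.sum()
        n = len(p)
        return np.linalg.solve(np.eye(n) - alpha * P.T, (1 - alpha) * p)

    # 4-host toy network; host 3 has the riskiest user profile
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(rwr_centrality(A, restart_probs=np.array([0.1, 0.1, 0.1, 0.7])))
    ```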

  7. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    PubMed

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.

  8. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and key assumptions in deriving them are confirmed by computing the relevant terms from the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  9. Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.

    PubMed

    Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam

    2018-05-26

    Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems have drift error and large bias due to low-cost inertial sensors and the random motion of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integrating approach and can provide the best localization performance. However, since PF uses a large number of particles to reach that performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called the sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm achieves better positioning accuracy than the KF and UKF, comparable performance to the PF, and higher computational efficiency than the PF. iBeacon is used in our positioning system for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.

  10. Blackout detection as a multiobjective optimization problem.

    PubMed

    Chaudhary, A M; Trachtenberg, E A

    1991-01-01

    We study new fast computational procedures for pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariances of the signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures, in terms of the number of computer operations required for their implementation, is investigated. Various classes of such prediction procedures are investigated and analyzed, and trade-offs are established. We also investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection, and the closeness of all procedures to a certain benchmark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above criteria. Their main advantage is fast computational processing.

  11. Proactive Fault Tolerance for HPC with Xen Virtualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Arun Babu; Mueller, Frank; Engelmann, Christian

    2007-01-01

    with thousands of processors. At such large counts of compute nodes, faults are becoming commonplace. Current techniques to tolerate faults focus on reactive schemes to recover from faults and generally rely on a checkpoint/restart mechanism. Yet, in today's systems, node failures can often be anticipated by detecting a deteriorating health status. Instead of a reactive scheme for fault tolerance (FT), we are promoting a proactive one where processes automatically migrate from "unhealthy" nodes to healthy ones. Our approach relies on operating system virtualization techniques exemplified by, but not limited to, Xen. This paper contributes an automatic and transparent mechanism for proactive FT for arbitrary MPI applications. It leverages virtualization techniques combined with health monitoring and load-based migration. We exploit Xen's live migration mechanism for a guest operating system (OS) to migrate an MPI task from a health-deteriorating node to a healthy one without stopping the MPI task during most of the migration. Our proactive FT daemon orchestrates the tasks of health monitoring, load determination and initiation of guest OS migration. Experimental results demonstrate that live migration hides migration costs and limits the overhead to only a few seconds, making it an attractive approach to realize FT in HPC systems. Overall, our enhancements make proactive FT a valuable asset for long-running MPI applications that is complementary to reactive FT using full checkpoint/restart schemes, since checkpoint frequencies can be reduced as fewer unanticipated failures are encountered. In the context of OS virtualization, we believe that this is the first comprehensive study of proactive fault tolerance where live migration is actually triggered by health monitoring.

  12. Who watches the watchers?: preventing fault in a fault tolerance library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanavige, C. D.

    The Scalable Checkpoint/Restart library (SCR) was developed and is used by researchers at Lawrence Livermore National Laboratory to provide a fast and efficient method of saving and recovering large applications during runtime on high-performance computing (HPC) systems. Though SCR protects other programs, up until June 2017 nothing was actively protecting SCR. The goal of this project was to automate the building and testing of this library on the varying HPC architectures on which it is used. Our methods centered around the use of a continuous integration tool called Bamboo that allowed automation agents to be installed on the HPC systems themselves. These agents provided a way for us to establish a new and unique way to automate and customize the allocation of resources and the running of tests with CMake's unit testing framework, CTest, as well as integration testing scripts through an HPC package manager called Spack. These methods provided a parallel environment in which to test the more complex features of SCR. As a result, SCR is now automatically built and tested on several HPC architectures any time changes are made by developers to the library's source code. The results of these tests are then communicated back to the developers for immediate feedback, allowing them to fix functionality of SCR that may have broken. Hours of developers' time are now being saved from the tedious process of manual testing and debugging, which saves money and allows the SCR project team to focus their efforts on development. Thus, HPC system users can use SCR in conjunction with their own applications to efficiently and effectively checkpoint and restart as needed, with the assurance that SCR itself is functioning properly.

  13. Materials Development: Pitfalls, Successes, and Lessons

    NASA Technical Reports Server (NTRS)

    Johnson, Sylvia M.

    2010-01-01

    The incorporation of new or improved materials in aerospace systems, or indeed any systems, can yield tremendous payoffs in system performance or cost, and in many cases can be enabling for a mission or concept. However, the availability of new materials requires advance development, and too often this is neglected or postponed, leaving a project or mission with little choice. In too many cases, the immediate reaction is to use what was used before; this usually turns out not to be possible and results in large sums of money, and amounts of time, being expended on reinvention rather than on development of a material with extended capabilities. Material innovation and development is time consuming, with common wisdom claiming that the timeline is at least 20 years. This time expands considerably when development is stopped and restarted, or knowledge is lost. Down-selection of materials is necessary, especially as the Technology Readiness Level (TRL) increases. However, the costs must be considered, and approaches should be taken to retain knowledge and allow for restarting the development process. Regardless of the exact time required, it is clear that it is necessary to have materials, at all stages of development, in a research and development pipeline and available for maturation as required. This talk will discuss some of these issues, including some of the elements of a development path for materials. Some history of materials developments will be included. The usefulness of computational materials science, as a route to decreasing material development time, will be an important element of this discussion. Collaboration with outside institutions and nations is also critical for innovation, but raises issues of intellectual property and protections, and national security (ITAR rules, for example).

  14. Friction Stir Weld Restart+Reweld Repair Allowables

    NASA Technical Reports Server (NTRS)

    Clifton, Andrew

    2008-01-01

    A friction stir weld (FSW) repair method has been developed and successfully implemented on Al 2195 plate material for the Space Shuttle External Fuel Tank (ET). The method includes restarting the friction stir weld in the termination hole of the original weld followed by two reweld passes. Room temperature and cryogenic temperature mechanical properties exceeded minimum FSW design strength and compared well with the development data. Simulated service test results also compared closely to historical data for initial FSW, confirming no change to the critical flaw size or inspection requirements for the repaired weld. Testing of VPPA fusion/FSW intersection weld specimens exhibited acceptable strength and exceeded the minimum design value. Porosity, when present at the intersection was on the root side toe of the fusion weld, the "worst case" being 0.7 inch long. While such porosity may be removed by sanding, this "worst case" porosity condition was tested "as is" and demonstrated that porosity did not negatively affect the strength of the intersection weld. Large, 15-inch "wide panels" FSW repair welds were tested to demonstrate strength and evaluate residual stresses using photo stress analysis. All results exceeded design minimums, and photo stress analysis showed no significant stress gradients due to the presence of the restart and multi-pass FSW repair weld.

  15. The best bits in an iris code.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2009-06-01

    Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all the bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.
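
    As a small illustration of the matching step described above (not the paper's code; the 1-D code layout and all names are simplifications), the fractional Hamming distance counts disagreeing bits over the positions both masks flag as usable, and masking is exactly how inconsistent bits would be excluded:

    ```python
    import numpy as np

    def fractional_hamming(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes,
        counting only bit positions that both masks mark as usable."""
        usable = mask_a & mask_b
        if usable.sum() == 0:
            return 1.0
        return np.count_nonzero((code_a ^ code_b) & usable) / usable.sum()

    rng = np.random.default_rng(3)
    a = rng.integers(0, 2, 2048).astype(bool)
    b = a.copy()
    b[:100] ^= True                              # flip 100 bits
    m = np.ones(2048, dtype=bool)
    print(fractional_hamming(a, b, m, m))        # 100/2048 ~ 0.049
    ```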

  16. Application of Consider Covariance to the Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lundberg, John B.

    1996-01-01

    The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.

  17. Estimating Thruster Impulses From IMU and Doppler Data

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.; Kruizinga, Gerhard L.

    2009-01-01

    A computer program implements a thrust impulse measurement (TIM) filter, which processes data on changes in velocity and attitude of a spacecraft to estimate the small impulsive forces and torques exerted by the thrusters of the spacecraft's reaction control system (RCS). The velocity-change data are obtained from line-of-sight-velocity data from Doppler measurements made from the Earth. The attitude-change data are telemetered from an inertial measurement unit (IMU) aboard the spacecraft. The TIM filter estimates the three-axis thrust vector for each RCS thruster, thereby enabling reduction of cumulative navigation error attributable to inaccurate prediction of thrust vectors. The filter has been augmented with a simple mathematical model to compensate for large temperature fluctuations in the spacecraft thruster catalyst bed in order to estimate thrust more accurately at deadbanding cold-firing levels. Also, rigorous consider-covariance estimation is applied in the TIM to account for the expected uncertainty in the moment of inertia and the location of the center of gravity of the spacecraft. The TIM filter was built with, and depends upon, a sigma-point consider-filter algorithm implemented in a Python-language computer program.

  18. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
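
    A toy discrete-state sketch of the Bayes filter being approximated (not the paper's neural implementation; the transition matrix and likelihood are invented): predict the belief through the stimulus dynamics, then apply Bayes' rule with the response likelihood.

    ```python
    import numpy as np

    def bayes_filter_step(belief, transition, likelihood):
        """One discrete Bayes filter step: transition[i, j] is
        P(next=j | current=i); likelihood holds P(response | state)."""
        predicted = transition.T @ belief      # prediction through dynamics
        posterior = likelihood * predicted     # Bayes' rule update
        return posterior / posterior.sum()

    # 3-state toy stimulus with sticky dynamics
    T = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
    belief = np.array([1 / 3, 1 / 3, 1 / 3])
    belief = bayes_filter_step(belief, T, likelihood=np.array([0.05, 0.7, 0.25]))
    print(belief)
    ```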

  19. Essentially nonoscillatory postprocessing filtering methods

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1992-01-01

    High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least-squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results based on nonstandard central difference schemes, which exactly preserve entropy and have recently been shown generally not to be weakly convergent to a solution of the conservation law, are also obtained using our filters.

  20. Systolic Signal Processor/High Frequency Direction Finding

    DTIC Science & Technology

    1990-10-01

    MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid... computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC... MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description

  1. DEKFIS user's guide: Discrete Extended Kalman Filter/Smoother program for aircraft and rotorcraft data consistency

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.

  2. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resources requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.

  3. Evaluating Application Resilience with XRay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sui; Bronevetsky, Greg; Li, Bin

    2015-05-07

    The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.

  4. A comparative CFD study of four inferior vena cava filters.

    PubMed

    López, Josep M; Fortuny, Gerard; Puigjaner, Dolors; Herrero, Joan; Marimon, Francesc

    2018-03-30

    Computational fluid dynamics was used to simulate the flow of blood within an inferior vena cava (IVC) geometry model that was reconstructed from computed tomography images obtained from a real patient. The main novelty of the present work is that we simulated the implantation of 4 different filter models in this realistic IVC geometry. We considered blood flow rates in the range between V_in = 20 and V_in = 80 cm^3/s, and all simulations were performed with both a Newtonian and a non-Newtonian model for the blood viscosity. We compared the hemodynamic performance of the different filter models, paying special attention to the total drag force, F_d, exerted by the blood flow on the filter surface. This force is the sum of 2 contributions: the viscous skin friction force, which was found to be roughly proportional to the filter surface area, and the pressure force, which depended on the particular filter geometry design. The F_d force is relevant because it must be balanced by the total force exerted by the filter hooks/struts on the IVC wall at the attachment locations. For the highest V_in value investigated, the variation in F_d among filters was from 116 to 308 dyne. We also showed how the present results can be extrapolated to obtain good estimates of the drag forces if the blood viscosity level changes, i.e., if the patient with an implanted filter is treated with anticoagulant therapy.

  5. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments, where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate-change ratios, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. The traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to the structural decomposition of the original filter performed by polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR are explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of a multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated, and its implementation results on several targeted FPGA devices are summarized in terms of functional (bit width, fixed-point error) and performance (timing closure, resource usage, and power estimation) parameters.
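
    A rough sketch of the thread-decomposition view for the simplest case, a single-stage decimate-by-M FIR (the function name and sizes are invented; the paper's FPGA scheduling is not modeled): each output sample is its own finite convolution "thread", computed only at the low output rate, and the threads are mutually independent so they can run concurrently.

    ```python
    import numpy as np

    def decimating_fir_threads(x, h, M):
        """Each output of a decimate-by-M FIR as an independent finite
        convolution thread, evaluated only at the output rate."""
        n_out = (len(x) - len(h)) // M + 1
        return np.array([np.dot(h, x[n * M:n * M + len(h)][::-1])
                         for n in range(n_out)])

    x = np.arange(32, dtype=float)
    h = np.ones(4) / 4.0
    y = decimating_fir_threads(x, h, M=2)
    # Matches the naive filter-then-downsample result:
    print(np.allclose(y, np.convolve(x, h, "valid")[::2]))  # True
    ```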

  6. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  7. Review of Transient Testing of Fast Reactor Fuels in the Transient REActor Test Facility (TREAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, C.; Wachs, D.; Carmack, J.

    The restart of the Transient REActor Test (TREAT) facility provides a unique opportunity to engage the fast reactor fuels community to reinitiate in-pile experimental safety studies. Historically, the TREAT facility played a critical role in characterizing the behavior of both metal and oxide fast reactor fuels under off-normal conditions, irradiating hundreds of fuel pins to support fast reactor fuel development programs. The resulting test data has provided validation for a multitude of fuel performance and severe accident analysis computer codes. This paper will provide a review of the historical database of TREAT experiments including experiment design, instrumentation, test objectives, and salient findings. Additionally, the paper will provide an introduction to the current and future experiment plans of the U.S. transient testing program at TREAT.

  8. LLNL Mercury Project Trinity Open Science Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, Patrick; Dawson, Shawn; McKinley, Scott

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and with more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.

  9. Uncertainty quantification of seabed parameters for large data volumes along survey tracks with a tempered particle filter

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Quijano, J. E.; Dosso, S. E.; Holland, C. W.; Mandolesi, E.

    2016-12-01

    Geophysical seabed properties are important for the detection and classification of unexploded ordnance. However, current surveying methods such as vertical seismic profiling, coring, or inversion are of limited use when surveying large areas with high spatial sampling density. We consider surveys based on a source and receiver array towed by an autonomous vehicle which produce large volumes of seabed reflectivity data that contain unprecedented and detailed seabed information. The data are analyzed with a particle filter, which requires efficient reflection-coefficient computation, efficient inversion algorithms and efficient use of computer resources. The filter quantifies information content of multiple sequential data sets by considering results from previous data along the survey track to inform the importance sampling at the current point. Challenges arise from environmental changes along the track where the number of sediment layers and their properties change. This is addressed by a trans-dimensional model in the filter which allows layering complexity to change along a track. Efficiency is improved by likelihood tempering of various particle subsets and including exchange moves (parallel tempering). The filter is implemented on a hybrid computer that combines central processing units (CPUs) and graphics processing units (GPUs) to exploit three levels of parallelism: (1) fine-grained parallel computation of spherical reflection coefficients with a GPU implementation of Levin integration; (2) updating particles by concurrent CPU processes which exchange information using automatic load balancing (coarse grained parallelism); (3) overlapping CPU-GPU communication (a major bottleneck) with GPU computation by staggering CPU access to the multiple GPUs. The algorithm is applied to spherical reflection coefficients for data sets along a 14-km track on the Malta Plateau, Mediterranean Sea. We demonstrate substantial efficiency gains over previous methods. [This research was supported in part by the U.S. Dept of Defense, through the Strategic Environmental Research and Development Program (SERDP).]
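
    As a much-reduced illustration of the likelihood-tempering ingredient (not the paper's trans-dimensional, GPU-parallel filter; the toy model, jitter move, and all names are ours), the sketch below introduces the likelihood gradually through increasing powers beta, resampling at each stage:

    import numpy as np

    rng = np.random.default_rng(0)

    def tempered_update(particles, loglik, betas=(0.25, 0.5, 1.0)):
        """One tempered particle-filter update. The likelihood enters
        gradually through powers beta_1 < ... < 1, with resampling at each
        stage, which helps when the full likelihood is far sharper than the
        prior (the situation exchange moves are designed to ease)."""
        prev = 0.0
        for beta in betas:
            logw = (beta - prev) * loglik(particles)   # incremental tempering
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(len(particles), size=len(particles), p=w)
            particles = particles[idx]
            # small jitter move to restore diversity after resampling
            particles = particles + 0.05 * rng.standard_normal(particles.shape)
            prev = beta
        return particles

    # Toy example: sharp Gaussian likelihood centred at 2.0
    particles = rng.standard_normal(5000) * 5.0
    loglik = lambda x: -0.5 * ((x - 2.0) / 0.1) ** 2
    posterior = tempered_update(particles, loglik)
    print(posterior.mean(), posterior.std())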

  10. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystem Concepts. Volume 1; Executive Summary

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.

    2000-01-01

    The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure included extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed Final Task Reports are available on each individual task.

  11. Computational time reduction for sequential batch solutions in GNSS precise point positioning technique

    NASA Astrophysics Data System (ADS)

    Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando

    2017-08-01

    Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution (that is, not only the final coordinates but also those of previous GNSS epochs) is needed, as in convergence studies, obtaining it through batch adjustment becomes very time consuming owing to the matrix inversions, which accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution of the current epoch. Thus, filter implementations need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
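
    The paper's exact speed-up method is not detailed in the abstract; the sketch below (ours) shows the general mechanism such a reduction can exploit: a batch solution at every epoch obtained by accumulating the normal equations instead of rebuilding and re-inverting the full system, under the simplifying assumption of a constant parameter vector.

    import numpy as np

    def sequential_batch_solutions(blocks):
        """Produce a batch least-squares solution after every epoch by
        accumulating the normal equations (A^T A, A^T b). `blocks` yields
        the (A_k, b_k) observation equations of epoch k. Illustrative only:
        real static PPP also introduces new parameters per epoch."""
        N, u, solutions = None, None, []
        for A_k, b_k in blocks:
            if N is None:
                N, u = A_k.T @ A_k, A_k.T @ b_k
            else:
                N += A_k.T @ A_k        # cheap update, no rebuild of the system
                u += A_k.T @ b_k
            solutions.append(np.linalg.solve(N, u))  # epoch-k batch estimate
        return solutions

    # Toy example: 3 epochs of observations of a constant 2-parameter state
    rng = np.random.default_rng(1)
    x_true = np.array([1.0, -0.5])
    blocks = []
    for _ in range(3):
        A = rng.standard_normal((10, 2))
        blocks.append((A, A @ x_true + 0.01 * rng.standard_normal(10)))
    for x in sequential_batch_solutions(blocks):
        print(x)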

  12. Spacecraft attitude determination using a second-order nonlinear filter

    NASA Technical Reports Server (NTRS)

    Vathsal, S.

    1987-01-01

    The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of the attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when the performance index of the root sum square estimation error of the quaternion vector is compared. The second-order filter identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as a function of the process and measurement nonlinearities, respectively.

  13. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
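
    A toy version of the scheme (ours, not the JPL implementation; the fitness function and GA settings are illustrative) encodes one (left, right) threshold pair per dimension, rejects any sample falling outside an interval, and evolves the thresholds with a simple mutate-and-select loop:

    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(genome, X, good):
        """Reward rejecting bad runs, penalise rejecting good ones.
        A sample is rejected if any feature falls outside its [lo, hi]."""
        lo, hi = genome[:, 0], genome[:, 1]
        rejected = ((X < lo) | (X > hi)).any(axis=1)
        return (rejected & ~good).sum() - (rejected & good).sum()

    def evolve(X, good, pop=40, gens=60):
        d = X.shape[1]
        spread = X.max(0) - X.min(0)
        # random initial population of per-dimension (lo, hi) threshold pairs
        P = [np.sort(rng.uniform(X.min(0) - spread, X.max(0) + spread,
                                 size=(2, d)).T, axis=1) for _ in range(pop)]
        for _ in range(gens):
            scores = np.array([fitness(g, X, good) for g in P])
            elite = [P[i] for i in np.argsort(scores)[-pop // 2:]]   # keep best half
            children = [np.sort(g + rng.normal(0, 0.1, g.shape) * spread[:, None],
                                axis=1) for g in elite]              # mutate
            P = elite + children
        return max(P, key=lambda g: fitness(g, X, good))

    # Toy data: 2-D features; "good" runs cluster near the origin
    X = rng.standard_normal((500, 2))
    good = (np.abs(X) < 1.2).all(axis=1)
    print("thresholds per dimension:", evolve(X, good))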

  14. Computer designed compensation filters for use in radiation therapy. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, R. Jr.

    1982-12-01

    A computer program was written in the MUMPS language to design filters for use in cancer radiotherapy. The filter corrects for patient surface irregularities and allows homogeneous dose distribution with depth in the patient. The program does not correct for variations in the density of the patient. The program uses data available from the software in Computerized Medical Systems Inc.'s Radiation Treatment Planning package. External contours of General Electric CAT scans are made using the RTP software. The program uses the data from these external contours in designing the compensation filters. The program is written to process from 3 to 31, 1 cm thick, CAT scan slices. The output from the program can be in one of two different forms. The first option will drive the probe of a CMS Water Phantom in three dimensions as if it were the bit of a routing machine. Thus a routing machine constructed to run from the same output that drives the Water Phantom probe would produce a three dimensional filter mold. The second option is a listing of thicknesses for an array of aluminum blocks to filter the radiation. The size of the filter array is 10 in. by 10 in. The Printronix printer provides an array of blocks 1/2 in. by 1/2 in. with the thickness in millimeters printed inside each block.

  15. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any two users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
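
    The spreading-activation similarity itself is not reproduced here; the sketch below (ours, with ordinary cosine similarity as a stand-in and an illustrative rating matrix) shows the top-N-neighbour prediction step the paper ends with, which bounds both the computation and the neighbourhood noise.

    import numpy as np

    def predict_topN(R, user, item, N=10):
        """Predict R[user, item] from the top-N most similar users only,
        using cosine similarity on the rating matrix R (0 = unrated)."""
        norms = np.linalg.norm(R, axis=1) + 1e-12
        sims = (R @ R[user]) / (norms * norms[user])   # cosine similarity
        sims[user] = -np.inf                           # exclude the user itself
        sims[R[:, item] == 0] = -np.inf                # need neighbours who rated item
        top = np.argsort(sims)[-N:]                    # top-N neighbours
        w = np.clip(sims[top], 0, None)
        return float(w @ R[top, item] / w.sum()) if w.sum() > 0 else 0.0

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 4, 4]], dtype=float)
    print(predict_topN(R, user=0, item=2, N=2))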

  16. Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Takaki, Ryoji; Akita, Takeshi; Shima, Eiji

    A thermal mathematical model plays an important role in on-orbit operations as well as spacecraft thermal design. The thermal mathematical model has some uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which degrade the efficiency and accuracy of the model. A particle filter, one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts a large number of ensemble computations, which require large computational power. Recently, General-Purpose computing on Graphics Processing Units (GPGPU) has attracted attention in high-performance computing. Therefore, GPGPU is applied to increase the computational speed of the thermal analysis used in the particle filter. This paper shows the speed-up achieved by using GPGPU as well as the method of applying it.

  17. A mathematical model for computer image tracking.

    PubMed

    Legters, G R; Young, T Y

    1982-06-01

    A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
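
    A minimal constant-velocity sketch of the occlusion-handling idea (models and noise levels are illustrative, not from the paper): when the measurement is missing during severe occlusion, only the Kalman prediction step runs, so the track coasts across the gap.

    import numpy as np

    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state: [position, velocity]
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = 0.01 * np.eye(2)                    # process noise covariance
    R = np.array([[0.25]])                  # measurement noise covariance

    def step(x, P, z=None):
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        if z is not None:                              # update only if visible
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P, track = np.array([0.0, 1.0]), np.eye(2), []
    for t in range(20):
        z = None if 8 <= t < 12 else np.array([t + 0.5 * np.random.randn()])
        x, P = step(x, P, z)                           # coasts while occluded
        track.append(x[0])
    print(np.round(track, 2))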

  18. Research reports: 1990 NASA/ASEE Summer Faculty Fellowship Program

    NASA Technical Reports Server (NTRS)

    Anderson, Loren A. (Editor); Beymer, Mark A. (Editor)

    1990-01-01

    A collection of technical reports on research conducted by the participants in this program is presented. The topics covered include: human-computer interface software, multimode fiber optic communication links, electrochemical impedance spectroscopy, rocket-triggered lightning, robotics, a flammability study of thin polymeric film materials, a vortex shedding flowmeter, modeling of flow systems, monomethyl hydrazine vapor detection, a rocket noise filter system using digital filters, computer programs, lower body negative pressure, closed ecological systems, and others. Several reports with respect to space shuttle orbiters are presented.

  19. A VME-based software trigger system using UNIX processors

    NASA Astrophysics Data System (ADS)

    Atmur, Robert; Connor, David F.; Molzon, William

    1997-02-01

    We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.

  20. Development of an Adaptive Filter to Estimate the Percentage of Body Fat Based on Anthropometric Measures

    NASA Astrophysics Data System (ADS)

    do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo

    2018-01-01

    This study aims to develop an adaptive filter algorithm to determine the percentage of body fat based on anthropometric indicators in adolescents. Measurements such as body mass, height, and waist circumference were collected for a better analysis. The development of this filter was based on the Wiener filter, used to produce an estimate of a random process. The Wiener filter minimizes the mean square error between the estimated random process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and ease of computation. Excellent results were obtained with the developed filter; these results were analyzed and compared with the collected data.
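
    A minimal LMS sketch in the spirit of the filter described (the anthropometric data and the paper's exact configuration are not reproduced; the toy task and all settings are ours): the weights descend the instantaneous squared-error gradient toward the Wiener solution.

    import numpy as np

    def lms(x, d, n_taps=8, mu=0.01):
        """Classic LMS adaptive FIR filter: an iterative, low-cost
        approximation to the Wiener (minimum mean-square error) solution."""
        w = np.zeros(n_taps)
        e = np.zeros(len(x))
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
            e[n] = d[n] - w @ u                  # estimation error
            w += mu * e[n] * u                   # LMS weight update
        return w, e

    # Toy system identification: recover an unknown 4-tap FIR response
    rng = np.random.default_rng(3)
    x = rng.standard_normal(5000)
    h_true = np.array([0.5, -0.3, 0.2, 0.1])
    d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w, _ = lms(x, d, n_taps=4, mu=0.02)
    print(np.round(w, 3))   # should approach h_true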

  1. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
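
    A software sketch of the threshold-decomposition principle (the optical correlator is replaced by array operations; the cross-check against scipy is our addition): binarize at every grey level, apply a linear 3x3 neighbourhood sum, threshold pointwise, and stack the binary results. Rank 4 of 9 reproduces the median exactly, rank 0 the minimum, rank 8 the maximum.

    import numpy as np
    from scipy.ndimage import median_filter

    def ranked_order_threshold_decomposition(img, rank, levels=256):
        """Ranked-order filtering of a 3x3 neighbourhood via threshold
        decomposition: the nonlinear ranking becomes a linear filtering
        step (the box sum) followed by a point threshold comparison."""
        out = np.zeros(img.shape, dtype=np.int32)
        padded = np.pad(img, 1, mode="edge")
        for t in range(1, levels):
            b = (padded >= t).astype(np.int32)              # threshold component
            s = sum(b[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3))    # linear 3x3 sum
            out += (s >= 9 - rank).astype(np.int32)         # point comparison
        return out

    rng = np.random.default_rng(4)
    img = rng.integers(0, 256, size=(32, 32))
    med = ranked_order_threshold_decomposition(img, rank=4)
    assert np.array_equal(med, median_filter(img, size=3, mode="nearest"))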

  2. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133 keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Detection of circuit-board components with an adaptive multiclass correlation filter

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.

  4. Cryogenic metal mesh bandpass filters for submillimeter astronomy

    NASA Technical Reports Server (NTRS)

    Dragovan, M.

    1984-01-01

    The design and performance of a tunable double-half-wave bandpass filter centered at 286 microns (Delta lambda/lambda = 0.16) and operating at cryogenic temperatures (for astronomy applications) are presented. The operating principle is explained, and the fabrication of the device, which comprises two identical mutually coupled Fabry-Perot filters with electroformed Ni-mesh reflectors and is tuned by means of variable spacers, is described. A drawing of the design and graphs of computed and measured performance are provided. Significantly improved bandpass characteristics are obtained relative to the single Fabry-Perot filter.

  5. Exploiting Superconvergence in Discontinuous Galerkin Methods for Improved Time-Stepping and Visualization

    DTIC Science & Technology

    2016-09-08

    Results include: 1) analysis of the Smoothness-Increasing Accuracy-Conserving (SIAC) filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC filter; and 3) a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7]. Related publication: Li, "SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh", International Conference on Spectral and Higher Order Methods (ICOSAHOM).

  6. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    DTIC Science & Technology

    2017-04-12

    Measurement of core temperature (CT) outside of stringent laboratory environments is difficult. This study evaluated ECTemp™, a heart rate-based extended Kalman filter CT-estimation algorithm [7, 13, 14], which utilizes an extended Kalman filter to estimate CT from heart rate. The extended Kalman filter mapping function variance coefficient (Ct) was computed using an equation of the form Ct = −9.1428 × …

  7. A Stabilized Sparse-Matrix U-D Square-Root Implementation of a Large-State Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Boggs, D.; Ghil, M.; Keppenne, C.

    1995-01-01

    The full nonlinear Kalman filter sequential algorithm is, in theory, well-suited to the four-dimensional data assimilation problem in large-scale atmospheric and oceanic models. In practice, however, this algorithm can be very sensitive to computer roundoff, and results may cease to be meaningful as time advances. Implementations of a modified Kalman filter are given.
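
    The U-D square-root mechanization itself is not reproduced here; as a smaller illustration of the same roundoff concern (our example, with deliberately bad scaling), the sketch below contrasts the simple covariance update with the algebraically equivalent Joseph form, which preserves symmetry and positive semi-definiteness far more reliably.

    import numpy as np

    def update_simple(P, K, H, R):
        return (np.eye(len(P)) - K @ H) @ P       # prone to cancellation

    def update_joseph(P, K, H, R):
        A = np.eye(len(P)) - K @ H
        return A @ P @ A.T + K @ R @ K.T          # symmetric by construction

    rng = np.random.default_rng(5)
    n, m = 6, 2
    P = 1e8 * np.eye(n)             # badly scaled prior covariance
    H = rng.standard_normal((m, n))
    R = 1e-8 * np.eye(m)            # very accurate measurements
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

    for name, Pn in [("simple", update_simple(P, K, H, R)),
                     ("joseph", update_joseph(P, K, H, R))]:
        lam = np.linalg.eigvalsh((Pn + Pn.T) / 2).min()
        print(name, "asymmetry:", np.abs(Pn - Pn.T).max(), "min eig:", lam)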

  8. Impact of Image Filters and Observations Parameters in CBCT for Identification of Mandibular Osteolytic Lesions

    PubMed Central

    Monteiro, Bruna Moraes; Nobrega Filho, Denys Silveira; Lopes, Patrícia de Medeiros Loureiro; de Sales, Marcelo Augusto Oliveira

    2012-01-01

    The aim of this study was to analyze the influence of image-enhancement filters (algorithms) on Cone Beam Computed Tomography (CBCT) diagnosis of osteolytic lesions of the mandible, in order to establish the most suitable protocols for viewing CBCT images. Fifteen dry mandibles, in which perforations simulating lesions had been made, were submitted to CBCT examination. Two examiners analyzed the images on two occasions, using the image-enhancement filters Hard, Normal, and Very Sharp contained in the iCAT Vision software, and three assessment protocols: axial plane; sagittal and coronal planes; and axial, sagittal, and coronal planes simultaneously (MPR). The sensitivity and specificity (validity) of CBCT were demonstrated: the values achieved were above 75% for sensitivity and above 85% for specificity, reaching around 95.5% sensitivity and 99% specificity when the appropriate observation protocol was used. It was concluded that the use of image-enhancement filters (algorithms) influences CBCT diagnosis, since all measured values were correspondingly higher when the Very Sharp filter was used, which justifies its use in clinical practice, followed by the Hard and Normal filters, in order of decreasing values. PMID:22956955

  9. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edge etc.), image segmentation, image enhancement, image noise removing, multi-scale shape description etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework of multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on parallel pipeline was designed to achieve high throughput rate. Secondly, in order to save some multiplier, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicate first in first out memory named as CAFIFO (Column Addressing FIFO) was designed to avoid the error propagating induced by spark on clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3 scales 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that, the proposed framework can computing a Multi-scales 2-D Gaussian filtering within one pixel clock period, is further suitable for real-time image processing. Moreover, the main principle can be popularized to the other operators based on convolution, such as Gabor filter, Sobel operator and so on.

  10. Impact of Image Filters and Observations Parameters in CBCT for Identification of Mandibular Osteolytic Lesions.

    PubMed

    Monteiro, Bruna Moraes; Nobrega Filho, Denys Silveira; Lopes, Patrícia de Medeiros Loureiro; de Sales, Marcelo Augusto Oliveira

    2012-01-01

    The aim of this study was to analyze the influence of image-enhancement filters (algorithms) on Cone Beam Computed Tomography (CBCT) diagnosis of osteolytic lesions of the mandible, in order to establish the most suitable protocols for viewing CBCT images. Fifteen dry mandibles, in which perforations simulating lesions had been made, were submitted to CBCT examination. Two examiners analyzed the images on two occasions, using the image-enhancement filters Hard, Normal, and Very Sharp contained in the iCAT Vision software, and three assessment protocols: axial plane; sagittal and coronal planes; and axial, sagittal, and coronal planes simultaneously (MPR). The sensitivity and specificity (validity) of CBCT were demonstrated: the values achieved were above 75% for sensitivity and above 85% for specificity, reaching around 95.5% sensitivity and 99% specificity when the appropriate observation protocol was used. It was concluded that the use of image-enhancement filters (algorithms) influences CBCT diagnosis, since all measured values were correspondingly higher when the Very Sharp filter was used, which justifies its use in clinical practice, followed by the Hard and Normal filters, in order of decreasing values.

  11. Attitude Determination Using a MEMS-Based Flight Information Measurement Unit

    PubMed Central

    Ma, Der-Ming; Shiau, Jaw-Kuen; Wang, I.-Chiang; Lin, Yu-Heng

    2012-01-01

    Obtaining precise attitude information is essential for aircraft navigation and control. This paper presents the results of the attitude determination using an in-house designed low-cost MEMS-based flight information measurement unit. This study proposes a quaternion-based extended Kalman filter to integrate the traditional quaternion and gravitational force decomposition methods for attitude determination algorithm. The proposed extended Kalman filter utilizes the evolution of the four elements in the quaternion method for attitude determination as the dynamic model, with the four elements as the states of the filter. The attitude angles obtained from the gravity computations and from the electronic magnetic sensors are regarded as the measurement of the filter. The immeasurable gravity accelerations are deduced from the outputs of the three axes accelerometers, the relative accelerations, and the accelerations due to body rotation. The constraint of the four elements of the quaternion method is treated as a perfect measurement and is integrated into the filter computation. Approximations of the time-varying noise variances of the measured signals are discussed and presented with details through Taylor series expansions. The algorithm is intuitive, easy to implement, and reliable for long-term high dynamic maneuvers. Moreover, a set of flight test data is utilized to demonstrate the success and practicality of the proposed algorithm and the filter design. PMID:22368455

  12. Attitude determination using a MEMS-based flight information measurement unit.

    PubMed

    Ma, Der-Ming; Shiau, Jaw-Kuen; Wang, I-Chiang; Lin, Yu-Heng

    2012-01-01

    Obtaining precise attitude information is essential for aircraft navigation and control. This paper presents the results of the attitude determination using an in-house designed low-cost MEMS-based flight information measurement unit. This study proposes a quaternion-based extended Kalman filter to integrate the traditional quaternion and gravitational force decomposition methods for attitude determination algorithm. The proposed extended Kalman filter utilizes the evolution of the four elements in the quaternion method for attitude determination as the dynamic model, with the four elements as the states of the filter. The attitude angles obtained from the gravity computations and from the electronic magnetic sensors are regarded as the measurement of the filter. The immeasurable gravity accelerations are deduced from the outputs of the three axes accelerometers, the relative accelerations, and the accelerations due to body rotation. The constraint of the four elements of the quaternion method is treated as a perfect measurement and is integrated into the filter computation. Approximations of the time-varying noise variances of the measured signals are discussed and presented with details through Taylor series expansions. The algorithm is intuitive, easy to implement, and reliable for long-term high dynamic maneuvers. Moreover, a set of flight test data is utilized to demonstrate the success and practicality of the proposed algorithm and the filter design.

  13. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.

  14. Alpha Thalassemia/Mental Retardation Syndrome X-linked Gene Product ATRX Is Required for Proper Replication Restart and Cellular Resistance to Replication Stress*

    PubMed Central

    Leung, Justin Wai-Chung; Ghosal, Gargi; Wang, Wenqi; Shen, Xi; Wang, Jiadong; Li, Lei; Chen, Junjie

    2013-01-01

    Alpha thalassemia/mental retardation syndrome X-linked (ATRX) is a member of the SWI/SNF protein family of DNA-dependent ATPases. It functions as a chromatin remodeler and is classified as an SNF2-like helicase. Here, we showed that somatic knock-out of ATRX resulted in perturbed S-phase progression as well as hypersensitivity to replication stress. ATRX is recruited to sites of DNA damage and is required for efficient checkpoint activation and faithful replication restart. In addition, we identified ATRX as a binding partner of the MRE11-RAD50-NBS1 (MRN) complex. Together, these results suggest a non-canonical function of ATRX in guarding genomic stability. PMID:23329831

  15. Psychological adaptation among residents following restart of Three Mile Island.

    PubMed

    Prince-Embury, S; Rooney, J F

    1995-01-01

    Psychological adaptation is examined in a sample of residents who remained in the vicinity of Three Mile Island following the restart of the nuclear generating facility which had been shut down since the 1979 accident. Findings indicate a lowering of psychological symptoms between 1985 and 1989 in spite of increased lack of control, less faith in experts and increased fear of developing cancer. The suggestion is made that reduced stress might have been related to a process of adaptation whereby a cognition of emergency preparedness was integrated by some of these residents as a modulating cognitive element. Findings also indicate that "loss of faith in experts" is a persistently salient cognition consistent with the "shattered assumptions" theory of victimization.

  16. Life stage differences in resident coping with restart of the Three Mile Island nuclear generating facility.

    PubMed

    Prince-Embury, S; Rooney, J F

    1990-12-01

    A study of residents who remained in the vicinity of Three Mile Island (TMI) immediately following the restart of the nuclear generating plant revealed that older residents employed a more emotion-focused coping style in the face of this event than did younger residents. Coping style was, however, unrelated to the level of psychological symptoms for these older residents, whereas demographic variables were related. Among younger residents, on the other hand, coping style was related to the level of psychological symptoms, whereas demographic variables were not. Among younger residents, emotion-focused coping was associated with more symptoms and problem-focused coping was associated with fewer symptoms, contradicting previous findings among TMI area residents.

  17. Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition

    NASA Astrophysics Data System (ADS)

    Meckley, John R.

    1995-09-01

    The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier-based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length that is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics, and medical tomography with test images of noisy concentric lines, a real spectrogram of a blowfish (a very nonstationary spectrum), and the Shepp-Logan computed tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include what level(s) to use for detection and filtering because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
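
    As a minimal stand-in for the Radon-domain filtering idea (simple coefficient thresholding instead of the paper's wavelet-pyramid detection and matched responses; the threshold and test image are ours, and skimage's radon/iradon are assumed available), the sketch below exploits the fact that lines concentrate into sinogram peaks:

    import numpy as np
    from skimage.transform import radon, iradon

    rng = np.random.default_rng(6)
    img = np.zeros((128, 128))
    img[64, :] = 1.0                        # a horizontal line
    img[:, 32] = 1.0                        # a vertical line
    noisy = img + 0.5 * rng.standard_normal(img.shape)

    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(noisy, theta=theta, circle=False)        # lines -> sinogram peaks
    thresh = np.percentile(np.abs(sino), 99.0)
    sino_f = np.where(np.abs(sino) >= thresh, sino, 0.0)  # keep only peaks
    recon = iradon(sino_f, theta=theta, circle=False)     # filtered backprojection

    corr = lambda a, b: np.corrcoef(a.ravel(), b.ravel())[0, 1]
    print("correlation with truth, noisy:   ", corr(img, noisy))
    print("correlation with truth, filtered:", corr(img, recon))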

  18. A simple filter circuit for denoising biomechanical impact signals.

    PubMed

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  19. Implicit Kalman filtering

    NASA Technical Reports Server (NTRS)

    Skliar, M.; Ramirez, W. F.

    1997-01-01

    For an implicitly defined discrete system, a new algorithm for Kalman filtering is developed and an efficient numerical implementation scheme is proposed. Unlike the traditional explicit approach, the implicit filter can be readily applied to ill-conditioned systems and allows for generalization to descriptor systems. The implementation of the implicit filter depends on the solution of the congruence matrix equation A1 Px A1^T = Py. We develop a general iterative method for the solution of this equation, and prove necessary and sufficient conditions for convergence. It is shown that when the system matrices of an implicit system are sparse, the implicit Kalman filter requires significantly less computer time and storage to implement as compared to the traditional explicit Kalman filter. Simulation results are presented to illustrate and substantiate the theoretical developments.
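
    The paper's iterative, sparsity-friendly solver is not reproduced here; for small dense systems the congruence equation can instead be solved directly via the identity vec(A X A^T) = (A kron A) vec(X), as this sketch (ours) shows:

    import numpy as np

    # Solve A1 Px A1^T = Py for Px by vectorising both sides:
    # with column-major vec, vec(A1 Px A1^T) = (A1 kron A1) vec(Px),
    # so Px follows from one ordinary linear solve. This is dense and
    # expensive, hence only a cross-check, not a substitute for the
    # paper's iterative method for sparse systems.
    rng = np.random.default_rng(7)
    n = 4
    A1 = rng.standard_normal((n, n))
    Px_true = rng.standard_normal((n, n))
    Px_true = Px_true @ Px_true.T            # symmetric positive-definite target
    Py = A1 @ Px_true @ A1.T

    vec_Px = np.linalg.solve(np.kron(A1, A1), Py.ravel(order="F"))
    Px = vec_Px.reshape((n, n), order="F")
    assert np.allclose(Px, Px_true)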

  20. Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.

    PubMed

    Milani, Ali A; Panahi, Issa M; Briggs, Richard

    2007-01-01

    Delayless subband filtering structure, as a high performance frequency domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in FXLMS algorithm with non-minimum phase secondary path is explored. The investigation is done for different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different number of subbands.

  1. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.

  2. Precision of quantitative computed tomography texture analysis using image filtering: A phantom study for scanner variability.

    PubMed

    Yasaka, Koichiro; Akai, Hiroyuki; Mackin, Dennis; Court, Laurence; Moros, Eduardo; Ohtomo, Kuni; Kiryu, Shigeru

    2017-05-01

    Quantitative computed tomography (CT) texture analyses for images with and without filtration are gaining attention as a means of capturing the heterogeneity of tumors. The aim of this study was to investigate how quantitative texture parameters computed with image filtering vary among different CT scanners, using a phantom developed for radiomics studies. A phantom consisting of 10 different cartridges with various textures was scanned under 6 different scanning protocols using four CT scanners from four different vendors. CT texture analyses were performed for both unfiltered images and filtered images (using a Laplacian of Gaussian spatial band-pass filter) featuring fine, medium, and coarse textures. Forty-five regions of interest were placed for each cartridge (x) in a specific scan image set (y), and the average of the texture values (T(x,y)) was calculated. The interquartile range (IQR) of T(x,y) among the 6 scans was calculated for a specific cartridge (IQR(x)), while the IQR of T(x,y) among the 10 cartridges was calculated for a specific scan (IQR(y)), and the median IQR(y) was then calculated over the 6 scans (as the control IQR, IQRc). The median of their quotient (IQR(x)/IQRc) among the 10 cartridges was defined as the variability index (VI). The VI was relatively small for the mean in unfiltered images (0.011) and for standard deviation (0.020-0.044) and entropy (0.040-0.044) in filtered images. Skewness and kurtosis in filtered images featuring medium and coarse textures were relatively variable across different CT scanners, with VIs of 0.638-0.692 and 0.430-0.437, respectively. Quantitative CT texture parameters thus range from robust to highly variable among different scanners, and the behavior of each parameter should be taken into consideration.
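
    The variability index lends itself to a direct computation; the sketch below (ours, with synthetic numbers standing in for the phantom measurements) follows the definition above exactly:

    import numpy as np

    def variability_index(T):
        """VI as defined in the abstract. T has shape (n_cartridges, n_scans):
        T[x, y] is the mean texture value of cartridge x under protocol y."""
        def iqr(a, axis):
            q75, q25 = np.percentile(a, [75, 25], axis=axis)
            return q75 - q25
        iqr_x = iqr(T, axis=1)        # spread of each cartridge across scans
        iqr_y = iqr(T, axis=0)        # spread across cartridges, per scan
        iqr_c = np.median(iqr_y)      # control IQR
        return np.median(iqr_x / iqr_c)

    rng = np.random.default_rng(8)
    base = np.linspace(0.0, 9.0, 10)                         # 10 distinct textures
    T = base[:, None] + 0.05 * rng.standard_normal((10, 6))  # 6 scan protocols
    print(variability_index(T))   # small VI -> parameter is scanner-robust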

  3. Technical note: optimization for improved tube-loading efficiency in the dual-energy computed tomography coupled with balanced filter method.

    PubMed

    Saito, Masatoshi

    2010-08-01

    This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, "Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method," Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.

  4. INS/GNSS Tightly-Coupled Integration Using Quaternion-Based AUPF for USV.

    PubMed

    Xia, Guoqing; Wang, Guoqing

    2016-08-02

    This paper addresses the problem of integration of an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS) for the purpose of developing a low-cost, robust and highly accurate navigation system for unmanned surface vehicles (USVs). A tightly-coupled integration approach is one of the most promising architectures to fuse the GNSS data with INS measurements. However, the resulting system and measurement models turn out to be nonlinear, and the sensor stochastic measurement errors are non-Gaussian in a practical system. The particle filter (PF), one of the most theoretically attractive nonlinear/non-Gaussian estimation methods, is becoming increasingly attractive in navigation applications. However, its large computational burden limits its practical usage. For the purpose of reducing the computational burden without degrading the system estimation accuracy, a quaternion-based adaptive unscented particle filter (AUPF), which combines the adaptive unscented Kalman filter (AUKF) with the PF, is proposed in this paper. The unscented Kalman filter (UKF) is used in the algorithm to improve the proposal distribution and generate posterior estimates, which specify the PF importance density function for generating particles more intelligently. In addition, the computational complexity of the filter is reduced by avoiding the re-sampling step. Furthermore, a residual-based covariance matching technique is used to adapt the measurement error covariance. A trajectory simulator based on a dynamic model of a USV is used to test the proposed algorithm. Results show that the quaternion-based AUPF can significantly improve the overall navigation accuracy and reliability.

  5. Computer Series, 62: Bits and Pieces, 25.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1985-01-01

    Describes: (1) a FORTH-language, computer-controlled potentiometric titration; (2) coulometric titrations using computer-interfaced potentiometric endpoint detection; (3) interfacing a scanning infrared spectrophotometer to a microcomputer; (4) demonstrations of signal-to-noise enhancement (digital filtering); (5) and an inexpensive Apple…

  6. Kalman and particle filtering methods for full vehicle and tyre identification

    NASA Astrophysics Data System (ADS)

    Bogdanski, Karol; Best, Matthew C.

    2018-05-01

    This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.

  7. Adaptive iterated function systems filter for images highly corrupted with fixed-value impulse noise

    NASA Astrophysics Data System (ADS)

    Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.

    2014-06-01

    The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has an outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use statistical measures and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics, namely Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM), and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and detail/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise-suppression ability of the devised filter. This computationally simple noise filter finds broad application where images are highly degraded by fixed-value impulse noise.

  8. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    PubMed

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers' misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors.
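
    A sketch of the M-estimation ingredient alone (the paper's exact equivalent weight function and its splitting/merging feedback strategy are not reproduced; the Huber-style weight and threshold below are ours): the Mahalanobis distance of the innovation leaves inliers untouched and downweights outliers before the gain is formed.

    import numpy as np

    def robust_weight(innov, S, k=3.0):
        """Huber-style equivalent weight from the Mahalanobis distance of the
        innovation. Weight 1 keeps the standard update; weights < 1 can be
        used to inflate R, shrinking the influence of measurement outliers."""
        d = np.sqrt(innov @ np.linalg.solve(S, innov))   # Mahalanobis distance
        return 1.0 if d <= k else k / d

    S = np.array([[0.5, 0.1],
                  [0.1, 0.4]])                       # innovation covariance
    print(robust_weight(np.array([0.2, -0.1]), S))   # inlier  -> 1.0
    print(robust_weight(np.array([5.0, -4.0]), S))   # outlier -> < 1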

  9. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  11. Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.

    PubMed

    Kiran Kumar, G R; Reddy, M Ramasubba

    2018-06-08

    Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) because of the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing temporal information, and does not generalize SSVEP from rigid templates. Data from ten test subjects were used to evaluate the proposed method, and statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy over standard CCA and MEC in low SNR conditions, and overall gives better detection accuracy than CCA and accuracy on par with MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
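
    The πCA computation itself is detailed in the paper; as a reference point, the sketch below implements the standard CCA detector it is compared against, scoring each candidate stimulus frequency by the first canonical correlation between an EEG block and sine/cosine references. The number of harmonics is an assumption.

        import numpy as np

        def cca_max_corr(X, Y):
            # first canonical correlation between the column spaces of X and Y
            qx, _ = np.linalg.qr(X - X.mean(0))
            qy, _ = np.linalg.qr(Y - Y.mean(0))
            return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

        def detect_ssvep(eeg, freqs, fs, n_harm=2):
            # eeg: (samples, channels); freqs: candidate stimulus frequencies
            t = np.arange(eeg.shape[0]) / fs
            scores = []
            for f in freqs:
                refs = np.column_stack([fn(2 * np.pi * f * (h + 1) * t)
                                        for h in range(n_harm)
                                        for fn in (np.sin, np.cos)])
                scores.append(cca_max_corr(eeg, refs))
            return freqs[int(np.argmax(scores))]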

  12. Molecular factor computing for predictive spectroscopy.

    PubMed

    Dai, Bin; Urbas, Aaron; Douglas, Craig C; Lodder, Robert A

    2007-08-01

    The concept of molecular factor computing (MFC)-based predictive spectroscopy was demonstrated here with quantitative analysis of ethanol-in-water mixtures in an MFC-based prototype instrument. Molecular computing of vectors for transformation matrices enabled spectra to be represented in a desired coordinate system. New coordinate systems were selected to reduce the dimensionality of the spectral hyperspace and simplify the mechanical/electrical/computational construction of a new MFC spectrometer employing transmission MFC filters. A library search algorithm was developed to calculate the chemical constituents of the MFC filters. The prototype instrument was used to collect data from 39 ethanol-in-water mixtures (range 0-14%). For each sample, four different voltage outputs from the detector (forming two factor scores) were measured using four different MFC filters. Twenty samples were used to calibrate the instrument and build a multivariate linear regression prediction model, and the remaining samples were used to validate the predictive ability of the model. In engineering simulations, four MFC filters gave an adequate calibration model (r2 = 0.995, RMSEC = 0.229%, RMSECV = 0.339%, p = 0.05 by F test). This result is slightly better than a corresponding PCR calibration model based on corrected transmission spectra (r2 = 0.993, RMSEC = 0.359%, RMSECV = 0.551%, p = 0.05 by F test). The first actual MFC prototype gave an RMSECV = 0.735%. MFC is a viable alternative to conventional spectrometry, with the potential to be more simply implemented, more rapid and more accurate.

  13. Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldwin, G.T.; Craven, R.E.

    X-ray energy spectra of the Maxwell Laboratories MBS and Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate channel response functions, and the UFO code was used to unfold the spectra.

  14. Experimental Study of Hydraulic Systems Transient Response Characteristics

    DTIC Science & Technology

    1978-12-01

    Table-of-contents excerpt: Effects of Filter; Effects of Quincke-Tube; Error Estimation; Conclusions. Figure titles include: System with Quincke-Tube Configuration; Schematic of Pump System; Filter Configuration; Transient Response, Reservoir System, Quincke-Tube (Short) Configuration, 505 PSIA.

  15. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based denoising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques such as grating-based phase-contrast CT and spectral CT. Among noise reduction methods, image-based denoising is one popular approach, and the so-called bilateral filter is a well-known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second-order noise statistics of these images into account. In particular, it includes the noise correlation between the images and the spatial noise correlation within each image; the full noise covariance is determined via cross-correlation of the image noise. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and, at the same time, better preservation of edges, shown on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample, and supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation, and can therefore be utilized in various imaging applications and fields.
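
    A hedged sketch in the spirit of the proposed generalization (details differ from the paper): a joint bilateral filter over two perfectly aligned images whose range weight is a Mahalanobis distance under an assumed 2x2 joint noise covariance. The window radius and spatial sigma are illustrative.

        import numpy as np

        def cov_bilateral(a, b, cov, sigma_s=2.0, win=3):
            # filter modality a, jointly guided by aligned modality b, with
            # range weights from the assumed joint noise covariance `cov`
            a = np.asarray(a, float); b = np.asarray(b, float)
            Ci = np.linalg.inv(np.asarray(cov, float))
            H, W = a.shape
            yy, xx = np.mgrid[-win:win + 1, -win:win + 1]
            ws = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))   # spatial kernel
            ap = np.pad(a, win, mode='reflect')
            bp = np.pad(b, win, mode='reflect')
            out = np.empty_like(a)
            for i in range(H):
                for j in range(W):
                    pa = ap[i:i + 2*win + 1, j:j + 2*win + 1] - a[i, j]
                    pb = bp[i:i + 2*win + 1, j:j + 2*win + 1] - b[i, j]
                    # Mahalanobis range distance over both modalities
                    d2 = Ci[0, 0]*pa**2 + 2*Ci[0, 1]*pa*pb + Ci[1, 1]*pb**2
                    w = ws * np.exp(-0.5 * d2)
                    out[i, j] = (w * ap[i:i + 2*win + 1,
                                        j:j + 2*win + 1]).sum() / w.sum()
            return out    # filtered version of the first modality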

  16. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    NASA Astrophysics Data System (ADS)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies were completed. A low-cost MEMS IMU and a GPS receiver were employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  17. Neuro-inspired smart image sensor: analog Hmax implementation

    NASA Astrophysics Data System (ADS)

    Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman

    2015-03-01

    The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the stage of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and realized prototypes of two image sensors in 0.35 μm CMOS technology that implement the V1 and V2 processing of the Hmax model.

  18. Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.

    PubMed

    Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish

    2006-01-01

    Non-invasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering has a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify between four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.

  19. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard-body radius at the predicted time of closest approach, and one is constrained by the alternative, complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require mapping probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method in two 12,000-case Monte Carlo simulations, we found that it achieved a missed detection rate of 0.1% and a false alarm rate of 2%.
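
    A hedged sketch of the Wald test layer only: accumulate the log-likelihood ratio of the two constrained filters' innovations and compare it against thresholds derived from the stated false alarm (alpha) and missed detection (beta) rates. The likelihood sequences are assumed to be supplied by the filters; names are illustrative.

        import numpy as np

        def wald_sprt(ll_h1, ll_h0, alpha=0.02, beta=0.001):
            # ll_h1 / ll_h0: per-update innovation log-likelihoods from the
            # two constrained filters (alternative / null hypothesis)
            A = np.log((1 - beta) / alpha)     # accept H1 at or above this
            B = np.log(beta / (1 - alpha))     # accept H0 at or below this
            llr = 0.0
            for l1, l0 in zip(ll_h1, ll_h0):
                llr += l1 - l0                 # running log-likelihood ratio
                if llr >= A:
                    return 'H1', llr
                if llr <= B:
                    return 'H0', llr
            return 'continue', llr             # not enough evidence yet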

  20. Influence of Cone-Beam Computed Tomography filters on diagnosis of simulated endodontic complications.

    PubMed

    Verner, F S; D'Addazio, P S; Campos, C N; Devito, K L; Almeida, S M; Junqueira, R B

    2017-11-01

    To evaluate the influence of cone-beam computed tomography (CBCT) filters on the diagnosis of simulated endodontic complications. Sixteen human teeth, in three mandibles, were submitted to the following simulated endodontic complications: (G1) fractured file, (G2) perforations in the canal walls, (G3) deviated cast post, and (G4) external root resorption. The mandibles were submitted to CBCT examination (I-Cat ® Next Generation). Five oral radiologists evaluated the images independently, with and without XoranCat ® software filters. Accuracy, sensitivity and specificity were determined. ROC curves were calculated for each group with the filters, and the areas under the curves were compared using a one-way ANOVA test. The McNemar test was applied for pair-wise agreement between all images and the gold standard, and between the original images and the images with filters (P < 0.05). G1 was the most difficult endodontic complication to diagnose, followed by G2, G4 and G3. There were no differences between the areas under the ROC curves for the filters in any group; however, the Sharpen Super Mild filter had the best results for G1 (0.47), Angio Sharpen Low 3 × 3 for G2 (0.93), Angio Sharpen Low 3 × 3, S9, Shadow and Sharpen for G3 (1.00), and Sharpen 3 × 3 for G4 (1.00). The McNemar test revealed significant differences between all filters and the gold standard (P = 0.00 for all filters) and the original images (P = 0.00 for all filters) only in the G1 group. There were no differences in the other groups. The filters did not improve the diagnosis of the simulated endodontic complications evaluated; their diagnosis remains a major challenge in clinical practice. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  1. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD).

    PubMed

    Suzuki, Kenji

    2009-09-21

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role in improving the sensitivity and specificity of CAD schemes. Such a filter enhances objects similar to the model employed in the filter; e.g., a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g., a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities, such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs; they removed 60% of the FPs with a loss of one true positive, achieving a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.

  2. Inverse design of high-Q wave filters in two-dimensional phononic crystals by topology optimization.

    PubMed

    Dong, Hao-Wen; Wang, Yue-Sheng; Zhang, Chuanzeng

    2017-04-01

    Topology optimization of a waveguide-cavity structure in phononic crystals for designing narrow-band filters under given operating frequencies is presented in this paper. We show that it is possible to obtain an ultra-high-Q filter by optimizing only the cavity topology, without introducing any other coupling medium. The optimized cavity with a highly symmetric resonance can be utilized as a multi-channel filter, raising filter and T-splitter. In addition, most of the optimized high-Q filters exhibit Fano resonances near their resonant frequencies. Furthermore, our filter optimization based on the waveguide and cavity, and our simple illustration of a computational approach to wave control in phononic crystals, can be extended and applied to the design of other acoustic or even opto-mechanical devices. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
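
    For orientation (not the paper's optimized polynomials or compensator), the sketch below evaluates the magnitude response of a comb filter and the classic Kaiser-Hamming sharpened version 3H^2 - 2H^3, showing the reduced passband droop that sharpening buys. M, K and the passband edge are illustrative choices.

        import numpy as np

        def comb_response(w, M=16, K=1):
            # magnitude response of a K-stage comb, decimation factor M,
            # normalized to unit DC gain (w in radians, w != 0)
            return (np.sin(M * w / 2) / (M * np.sin(w / 2))) ** K

        wp = np.pi / 32                          # example passband edge for M = 16
        Hp = comb_response(np.array([wp]))[0]
        print(20 * np.log10(abs(Hp)))                      # ~ -0.9 dB droop, plain
        print(20 * np.log10(abs(3 * Hp**2 - 2 * Hp**3)))   # ~ -0.24 dB, sharpened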

  5. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple, small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can operate simultaneously in a pipelined, parallel fashion, which leads to a significant improvement in its total computational efficiency. Moreover, since each module of the PSOVRNN is an SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion of each module.

  6. Collaborative filtering for brain-computer interaction using transfer learning and active class selection.

    PubMed

    Wu, Dongrui; Lance, Brent J; Parsons, Thomas D

    2013-01-01

    Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.

  8. TomoEED: Fast Edge-Enhancing Denoising of Tomographic Volumes.

    PubMed

    Moreno, J J; Martínez-Sánchez, A; Martínez, J A; Garzón, E M; Fernández, J J

    2018-05-29

    TomoEED is an optimized software tool for fast feature-preserving noise filtering of large 3D tomographic volumes on CPUs and GPUs. The tool is based on the anisotropic nonlinear diffusion method. It has been developed with special emphasis on reducing computational demands through different strategies, from the algorithmic to the high-performance computing perspectives. TomoEED manages to filter large volumes in a matter of minutes on standard computers. TomoEED has been developed in C. It is available for Linux platforms at http://www.cnb.csic.es/%7ejjfernandez/tomoeed. gmartin@ual.es, JJ.Fernandez@csic.es. Supplementary data are available at Bioinformatics online.
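
    TomoEED's edge-enhancing diffusion uses a full diffusion tensor; the sketch below substitutes the simpler scalar Perona-Malik scheme to illustrate feature-preserving diffusion filtering. The step size, conductance and iteration count are assumptions, and the periodic boundary handling via np.roll is a simplification.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            u = np.asarray(img, dtype=float)
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            for _ in range(n_iter):
                dn = np.roll(u, -1, 0) - u            # neighbour differences
                ds = np.roll(u,  1, 0) - u
                de = np.roll(u, -1, 1) - u
                dw = np.roll(u,  1, 1) - u
                # diffuse strongly in flat regions, weakly across edges
                u = u + dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
            return u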

  9. SIRT-FILTER v1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PELT, DANIEL

    2017-04-21

    Small Python package to compute tomographic reconstructions using a reconstruction method published in: Pelt, D.M., & De Andrade, V. (2017). Improved tomographic reconstruction of large-scale real-world data by filter optimization. Advanced Structural and Chemical Imaging 2: 17; and Pelt, D. M., & Batenburg, K. J. (2015). Accurately approximating algebraic tomographic reconstruction by filtered backprojection. In Proceedings of The 13th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (pp. 158-161).

  10. Localization Methods for a Mobile Robot in Urban Environments

    DTIC Science & Technology

    2004-10-04

    Record excerpt: an extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it; a second method combines corrected odometry pose estimates with compass/GPS error estimates (Fig. 4: diagram of the extended Kalman filter). Cited: R. Brown and P. Hwang, Introduction to Random Signals and Applied Kalman Filtering, 3rd ed.

  11. Design of recursive digital filters having specified phase and magnitude characteristics

    NASA Technical Reports Server (NTRS)

    King, R. E.; Condon, G. W.

    1972-01-01

    A method for the computer-aided design of a class of optimum filters having frequency-domain specifications on both magnitude and phase is described. The method, an extension of the work of Steiglitz, uses the Fletcher-Powell algorithm to minimize a weighted squared magnitude and phase criterion. Results of using the algorithm to design filters with specified phase, as well as filters with a compromise between specified magnitude and phase, are presented.

  12. On the "optimal" spatial distribution and directional anisotropy of the filter-width and grid-resolution in large eddy simulation

    NASA Astrophysics Data System (ADS)

    Toosi, Siavash; Larsson, Johan

    2017-11-01

    The accuracy of an LES depends directly on the accuracy of the resolved part of the turbulence. The continuing increase in computational power enables the application of LES to increasingly complex flow problems for which the LES community lacks the experience to know what the "optimal", or even an "acceptable", grid (or, equivalently, filter-width distribution) is. The goal of this work is to introduce a systematic approach to finding the "optimal" grid/filter-width distribution and its "optimal" anisotropy. The method is tested first on turbulent channel flow, mainly to see whether it is able to predict the right anisotropy of the filter/grid, and then on the more complicated case of flow over a backward-facing step, to test its ability to predict the right distribution and anisotropy of the filter/grid simultaneously, hence leading to a converged solution. This work has been supported by the Naval Air Warfare Center Aircraft Division at Pax River, MD, under contract N00421132M021. Computing time has been provided by the University of Maryland supercomputing resources (http://hpcc.umd.edu).

  13. Implementation of a Parallel Kalman Filter for Stratospheric Chemical Tracer Assimilation

    NASA Technical Reports Server (NTRS)

    Chang, Lang-Ping; Lyster, Peter M.; Menard, R.; Cohn, S. E.

    1998-01-01

    A Kalman filter for the assimilation of long-lived atmospheric chemical constituents has been developed for two-dimensional transport models on isentropic surfaces over the globe. An important attribute of the Kalman filter is that it calculates error covariances of the constituent fields using the tracer dynamics. Consequently, the current Kalman-filter assimilation is a five-dimensional problem (coordinates of two points and time), and it can only be handled on computers with large memory and high floating point speed. In this paper, an implementation of the Kalman filter for distributed-memory, message-passing parallel computers is discussed. Two approaches were studied: an operator decomposition and a covariance decomposition. The latter was found to be more scalable than the former, and it possesses the property that the dynamical model does not need to be parallelized, which is of considerable practical advantage. This code is currently used to assimilate constituent data retrieved by limb sounders on the Upper Atmosphere Research Satellite. Tests of the code examined the variance transport and observability properties. Aspects of the parallel implementation, some timing results, and a brief discussion of the physical results will be presented.

  14. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-11-30

    Common spatial pattern (CSP) has been most widely applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) applications. Successful application of CSP depends to a large degree on the filter band selection. However, the most appropriate band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is applied to the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV dataset IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give better overall MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
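
    A hedged sketch of the per-band CSP step at the heart of SFBCSP (the sparse band selection by regression is omitted): solve a generalized eigenproblem on the two class covariances and keep the extreme eigenvectors as spatial filters, with log-variance features for the classifier. Trial shapes and the number of filter pairs are assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def csp(trials_a, trials_b, n_pairs=2):
            # trials_*: lists of (channels, samples) EEG trials per class
            def mean_cov(trials):
                covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
                return np.mean(covs, axis=0)
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(Ca, Ca + Cb)      # generalized eigenproblem
            W = np.hstack([vecs[:, :n_pairs], vecs[:, -n_pairs:]])
            return W.T                          # one spatial filter per row

        def log_var_features(W, X):
            v = (W @ X).var(axis=1)             # variance after spatial filtering
            return np.log(v / v.sum())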

  15. Computational segmentation of collagen fibers from second-harmonic generation images of breast cancer

    NASA Astrophysics Data System (ADS)

    Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.

    2014-01-01

    Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.

  16. Technical Report Series on Global Modeling and Data Assimilation. Volume 16; Filtering Techniques on a Stretched Grid General Circulation Model

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.

    1999-01-01

    This report documents the techniques used to filter quantities on a stretched-grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transform) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to filter effectively without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is made of the impact of applying the Shapiro filter on the stretched grid.
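
    A hedged one-dimensional sketch of the report's idea, under assumed grid and damping choices: build a symmetric difference operator on the non-uniform periodic grid, compute its eigenvectors and eigenfrequencies, and damp the modes whose eigenfrequencies exceed a critical value before recomposing the field. The operator, cutoff and damping law below are illustrative, not the report's.

        import numpy as np

        def stretched_grid_filter(f, x, crit=0.5):
            # f: zonal field sampled on non-uniform periodic grid x (radians)
            n = len(x)
            dx = np.diff(np.concatenate([x, [x[0] + 2 * np.pi]]))
            L = np.zeros((n, n))                 # symmetric graph Laplacian
            for i in range(n):
                j = (i + 1) % n
                w = 1.0 / dx[i]                  # neighbour coupling
                L[i, i] += w; L[j, j] += w
                L[i, j] -= w; L[j, i] -= w
            vals, vecs = np.linalg.eigh(L)
            freqs = np.sqrt(np.maximum(vals, 0.0))    # mode eigenfrequencies
            cut = crit * freqs.max()
            damp = np.minimum(1.0, (cut / np.maximum(freqs, 1e-12)) ** 2)
            return vecs @ (damp * (vecs.T @ f))       # damp, then recompose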

  17. MR image reconstruction via guided filter.

    PubMed

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach to efficient MRI recovery based on a guided filter. The guided filter is an edge-preserving smoothing operator and has better behavior near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image is used as the guidance image, the other as the filtering input. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
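
    The guided filter itself (He et al.) is a standard operator; the sketch below implements its grayscale form with box filters, assuming float images. Guidance I, input p, window radius r and regularization eps are the usual parameters; values are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, r=8, eps=1e-4):
            # guidance image I, input image p (float arrays), window radius r
            box = lambda a: uniform_filter(a, size=2 * r + 1, mode='nearest')
            mI, mp = box(I), box(p)
            a = (box(I * p) - mI * mp) / (box(I * I) - mI * mI + eps)
            b = mp - a * mI                 # per-window linear model p ~ a*I + b
            return box(a) * I + box(b)      # output q = mean(a)*I + mean(b)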

  18. Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
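
    One standard way to realize the constrained estimate (a hedged sketch, not necessarily the paper's full quadratic-programming formulation) is to project the unconstrained Kalman estimate onto linear constraints D x = d in the metric of the covariance, activating inequality rows only when they are violated; D and d here are illustrative.

        import numpy as np

        def project_estimate(x, P, D, d):
            # closed-form projection onto D x = d in the covariance metric:
            # x_c = x - P D^T (D P D^T)^{-1} (D x - d)
            S = D @ P @ D.T
            return x - P @ D.T @ np.linalg.solve(S, D @ x - d)

        def constrained_estimate(x, P, D, d):
            # enforce inequality rows D x <= d only when violated
            # (a simple active-set shortcut rather than a full QP)
            viol = (D @ x - d) > 0
            if not np.any(viol):
                return x
            return project_estimate(x, P, D[viol], d[viol])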

  19. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    Development of computer programs for component analysis and design aids used in production of microstrip components is discussed. System includes designs for couplers, filters, circulators, transformers, power splitters, diode switches, and attenuators.

  20. Eye Detection and Tracking for Intelligent Human Computer Interaction

    DTIC Science & Technology

    2006-02-01

    Reference excerpt: P. Meer and I. Weiss, "Smoothed Differentiation Filters for Images", Journal of Visual Communication and Image Representation, 3(1):58-72, 1992.
