Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm
NASA Technical Reports Server (NTRS)
LeTallec, Patrick; Tidriri, Moulay D.
1996-01-01
In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a time marching algorithm solving advection-diffusion problems on two domains using incompatible discretizations. This study is based on a De Giorgi-Nash maximum principle.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
Numerical simulation of steady supersonic flow. [spatial marching]
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
TWO-LEVEL TIME MARCHING SCHEME USING SPLINES FOR SOLVING THE ADVECTION EQUATION. (R826371C004)
A new numerical algorithm using quintic splines is developed and analyzed: quintic spline Taylor-series expansion (QSTSE). QSTSE is an Eulerian flux-based scheme that uses quintic splines to compute space derivatives and Taylor series expansion to march in time. The new scheme...
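The abstract's two ingredients, quintic-spline space derivatives and a Taylor-series time march, can be illustrated with a short sketch. This is not the published QSTSE flux-based scheme; the grid, CFL number, and the use of SciPy's quintic interpolating spline are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Minimal sketch of a spline/Taylor-series time march for the linear
# advection equation u_t + c*u_x = 0. Grid size, CFL number, and the use of
# SciPy's quintic B-spline are illustrative choices, not the QSTSE scheme.
c = 1.0                                   # advection speed
x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200.0 * (x - 0.3) ** 2)       # initial pulse, away from boundaries
dt = 0.4 * (x[1] - x[0]) / c

for _ in range(100):
    spl = make_interp_spline(x, u, k=5)   # quintic spline fit to the solution
    ux = spl.derivative(1)(x)             # space derivatives from the spline
    uxx = spl.derivative(2)(x)
    # Second-order Taylor expansion in time, using u_t = -c*u_x and
    # u_tt = c^2*u_xx (a Lax-Wendroff-like update)
    u = u - c * dt * ux + 0.5 * (c * dt) ** 2 * uxx
```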
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton.
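For readers unfamiliar with the baseline method, the following is a minimal first-order Fast Marching sketch for the original (unfactored) eikonal equation |grad T| = 1/speed on a uniform 2-D grid; the factored variant of the paper additionally solves for a correction to the known point-source solution. The grid size and constant speed model are placeholders.

```python
import heapq
import numpy as np

def fast_march(speed, src):
    """First-order fast marching solution of |grad T| = 1/speed on a
    unit-spaced 2-D grid, with T = 0 at the source index `src`."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue                     # stale heap entry
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m, n = i + di, j + dj
            if not (0 <= m < ny and 0 <= n < nx) or accepted[m, n]:
                continue
            # Upwind neighbour values in each axis
            a = min(T[m - 1, n] if m > 0 else np.inf,
                    T[m + 1, n] if m < ny - 1 else np.inf)
            b = min(T[m, n - 1] if n > 0 else np.inf,
                    T[m, n + 1] if n < nx - 1 else np.inf)
            f = 1.0 / speed[m, n]
            if abs(a - b) < f:           # quadratic (two-sided) update
                tnew = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
            else:                        # one-sided update
                tnew = min(a, b) + f
            if tnew < T[m, n]:
                T[m, n] = tnew
                heapq.heappush(heap, (tnew, (m, n)))
    return T

T = fast_march(np.ones((101, 101)), (50, 50))   # travel time from the centre
```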
A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
White, J. A.; Morrison, J. H.
1999-01-01
A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage, full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high aspect ratio grids.
NASA Technical Reports Server (NTRS)
Toomarian, N.; Fijany, A.; Barhen, J.
1993-01-01
Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be performed efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
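The key idea, minimizing the residual of an implicit step over a small Krylov subspace rather than converging a full Newton iteration, can be sketched with SciPy's restarted GMRES on a 1-D advection-diffusion model problem. The model equation, grid, time step, and restart length are illustrative assumptions, not the Navier-Stokes solver of the report.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

# Backward-Euler steps for a 1-D advection-diffusion model problem, with each
# implicit step solved by restarted GMRES over a small Krylov subspace
# (restart=10, i.e. N between 5 and 20 as in the abstract). Parameters are
# illustrative only.
n, dx, dt, c, nu = 200, 1.0 / 200, 1e-3, 1.0, 1e-3
A = (-c * diags([-1.0, 1.0], [-1, 1], shape=(n, n)) / (2 * dx)
     + nu * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2)
M = identity(n) - dt * A                   # backward-Euler system matrix

u = np.exp(-((np.linspace(0, 1, n) - 0.5) / 0.05) ** 2)
for _ in range(50):
    # Each step costs only a handful of matrix-vector products.
    u, info = gmres(M, u, x0=u, restart=10, maxiter=5)
```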
Alarm systems detect volcanic tremor and earthquake swarms during Redoubt eruption, 2009
NASA Astrophysics Data System (ADS)
Thompson, G.; West, M. E.
2009-12-01
We ran two alarm algorithms on real-time data from Redoubt volcano during the 2009 crisis. The first algorithm was designed to detect escalations in continuous seismicity (tremor). It is implemented within an application called IceWeb, which computes reduced displacement and produces plots of reduced displacement and spectrograms linked to the Alaska Volcano Observatory internal webpage every 10 minutes. Reduced displacement is a measure of the amplitude of volcanic tremor, and is computed by applying a geometrical spreading correction to a displacement seismogram. When the reduced displacement at multiple stations exceeds pre-defined thresholds and there has been a factor of 3 increase in reduced displacement over the previous hour, a tremor alarm is declared. The second algorithm was designed to detect earthquake swarms. The mean and median event rates are computed every 5 minutes based on the last hour of data from a real-time event catalog. By comparing these with thresholds, three swarm alarm conditions can be declared: a new swarm, an escalation in a swarm, and the end of a swarm. The end-of-swarm alarm is important as it may mark a transition from swarm to continuous tremor. Alarms from both systems were dispatched using a generic alarm management system which implements a call-down list, allowing observatory scientists to be called in sequence until someone acknowledged the alarm via a confirmation web page. The results of this simple approach are encouraging. The tremor alarm algorithm detected 26 of the 27 explosive eruptions that occurred from 23 March to 4 April. The swarm alarm algorithm detected all five of the main volcanic earthquake swarm episodes which occurred during the Redoubt crisis on 26-27 February, 21-23 March, 26 March, 2-4 April and 3-7 May. The end-of-swarm alarms on 23 March and 4 April were particularly helpful as they were caused by transitions from swarm to tremor shortly preceding explosive eruptions; transitions which were detected much earlier by the swarm algorithm than by the tremor algorithm.
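A hedged sketch of the two alarm criteria described above is given below; the thresholds, station counts, and data structures are placeholders, and the real IceWeb/alarm system works on continuous seismic data and a real-time event catalog.

```python
# Sketch of the tremor and swarm alarm criteria. Thresholds and field names
# are assumptions for illustration, not the operational SASMEX/AVO values.
def tremor_alarm(dr_now, dr_hour_ago, threshold_cm2=5.0, min_stations=2):
    """dr_now / dr_hour_ago: dicts of station -> reduced displacement (cm^2)."""
    exceed = [s for s, v in dr_now.items()
              if v >= threshold_cm2                       # above fixed threshold
              and v >= 3.0 * dr_hour_ago.get(s, float("inf"))]  # factor-of-3 rise
    return len(exceed) >= min_stations

def swarm_state(mean_rate, median_rate, on=10.0, escalate=30.0, off=3.0, prev="off"):
    """Event rates are counts per hour computed from the last hour of the catalog."""
    if prev == "off" and mean_rate >= on and median_rate >= on:
        return "new swarm"
    if prev in ("new swarm", "escalation") and mean_rate >= escalate:
        return "escalation"
    if prev != "off" and mean_rate < off:
        return "end of swarm"
    return prev
```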
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
Development of an upwind, finite-volume code with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.
1994-01-01
Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques, and a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical, and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data.
Towards developing robust algorithms for solving partial differential equations on MIMD machines
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Naik, Vijay K.
1988-01-01
Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.
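A minimal serial sketch of the windowed idea follows: Jacobi sweeps are applied across a whole window of future backward-Euler time levels of a 1-D heat equation, so that on a parallel machine each sweep's communication could overlap with computation. Problem size, window length, and parameters are illustrative assumptions.

```python
import numpy as np

# Jacobi iteration over a window of W future backward-Euler time levels of
# u_t = nu * u_xx (fixed zero boundaries). Instead of converging each step
# before starting the next, sweeps update the whole window at once.
n, W, dt, dx, nu = 64, 4, 1e-3, 1.0 / 64, 1.0
r = nu * dt / dx**2
u0 = np.sin(np.pi * np.linspace(0, 1, n))      # current (known) time level
window = np.tile(u0, (W, 1))                   # initial guess for W future levels

for sweep in range(200):                       # Jacobi sweeps over the window
    prev_levels = np.vstack([u0, window[:-1]]) # time level k-1 iterate for each k
    new = window.copy()
    new[:, 1:-1] = (prev_levels[:, 1:-1]
                    + r * (window[:, :-2] + window[:, 2:])) / (1.0 + 2.0 * r)
    window = new

u0 = window[0]    # after convergence the whole window of solutions is available
```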
Towards developing robust algorithms for solving partial differential equations on MIMD machines
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Naik, V. K.
1985-01-01
Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.
Development of an upwind, finite-volume code with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.
1995-01-01
Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques, and a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical, and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data. This report summarizes the research that took place from August 1, 1994 to January 1, 1995.
Extension of a streamwise upwind algorithm to a moving grid system
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.
1990-01-01
A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
TES Instrument Decommissioning
Atmospheric Science Data Center
2018-03-20
TES Instrument Decommissioning Tuesday, March 20, 2018 ... PST during a scheduled real time satellite contact the TES IOT along with the Aura FOT commanded the TES instrument to its ... generated from an algorithm update to the base Ground Data System software and will be made available to the scientific community in the ...
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is also designed to be used in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on March 11, 2011, using data recorded by several tide gauges scattered all over the Pacific area.
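A generic sketch of the ingredients named above (tide removal, band-pass filtering, and an amplitude threshold) is shown below; the cut-off periods, sampling interval, and threshold are assumptions, and the actual TDA detection rules are more elaborate.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Generic tide removal + band-pass + threshold sketch for a sea-level or
# bottom-pressure series. All numeric values are illustrative placeholders.
fs = 1.0 / 15.0                        # one sample every 15 s (assumed)

def detect(series, threshold_m=0.05):
    # "Tide removal": subtract a sliding-mean estimate of the slow tidal signal
    win = int(2 * 3600 * fs)           # 2-hour averaging window
    tide = np.convolve(series, np.ones(win) / win, mode="same")
    detided = series - tide
    # Band-pass in a typical tsunami band (periods of ~2 to 30 minutes)
    low, high = 1.0 / (30 * 60), 1.0 / (2 * 60)
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = lfilter(b, a, detided)  # causal filter, usable in real time
    return np.abs(filtered) > threshold_m
```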
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
Time-saving impact of an algorithm to identify potential surgical site infections.
Knepper, B C; Young, H; Jenkins, T C; Price, C S
2013-10-01
The objective was to develop and validate a partially automated algorithm to identify surgical site infections (SSIs) using commonly available electronic data, in order to reduce manual chart review. The design was a retrospective cohort study of patients undergoing specific surgical procedures over a 4-year period from 2007 through 2010 (algorithm development cohort) or over a 3-month period from January 2011 through March 2011 (algorithm validation cohort), at a single academic safety-net hospital in a major metropolitan area. Patients undergoing at least 1 included surgical procedure during the study period were included. Procedures were identified in the National Healthcare Safety Network; SSIs were identified by manual chart review. Commonly available electronic data, including microbiologic, laboratory, and administrative data, were identified via a clinical data warehouse. Algorithms using combinations of these electronic variables were constructed and assessed for their ability to identify SSIs and reduce chart review. The most efficient algorithm identified in the development cohort combined microbiologic data with postoperative procedure and diagnosis codes. This algorithm resulted in 100% sensitivity and 85% specificity. The time savings from the algorithm amounted to almost 600 person-hours of chart review. The algorithm demonstrated similar sensitivity on application to the validation cohort. A partially automated algorithm to identify potential SSIs was highly sensitive and dramatically reduced the amount of manual chart review required of infection control personnel during SSI surveillance.
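The flavor of such a screening rule can be sketched as follows; the field names, code list, and the way the criteria are combined are placeholders, not the study's actual variables or logic.

```python
# Hedged sketch of a chart-review screening rule combining microbiology with
# postoperative procedure/diagnosis codes. The code list and the OR combination
# (chosen here to favour sensitivity) are assumptions, not the study's rule.
SSI_SUGGESTIVE_CODES = {"998.59", "T81.4"}   # example codes only

def needs_chart_review(case):
    positive_culture = any(c.get("positive", False) for c in case.get("cultures", []))
    coded_infection = bool(SSI_SUGGESTIVE_CODES & set(case.get("post_op_codes", [])))
    return positive_culture or coded_infection

cases = [
    {"cultures": [{"positive": True}], "post_op_codes": []},
    {"cultures": [], "post_op_codes": []},
]
to_review = [i for i, c in enumerate(cases) if needs_chart_review(c)]
print(to_review)   # -> [0]: only the first case is sent to manual review
```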
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term, initial conditions, and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method uses extensively the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows the number of degrees of freedom required for a good computation of the solution to be reduced significantly. Numerical results and a description of the different elements of the algorithm are provided, in combination with mathematical comments on the method and some comparisons with more classical numerical algorithms.
Segmentation of hand radiographs using fast marching methods
NASA Astrophysics Data System (ADS)
Chen, Hong; Novak, Carol L.
2006-03-01
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.
2010-01-01
Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Unlike traditional scalar or multi-channel image registration methods, tensor orientation should be considered in DTI registration. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on tensor features, not on the whole tensor information. Other methods, such as the piece-wise affine transformation and the diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions by simultaneously considering the reorientation and deformation of tensors during the registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI image registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation, and such neighborhood tensor information is extracted from a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real human brain DTI data, the experimental results show that the proposed algorithm is more accurate compared with the FA-based registration and is more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233
Faster and More Accurate Transport Procedures for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.
2010-01-01
Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
A Subsystem Test Bed for Chinese Spectral Radioheliograph
NASA Astrophysics Data System (ADS)
Zhao, An; Yan, Yihua; Wang, Wei
2014-11-01
The Chinese Spectral Radioheliograph (CSRH) is a solar-dedicated radio interferometric array that will produce high spatial resolution, high temporal resolution, and high spectral resolution images of the Sun simultaneously in the decimetre and centimetre wave range. Digital processing of the intermediate frequency (IF) signal is an important part of a radio telescope. This paper describes a flexible, high-speed digital down conversion (DDC) system for the CSRH that applies complex mixing, parallel filtering, and decimation (extraction) algorithms to process the IF signal, and incorporates canonic signed digit coding and a bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for the CSRH. Software algorithms for simulation and FPGA-based hardware-description-language algorithms are written which use fewer hardware resources while achieving high performance, such as processing a high-speed data flow (1 GHz) with 10 MHz spectral resolution. An experiment with the test bed is illustrated using geostationary satellite data observed on March 20, 2014. Due to the easy alterability of the algorithms on the FPGA, the data can be recomputed with different digital signal processing algorithms in order to select the optimum algorithm.
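A generic digital down conversion chain (complex mixing, low-pass filtering, decimation) can be sketched in a few lines; the sampling rate, IF centre frequency, and decimation factor below are placeholders, and the CSRH hardware implements the equivalent chain with parallel/polyphase filters on an FPGA.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Generic DDC sketch: mix the IF signal against a complex local oscillator,
# low-pass filter, and decimate to a complex (I/Q) baseband stream.
fs, f_if, decim = 1.0e9, 250.0e6, 50           # 1 GHz sampling (assumed values)
t = np.arange(100_000) / fs
x = np.cos(2 * np.pi * (f_if + 3.0e6) * t)     # toy IF signal 3 MHz above centre

lo = np.exp(-2j * np.pi * f_if * t)            # complex mixing to baseband
baseband = x * lo
b, a = butter(6, (fs / decim / 2) / (fs / 2))  # anti-alias low-pass filter
baseband = lfilter(b, a, baseband)
iq = baseband[::decim]                         # decimated I/Q stream at 20 MS/s
```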
Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed
2011-01-01
We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm, to image inpainting and segmentation. PMID:22194734
NASA Technical Reports Server (NTRS)
Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James
1991-01-01
Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in both the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident with the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).
NASA Technical Reports Server (NTRS)
Bui, Trong T.; Mankbadi, Reda R.
1995-01-01
Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
Drought Impacts on Agricultural Production and Land Fallowing in California's Central Valley in 2015
NASA Technical Reports Server (NTRS)
Rosevelt, Carolyn; Melton, Forrest S.; Johnson, Lee; Guzman, Alberto; Verdin, James P.; Thenkabail, Prasad S.; Mueller, Rick; Jones, Jeanine; Willis, Patrick
2016-01-01
The ongoing drought in California substantially reduced surface water supplies for millions of acres of irrigated farmland in California's Central Valley. Rapid assessment of drought impacts on agricultural production can aid water managers in assessing mitigation options, and guide decision making with respect to mitigation of drought impacts. Satellite remote sensing offers an efficient way to provide quantitative assessments of drought impacts on agricultural production and increases in fallow acreage associated with reductions in water supply. A key advantage of satellite-based assessments is that they can provide a measure of land fallowing that is consistent across both space and time. We describe an approach for monthly and seasonal mapping of uncultivated agricultural acreage developed as part of a joint effort by USGS, USDA, NASA, and the California Department of Water Resources to provide timely assessments of land fallowing during drought events. This effort has used the Central Valley of California as a pilot region for development and testing of an operational approach. To provide quantitative measures of uncultivated agricultural acreage from satellite data early in the season, we developed a decision tree algorithm and applied it to time-series data from Landsat TM (Thematic Mapper), ETM+ (Enhanced Thematic Mapper Plus), OLI (Operational Land Imager), and MODIS (Moderate Resolution Imaging Spectroradiometer). Our effort has been focused on development of indicators of drought impacts in the March-August timeframe based on measures of crop development patterns relative to a reference period with average or above average rainfall. To assess the accuracy of the algorithms, monthly ground validation surveys were conducted across 650 fields from March-September in 2014 and 2015. We present the algorithm along with updated results from the accuracy assessment, and data and maps of land fallowing in the Central Valley in 2015.
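An illustrative single-rule stand-in for the decision tree is sketched below: compare a field's early-season NDVI trajectory with the same field's trajectory in a reference (wet) year. The thresholds and the one-rule structure are assumptions, not the study's actual tree.

```python
import numpy as np

# Toy fallow/uncultivated classification from monthly NDVI. Thresholds and the
# single decision rule are placeholders for the study's decision tree.
def likely_fallow(ndvi_march_aug, ndvi_reference, green_threshold=0.4, deficit=0.25):
    """ndvi_march_aug, ndvi_reference: 1-D arrays of monthly NDVI, March-August."""
    never_green = np.max(ndvi_march_aug) < green_threshold          # no green-up
    well_below_reference = np.mean(ndvi_reference - ndvi_march_aug) > deficit
    return bool(never_green or well_below_reference)

print(likely_fallow(np.array([0.18, 0.20, 0.22, 0.21, 0.19, 0.18]),
                    np.array([0.30, 0.55, 0.70, 0.75, 0.60, 0.45])))   # -> True
```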
Automated Reconstruction of Neural Trees Using Front Re-initialization
Mukherjee, Amit; Stepanyants, Armen
2013-01-01
This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and the end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm, we reconstructed 6 stacks of two-photon microscopy images and compared the results to the ground truth reconstructions by using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration
2007-09-01
March 17, 2005. The seismic signals from both master and detected events are followed by infrasound arrivals. Note the long duration of the ... correlation coefficient traces with a significant array gain. A detected event that is co-located with the master event will record the same time-difference ... estimating the detection threshold reduction for a range of highly repeating seismic sources using arrays of different configurations and at different
Measurement of thermally ablated lesions in sonoelastographic images using level set methods
NASA Astrophysics Data System (ADS)
Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.
2008-03-01
The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.
NASA Astrophysics Data System (ADS)
Afanasyev, A. P.; Bazhenov, R. I.; Luchaninov, D. V.
2018-05-01
The main purpose of the research is to develop techniques for defining the best technical and economic trajectories of cables in urban power systems. The proposed algorithms for calculating cable-laying routes take into consideration topological, technical, and economic features of the cabling. A discrete variant of the Fast Marching Method is applied as the computational tool. It has certain advantages over other approaches; in particular, the algorithm is computationally inexpensive because it is not iterative. The resulting cable-laying trajectories are optimal from the point of view of technical and economic criteria and correspond to the present rules of modern urban development.
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if the space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against these increased relative computational and storage requirements as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
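The core of the META idea is a one-line computation once the spatial discretization matrix is assembled, as in this sketch for a 1-D diffusion (groundwater-flow-like) model problem; grid size and coefficients are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# META-style advancement for a spatially discretized problem dh/dt = A h:
# the state at any future time t follows from the matrix exponential,
# h(t) = expm(A*t) @ h0, with no intermediate time steps.
n, dx, D = 50, 1.0, 0.5
A = D * (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2

h0 = np.zeros(n)
h0[n // 2] = 1.0                      # initial head perturbation
h_at_t = expm(A * 100.0) @ h0         # solution at t = 100 in a single step
```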
Target surface finding using 3D SAR data
NASA Astrophysics Data System (ADS)
Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.
2005-05-01
Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube, which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The sensitivity of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
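As an illustration of the iso-surface extraction step, the sketch below uses scikit-image's marching cubes, a close relative of the Marching Tetrahedrons algorithm used in the paper, on a synthetic stand-in for a 3-D SAR magnitude volume; the volume and threshold are placeholders.

```python
import numpy as np
from skimage.measure import marching_cubes

# Iso-surface extraction from a 3-D magnitude volume: pick an amplitude
# threshold and extract a triangulated surface through the data cube.
# The Gaussian blob below stands in for |SAR| data.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-8.0 * (x**2 + y**2 + z**2))
verts, faces, normals, values = marching_cubes(volume, level=0.5)
```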
Psychophysical Comparisons in Image Compression Algorithms.
1999-03-01
Leister, M., "Lossy Lempel - Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression ," IEEE Proceedings, v.I, pp. 225-228, September...1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master’s Thesis, Naval...NAVAL POSTGRADUATE SCHOOL Monterey, California THESIS PSYCHOPHYSICAL COMPARISONS IN IMAGE COMPRESSION ALGORITHMS by % Christopher J. Bodine • March
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has a significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. The strength of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
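The last point, structuring the update as sparse matrix-vector products, can be sketched structurally as follows; the random sparse operators stand in for the actual mesh-dependent circulation/incidence matrices and carry no physical meaning.

```python
import numpy as np
import scipy.sparse as sp

# Structural sketch only: each leapfrog update of the generalized Yee scheme
# reduces to sparse matrix-vector products once the discrete curl (circulation)
# operators on the primary and dual grids are assembled. The random matrices
# below are placeholders for the real mesh-dependent operators.
n_primary, n_dual, dt = 1000, 1000, 1e-3
C_e = sp.random(n_primary, n_dual, density=0.005, format="csr")
C_h = sp.random(n_dual, n_primary, density=0.005, format="csr")

b = np.zeros(n_primary)       # magnetic flux normal to primary faces
d = np.zeros(n_dual)          # electric flux normal to dual faces
d[0] = 1.0                    # arbitrary initial excitation

for _ in range(100):
    b -= dt * (C_e @ d)       # Faraday's-law-style update (sparse matvec)
    d += dt * (C_h @ b)       # Ampere's-law-style update (sparse matvec)
```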
Unsteady Aerodynamic Force Sensing from Strain Data
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2017-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using a two-step approach. Velocities and accelerations of the structure are then computed using an autoregressive moving average model, an on-line parameter estimator, a low-pass filter, and a least-squares curve fitting method, together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially-developing mixing layers. In this initial study, the three-dimensional, time-dependent Euler equations are solved using a finite-volume, explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique; however, we have not been able to compile the code with the present version of HPF compilers.
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of the algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 interface, and minimum error handling. The pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team, led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "Baseline code", was completed in January 2012. The Baseline code includes the main module and eight basic sub-modules (Preparation module, Vertical Profile module, Classification module, SRT module, DSD module, Solver module, Input module, and Output module). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. The pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and of the near-real-time version operating at JAXA since 2007. The "Baseline code" uses the current operational GSMaP code (V5.222), and its development was completed in January 2012. The pre-launch code will be developed by autumn 2012, including an update of the database for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
Development of upwind schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1987-01-01
Described are many algorithmic and computational aspects of upwind schemes and their second-order accurate formulations based on Total-Variation-Diminishing (TVD) approaches. An operational unification of the underlying first-order schemes is first presented, encompassing Godunov's, Roe's, Osher's, and Split-Flux methods. For higher order versions, the preprocessing and postprocessing approaches to constructing TVD discretizations are considered. TVD formulations can be used to construct relaxation methods for unfactored implicit upwind schemes, which in turn can be exploited to construct space-marching procedures for even the unsteady Euler equations. A major part of the report describes time- and space-marching procedures for solving the Euler equations in 2-D, 3-D, Cartesian, and curvilinear coordinates. Along with many illustrative examples, several results of efficient computations on 3-D supersonic flows with subsonic pockets are presented.
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and, thereby, enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
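A minimal sketch of a dual-time step for a scalar model ODE illustrates the framework: the physical time derivative is discretized and the resulting implicit equation is driven to zero by marching in pseudo-time. In a preconditioned flow solver the pseudo-time derivative is scaled by a preconditioning matrix; here that reduces to a scalar relaxation factor, and all values are illustrative.

```python
# Dual-time stepping sketch for du/dt = f(u): an outer loop over physical
# time steps and an inner pseudo-time march that drives the implicit
# (backward-Euler) residual to zero. Parameter values are illustrative.
def f(u):
    return -10.0 * u                  # stand-in for the spatial residual

dt, dtau, u = 0.1, 0.01, 1.0
for n in range(50):                   # physical time steps
    u_prev, w = u, u
    for _ in range(200):              # pseudo-time (inner) iterations
        residual = (w - u_prev) / dt - f(w)
        w = w - dtau * residual       # pseudo-time march toward residual = 0
    u = w                             # converged value becomes the new time level
```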
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14 ... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of ... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search The Monte Carlo sampling process in SMCMDP can take a long time to
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
A second-order accurate parabolized Navier-Stokes algorithm for internal flows
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1984-01-01
A parabolized, implicit Navier-Stokes algorithm which is of second-order accuracy in both the crossflow and marching directions is presented. The algorithm is used to analyze three model supersonic flow problems (including the flow over a 10-degree edge). The results are found to be in good agreement with the results of other techniques available in the literature.
NASA Astrophysics Data System (ADS)
Cuellar Martinez, A.; Espinosa Aranda, J.; Suarez, G.; Ibarrola Alvarez, G.; Ramos Perez, S.; Camarillo Barranco, L.
2013-05-01
The Seismic Alert System of Mexico (SASMEX) uses three algorithms for alert activation that involve the distance between the seismic sensing field station (FS) and the city to be alerted, and the forecast for earthquake early warning activation in the cities integrated into the system. In Mexico City, for example, the earthquakes with the highest accelerations originated on the Pacific Ocean coast, and the distance between this seismic region and the city favors the use of the algorithm called SAS-I. This algorithm, essentially unchanged since its introduction in 1991, employs the data generated by one or more FS from P-wave detection until S-wave detection plus a period equal to the time taken to detect these phases, i.e., twice the S-P time, called 2*(S-P). In this interval, the algorithm integrates the quadratic (squared) samples from the FS, which uses a triaxial accelerometer, to obtain two parameters: the amplitude and the growth rate measured up to the 2*(S-P) time. These parameters are used in a magnitude classifier model, built from time series of Guerrero Coast earthquakes, mainly with reference to the Mb magnitude. The algorithm activates a Public or a Preventive Alert when the model predicts a Strong or a Moderate earthquake, respectively. The SAS-I algorithm has been operating for over 23 years in the subduction zone of the Pacific coast of Mexico, initially in Guerrero and later in Oaxaca, and since March 2012 in the Pacific seismic region covering the coasts of Jalisco, Colima, Michoacan, Guerrero, and Oaxaca. Over this period the algorithm has issued 16 Public Alerts and 62 Preventive Alerts to Mexico City, where soil conditions amplify earthquake damage, as occurred in September 1985. This work reviews the SAS-I algorithm and the alerts it could generate from recordings of major Pacific coast earthquakes, detected by FS or by seismometers near the epicenters, that have been felt in Mexico City, in order to assess the performance of the SAS-I algorithm.
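A hedged sketch of the 2*(S-P) parameter extraction described above follows; the threshold values and the simple two-branch classifier are placeholders and do not represent the SASMEX magnitude model.

```python
import numpy as np

# Integrate squared ground-motion samples over a window of 2*(S-P) following
# the P detection, and characterize the record by its cumulative "energy" and
# its growth rate. Thresholds are illustrative placeholders.
def sas_parameters(accel, fs, p_idx, s_idx):
    """accel: 3-component array (3, nsamples); fs: sampling rate in Hz.
    Assumes the record extends at least 2*(S-P) past the P pick."""
    win = 2 * (s_idx - p_idx)                        # 2*(S-P) in samples
    seg = accel[:, p_idx:p_idx + win]
    energy = np.cumsum(np.sum(seg**2, axis=0)) / fs  # integrated quadratic samples
    amplitude = energy[-1]
    growth = (energy[-1] - energy[win // 2]) / (win / (2 * fs))
    return amplitude, growth

def classify(amplitude, growth, strong_thr=1.0, moderate_thr=0.1):
    if amplitude > strong_thr and growth > strong_thr:
        return "Public Alert (Strong)"
    if amplitude > moderate_thr:
        return "Preventive Alert (Moderate)"
    return "No alert"
```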
NASA Astrophysics Data System (ADS)
Rosevelt, C.; Melton, F. S.; Johnson, L.; Verdin, J. P.; Thenkabail, P. S.; mueller, R.; Zakzeski, A.; Jones, J.
2013-12-01
Rapid assessment of drought impacts can aid water managers in assessing mitigation options, and guide decision making with respect to requests for local water transfers, county drought disaster designations, or state emergency proclamations. Satellite remote sensing offers an efficient way to provide quantitative assessments of drought impacts on agricultural production and land fallowing associated with reductions in water supply. A key advantage of satellite-based assessments is that they can provide a measure of land fallowing that is consistent across both space and time. Here we describe an approach for monthly mapping of land fallowing developed as part of a joint effort by USGS, USDA, and NASA to provide timely assessments of land fallowing during drought events. This effort has used the Central Valley of California as a pilot region for development and testing of an operational approach. To provide quantitative measures of fallowed land from satellite data early in the season, we developed a decision tree algorithm and applied it to timeseries of normalized difference vegetation index (NDVI) data from Landsat TM, ETM+, and MODIS. Our effort has been focused on development of leading indicators of drought impacts in the March - June timeframe based on measures of crop development patterns relative to a reference period with average or above average rainfall. This capability complements ongoing work by USDA to produce and publicly release within-season estimates of fallowed acreage from the USDA Cropland Data Layer. To assess the accuracy of the algorithms, monthly ground validation surveys were conducted along transects across the Central Valley at more than 200 fields per month from March - June, 2013. Here we present the algorithm for mapping fallowed acreage early in the season along with results from the accuracy assessment, and discuss potential applications to other regions.
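A minimal sketch of the kind of decision-tree logic described above, applied to a single field's NDVI time series; the thresholds, month window, and rule structure are placeholder assumptions, not the operational algorithm or its trained splits.

```python
import numpy as np

def classify_fallow(ndvi_series, ndvi_reference, green_thresh=0.4, delta_thresh=0.15):
    """Toy decision-tree-style rule for flagging a likely fallow field.

    ndvi_series    : observed monthly NDVI for the current year (e.g., March-June)
    ndvi_reference : NDVI for the same months in a wet reference period
    Thresholds are illustrative placeholders, not the operational values."""
    peak = np.max(ndvi_series)
    deficit = np.mean(np.asarray(ndvi_reference) - np.asarray(ndvi_series))
    if peak < green_thresh:          # field never greens up -> likely fallow
        return "fallow"
    if deficit > delta_thresh:       # develops far below reference -> possibly fallow
        return "possibly fallow"
    return "cropped"
```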
Drought Impacts on Agricultural Production and Land Fallowing in California's Central Valley in 2015
NASA Astrophysics Data System (ADS)
Rosevelt, C.; Melton, F. S.; Johnson, L.; Guzman, A.; Verdin, J. P.; Thenkabail, P. S.; Mueller, R.; Jones, J.; Willis, P.
2015-12-01
The ongoing drought in California substantially reduced surface water supplies for millions of acres of irrigated farmland in California's Central Valley. Rapid assessment of drought impacts on agricultural production can aid water managers in assessing mitigation options, and guide decision making with respect to mitigation of drought impacts. Satellite remote sensing offers an efficient way to provide quantitative assessments of drought impacts on agricultural production and increases in fallow acreage associated with reductions in water supply. A key advantage of satellite-based assessments is that they can provide a measure of land fallowing that is consistent across both space and time. We describe an approach for monthly and seasonal mapping of uncultivated agricultural acreage developed as part of a joint effort by USGS, USDA, NASA, and the California Department of Water Resources to provide timely assessments of land fallowing during drought events. This effort has used the Central Valley of California as a pilot region for development and testing of an operational approach. To provide quantitative measures of uncultivated agricultural acreage from satellite data early in the season, we developed a decision tree algorithm and applied it to timeseries of data from Landsat TM, ETM+, OLI, and MODIS. Our effort has been focused on development of indicators of drought impacts in the March - August timeframe based on measures of crop development patterns relative to a reference period with average or above average rainfall. To assess the accuracy of the algorithms, monthly ground validation surveys were conducted across 650 fields from March - September in 2014 and 2015. We present the algorithm along with updated results from the accuracy assessment, and data and maps of land fallowing in the Central Valley in 2015.
Mapping Drought Impacts on Agricultural Production in California's Central Valley
NASA Astrophysics Data System (ADS)
Melton, F. S.; Guzman, A.; Johnson, L.; Rosevelt, C.; Verdin, J. P.; Dwyer, J. L.; Mueller, R.; Zakzeski, A.; Thenkabail, P. S.; Wallace, C.; Jones, J.; Windell, S.; Urness, J.; Teaby, A.; Hamblin, D.; Post, K. M.; Nemani, R. R.
2014-12-01
The ongoing drought in California has substantially reduced surface water supplies for millions of acres of irrigated farmland in California's Central Valley. Rapid assessment of drought impacts on agricultural production can aid water managers in assessing mitigation options, and guide decision making with respect to requests for local water transfers, county drought disaster designations, and allocation of emergency funds to mitigate drought impacts. Satellite remote sensing offers an efficient way to provide quantitative assessments of drought impacts on agricultural production and increases in idle acreage associated with reductions in water supply. A key advantage of satellite-based assessments is that they can provide a measure of land fallowing that is consistent across both space and time. We describe an approach for monthly and seasonal mapping of uncultivated agricultural acreage developed as part of a joint effort by USGS, USDA, NASA, and the California Department of Water Resources to provide timely assessments of land fallowing during drought events. This effort has used the Central Valley of California as a pilot region for development and testing of an operational approach. To provide quantitative measures of uncultivated agricultural acreage from satellite data early in the season, we developed a decision tree algorithm and applied it to timeseries of data from Landsat TM, ETM+, OLI, and MODIS. Our effort has been focused on development of indicators of drought impacts in the March - August timeframe based on measures of crop development patterns relative to a reference period with average or above average rainfall. To assess the accuracy of the algorithms, monthly ground validation surveys were conducted across 640 fields from March - September, 2014. We present the algorithm along with updated results from the accuracy assessment, and discuss potential applications to other regions.
Development of a three-dimensional Navier-Stokes code on CDC star-100 computer
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Goglia, G. L.
1978-01-01
A three-dimensional code in body-fitted coordinates was developed using MacCormack's algorithm. The code is structured to be compatible with any general configuration, provided that the metric coefficients for the transformation are available. The governing equations are developed in primitive variables in order to facilitate the incorporation of physical boundary conditions and turbulence-closure models. MacCormack's two-step, unsplit, time-marching algorithm is used to solve the unsteady Navier-Stokes equations until a steady-state solution is achieved. Cases discussed include (1) a flat plate in a supersonic free stream; (2) supersonic flow along an axial corner; (3) subsonic flow in an axial corner at M infinity = 0.95; and (4) supersonic flow in an axial corner at M infinity = 1.5.
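For reference, MacCormack's two-step predictor-corrector time march has the following shape; this sketch advances the 1D linear advection equation on a periodic grid and is only an illustration of the scheme, not the 3D body-fitted Navier-Stokes code described above.

```python
import numpy as np

def maccormack_step(u, c, dx, dt):
    """One MacCormack predictor-corrector step for u_t + c u_x = 0 (periodic grid).

    Forward difference in the predictor, backward difference in the corrector,
    then the two levels are averaged."""
    # predictor: forward difference
    u_pred = u - c * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field, then average
    return 0.5 * (u + u_pred - c * dt / dx * (u_pred - np.roll(u_pred, 1)))
```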
A modified dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1981-01-01
A revised version of a split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to solution of the three-dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard successive overrelaxation iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition.
A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann
2003-01-01
A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control volume to an Eulerian terrain-following coordinate, for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. The impact of the monotonicity constraint is discussed.
Distributed Pheromone-Based Swarming Control of Unmanned Air and Ground Vehicles for RSTA
2008-03-20
Forthcoming in Proceedings of SPIE Defense & Security Conference, March 2008, Orlando, FL. Distributed Pheromone-Based Swarming Control of Unmanned... describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of... onboard digital pheromone responding to the needs of the automatic target recognition algorithms. UAVs and UGVs controlled by the same pheromone algorithm
Feasibility of the MUSIC Algorithm for the Active Protection System
2001-03-01
Feasibility of the MUSIC Algorithm for the Active Protection System, ARL-MR-501, March 2001, Canh Ly. Approved for public release; distribution... MUSIC Algorithm for the Active Protection System, Canh Ly, Sensors and Electron Devices Directorate. Approved for public release; distribution unlimited... This report compares the accuracy of the Doppler frequency of an incoming projectile with the use of the MUSIC (multiple signal classification
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion
Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.
2016-01-01
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
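A schematic of the EKF-style predict/update cycle used to march a utility track from one scan cross-section (scs) to the next might look as follows; the constant-slope state model, the 4-component state, and the noise levels are illustrative assumptions, and the full MCS initialization and association rules are omitted.

```python
import numpy as np

def predict(x, P, ds, q=1e-3):
    """March a track state x = [offset, depth, d_offset/ds, d_depth/ds] from one
    scan cross-section to the next, separated by arc length ds, under a
    constant-slope motion model (an assumption standing in for the MCS EKF)."""
    F = np.array([[1, 0, ds, 0],
                  [0, 1, 0, ds],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)
    Q = q * np.eye(4)
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r=0.05):
    """Correct the predicted track with a hypothesized detection z = [offset, depth]
    on the new cross-section (standard Kalman update)."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    R = r * np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```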
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.
Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G
2016-11-02
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
Vector neural network signal integration for radar application
NASA Astrophysics Data System (ADS)
Bierman, Gregory S.
1994-07-01
The Litton Data Systems Vector Neural Network (VNN) is a unique multi-scan integration algorithm currently in development. The target of interest is a low-flying cruise missile. Current tactical radar cannot detect and track the missile in ground clutter at tactically useful ranges. The VNN solves this problem by integrating the energy from multiple frames to effectively increase the target's signal-to-noise ratio. The implementation plan is addressing the APG-63 radar. Real-time results will be available by March 1994.
Prototype for Meta-Algorithmic, Content-Aware Image Analysis
2015-03-01
Prototype for Meta-Algorithmic, Content-Aware Image Analysis. University of Virginia, March 2015, Final Technical Report... Contract FA8750-12-C-0181, program element 62305E... approaches were studied in detail and their results on a sample dataset are presented. Subject terms: image analysis, computer vision, content
De León-Luis, Juan; Bravo, Coral; Gámez, Francisco; Ortiz-Quintana, Luis
2015-07-01
To evaluate the reproducibility and feasibility of the new cardiovascular system sonographic evaluation algorithm for studying the extended fetal cardiovascular system, including the portal, thymic, and supra-aortic areas, in the second trimester of pregnancy (19-22 weeks). We performed a cross-sectional study of pregnant women with healthy fetuses (singleton and twin pregnancies) attending our center from March to August 2011. The extended fetal cardiovascular system was evaluated by following the new algorithm, a sequential acquisition of axial views comprising the following (caudal to cranial): I, portal sinus; II, ductus venosus; III, hepatic veins; IV, 4-chamber view; V, left ventricular outflow tract; VI, right ventricular outflow tract; VII, 3-vessel and trachea view; VIII, thy-box; and IX, subclavian arteries. Interobserver agreement on the feasibility and exploration time was estimated in a subgroup of patients. The feasibility and exploration time were determined for the main cohort. Maternal, fetal, and sonographic factors affecting both features were evaluated. Interobserver agreement was excellent for all views except view VIII; the difference in the mean exploration time between observers was 1.5 minutes (95% confidence interval, 0.7-2.1 minutes; P < .05). In 184 fetuses (mean gestational age ± SD, 20 ± 0.6 weeks), the feasibility of all views was close to 99% except view VIII (88.7%). The complete feasibility of the algorithm was 81.5%. The mean exploration time was 5.6 ± 4.2 minutes. Only the occiput anterior fetal position was associated with a lower frequency of visualization and a longer exploration time (P < .05). The cardiovascular system sonographic evaluation algorithm is a reproducible and feasible approach for exploration of the extended fetal cardiovascular system in a second-trimester scan. It can be used to explore these areas in normal and abnormal conditions and provides an integrated image of extended fetal cardiovascular anatomy. © 2015 by the American Institute of Ultrasound in Medicine.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
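The reduced-dissipation finding reported above (retaining only a few percent of the standard Roe dissipation) can be illustrated on a scalar analogue; the function below blends a central flux with a fraction eps of the upwind (Roe) dissipation for linear advection, which is an assumption standing in for the full Roe flux-difference splitting of the compressible equations.

```python
def scaled_roe_flux(uL, uR, a, eps=0.05):
    """Interface flux for the linear advection flux f(u) = a*u.

    Central average plus a fraction `eps` of the Roe (upwind) dissipation;
    eps = 1 recovers the standard Roe flux, while eps ~ 0.03-0.05 mimics the
    reduced-dissipation LES setting described above (scalar analogue only)."""
    central = 0.5 * a * (uL + uR)
    dissipation = 0.5 * abs(a) * (uR - uL)
    return central - eps * dissipation
```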
Nasr, Ahmed; Sullivan, Katrina J; Chan, Emily W; Wong, Coralie A; Benchimol, Eric I
2017-01-01
Objective: Incidence rates of Hirschsprung disease (HD) vary by geographical region, yet no recent population-based estimate exists for Canada. The objective of our study was to validate and use health administrative data from Ontario, Canada to describe trends in the incidence of HD between 1991 and 2013. Study design: To identify children with HD, we tested algorithms consisting of a combination of diagnostic, procedural, and intervention codes against the reference standard of abstracted clinical charts from a tertiary pediatric hospital. The algorithm with the highest positive predictive value (PPV) that could maintain high sensitivity was applied to health administrative data from April 1, 1991 to March 31, 2014 (fiscal years 1991–2013) to determine annual incidence. Temporal trends were evaluated using Poisson regression, controlling for sex as a covariate. Results: The selected algorithm was highly sensitive (93.5%) and specific (>99.9%) with excellent predictive abilities (PPV 89.6% and negative predictive value >99.9%). Using the algorithm, a total of 679 patients diagnosed with HD were identified in Ontario between 1991 and 2013. The overall incidence during this time was 2.05 per 10,000 live births (or 1 in 4,868 live births). The incidence did not change significantly over time (odds ratio 0.998, 95% confidence interval 0.983–1.013, p = 0.80). Conclusion: Ontario health administrative data can be used to accurately identify cases of HD and describe trends in incidence. There has not been a significant change in HD incidence over time in Ontario between 1991 and 2013. PMID:29180902
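The validation statistics quoted above follow from a standard 2x2 table; the sketch below shows the computation, with example counts chosen only to roughly reproduce the reported sensitivity and PPV (the study's actual counts are not given in the abstract).

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 validation table,
    as used to pick the case-finding algorithm described above."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Example (hypothetical counts): validation_metrics(43, 5, 3, 9000) gives a
# sensitivity near 93.5% and a PPV near 90%, similar to the reported values.
```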
NASA Astrophysics Data System (ADS)
Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.
2017-12-01
We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in Western Oregon, which included six days of AirSWOT flights over a 75km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacies of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially-averaged time-series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested in a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90m).
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
2012-01-01
This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer that uses a CCD camera as detector. Such algorithms are necessary for passive remote sensing techniques that retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W under different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; southeastern Spain) for measurements acquired between March 2007 and September 2009. The algorithm is evaluated against correlative measurements from a lidar system and against all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusions, biomass burning, and pollution events.
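A simplified version of the moving-average-plus-outlier screening described above could look like the following; the window length, the k-sigma rule, and the variable names are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def cloud_screen(times_min, aod, window=30.0, k=3.0):
    """Flag likely cloud-affected AOD samples as outliers from a moving average.

    For each sample, points within `window` minutes form a local sample; a point
    further than k local standard deviations from the local mean is flagged.
    Window length and k are illustrative, not the paper's tuned values."""
    times_min = np.asarray(times_min, float)
    aod = np.asarray(aod, float)
    flags = np.zeros(len(aod), dtype=bool)
    for i, t in enumerate(times_min):
        near = np.abs(times_min - t) <= window / 2.0
        local = aod[near]
        if local.size > 3:
            mu, sd = local.mean(), local.std()
            flags[i] = sd > 0 and abs(aod[i] - mu) > k * sd
    return flags
```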
Localization of marine mammals near Hawaii using an acoustic propagation model
NASA Astrophysics Data System (ADS)
Tiemann, Christopher O.; Porter, Michael B.; Frazer, L. Neil
2004-06-01
Humpback whale songs were recorded on six widely spaced receivers of the Pacific Missile Range Facility (PMRF) hydrophone network near Hawaii during March of 2001. These recordings were used to test a new approach to localizing the whales that exploits the time-difference of arrival (time lag) of their calls as measured between receiver pairs in the PMRF network. The usual technique for estimating source position uses the intersection of hyperbolic curves of constant time lag, but a drawback of this approach is its assumption of a constant wave speed and straight-line propagation to associate acoustic travel time with range. In contrast to hyperbolic fixing, the algorithm described here uses an acoustic propagation model to account for waveguide and multipath effects when estimating travel time from hypothesized source positions. A comparison between predicted and measured time lags forms an ambiguity surface, or visual representation of the most probable whale position in a horizontal plane around the array. This is an important benefit because it allows for automated peak extraction to provide a location estimate. Examples of whale localizations using real and simulated data in algorithms of increasing complexity are provided.
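The ambiguity-surface construction described here can be sketched as a grid search in which model-predicted time lags are compared with measured lags; the travel_time function stands in for the acoustic propagation model, and the squared-error score is an illustrative choice rather than the authors' exact surface definition.

```python
import numpy as np

def ambiguity_surface(grid_xy, receivers, measured_lags, pair_idx, travel_time):
    """Score each candidate source position by the mismatch between predicted
    and measured time lags over receiver pairs; the peak of the surface marks
    the most probable source position. travel_time(src, rcv) is assumed to be
    supplied by an acoustic propagation model."""
    surface = np.zeros(len(grid_xy))
    for g, src in enumerate(grid_xy):
        err = 0.0
        for (i, j), lag in zip(pair_idx, measured_lags):
            pred = travel_time(src, receivers[i]) - travel_time(src, receivers[j])
            err += (pred - lag) ** 2
        surface[g] = -err        # larger (less negative) = more probable
    return surface
```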
A real-time method for autonomous passive acoustic detection-classification of humpback whales.
Abbot, Ted A; Premus, Vincent E; Abbot, Philip A
2010-05-01
This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
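The multi-kernel, repeated-unit classification logic can be caricatured as follows; the normalized block correlation, thresholds, and ten-minute window are placeholder assumptions, not the Mellinger-Clark kernel construction or the tuned operating point reported above.

```python
import numpy as np

def classify_presence(spectrogram, kernels, det_thresh, min_units, frame_dt, window_s=600.0):
    """Declare 'humpback present' only when several song units are detected inside
    a fixed time window. Each kernel (same frequency dimension as the spectrogram)
    is correlated against sliding spectrogram blocks; detections from all kernels
    are pooled before the repetition test."""
    n_t = spectrogram.shape[1]
    det_times = []
    for kern in kernels:
        kt = kern.shape[1]
        for t0 in range(0, n_t - kt):
            block = spectrogram[:, t0:t0 + kt]
            score = np.sum(block * kern) / (np.linalg.norm(block) * np.linalg.norm(kern) + 1e-12)
            if score > det_thresh:
                det_times.append(t0 * frame_dt)
    det_times = np.sort(np.array(det_times))
    for t in det_times:
        if np.sum((det_times >= t) & (det_times < t + window_s)) >= min_units:
            return True
    return False
```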
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the primitive equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms that use only hardware vector instructions.
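One common reading of the Euler-backward time march (a Matsuno-type forward predictor followed by a backward corrector evaluated at the predicted state) needs only the current time level in memory, unlike leapfrog; the sketch below shows that step for a generic tendency function, with the caveat that the abstract does not spell out which Euler-backward variant is used.

```python
def euler_backward_step(u, f, dt):
    """One Euler-backward (Matsuno-type) step for du/dt = f(u).

    Forward predictor, then a corrector using the tendency at the predicted state.
    Only the current level u is carried, whereas leapfrog also stores the previous
    level; this is one plausible reading of the scheme named above."""
    u_star = u + dt * f(u)          # forward (predictor)
    return u + dt * f(u_star)       # backward corrector
```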
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Keyser, Dennis A.; Larko, David E.; Uccellini, Louis W.
1987-01-01
An Atmospheric Variability Experiment (AVE) was conducted over the central U.S. in the spring of 1982, collecting radiosonde data to verify mesoscale soundings from the VISSR Atmospheric Sounder (VAS) on the GOES satellite. Previously published VAS/AVE comparisons for the 6 March 1982 case found that the satellite retrievals scarcely detected a low level temperature inversion or a mid-tropospheric cold pool over a special mesoscale radiosonde verification network in north central Texas. The previously published regression and physical retrieval algorithms did not fully utilize VAS' sensitivity to important subsynoptic thermal features. Therefore, the 6 March 1982 case was reprocessed, adding two enhancements to the VAS regression retrieval algorithm: (1) the regression matrix was determined using AVE profile data obtained in the region at asynoptic times, and (2) more optimistic signal-to-noise statistical conditioning factors were applied to the VAS temperature sounding channels. The new VAS soundings resolve more of the low level temperature inversion and mid-level cold pool. Most of the improvement stems from the utilization of asynoptic radiosonde observations at NWS sites. This case suggests that VAS regression soundings may require a ground-based asynoptic profiler network to bridge the gap between the synoptic radiosonde network and the high resolution geosynchronous satellite observations during the day.
Bibliography of In-House and Contract Reports, Supplement 18
1992-10-01
Transparent Conforming Overlays... TITLE, REPORT NO., YEAR: Development, Service Tests, and Production Model Tests, Autofocusing Rectifier, 1307-TR, 1953... Development, Test, Preparation, Delivery, and Installation of Algorithms for Optimal Adjustment of Inertial Survey Data, ETL-1307, 1982. Developmental Optical... Knowledge-Based Vision Techniques - Task B: Terrain and Object Modeling Recognition (March 13, 1985 - March 13, 1986), ETL-0428, 1986. Knowledge-Based Vision Techniques - Task B: Terrain, ETL
A modified Dodge algorithm for the parabolized Navier-Stokes equation and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1981-01-01
A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to solution of the three dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition. Qualitative agreement with analytical predictions and experimental results was obtained for some flows with well-known solutions.
NASA Astrophysics Data System (ADS)
Brajard, J.; Moulin, C.; Thiria, S.
2008-10-01
This paper presents a comparison of the atmospheric correction accuracy between the standard sea-viewing wide field-of-view sensor (SeaWiFS) algorithm and the NeuroVaria algorithm for the ocean off the Indian coast in March 1999. NeuroVaria is a general method developed to retrieve aerosol optical properties and water-leaving reflectances for all types of aerosols, including absorbing ones. It has been applied to SeaWiFS images of March 1999, during an episode of transport of absorbing aerosols coming from pollutant sources in India. Water-leaving reflectances and aerosol optical thickness estimated by the two methods were extracted along a transect across the aerosol plume for three days. The comparison showed that NeuroVaria allows the retrieval of oceanic properties in the presence of absorbing aerosols with a better spatial and temporal stability than the standard SeaWiFS algorithm. NeuroVaria was then applied to the available SeaWiFS images over a two-week period. NeuroVaria algorithm retrieves ocean products for a larger number of pixels than the standard one and eliminates most of the discontinuities and artifacts associated with the standard algorithm in presence of absorbing aerosols.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate that this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
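A generic Newton-Krylov step of the kind discussed above, with the linear system at each Newton iteration solved by GMRES and the Jacobian applied matrix-free through a finite-difference directional derivative, is sketched below; it uses SciPy's GMRES and is only a schematic of the idea, not the flow solver's implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres_step(u, residual, eps=1e-7):
    """One Newton correction for R(u) = 0 with the linear system solved by GMRES.

    The Jacobian-vector product is approximated matrix-free by a finite difference
    of the residual, so no Jacobian matrix is stored."""
    r = residual(u)
    n = u.size

    def jvec(v):
        # directional derivative of R at u in direction v
        return (residual(u + eps * v) - r) / eps

    J = LinearOperator((n, n), matvec=jvec)
    du, info = gmres(J, -r)
    if info != 0:
        raise RuntimeError("GMRES did not converge (info=%d)" % info)
    return u + du
```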
Oginosawa, Yasushi; Kohno, Ritsuko; Honda, Toshihiro; Kikuchi, Kan; Nozoe, Masatsugu; Uchida, Takayuki; Minamiguchi, Hitoshi; Sonoda, Koichiro; Ogawa, Masahiro; Ideguchi, Takeshi; Kizaki, Yoshihisa; Nakamura, Toshihiro; Oba, Kageyuki; Higa, Satoshi; Yoshida, Keiki; Tsunoda, Soichi; Fujino, Yoshihisa; Abe, Haruhiko
2017-08-25
Shocks delivered by implanted anti-tachyarrhythmia devices, even when appropriate, lower quality of life and survival. The new SmartShock Technology® (SST) discrimination algorithm was developed to prevent the delivery of inappropriate shocks. This prospective, multicenter, observational study compared the rate of inaccurate detection of ventricular tachyarrhythmia using the SST vs. a conventional discrimination algorithm. Methods and Results: Recipients of implantable cardioverter defibrillators (ICD) or cardiac resynchronization therapy defibrillators (CRT-D) equipped with the SST algorithm were enrolled and followed up every 6 months. The tachycardia detection rate was set at ≥150 beats/min with the SST algorithm. The primary endpoint was the time to first inaccurate detection of ventricular tachycardia (VT) with the conventional vs. the SST discrimination algorithm, up to 2 years of follow-up. Between March 2012 and September 2013, 185 patients (mean age, 64.0±14.9 years; men, 74%; secondary prevention indication, 49.5%) were enrolled at 14 Japanese medical centers. Inaccurate detection was observed in 32 patients (17.6%) with the conventional algorithm vs. 19 patients (10.4%) with the SST algorithm. SST significantly lowered the rate of inaccurate detection by dual-chamber devices (HR, 0.50; 95% CI: 0.263-0.950; P=0.034). Compared with previous algorithms, the SST discrimination algorithm significantly lowered the rate of inaccurate detection of VT in recipients of dual-chamber ICD or CRT-D.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines a closest-point finding procedure with the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
Computation of multi-dimensional viscous supersonic jet flow
NASA Technical Reports Server (NTRS)
Kim, Y. N.; Buggeln, R. C.; Mcdonald, H.
1986-01-01
A new method has been developed for two- and three-dimensional computations of viscous supersonic flows with embedded subsonic regions adjacent to solid boundaries. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases relevant to internal supersonic flow are presented and compared with data. Comparisons between data and computation are in general excellent, indicating that the computational technique has great promise as a tool for calculating supersonic flow with embedded subsonic regions. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Computation of multi-dimensional viscous supersonic flow
NASA Technical Reports Server (NTRS)
Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.
1986-01-01
A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases associated with supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Quantum Algorithms Based on Physical Processes
2013-12-03
quantum walks with hard-core bosons and the graph isomorphism problem,” American Physical Society March meeting, March 2011 Kenneth Rudinger, John...King Gamble, Mark Wellons, Mark Friesen, Dong Zhou, Eric Bach, Robert Joynt, and S.N. Coppersmith, “Quantum random walks of non-interacting bosons on...and noninteracting Bosons to distinguish nonisomorphic graphs. 1) We showed that quantum walks of two hard-core Bosons can distinguish all pairs of
Quantum Algorithms Based on Physical Processes
2013-12-02
quantum walks with hard-core bosons and the graph isomorphism problem,” American Physical Society March meeting, March 2011 Kenneth Rudinger, John...King Gamble, Mark Wellons, Mark Friesen, Dong Zhou, Eric Bach, Robert Joynt, and S.N. Coppersmith, “Quantum random walks of non-interacting bosons on...and noninteracting Bosons to distinguish nonisomorphic graphs. 1) We showed that quantum walks of two hard-core Bosons can distinguish all pairs of
Computer-Automated Evolution of Spacecraft X-Band Antennas
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Homby, Gregory S.; Linden, Derek S.
2010-01-01
A document discusses the use of computer-aided evolution in arriving at a design for X-band communication antennas for NASA's three Space Technology 5 (ST5) satellites, which were launched on March 22, 2006. Two evolutionary algorithms, incorporating different representations of the antenna design and different fitness functions, were used to automatically design and optimize an X-band antenna design. A set of antenna designs satisfying initial ST5 mission requirements was evolved by use of these algorithms. The two best antennas - one from each evolutionary algorithm - were built. During flight-qualification testing of these antennas, the mission requirements were changed. After minimal changes in the evolutionary algorithms - mostly in the fitness functions - new antenna designs satisfying the changed mission requirements were evolved; within one month of this change, two new antennas were designed and prototypes of the antennas were built and tested. One of these newly evolved antennas was approved for deployment on the ST5 mission, and flight-qualified versions of this design were built and installed on the spacecraft. At the time of writing the document, these antennas were the first computer-evolved hardware in outer space.
A modified Dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Dwoyer, D. M.
1983-01-01
A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow was developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. The (checkerboard) zebra algorithm is applied to solution of the three dimensional continuity equation in conservative form. A second-order A-stable linear multistep method is employed in effecting a marching solution of the parabolized momentum equations. A checkerboard iteration is used to solve the resulting implicit nonlinear systems of finite-difference equations which govern stepwise transition. Qualitative agreement with analytical predictions and experimental results was obtained for some flows with well-known solutions. Previously announced in STAR as N82-16363
Noniterative three-dimensional grid generation using parabolic partial differential equations
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1985-01-01
A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
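Each marching step of the parabolic grid generator requires inverting a scalar tridiagonal system; the Thomas-algorithm solver below is the standard way to perform that inversion and is shown on its own, with the marching loop, source terms, and boundary handling omitted.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals a, b, c
    and right-hand side d (a[0] and c[-1] are unused). This is the scalar
    tridiagonal inversion required at each marching step described above."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```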
Using flow information to support 3D vessel reconstruction from rotational angiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waechter, Irina; Bredno, Joerg; Weese, Juergen
2008-07-15
For the assessment of cerebrovascular diseases, it is beneficial to obtain three-dimensional (3D) morphologic and hemodynamic information about the vessel system. Rotational angiography is routinely used to image the 3D vascular geometry, and we have shown previously that rotational subtraction angiography has the potential to also give quantitative information about blood flow. Flow information can be determined when the angiographic sequence shows inflow and possibly outflow of contrast agent. However, a standard volume reconstruction assumes that the vessel tree is uniformly filled with contrast agent during the whole acquisition; if this is not the case, the reconstruction exhibits artifacts. Here, we show how flow information can be used to support the reconstruction of the 3D vessel centerline and radii in this case. Our method uses the fast marching algorithm to determine the order in which voxels are analyzed. For every voxel, the rotational time intensity curve (R-TIC) is determined from the image intensities at the projection points of the current voxel. Next, the bolus arrival time of the contrast agent at the voxel is estimated from the R-TIC. Then, a measure of the intensity and duration of the enhancement is determined, from which a speed value is calculated that steers the propagation of the fast marching algorithm. The results of the fast marching algorithm are used to determine the 3D centerline by backtracking. The 3D radius is reconstructed from 2D radius estimates on the projection images. The proposed method was tested on computer-simulated rotational angiography sequences with systematically varied x-ray acquisition, blood flow, and contrast agent injection parameters, and on datasets from an experimental setup using an anthropomorphic cerebrovascular phantom. For the computer simulation, the mean absolute error of the 3D centerline and 3D radius estimation was 0.42 and 0.25 mm, respectively. For the experimental datasets, the mean absolute error of the 3D centerline was 0.45 mm. Under pulsatile and nonpulsatile conditions, flow information can be used to enable a 3D vessel reconstruction from rotational angiography with inflow and possibly outflow of contrast agent. We found that the most important parameter for the quality of the reconstruction of centerline and radii is the range through which the x-ray system rotates in the time span of the injection. Good results were obtained if this range was at least 135 deg. As a standard C-arm can rotate 205 deg., typically one third of the acquisition can show inflow or outflow of contrast agent, which is required for the quantification of blood flow from rotational angiography.
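The core marching step, computing arrival times over a voxel grid from a speed map derived from contrast enhancement, can be sketched with the scikit-fmm package (assumed available); the speed-map construction from R-TICs and the centerline backtracking described above are omitted, and the function and argument names are illustrative.

```python
import numpy as np
import skfmm  # scikit-fmm, assumed available (pip install scikit-fmm)

def vessel_arrival_times(speed, seed_mask, dx=1.0):
    """Fast-marching arrival times over a voxel grid.

    `speed` is a 3D array of propagation speeds (here imagined to come from the
    enhancement intensity/duration measure), and `seed_mask` marks the seed
    voxel(s). The centerline would then be recovered by backtracking the
    arrival-time field from a distal point toward the seed."""
    phi = np.where(seed_mask, -1.0, 1.0)      # zero level set at the seed
    return skfmm.travel_time(phi, speed, dx=dx)
```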
Theoretical and experimental comparison of vapor cavitation in dynamically loaded journal bearings
NASA Astrophysics Data System (ADS)
Brewe, D. E.; Hamrock, B. J.; Jacobson, B. A.
Vapor cavitation for a submerged journal bearing under dynamically loaded conditions was investigated. The observation of vapor cavitation in the laboratory was done by high-speed photography. It was found that vapor cavitation occurs when the tensile stress applied to the oil exceeded the tensile strength of the oil or the binding of the oil to the surface. The theoretical solution to the Reynolds equation is determined numerically using a moving boundary algorithm. This algorithm conserves mass throughout the computational domain, including the region of cavitation and its boundaries. An alternating direction implicit (ADI) method is used to effect the time march. A rotor undergoing circular whirl was studied. Predicted cavitation behavior was analyzed by three-dimensional computer graphic movies. The formation, growth, and collapse of the bubble in response to the dynamic conditions is shown. For the same conditions of dynamic loading, the cavitation bubble was studied in the laboratory using high-speed photography.
Theoretical and experimental comparison of vapor cavitation in dynamically loaded journal bearings
NASA Technical Reports Server (NTRS)
Brewe, D. E.; Hamrock, B. J.; Jacobson, B. A.
1985-01-01
Vapor cavitation for a submerged journal bearing under dynamically loaded conditions was investigated. The observation of vapor cavitation in the laboratory was done by high-speed photography. It was found that vapor cavitation occurs when the tensile stress applied to the oil exceeded the tensile strength of the oil or the binding of the oil to the surface. The theoretical solution to the Reynolds equation is determined numerically using a moving boundary algorithm. This algorithm conserves mass throughout the computational domain, including the region of cavitation and its boundaries. An alternating direction implicit (ADI) method is used to effect the time march. A rotor undergoing circular whirl was studied. Predicted cavitation behavior was analyzed by three-dimensional computer graphic movies. The formation, growth, and collapse of the bubble in response to the dynamic conditions is shown. For the same conditions of dynamic loading, the cavitation bubble was studied in the laboratory using high-speed photography.
New Products and Perspectives from the Global Precipitation Measurement (GPM) Mission
NASA Astrophysics Data System (ADS)
Kummerow, C. D.; Randel, D.; Petkovic, V.
2016-12-01
The Global Precipitation Measurement (GPM) mission was launched in February 2014 as a joint mission between JAXA from Japan and NASA from the United States. GPM carries a state of the art dual-frequency precipitation radar and a multi-channel passive microwave radiometer that acts not only to enhance the radar's retrieval capability, but also as a reference for a constellation of existing satellites carrying passive microwave sensors. In March of 2016, GPM released Version 4 of its precipitation products that consists of radar, radiometer, and combined radar/radiometer products. The radiometer algorithm in Version 4 is the first time a fully parametric algorithm has been implemented. This talk will focus on the consistency among the constellation radiometers, and what these inconsistencies can tell us about the fundamental uncertainties within the rainfall products. This analysis will be used to then drive a bigger picture of how GPM's latest results inform the Global Water and Energy budgets.
NASA Astrophysics Data System (ADS)
Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George
2011-03-01
Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained in an automated, robust, and accurate manner from difficult data.
2009-09-01
and could be used to compensate for high-frequency distortions to the LOS caused by platform jitter and the effects of the optical turbulence. In... engineer an unknown detector based on few experimental interactions. For watermarking algorithms in particular, we seek to identify specific distortions... of a watermarked image that clearly identify or rule out one particular class of embedding. These experimental distortions surgical test for rapid
A ’Multiple Pivoting’ Algorithm for Goal-Interval Programming Formulations.
1980-03-01
Research Report CCS 355: A "MULTIPLE PIVOTING" ALGORITHM FOR GOAL-INTERVAL PROGRAMMING FORMULATIONS by R. Armstrong* A. Charnes* W. Cook... J. Godfrey*** March 1980. *The University of Texas at Austin **York University, Downsview, Ontario, Canada ***Washington, DC. This research was partly... areas. However, the main direction of goal programming research has been in formulating models instead of seeking procedures that would provide
Algorithms for Data Intensive Applications on Intelligent and Smart Memories
2003-03-01
editors). Parallel Algorithms and Architectures. North Holland, 1986. [8] P. Diniz. USC ISI, Personal Communication, March 2001. [9] M. Frigo, C. E ... hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among... processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of
Search strategy in a complex and dynamic environment (the Indian Ocean case)
NASA Astrophysics Data System (ADS)
Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team
2014-11-01
The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 has exposed the disconcerting lack of efficient methods for identifying where to look and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean. Searches, of the lawnmower type, have been unsuccessful so far. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without a precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permits the design of a search strategy. The Dynamic Spectral Multiscale Coverage search algorithm is modified to search a spatial distribution of targets that is evolving with time following the dynamic of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to this DSMC algorithm is a metric for the ergodicity.
NASA Astrophysics Data System (ADS)
Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.
2015-10-01
To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in cloud fields and over brighter surface targets. Over ocean, use of the ML algorithm actually increases the offset between VIIRS and MODIS-based AOD (to ~ 0.025), while reducing the differences between AE. We characterize algorithm retrievability through statistics of retrieval fraction. In spite of differences between retrieved AOD magnitudes, the ML algorithm will lead to similar decisions about "whether to retrieve" on each sensor. Finally, we discuss how issues of calibration, as well as instrument spatial resolution may be contributing to the statistics and the ability to create a consistent MODIS → VIIRS aerosol CDR.
Whyte, Joanna L; Engel-Nitz, Nicole M; Teitelbaum, April; Gomez Rey, Gabriel; Kallich, Joel D
2015-07-01
Administrative health care claims data are used for epidemiologic, health services, and outcomes cancer research and thus play a significant role in policy. Cancer stage, which is often a major driver of cost and clinical outcomes, is not typically included in claims data. The objective was to evaluate algorithms used in a dataset of cancer patients to identify patients with metastatic breast (BC), lung (LC), or colorectal (CRC) cancer using claims data. Clinical data on BC, LC, or CRC patients (between January 1, 2007 and March 31, 2010) were linked to a health care claims database. Inclusion required health plan enrollment ≥3 months before initial cancer diagnosis date. Algorithms were used in the claims database to identify patients' disease status, which was compared with physician-reported metastases. Generic and tumor-specific algorithms were evaluated using ICD-9 codes, varying diagnosis time frames, and including/excluding other tumors. Positive and negative predictive values, sensitivity, and specificity were assessed. The linked databases included 14,480 patients, of whom 32%, 17%, and 14.2% had metastatic BC, LC, and CRC, respectively, at diagnosis and met inclusion criteria. Non-tumor-specific algorithms had lower specificity than tumor-specific algorithms. Tumor-specific algorithms' sensitivity and specificity were 53% and 99% for BC, 55% and 85% for LC, and 59% and 98% for CRC, respectively. Algorithms to distinguish metastatic BC, LC, and CRC from locally advanced disease should use tumor-specific primary cancer codes with 2 claims for the specific primary cancer >30-42 days apart to reduce misclassification. These performed best overall in specificity, positive predictive values, and overall accuracy to identify metastatic cancer in a health care claims database.
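The validation statistics quoted above (sensitivity, specificity, positive predictive value) follow from a standard confusion-matrix calculation against the physician-reported gold standard; a generic sketch with hypothetical counts is shown below for reference.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard claims-algorithm validation metrics against a chart-review gold standard."""
    sensitivity = tp / (tp + fn)   # true metastatic cases flagged by the algorithm
    specificity = tn / (tn + fp)   # non-metastatic cases correctly left unflagged
    ppv = tp / (tp + fp)           # flagged cases that are truly metastatic
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# hypothetical counts, for illustration only
print(diagnostic_metrics(tp=530, fp=10, tn=990, fn=470))
```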
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
Vortex methods for separated flows
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.
1988-01-01
The numerical solution of the Euler or Navier-Stokes equations by Lagrangian vortex methods is discussed. The mathematical background is presented and includes the relationship with traditional point-vortex studies, convergence to smooth solutions of the Euler equations, and the essential differences between two and three-dimensional cases. The difficulties in extending the method to viscous or compressible flows are explained. Two-dimensional flows around bluff bodies are emphasized. Robustness of the method and the assessment of accuracy, vortex-core profiles, time-marching schemes, numerical dissipation, and efficient programming are treated. Operation counts for unbounded and periodic flows are given, and two algorithms designed to speed up the calculations are described.
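As a concrete illustration of the Lagrangian point-vortex machinery discussed above (not the author's code), a minimal O(N²) velocity evaluation for regularized 2D vortex blobs, followed by a simple time-marching step, might look like the sketch below; the regularization parameter and function names are assumptions.

```python
import numpy as np

def induced_velocity(x, y, gamma, delta=1e-2):
    """2D regularized Biot-Savart sum: velocity induced at each vortex by all others.

    x, y  : (N,) vortex positions
    gamma : (N,) circulations
    delta : core radius of the blob regularization (avoids the point-vortex singularity)
    """
    dx = x[:, None] - x[None, :]           # x_i - x_j
    dy = y[:, None] - y[None, :]
    r2 = dx**2 + dy**2 + delta**2           # regularized squared distance
    u = -np.sum(gamma[None, :] * dy / (2.0 * np.pi * r2), axis=1)
    v = np.sum(gamma[None, :] * dx / (2.0 * np.pi * r2), axis=1)
    return u, v

def advance(x, y, gamma, dt):
    """One forward-Euler time-marching step of the vortex positions."""
    u, v = induced_velocity(x, y, gamma)
    return x + dt * u, y + dt * v
```

The direct sum above costs O(N²) per step, which is exactly the operation-count bottleneck that the fast algorithms mentioned in the abstract are designed to reduce.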
NASA Technical Reports Server (NTRS)
2013-01-01
Topics covered include: Remote Data Access with IDL; Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters; Vectorized Rebinning Algorithm for Fast Data Down-Sampling; Display Provides Pilots with Real-Time Sonic-Boom Information; Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery; Monitoring and Acquisition Real-time System (MARS); Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End; Micro-Textured Black Silicon Wick for Silicon Heat Pipe Array; Robust Multivariable Optimization and Performance Simulation for ASIC Design; Castable Amorphous Metal Mirrors and Mirror Assemblies; Sandwich Core Heat-Pipe Radiator for Power and Propulsion Systems; Apparatus for Pumping a Fluid; Cobra Fiber-Optic Positioner Upgrade; Improved Wide Operating Temperature Range of Li-Ion Cells; Non-Toxic, Non-Flammable, -80 C Phase Change Materials; Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization; Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models; Hand-Based Biometric Analysis; The Next Generation of Cold Immersion Dry Suit Design Evolution for Hypothermia Prevention; Integrated Lunar Information Architecture for Decision Support Version 3.0 (ILIADS 3.0); Relay Forward-Link File Management Services (MaROS Phase 2); Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent; XTCE GOVSAT Tool Suite 1.0; Determining Temperature Differential to Prevent Hardware Cross-Contamination in a Vacuum Chamber; SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws; Remote Data Exploration with the Interactive Data Language (IDL); Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals; Partitioned-Interval Quantum Optical Communications Receiver; and Practical UAV Optical Sensor Bench with Minimal Adjustability.
Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis.
Harrison, Andrew M; Thongprayoon, Charat; Kashyap, Rahul; Chute, Christopher G; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2015-02-01
To develop and test an automated surveillance algorithm (sepsis "sniffer") for the detection of severe sepsis and monitoring failure to recognize and treat severe sepsis in a timely manner. We conducted an observational diagnostic performance study using independent derivation and validation cohorts from an electronic medical record database of the medical intensive care unit (ICU) of a tertiary referral center. All patients aged 18 years and older who were admitted to the medical ICU from January 1 through March 31, 2013 (N=587), were included. The criterion standard for severe sepsis/septic shock was manual review by 2 trained reviewers with a third superreviewer for cases of interobserver disagreement. Critical appraisal of false-positive and false-negative alerts, along with recursive data partitioning, was performed for algorithm optimization. An algorithm based on criteria for suspicion of infection, systemic inflammatory response syndrome, organ hypoperfusion and dysfunction, and shock had a sensitivity of 80% and a specificity of 96% when applied to the validation cohort. In order, low systolic blood pressure, systemic inflammatory response syndrome positivity, and suspicion of infection were determined through recursive data partitioning to be of greatest predictive value. Lastly, 117 alert-positive patients (68% of the 171 patients with severe sepsis) had a delay in recognition and treatment, defined as no lactate and central venous pressure measurement within 2 hours of the alert. The optimized sniffer accurately identified patients with severe sepsis that bedside clinicians failed to recognize and treat in a timely manner. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
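The abstract describes the alert logic only at a high level; the sketch below illustrates a rule-of-thumb screening combination (SIRS count, suspected infection, hypoperfusion) of the kind described. The thresholds and function names are assumptions for illustration, not the validated criteria of the study.

```python
def sirs_count(temp_c, hr, rr, wbc_k):
    """Count of systemic inflammatory response syndrome criteria met (textbook thresholds)."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,   # temperature
        hr > 90,                          # heart rate
        rr > 20,                          # respiratory rate
        wbc_k > 12.0 or wbc_k < 4.0,      # white cell count, 10^3/uL
    ])

def severe_sepsis_alert(temp_c, hr, rr, wbc_k, suspected_infection, sys_bp, lactate):
    """Illustrative sniffer rule: SIRS >= 2 plus suspected infection plus
    evidence of hypoperfusion (hypotension or elevated lactate); assumed thresholds."""
    sirs_positive = sirs_count(temp_c, hr, rr, wbc_k) >= 2
    hypoperfusion = sys_bp < 90 or lactate > 2.0
    return sirs_positive and suspected_infection and hypoperfusion
```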
AATSR land surface temperature product algorithm verification over a WATERMED site
NASA Astrophysics Data System (ADS)
Noyes, E. J.; Sòria, G.; Sobrino, J. A.; Remedios, J. J.; Llewellyn-Jones, D. T.; Corlett, G. K.
A new operational Land Surface Temperature (LST) product generated from data acquired by the Advanced Along-Track Scanning Radiometer (AATSR) provides the opportunity to measure LST on a global scale with a spatial resolution of 1 km². The target accuracy of the product, which utilises nadir data from the AATSR thermal channels at 11 and 12 μm, is 2.5 K for daytime retrievals and 1.0 K at night. We present the results of an experiment where the performance of the algorithm has been assessed for one daytime and one night-time overpass occurring over the WATERMED field site near Marrakech, Morocco, on 05 March 2003. Top of atmosphere (TOA) brightness temperatures (BTs) are simulated for 12 pixels from each overpass using a radiative transfer model, with the LST product and independent emissivity values and atmospheric data as inputs. We have estimated the error in the LST product over this biome for this set of conditions by applying the operational AATSR LST retrieval algorithm to the modelled BTs and comparing the results with the original AATSR LSTs input into the model. An average bias of -1.00 K (standard deviation 0.07 K) for the daytime data, and -1.74 K (standard deviation 0.02 K) for the night-time data is obtained, which indicates that the algorithm is yielding an LST that is too cold under these conditions. While these results are within specification for daytime retrievals, this suggests that the target accuracy of 1.0 K at night is not being met within this biome.
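For orientation, nadir-only LST retrievals of this kind are split-window regressions of the 11 and 12 μm brightness temperatures. The generic form sketched below is illustrative only; the coefficients are placeholders and do not reproduce the operational AATSR coefficient set, which varies with biome, fractional vegetation cover and day/night.

```python
def split_window_lst(t11, t12, a0=0.0, a1=1.0, a2=2.0):
    """Generic split-window land surface temperature estimate (kelvin).

    t11, t12   : brightness temperatures at 11 and 12 micron
    a0, a1, a2 : regression coefficients (placeholders, not the AATSR values)
    """
    return a0 + a1 * t11 + a2 * (t11 - t12)
```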
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1995-01-01
In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe-scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.
1992-08-26
... the following three categories, depending on where the nonlinear transformation is applied to the data: (i) the Bussgang algorithms, where the ... Communication systems usually require an initial training period, during which a known data sequence (i.e., a training sequence) is transmitted [43], [45]. ...
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
NASA Technical Reports Server (NTRS)
Kleb, W. L.
1994-01-01
Steady flow over the leading portion of a multicomponent airfoil section is studied using computational fluid dynamics (CFD) employing an unstructured grid. To simplify the problem, only the inviscid terms are retained from the Reynolds-averaged Navier-Stokes equations - leaving the Euler equations. The algorithm is derived using the finite-volume approach, incorporating explicit time-marching of the unsteady Euler equations to a time-asymptotic, steady-state solution. The inviscid fluxes are obtained through either of two approximate Riemann solvers: Roe's flux difference splitting or van Leer's flux vector splitting. Results are presented which contrast the solutions given by the two flux functions as a function of Mach number and grid resolution. Additional information is presented concerning code verification techniques, flow recirculation regions, convergence histories, and computational resources.
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method which shares the same spirit as the domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent sub-domains is performed, iterative solvers are favored for resolving the interface problem and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.
Princic, Nicole; Gregory, Chris; Willson, Tina; Mahue, Maya; Felici, Diana; Werther, Winifred; Lenhart, Gregory; Foley, Kathleen A
2016-01-01
The objective was to expand on prior work by developing and validating a new algorithm to identify multiple myeloma (MM) patients in administrative claims. Two files were constructed to select MM cases from MarketScan Oncology Electronic Medical Records (EMR) and controls from the MarketScan Primary Care EMR during January 1, 2000-March 31, 2014. Patients were linked to MarketScan claims databases, and files were merged. Eligible cases were age ≥18, had a diagnosis and visit for MM in the Oncology EMR, and were continuously enrolled in claims for ≥90 days preceding and ≥30 days after diagnosis. Controls were age ≥18, had ≥12 months of overlap in claims enrollment (observation period) in the Primary Care EMR and ≥1 claim with an ICD-9-CM diagnosis code of MM (203.0×) during that time. Controls were excluded if they had chemotherapy; stem cell transplant; or text documentation of MM in the EMR during the observation period. A split sample was used to develop and validate algorithms. A maximum of 180 days prior to and following each MM diagnosis was used to identify events in the diagnostic process. Of 20 algorithms explored, the baseline algorithm of 2 MM diagnoses and the 3 best performing were validated. Values for sensitivity, specificity, and positive predictive value (PPV) were calculated. Three claims-based algorithms were validated with ~10% improvement in PPV (87-94%) over prior work (81%) and the baseline algorithm (76%) and can be considered for future research. Consistent with prior work, it was found that MM diagnoses before and after tests were needed.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
A comparison of upwind schemes for computation of three-dimensional hypersonic real-gas flows
NASA Technical Reports Server (NTRS)
Gerbsch, R. A.; Agarwal, R. K.
1992-01-01
The method of Suresh and Liou (1992) is extended, and the resulting explicit noniterative upwind finite-volume algorithm is applied to the integration of 3D parabolized Navier-Stokes equations to model 3D hypersonic real-gas flowfields. The solver is second-order accurate in the marching direction and employs flux-limiters to make the algorithm second-order accurate, with total variation diminishing in the cross-flow direction. The algorithm is used to compute hypersonic flow over a yawed cone and over the Ames All-Body Hypersonic Vehicle. The solutions obtained agree well with other computational results and with experimental data.
Applications and Benefits for Big Data Sets Using Tree Distances and The T-SNE Algorithm
2016-03-01
Master's thesis by Suyoung Lee, March 2016; thesis advisor: Samuel E. Buttrey. In this work we use tree distance, computed using Buttrey's treeClust package in R as discussed by Buttrey and Whitaker in 2015, to process mixed data ...
High-speed reacting flow simulation using USA-series codes
NASA Astrophysics Data System (ADS)
Chakravarthy, S. R.; Palaniswamy, S.
In this paper, the finite-rate chemistry (FRC) formulation for the USA-series of codes and three sets of validations are presented. USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms including explicit and implicit formulations, factorization and relaxation approaches, time marching and space marching methodologies, etc., in order to be able to solve a very wide class of CFD problems using a single framework. Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing discretization for the inviscid terms. Perfect and real gas options are available including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) Comparison between the solution obtained by the USA code and Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to have reasonable agreement with experimental observations.
A fast, parallel algorithm to solve the basic fluvial erosion/transport equations
NASA Astrophysics Data System (ADS)
Braun, J.
2012-04-01
Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation, which is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and is fully parallelizable (the computation cost decreases in inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 × 10,000 nodes) while keeping the computational cost reasonable (order 1 sec per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
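The abstract does not reproduce the scheme itself; a minimal sketch in the spirit of this O(n), fully implicit stream-power update is given below. It assumes the receiver of every node and the downstream-to-upstream ("stack") ordering have already been computed, and uses a linear slope exponent so that each nodal update depends only on its receiver's already-updated elevation; the variable names and parameter values are illustrative.

```python
import numpy as np

def implicit_spl_step(h, receiver, stack, area, dist, k=1e-5, m=0.5, uplift=1e-3, dt=1e3):
    """One implicit time step of dh/dt = U - K A^m dh/dl along a drainage network.

    h        : (n,) node elevations
    receiver : (n,) index of the downstream receiver of each node (itself if baselevel)
    stack    : (n,) node indices ordered from baselevel upward (receivers before donors)
    area     : (n,) drainage areas
    dist     : (n,) distance from each node to its receiver
    Cost is O(n): each node is visited exactly once, after its receiver has been updated.
    """
    h_new = h + dt * uplift                 # apply uplift everywhere first
    for i in stack:
        r = receiver[i]
        if r == i:                          # baselevel node: elevation held fixed
            h_new[i] = h[i]
            continue
        f = k * area[i]**m * dt / dist[i]
        # implicit update: h_i^{t+dt} - h_i^t - U dt = -f (h_i^{t+dt} - h_r^{t+dt})
        h_new[i] = (h_new[i] + f * h_new[r]) / (1.0 + f)
    return h_new
```

Because every update is a single algebraic solve in downstream-to-upstream order, the step is unconditionally stable and trivially splits across independent sub-catchments, which is what makes the parallelization described above possible.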
Computer-automated evolution of an X-band antenna for NASA's Space Technology 5 mission.
Hornby, Gregory S; Lohn, Jason D; Linden, Derek S
2011-01-01
Whereas the current practice of designing antennas by hand is severely limited because it is both time and labor intensive and requires a significant amount of domain knowledge, evolutionary algorithms can be used to search the design space and automatically find novel antenna designs that are more effective than would otherwise be developed. Here we present our work in using evolutionary algorithms to automatically design an X-band antenna for NASA's Space Technology 5 (ST5) spacecraft. Two evolutionary algorithms were used: the first uses a vector of real-valued parameters and the second uses a tree-structured generative representation for constructing the antenna. The highest-performance antennas from both algorithms were fabricated and tested and both outperformed a hand-designed antenna produced by the antenna contractor for the mission. Subsequent changes to the spacecraft orbit resulted in a change in requirements for the spacecraft antenna. By adjusting our fitness function we were able to rapidly evolve a new set of antennas for this mission in less than a month. One of these new antenna designs was built, tested, and approved for deployment on the three ST5 spacecraft, which were successfully launched into space on March 22, 2006. This evolved antenna design is the first computer-evolved antenna to be deployed for any application and is the first computer-evolved hardware in space.
Huang, Ting-Shuo; Huang, Shie-Shian; Shyu, Yu-Chiau; Lee, Chun-Hui; Jwo, Shyh-Chuan; Chen, Pei-Jer; Chen, Huang-Yang
2014-01-01
Procalcitonin (PCT)-based algorithms have been used to guide antibiotic therapy in several clinical settings. However, evidence supporting PCT-based algorithms for secondary peritonitis after emergency surgery is scanty. In this study, we aimed to investigate whether a PCT-based algorithm could safely reduce antibiotic exposure in this population. From April 2012 to March 2013, patients who had secondary peritonitis diagnosed at the emergency department and underwent emergency surgery were screened for eligibility. PCT levels were obtained pre-operatively, on post-operative days 1, 3, 5, and 7, and on subsequent days if needed. Antibiotics were discontinued if PCT was <1.0 ng/mL or decreased by 80% versus day 1, with resolution of clinical signs. Primary endpoints were time to discontinuation of intravenous antibiotics for the first episode and adverse events. Historical controls were retrieved for propensity score matching. After matching, 30 patients in the PCT group and 60 in the control were included for analysis. The median duration of antibiotic exposure in the PCT group was 3.4 days (interquartile range [IQR] 2.2 days), while it was 6.1 days (IQR 3.2 days) in the control (p < 0.001). The PCT algorithm significantly improves time to antibiotic discontinuation (p < 0.001, log-rank test). The rates of adverse events were comparable between the 2 groups. A multivariate-adjusted extended Cox model demonstrated that the PCT-based algorithm was significantly associated with an 87% reduction in hazard of antibiotic exposure within 7 days (hazard ratio [HR] 0.13, 95% CI 0.07-0.21, p < 0.001), and a 68% reduction in hazard after 7 days (adjusted HR 0.32, 95% CI 0.11-0.99, p = 0.047). Advanced age, coexisting pulmonary diseases, and higher severity of illness were significantly associated with longer durations of antibiotic use. The PCT-based algorithm safely reduced antibiotic exposure in this study. Further randomized trials are needed to confirm our findings and incorporate cost-effectiveness analysis. Australian New Zealand Clinical Trials Registry ACTRN12612000601831.
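The antibiotic-stopping rule applied in this study is simple enough to restate directly; the sketch below expresses it as code, with the clinical-resolution flag treated as an external judgment. Function and parameter names are illustrative.

```python
def stop_antibiotics(pct_today, pct_day1, clinical_signs_resolved):
    """PCT-guided discontinuation rule as described in the study:
    stop if PCT < 1.0 ng/mL or PCT has fallen by at least 80% from the day-1 value,
    provided clinical signs of infection have resolved."""
    pct_low = pct_today < 1.0
    pct_dropped = pct_today <= 0.2 * pct_day1
    return clinical_signs_resolved and (pct_low or pct_dropped)

# e.g. day-1 PCT 25 ng/mL, today 4 ng/mL (an 84% drop), signs resolved -> stop
print(stop_antibiotics(4.0, 25.0, True))
```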
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition case. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
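For readers unfamiliar with the serial building block being parallelized, a minimal first-order Fast Marching sketch on a uniform 2D grid is given below. This is a simplified illustration under standard assumptions (unit grid spacing by default, four-connected neighbors), not the parallel algorithm of the paper.

```python
import heapq
import numpy as np

def fast_marching(speed, sources, h=1.0):
    """First-order FMM arrival times T solving |grad T| = 1 / speed on a 2D grid.

    speed   : (ny, nx) array of positive front speeds
    sources : list of (i, j) seed points where T = 0
    """
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = []
    for i, j in sources:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def upwind(i, j, di, dj):
        # use a neighbor value only if that neighbor has already been accepted
        ni, nj = i + di, j + dj
        if 0 <= ni < ny and 0 <= nj < nx and accepted[ni, nj]:
            return T[ni, nj]
        return np.inf

    def update(i, j):
        a = min(upwind(i, j, -1, 0), upwind(i, j, 1, 0))   # upwind value in one direction
        b = min(upwind(i, j, 0, -1), upwind(i, j, 0, 1))   # upwind value in the other
        f = h / speed[i, j]
        if abs(a - b) >= f:                                 # one-sided update
            t = min(a, b) + f
        else:                                               # solve (t-a)^2 + (t-b)^2 = f^2
            t = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
        if t < T[i, j]:
            T[i, j] = t
            heapq.heappush(heap, (t, i, j))

    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                update(ni, nj)
    return T
```

The strict ordering imposed by the global heap is what the domain decomposition must respect across sub-domain boundaries, which is why load imbalance forces the rollback operations mentioned above.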
Deschamps, Thomas; Malladi, Ravi; Ravve, Igor
2004-01-01
In many instances, numerical integration of space-scale PDEs is the most time-consuming operation of image processing. This is because the scale step is limited by conditional stability of explicit schemes. In this work, we introduce the unconditionally stable semi-implicit linearized difference scheme that is fashioned after additive operator splitting (AOS) [1], [2] for Beltrami and the subjective surface computation. The Beltrami flow [3], [4], [5] is one of the most effective denoising algorithms in image processing. For gray-level images, we show that the flow equation can be arranged in an advection-diffusion form, revealing the edge-enhancing properties of this flow. This also suggests the application of the AOS method for faster convergence. The subjective surface [6] deals with constructing a perceptually meaningful interpretation from partial image data by mimicking the human visual system. However, initialization of the surface is critical for the final result and its main drawbacks are very slow convergence and the huge number of iterations required. In this paper, we first show that the governing equation for the subjective surface flow can be rearranged in an AOS implementation, providing a near real-time solution to the shape completion problem in 2D and 3D. Then, we devise a new initialization paradigm where we first "condition" the viewpoint surface using the Fast-Marching algorithm. We compare the original method with our new algorithm on several examples of real 3D medical images, thus revealing the improvement achieved.
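A compact way to see why AOS is unconditionally stable is that each scale step only requires one tridiagonal solve per grid direction. The sketch below shows a minimal 2D AOS step for a generic nonlinear-diffusion flow; it is an illustration under simple assumptions (unit grid spacing, Neumann boundaries, a precomputed diffusivity field), not the Beltrami or subjective-surface operators of the paper.

```python
import numpy as np
from scipy.linalg import solve_banded

def aos_step(u, g, tau):
    """One AOS step  u_{k+1} = 1/2 * sum_l (I - 2*tau*A_l(g))^{-1} u_k
    for a diffusion flow with diffusivity g (same shape as the image u)."""

    def implicit_1d(u1, g1):
        # semi-implicit 1D diffusion along axis 1: one tridiagonal solve per row
        u1 = np.ascontiguousarray(u1, dtype=float)
        g1 = np.ascontiguousarray(g1, dtype=float)
        n = u1.shape[1]
        out = np.empty_like(u1)
        for r in range(u1.shape[0]):
            w = 0.5 * (g1[r, :-1] + g1[r, 1:])        # diffusivities on cell interfaces
            upper = np.zeros(n); lower = np.zeros(n); diag = np.ones(n)
            upper[1:] = -2.0 * tau * w                 # super-diagonal of I - 2*tau*A
            lower[:-1] = -2.0 * tau * w                # sub-diagonal
            diag[:-1] += 2.0 * tau * w
            diag[1:] += 2.0 * tau * w
            ab = np.vstack([upper, diag, lower])       # banded storage (1 super, 1 sub)
            out[r] = solve_banded((1, 1), ab, u1[r])
        return out

    ux = implicit_1d(u, g)                             # solve along rows
    uy = implicit_1d(u.T, g.T).T                       # solve along columns
    return 0.5 * (ux + uy)
```

Because each one-dimensional operator is inverted exactly, the scale step tau can be chosen far larger than the explicit stability limit, which is the source of the speedup claimed above.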
Satellite observation of particulate organic carbon dynamics in ...
Particulate organic carbon (POC) plays an important role in coastal carbon cycling and the formation of hypoxia. Yet, coastal POC dynamics are often poorly understood due to a lack of long-term POC observations and the complexity of coastal hydrodynamic and biogeochemical processes that influence POC sources and sinks. Using field observations and satellite ocean color products, we developed a new multiple regression algorithm to estimate POC on the Louisiana Continental Shelf (LCS) from satellite observations. The algorithm had reliable performance with mean relative error (MRE) of ~40% and root mean square error (RMSE) of ~50% for MODIS and SeaWiFS images for POC ranging between ~80 and ~1200 mg m-3, and showed similar performance for a large estuary (Mobile Bay). Substantial spatiotemporal variability in the satellite-derived POC was observed on the LCS, with high POC found on the inner shelf (<10 m depth) and lower POC on the middle (10–50 m depth) and outer shelf (50–200 m depth), and with high POC found in winter (January–March) and lower POC in summer to fall (August–October). Correlation analysis between long-term POC time series and several potential influencing factors indicated that river discharge played a dominant role in POC dynamics on the LCS, while wind and surface currents also affected POC spatial patterns on short time scales. This study adds another example where satellite data with carefully developed algorithms can greatly increase
A multiresolution inversion for imaging the ionosphere
NASA Astrophysics Data System (ADS)
Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo
2017-06-01
Ionospheric tomography has been widely employed in imaging the large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms to date have not been very effective in imaging of medium- and small-scale ionospheric structures due to limitations of uneven ground-based data distributions and the algorithm itself. Further, the effect of the density and quantity of Global Navigation Satellite Systems data that could help improve the tomographic results for a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm period. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2005-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. HSB failed in February 2005, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC in April 2005 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
Monitoring Antarctic ice sheet surface melting with TIMESAT algorithm
NASA Astrophysics Data System (ADS)
Ye, Y.; Cheng, X.; Li, X.; Liang, L.
2011-12-01
The Antarctic ice sheet contributes significantly to the global heat budget by controlling the exchange of heat, moisture, and momentum at the surface-atmosphere interface, which directly influences the global atmospheric circulation and climate change. Ice sheet melting will cause snow humidity to increase, which will accelerate the disintegration and movement of the ice sheet. As a result, detecting Antarctic ice sheet melting is essential for global climate change research. In the past decades, various methods have been proposed for extracting snowmelt information from multi-channel satellite passive microwave data. Some methods are based on brightness temperature values or a composite index of them, and others are based on edge detection. TIMESAT (Time-series of Satellite sensor data) is an algorithm for extracting seasonality information from time series of satellite sensor data. With TIMESAT, the long time series of brightness temperature (SSM/I 19H) is modelled by a double logistic function. Snow is classified into wet and dry snow with a generalized Gaussian model. The results were compared with those from a wavelet algorithm. On this basis, Antarctic automatic weather station data were used for ground verification. It shows that this algorithm is effective in ice sheet melting detection. The spatial distribution of melting areas (Fig. 1) shows that the majority of melting areas are located on the edge of the Antarctic ice shelf region. It is affected by land cover type, surface elevation and geographic location (latitude). In addition, the Antarctic ice sheet melting varies with seasons. It is particularly acute in summer, peaking in December and January, and staying low in March. In summary, from 1988 to 2008, the Ross Ice Shelf and Ronne Ice Shelf have the greatest interannual variability in amount of melting, which largely determines the overall interannual variability in Antarctica. Other regions, especially the Larsen Ice Shelf and Wilkins Ice Shelf, which are in the Antarctic Peninsula region, have relatively stable and consistent melt occurrence from year to year.
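The double logistic model at the heart of this fit is a standard function of time; a minimal illustrative fit on synthetic data is sketched below. The parameter names, starting values and data are assumptions for illustration, not the TIMESAT implementation or the SSM/I record.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, t_rise, r_rise, t_fall, r_fall):
    """Double logistic seasonal curve: a baseline plus a rising and a falling sigmoid."""
    rise = 1.0 / (1.0 + np.exp(-(t - t_rise) / r_rise))
    fall = 1.0 / (1.0 + np.exp(-(t - t_fall) / r_fall))
    return base + amp * (rise - fall)

# illustrative fit to one synthetic year of noisy brightness-temperature samples
t = np.arange(0.0, 365.0, 5.0)
y = double_logistic(t, 180.0, 60.0, 150.0, 10.0, 260.0, 12.0)
y += np.random.default_rng(0).normal(0.0, 2.0, t.size)
p0 = [170.0, 50.0, 140.0, 8.0, 270.0, 10.0]          # rough initial guess
params, _ = curve_fit(double_logistic, t, y, p0=p0)
print(params)                                         # fitted seasonality parameters
```

Fitted transition dates and amplitudes of this kind are then thresholded (here with the generalized Gaussian classifier mentioned above) to label wet versus dry snow.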
Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data
NASA Astrophysics Data System (ADS)
Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.
2011-12-01
We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP case, discretization is based on rectangular cells, where each cell has an unknown resistivity in the case of DC modeling, resistivity and chargeability in time-domain IP modeling, and complex resistivity in spectral IP modeling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi-core CPUs, it is not as fast as machine code. In the case of large datasets, one should consider transferring parts of the code to C or Fortran through mex files. This code is available through EPA's website at the following link: http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
Mapping Snow Grain Size over Greenland from MODIS
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Tedesco, Marco; Wang, Yujie; Kokhanovsky, Alexander
2008-01-01
This paper presents a new automatic algorithm to derive optical snow grain size (SGS) at 1 km resolution using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements. Unlike previous approaches, snow grains are not assumed to be spherical but a fractal approach is used to account for their irregular shape. The retrieval is conceptually based on an analytical asymptotic radiative transfer model which predicts spectral bidirectional snow reflectance as a function of the grain size and ice absorption. The analytical form of solution leads to an explicit and fast retrieval algorithm. The time series analysis of derived SGS shows a good sensitivity to snow metamorphism, including melting and snow precipitation events. Preprocessing is performed by a Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, which includes gridding MODIS data to 1 km resolution, water vapor retrieval, cloud masking and an atmospheric correction. The MAIAC cloud mask (CM) is a new algorithm based on a time series of gridded MODIS measurements and an image-based rather than pixel-based processing. Extensive processing of MODIS TERRA data over Greenland shows a robust performance of the CM algorithm in discrimination of clouds over bright snow and ice. As part of the validation analysis, SGS derived from MODIS over selected sites in 2004 was compared to the microwave brightness temperature measurements of the SSM/I radiometer, which is sensitive to the amount of liquid water in the snowpack. The comparison showed a good qualitative agreement, with both datasets detecting two main periods of snowmelt. Additionally, MODIS SGS was compared with predictions of the snow model CROCUS driven by measurements of the automatic weather stations of the Greenland Climate Network. We found that CROCUS grain size is on average a factor of two larger than MODIS-derived SGS. Overall, the agreement between CROCUS and MODIS results was satisfactory, in particular before and during the first melting period in mid-June. Following detailed time series analysis of SGS for four permanent sites, the paper presents SGS maps over the Greenland ice sheet for the March-September period of 2004.
Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset
NASA Astrophysics Data System (ADS)
Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.
2011-12-01
GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing the changing Earth's atmosphere since the launch of the ENVISAT-ESA platform in March 2002. The measurements recorded by these instruments are relevant for the Atmospheric-Chemistry community both in terms of time extent and variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain good reliability in the data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this frame, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to the users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process and the possibility to reach a wide users' community. Moreover, the ESA algorithm upgrade procedures and the re-processing performances have much improved during the last two years, thanks to the recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades in the ESA processor (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and on-going re-processing campaigns will also be mentioned; these involve the adoption of advanced set-ups, such as the MIPAS V6 re-processing on a cloud-computing system. Finally, the quality control process that guarantees a standard of quality to the users will be illustrated. In fact, the operational ESA algorithm is carefully tested before switching into operations, and the near-real-time and off-line production is thoroughly verified via the implementation of automatic quality control procedures. The scientific validity of the ESA dataset will be additionally illustrated with examples of applications that can be supported, such as ozone-hole monitoring, volcanic ash detection and analysis of atmospheric composition changes during the past years.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
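Adaptive time-stepping of the kind described is typically driven by a local error estimate from the predictor-corrector pair; a generic step-size controller along those lines (a sketch under standard assumptions, not the authors' implementation) is:

```python
def next_time_step(dt, err, tol, order=2, safety=0.9, grow=2.0, shrink=0.2):
    """Error-based step-size controller for a predictor-corrector time marcher.

    dt    : current time step
    err   : estimated local error (e.g. a norm of corrector minus predictor)
    tol   : required time accuracy
    order : order of the time integrator
    Returns (accepted, dt_new): whether the step meets the tolerance, and the
    suggested next step, limited by growth and shrink factors.
    """
    accepted = err <= tol
    factor = safety * (tol / max(err, 1e-30)) ** (1.0 / (order + 1))
    dt_new = dt * min(grow, max(shrink, factor))
    return accepted, dt_new
```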
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Moss, J. N.; Simmonds, A. L.
1982-01-01
Two flow-field codes employing the time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made by using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time as compared to the time-marching method and would appear to provide accurate results for the problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution which is inherent in time-marching methods. With the time-marching method, however, the solutions are obtainable for the realistic entry probe shapes with massive or uniform surface blowing rates; whereas, with the space-marching technique, it is difficult to obtain converged solutions for such flow conditions. The choice of the numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
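The multi-stage schemes whose coefficients are optimized in this work have the familiar low-storage form sketched below. The stage coefficients shown are generic placeholders, not the viscous-optimized values derived in the paper, and the residual function is assumed to be supplied by the spatial discretization.

```python
import numpy as np

def multistage_step(u, residual, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """One explicit multi-stage (low-storage Runge-Kutta) time-marching step:
        u^(k) = u^n + alpha_k * dt * R(u^(k-1)),   k = 1..m,
    where R is the spatial residual operator. The alphas here are generic
    placeholders rather than the optimized coefficients of the paper."""
    u0 = u.copy()
    uk = u.copy()
    for a in alphas:
        uk = u0 + a * dt * residual(uk)
    return uk

# usage sketch: march a semi-discrete system du/dt = R(u) toward steady state
# for _ in range(nsteps):
#     u = multistage_step(u, residual, dt)
```

Local preconditioning enters by modifying the residual (and hence the effective wave speeds) before this update, which is what allows a single optimized coefficient set to work over a wide range of cell Reynolds numbers.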
On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.
2004-12-01
The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band does not overlap with the bands used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm for EO-1 Hyperion Imagery, SPIE 17, 2003.
Optimal Exploitation of the Temporal and Spatial Resolution of SEVIRI for the Nowcasting of Clouds
NASA Astrophysics Data System (ADS)
Sirch, Tobias; Bugliaro, Luca
2015-04-01
An algorithm was developed to forecast the development of water and ice clouds separately for the successive 5-120 minutes using satellite data from SEVIRI (Spinning Enhanced Visible and Infrared Imager) aboard Meteosat Second Generation (MSG). In order to derive cloud cover, optical thickness and cloud top height of high ice clouds, the "Cirrus Optical properties derived from CALIOP and SEVIRI during day and night" algorithm (COCS, Kox et al. [2014]) is applied. For the determination of the liquid water clouds the APICS ("Algorithm for the Physical Investigation of Clouds with SEVIRI", Bugliaro et al. [2011]) cloud algorithm is used, which provides cloud cover, optical thickness and effective radius. The forecast rests upon an optical flow method that determines a motion vector field from two satellite images [Zinner et al., 2008]. To determine the ideal time separation of the satellite images used to derive the cloud motion vector field for each forecast horizon, the potential of the higher temporal resolution of the Meteosat Rapid Scan Service (5 instead of 15 minutes repetition rate) has been investigated. For the period from March to June 2013, forecasts up to 4 hours ahead in time steps of 5 min were created from image pairs separated by 5 min, 10 min, 15 min and 30 min. The results show that Rapid Scan data produce a small reduction of errors for forecast horizons up to 30 minutes. For longer horizons, forecasts generated with a time interval of 15 min should be used, and for forecasts up to several hours computations with a time interval of 30 min provide the best results. For a better spatial resolution the HRV channel (High Resolution Visible, 1 km instead of 3 km maximum spatial resolution at the subsatellite point) has been integrated into the forecast. To detect clouds, the difference between the albedo measured by SEVIRI and the clear-sky albedo provided by MODIS has been used, together with the temporal development of this quantity. A pre-requisite for this work was an adjustment of the geolocation accuracy for MSG and MODIS by shifting the MODIS data and quantifying the correlation between both data sets.
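As a loose illustration of the optical-flow extrapolation idea (not the COCS/APICS-based system described above), the sketch below estimates a motion field from two successive images with scikit-image's TV-L1 optical flow and advects the newer field forward; the variable names, warping details and sign convention are assumptions of this sketch.

```python
# Rough sketch of an optical-flow nowcast: estimate a motion field from two
# successive satellite images and advect the newer field forward in time.
# Generic illustration only, not the COCS/APICS-based scheme of the paper.
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.registration import optical_flow_tvl1

def nowcast(field_t0, field_t1, steps=1):
    """Extrapolate field_t1 forward by `steps` image intervals."""
    # Flow components (row, col) following skimage's convention; the sign used
    # below may need flipping depending on the order of the input images.
    v, u = optical_flow_tvl1(field_t0, field_t1)
    rows, cols = np.meshgrid(np.arange(field_t1.shape[0]),
                             np.arange(field_t1.shape[1]), indexing="ij")
    forecast = field_t1
    for _ in range(steps):
        # Semi-Lagrangian step: sample upstream along the motion vectors.
        forecast = map_coordinates(forecast, [rows - v, cols - u],
                                   order=1, mode="nearest")
    return forecast
```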
FOREWORD: IV International Time-Scale Algorithms Symposium, BIPM, Sèvres, 18-19 March 2002
NASA Astrophysics Data System (ADS)
Leschiutta, Sigfrido
2003-06-01
Time-scale formation, along with atomic time/frequency standards and time comparison techniques, is one of the three basic ingredients of Time Metrology. Before summarizing this Symposium and the relevant outcomes, let me make a couple of very general remarks. Clocks and comparison methods have today reached a very high level of accuracy: the nanosecond level. Some applications in the real world are now challenging the capacity of the National Metrological Laboratories. It is therefore essential that the algorithms dealing with clocks and comparison techniques should be such as to make the most of existing technologies. The comfortable margin of accuracy we were used to, between Laboratories and the Field, is gone forever. While clock makers and time-comparison experts meet regularly (FCS, PTTI, EFTF, CPEM, URSI, UIT, etc.), the somewhat secluded community of experts in time-scale formation lacks a similar point of contact, with the exception of the CCTF meeting. This venue must consequently be welcomed. Let me recall some highlights from this Symposium: there were about 60 attendees from 15 nations, plus international institutions, such as the host BIPM, and a supranational one, ESA. About 30 papers, prepared in some 20 laboratories, were received: among these papers, four tutorials were offered; descriptions of local time scales including the local algorithms were presented; four papers considered the algorithms applied to the results of time-comparison methods; and six papers covered the special requirements of some specialized time-scale 'users'. The four basic ingredients of time-scale formation: models, noise, filtering and steering, received attention and were also discussed, not just during the sessions. The most demanding applications for time scales now come from Global Navigation Satellite systems; in six papers the progress of some programmes was described and the present and future needs were presented and documented. The lively discussion on future navigation systems led to the following four points: an overall accuracy in timing of one nanosecond is a must; the combined 'clock and orbit' effects on the knowledge of satellite position should be less than one metre; a combined solution for positioning and timing should be pursued; a 'new' time window (2 h to 4 h) emerged, in which the accuracy and stability parameters of the clocks forming a time scale for space application are to be optimized. That interval is linked to some criteria and methods for on-board clock corrections. A revival of interest in the time-proven Kalman filter was noted; in the course of a tutorial on past experience, a number of new approaches were discussed. Some further research is in order, but one should heed the comment: 'do not ask too much of a filter'. The Kalman approach is indeed powerful in combining sets of different data, provided that the possible problems of convergence are suitably addressed. Attention was also focused on the possibility of becoming victims of ever-present 'hidden' correlations. The TAI algorithm, ALGOS, is about 30 years old and the fundamental approach remains unchanged and unchallenged. A number of small refinements, all justified, were introduced in the 'constants' and parameters, but the general philosophy holds. In so far as the BIPM Time Section and the CCTF Working Group on Algorithms are concerned, on the basis of the outcome of this Symposium it is clear that they should follow the evolution of TAI and suggest any appropriate action to the CCTF.
This Symposium, which gathered the world experts on T/F algorithms in Paris for two days, offered a wonderful opportunity for cross-fertilization between researchers operating in different and interdependent communities that are loosely connected. Thanks are due to Felicitas Arias, Demetrios Matsakis and Patrizia Tavella and their host organizations for having provided the community with this learning experience. One last comment: please do not wait another 14 years for the next Time Scale Algorithm Symposium.
NASA Astrophysics Data System (ADS)
Zhenqing, L.; Sheng, C.; Chaoying, H.
2017-12-01
The core satellite of the Global Precipitation Measurement (GPM) mission was launched on 27 February 2014 with two core sensors: the dual-frequency precipitation radar (DPR) and the GPM microwave imager (GMI). The algorithm of Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG) blends the advantages of the currently most popular satellite-based quantitative precipitation estimation (QPE) algorithms, i.e. the TRMM Multi-satellite Precipitation Analysis (TMPA), the Climate Prediction Center morphing technique (CMORPH), and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). Therefore, IMERG is deemed to be a state-of-the-art precipitation product with a high spatio-temporal resolution of 0.1°/30 min. The real-time and post real-time IMERG products are now available online at https://stormpps.gsfc.nasa.gov/storm. Early studies assessing IMERG against gauge observations or analysis products show that the current version GPM Day-1 product IMERG demonstrates promising performance over China [1], Europe [2], and the United States [3]. However, few studies have examined IMERG's potential for hydrologic utility. In this study, the real-time and final-run post real-time IMERG products are hydrologically evaluated with a gauge analysis product as reference over the Nanliu River basin (Fig. 1) in Southern China from March 2014 to February 2017 using the Xinanjiang model. The statistical metrics Relative Bias (RB), Root-Mean-Squared Error (RMSE), Correlation Coefficient (CC), Probability Of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), and the Nash-Sutcliffe (NSCE) index will be used to compare the stream flow simulated with IMERG to the observed stream flow. This timely hydrologic evaluation is expected to offer insights into IMERG's potential hydrologic utility and thus provide useful feedback to the IMERG algorithm developers and hydrologic users.
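A minimal sketch of the evaluation statistics listed above (RB, RMSE, CC, NSE, plus POD/FAR/CSI with a hypothetical rain/no-rain threshold), assuming simulated and observed series of equal length:

```python
# Minimal sketch of the evaluation statistics named above, computed between
# simulated and observed streamflow series. The contingency scores use a
# hypothetical detection threshold.
import numpy as np

def continuous_scores(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    rb = (sim - obs).sum() / obs.sum()                      # relative bias
    rmse = np.sqrt(np.mean((sim - obs) ** 2))               # root-mean-square error
    cc = np.corrcoef(sim, obs)[0, 1]                        # correlation coefficient
    nse = 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    return {"RB": rb, "RMSE": rmse, "CC": cc, "NSE": nse}

def categorical_scores(sim, obs, threshold=0.1):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    hit = np.sum((sim >= threshold) & (obs >= threshold))
    miss = np.sum((sim < threshold) & (obs >= threshold))
    false = np.sum((sim >= threshold) & (obs < threshold))
    pod = hit / (hit + miss)          # probability of detection
    far = false / (hit + false)       # false alarm ratio
    csi = hit / (hit + miss + false)  # critical success index
    return {"POD": pod, "FAR": far, "CSI": csi}
```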
CATS Version 2 Aerosol Feature Detection and Applications for Data Assimilation
NASA Technical Reports Server (NTRS)
Nowottnick, E. P.; Yorks, J. E.; Selmer, P. A.; Palm, S. P.; Hlavka, D. L.; Pauly, R. M.; Ozog, S.; McGill, M. J.; Da Silva, A.
2017-01-01
The Cloud Aerosol Transport System (CATS) lidar has been operating onboard the International Space Station (ISS) since February 2015 and provides vertical observations of clouds and aerosols using total attenuated backscatter and depolarization measurements. From February to March 2015, CATS operated in Mode 1, providing backscatter and depolarization measurements at 532 and 1064 nm. CATS began operation in Mode 2 in March 2015, providing backscatter and depolarization measurements at 1064 nm, and has continued to operate in this mode to the present. CATS level 2 products are derived from these measurements, including feature detection, cloud aerosol discrimination, cloud and aerosol typing, and optical properties of cloud and aerosol layers. Here, we present changes to our level 2 algorithms, which were aimed at reducing several biases in our version 1 level 2 data products. These changes will be incorporated into our upcoming version 2 level 2 data release in summer 2017. Additionally, owing to the near real time (NRT) data downlinking capabilities of the ISS, CATS provides expedited NRT data products within 6 hours of observation time. This capability provides a unique opportunity for supporting field campaigns and for developing data assimilation techniques to improve simulated cloud and aerosol vertical distributions in models. We additionally present preliminary work toward assimilating CATS observations into the NASA Goddard Earth Observing System version 5 (GEOS-5) global atmospheric model and data assimilation system.
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
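A brief sketch of the mesh-construction step using scikit-image's marching cubes; the file name, iso-level and voxel spacing below are illustrative assumptions, not the study's settings.

```python
# Sketch of the mesh-construction step: extract a triangulated surface from a
# confocal image stack with the marching cubes algorithm (scikit-image).
import numpy as np
from skimage import measure

volume = np.load("confocal_stack.npy")        # hypothetical (z, y, x) intensity stack
level = volume.mean()                         # illustrative iso-surface level
verts, faces, normals, values = measure.marching_cubes(
    volume, level=level, spacing=(0.5, 0.2, 0.2))  # illustrative voxel spacing (um)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```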
NASA Technical Reports Server (NTRS)
Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald
2015-01-01
In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near global coverage every 4 hours, thus offering much greater temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz and additionally three channels in the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology that is based on all available AMSU observations during 2000-2011 and that is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other satellite, ground radar and surface reports. The algorithm was also applied to global land measurements for a single year and showed close agreement with other satellite-based hail climatologies. Such a product could serve as a prototype for use with a future geostationary-based microwave sensor such as NASA's proposed PATH mission.
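As an illustration of a simple threshold-style test of the kind mentioned above, the sketch below flags pixels whose scattering-depressed brightness temperatures fall below fixed cutoffs; the channel selection and threshold values are hypothetical placeholders, not those derived in the study.

```python
# Illustrative sketch of a brightness-temperature threshold test. The channel
# names and threshold values are hypothetical placeholders, not the thresholds
# derived in the study.
import numpy as np

def flag_possible_hail(tb89, tb183_7, tb89_thresh=180.0, tb183_thresh=220.0):
    """Return a boolean mask of pixels whose brightness temperatures (K) fall
    below the (hypothetical) scattering-depression thresholds."""
    tb89 = np.asarray(tb89, float)
    tb183_7 = np.asarray(tb183_7, float)
    return (tb89 < tb89_thresh) & (tb183_7 < tb183_thresh)
```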
Numerical study of hydrogen-air supersonic combustion by using elliptic and parabolized equations
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1986-01-01
The two-dimensional Navier-Stokes and species continuity equations are used to investigate supersonic chemically reacting flow problems which are related to scramjet-engine configurations. A global two-step finite-rate chemistry model is employed to represent the hydrogen-air combustion in the flow. An algebraic turbulence model is adopted for turbulent flow calculations. The explicit unsplit MacCormack finite-difference algorithm is used to develop a computer program suitable for a vector processing computer. The computer program developed is then used to integrate the system of the governing equations in time until convergence is attained. The chemistry source terms in the species continuity equations are evaluated implicitly to alleviate stiffness associated with fast chemical reactions. The problems solved by the elliptic code are re-investigated by using a set of two-dimensional parabolized Navier-Stokes and species equations. A linearized fully-coupled fully-implicit finite difference algorithm is used to develop a second computer code which solves the governing equations by marching in space rather than time, resulting in a considerable saving in computer resources. Results obtained by using the parabolized formulation are compared with the results obtained by using the fully-elliptic equations. The comparisons indicate fairly good agreement of the results of the two formulations.
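A compact sketch of the MacCormack predictor-corrector time-marching step for a 1D scalar conservation law (here Burgers' equation); the study applies the scheme to the full Navier-Stokes and species system with implicitly evaluated chemistry source terms, which is not reproduced here.

```python
# Sketch of the explicit, unsplit MacCormack predictor-corrector step for a
# 1D scalar conservation law u_t + f(u)_x = 0 on a periodic grid.
import numpy as np

def maccormack_step(u, dt, dx, flux):
    f = flux(u)
    # Predictor: forward-differenced flux.
    u_pred = u - dt / dx * (np.roll(f, -1) - f)
    f_pred = flux(u_pred)
    # Corrector: backward-differenced flux, averaged with the predictor.
    return 0.5 * (u + u_pred - dt / dx * (f_pred - np.roll(f_pred, 1)))

# Usage: Burgers' equation, f(u) = u^2 / 2.
nx = 400
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
u = 1.0 + 0.5 * np.sin(x)
dx = x[1] - x[0]
dt = 0.4 * dx / np.abs(u).max()
for _ in range(200):
    u = maccormack_step(u, dt, dx, lambda q: 0.5 * q * q)
```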
Marching iterative methods for the parabolized and thin layer Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Israeli, M.
1985-01-01
Downstream marching iterative schemes for the solution of the Parabolized or Thin Layer (PNS or TL) Navier-Stokes equations are described. Modifications of the primitive equation global relaxation sweep procedure result in efficient second-order marching schemes. These schemes take full account of the reduced order of the approximate equations as they behave like the SLOR for a single elliptic equation. The improved smoothing properties permit the introduction of Multi-Grid acceleration. The proposed algorithm is essentially Reynolds number independent and therefore can be applied to the solution of the subsonic Euler equations. The convergence rates are similar to those obtained by the Multi-Grid solution of a single elliptic equation; the storage is also comparable as only the pressure has to be stored on all levels. Extensions to three-dimensional and compressible subsonic flows are discussed. Numerical results are presented.
Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip
NASA Astrophysics Data System (ADS)
Fey, Dietmar; Komann, Marcus
2007-05-01
In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented toward central structures and based on MIMD or SIMD approaches will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices, because they require too many long global interconnects for the distribution of code or the access to common memory. Nature, on the other hand, has developed self-organising and emergent principles to manage complex structures built from many interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip stacking technology will allow the integration and processing of megapixel images within the same time, since our architecture is fully scalable.
An algorithm to identify functional groups in organic molecules.
Ertl, Peter
2017-06-07
The concept of functional groups forms a basis of organic chemistry, medicinal chemistry, toxicity assessment, spectroscopy and also chemical nomenclature. All current software systems to identify functional groups are based on a predefined list of substructures. We are not aware of any program that can identify all functional groups in a molecule automatically. The algorithm presented in this article is an attempt to solve this scientific challenge. An algorithm to identify functional groups in a molecule based on iterative marching through its atoms is described. The procedure is illustrated by extracting functional groups from the bioactive portion of the ChEMBL database, resulting in identification of 3080 unique functional groups. A new algorithm to identify all functional groups in organic molecules is presented. The algorithm is relatively simple and full details with examples are provided, therefore implementation in any cheminformatics toolkit should be relatively easy. The new method allows the analysis of functional groups in large chemical databases in a way that was not possible using previous approaches.
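A much-simplified sketch of the idea of marching over a molecule's atoms to mark functional-group candidates, using RDKit; this is not Ertl's published algorithm, only an illustration of the atom-by-atom traversal, and the marking rules below are assumptions of this sketch.

```python
# Much-simplified illustration of iterating ("marching") over the atoms of a
# molecule to mark functional-group candidates: heteroatoms plus carbons that
# are multiply bonded or bonded to a heteroatom. NOT Ertl's full algorithm.
from rdkit import Chem

def mark_functional_atoms(smiles):
    mol = Chem.MolFromSmiles(smiles)
    marked = set()
    for atom in mol.GetAtoms():
        if atom.GetAtomicNum() not in (1, 6):          # heteroatom
            marked.add(atom.GetIdx())
        elif atom.GetAtomicNum() == 6:                 # carbon: inspect its bonds
            for bond in atom.GetBonds():
                other = bond.GetOtherAtom(atom)
                multiple = bond.GetBondType() in (
                    Chem.BondType.DOUBLE, Chem.BondType.TRIPLE)
                if other.GetAtomicNum() not in (1, 6) or multiple:
                    marked.add(atom.GetIdx())
    return marked

print(mark_functional_atoms("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```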
Mad Tea Party Cyclic Partitions
ERIC Educational Resources Information Center
Bekes, Robert; Pedersen, Jean; Shao, Bin
2012-01-01
Martin Gardner's "The Annotated Alice," and Robin Wilson's "Lewis Carroll in Numberland" led the authors to put this article in a fantasy setting. Alice, the March Hare, the Hatter, and the Dormouse describe a straightforward, elementary algorithm for counting the number of ways to fit "n" identical objects into "k" cups arranged in a circle. The…
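One possible way to perform such a count is Burnside's lemma over the k rotations, as sketched below; whether this matches the article's convention (for instance, whether reflections are also identified) is an assumption of this sketch.

```python
# One way to count placements of n identical objects into k cups arranged in a
# circle, identifying arrangements that differ only by rotation (Burnside's
# lemma). Whether this matches the article's exact convention is an assumption.
from math import comb, gcd

def cyclic_placements(n, k):
    total = 0
    for j in range(k):                 # each rotation of the k cups
        g = gcd(j, k)                  # number of rotation cycles
        if n % (k // g) == 0:          # fixed tuples must be (k/g)-periodic
            total += comb(n * g // k + g - 1, g - 1)
    return total // k

print(cyclic_placements(3, 3))  # 4: (3,0,0), (2,1,0), (2,0,1), (1,1,1) up to rotation
```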
Algorithms for parallel and vector computations
NASA Technical Reports Server (NTRS)
Ortega, James M.
1995-01-01
This is a final report on work performed under NASA grant NAG-1-1112-FOP during the period March 1990 through February 1995. Four major topics are covered: (1) solution of nonlinear Poisson-type equations; (2) parallel reduced system conjugate gradient method; (3) orderings for conjugate gradient preconditioners; and (4) SOR as a preconditioner.
Evaluation of Object Detection Algorithms for Ship Detection in the Visible Spectrum
2013-12-01
The Kodak KAI-2093 was assumed throughout the model to be the image acquisition sensor, and all of the evaluation imagery was assumed to have been taken with this sensor.
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
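A generic simulated-annealing sketch of the kind of Monte Carlo search described above; the misfit function, step size, and cooling schedule are placeholders rather than the paper's dislocation or magma-chamber forward model.

```python
# Generic simulated-annealing sketch: random perturbations of the parameters
# are accepted with a temperature-dependent probability, allowing escapes from
# local misfit minima. The misfit function here is a toy placeholder.
import numpy as np

def simulated_annealing(misfit, x0, step=0.1, t0=1.0, cooling=0.995,
                        n_iter=5000, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    fx = misfit(x)
    best_x, best_f = x.copy(), fx
    temp = t0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.size)
        fc = misfit(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling
    return best_x, best_f

# Usage with a toy multi-minimum misfit.
best, value = simulated_annealing(lambda p: np.sum(p**2) + np.sin(5 * p).sum(),
                                  x0=np.array([2.0, -3.0]))
```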
Pressman, Alice; Jacobson, Alice; Eguilos, Roderick; Gelfand, Amy; Huynh, Cynthia; Hamilton, Luisa; Avins, Andrew; Bakshi, Nandini; Merikangas, Kathleen
2016-04-01
The growing availability of electronic health data provides an opportunity to ascertain diagnosis-specific cases via systematic methods for sample recruitment for clinical research and health services evaluation. We developed and implemented a migraine probability algorithm (MPA) to identify migraine from electronic health records (EHR) in an integrated health plan. We identified all migraine outpatient diagnoses and all migraine-specific prescriptions for a five-year period (April 2008-March 2013) from the Kaiser Permanente, Northern California (KPNC) EHR. We developed and evaluated the MPA in two independent samples, and derived prevalence estimates of medically-ascertained migraine in KPNC by age, sex, and race. The period prevalence of medically-ascertained migraine among KPNC adults during April 2008-March 2013 was 10.3% (women: 15.5%, men: 4.5%). Estimates peaked with age in women but remained flat for men. Prevalence among Asians was half that of whites. We demonstrate the feasibility of an EHR-based algorithm to identify cases of diagnosed migraine and determine that prevalence patterns by our methods yield results comparable to aggregate estimates of treated migraine based on direct interviews in population-based samples. This inexpensive, easily applied EHR-based algorithm provides a new opportunity for monitoring changes in migraine prevalence and identifying potential participants for research studies. © International Headache Society 2015.
Pressman, Alice; Jacobson, Alice; Eguilos, Roderick; Gelfand, Amy; Huynh, Cynthia; Hamilton, Luisa; Avins, Andrew; Bakshi, Nandini; Merikangas, Kathleen
2016-01-01
Introduction The growing availability of electronic health data provides an opportunity to ascertain diagnosis-specific cases via systematic methods for sample recruitment for clinical research and health services evaluation. We developed and implemented a migraine probability algorithm (MPA) to identify migraine from electronic health records (EHR) in an integrated health plan. Methods We identified all migraine outpatient diagnoses and all migraine-specific prescriptions for a five-year period (April 2008–March 2013) from the Kaiser Permanente, Northern California (KPNC) EHR. We developed and evaluated the MPA in two independent samples, and derived prevalence estimates of medically-ascertained migraine in KPNC by age, sex, and race. Results The period prevalence of medically-ascertained migraine among KPNC adults during April 2008–March 2013 was 10.3% (women: 15.5%, men: 4.5%). Estimates peaked with age in women but remained flat for men. Prevalence among Asians was half that of whites. Conclusions We demonstrate the feasibility of an EHR-based algorithm to identify cases of diagnosed migraine and determine that prevalence patterns by our methods yield results comparable to aggregate estimates of treated migraine based on direct interviews in population-based samples. This inexpensive, easily applied EHR-based algorithm provides a new opportunity for monitoring changes in migraine prevalence and identifying potential participants for research studies. PMID:26069243
Kerr Reservoir LANDSAT experiment analysis for March 1981
NASA Technical Reports Server (NTRS)
Lecroy, S. R. (Principal Investigator)
1982-01-01
LANDSAT radiance data were used in an experiment conducted on the waters of Kerr Reservoir to determine if reliable algorithms could be developed that relate water quality parameters to remotely sensed data. A mix of different types of algorithms using the LANDSAT bands was generated to provide a thorough understanding of the relationships among the data involved. Except for secchi depth, the study demonstrated that for the ranges measured, the algorithms that satisfactorily represented the data encompass a mix of linear and nonlinear forms using only one LANDSAT band. Ratioing techniques did not improve the results since the initial design of the experiment minimized the errors against which this procedure is effective. Good correlations were found for total suspended solids, iron, turbidity, and secchi depth. Marginal correlations were discovered for nitrate and tannin + lignin. Quantification maps of Kerr Reservoir are presented for many of the water quality parameters using the developed algorithms.
76 FR 14005 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... Number: 20110307-5076. Comment Date: 5 p.m. Eastern Time on Monday, March 28, 2011. Docket Numbers: ER10...: 20110307-5175. Comment Date: 5 p.m. Eastern Time on Monday, March 28, 2011. Docket Numbers: ER11-3024-000... Comment Date: 5 p.m. Eastern Time on Monday, March 28, 2011. Docket Numbers: ER11-3025-000. Applicants...
Shiff, Natalie Jane; Oen, Kiem; Rabbani, Rasheda; Lix, Lisa M
2017-09-01
We validated case ascertainment algorithms for juvenile idiopathic arthritis (JIA) in the provincial health administrative databases of Manitoba, Canada. A population-based pediatric rheumatology clinical database from April 1st 1980 to March 31st 2012 was used to test case definitions in individuals diagnosed at ≤15 years of age. The case definitions varied the number of diagnosis codes (1, 2, or 3), time frame (1, 2 or 3 years), time between diagnoses (ever, >1 day, or ≥8 weeks), and physician specialty. Positive predictive value (PPV), sensitivity, and specificity with 95% confidence intervals (CIs) are reported. A case definition of 1 hospitalization or ≥2 diagnoses in 2 years by any provider ≥8 weeks apart using diagnosis codes for rheumatoid arthritis and ankylosing spondylitis produced a sensitivity of 89.2% (95% CI 86.8, 91.6), specificity of 86.3% (95% CI 83.0, 89.6), and PPV of 90.6% (95% CI 88.3, 92.9) when seronegative enthesopathy and arthropathy (SEA) was excluded as JIA; and sensitivity of 88.2% (95% CI 85.7, 90.7), specificity of 90.4% (95% CI 87.5, 93.3), and PPV of 93.9% (95% CI 92.0, 95.8) when SEA was included as JIA. This study validates case ascertainment algorithms for JIA in Canadian administrative health data using diagnosis codes for both rheumatoid arthritis (RA) and ankylosing spondylitis, to better reflect current JIA classification than codes for RA alone. Researchers will be able to use these results to define cohorts for population-based studies.
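A small sketch of how the reported validation statistics can be computed from a 2x2 table of algorithm versus reference classifications, using normal-approximation confidence intervals; the counts shown are hypothetical.

```python
# Sketch of the validation statistics reported above, computed from a 2x2 table
# of algorithm-positive/negative versus reference-positive/negative cases, with
# normal-approximation 95% confidence intervals.
import math

def proportion_ci(successes, total, z=1.96):
    p = successes / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, (p - half, p + half)

def case_definition_performance(tp, fp, fn, tn):
    sens, sens_ci = proportion_ci(tp, tp + fn)
    spec, spec_ci = proportion_ci(tn, tn + fp)
    ppv, ppv_ci = proportion_ci(tp, tp + fp)
    return {"sensitivity": (sens, sens_ci),
            "specificity": (spec, spec_ci),
            "PPV": (ppv, ppv_ci)}

# Hypothetical counts, for illustration only.
print(case_definition_performance(tp=600, fp=62, fn=73, tn=390))
```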
CFD analyses of combustor and nozzle flowfields
NASA Astrophysics Data System (ADS)
Tsuei, Hsin-Hua; Merkle, Charles L.
1993-11-01
The objectives of the research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. A Computational Fluid Dynamic (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations, coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as part of a coordinated, ongoing experimental program at NASA LeRC. The numerical procedure employs both time-marching and time-accurate algorithms, using an LU approximate factorization in time and flux-split upwind differencing in space. The integrity of fuel film cooling along the wall, its effectiveness in the mixing with the core flow including unsteady large scale effects, the resultant impact on performance, and the assessment of the near plume flow expansion to the finite pressure altitude chamber are addressed.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective was the development of a time dependent 3-D Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The resulting computer codes are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). A computer program user's manual is presented for the ADPAC. Aerodynamic calculations were based on a four stage Runge-Kutta time marching finite volume solution technique with added numerical dissipation. A time accurate implicit residual smoothing operator was used for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted flows.
Mahajan, Prashant; Batra, Prerna; Thakur, Neha; Patel, Reena; Rai, Narendra; Trivedi, Nitin; Fassl, Bernhard; Shah, Binita; Lozon, Marie; Oteng, Rockerfeller A; Saha, Abhijeet; Shah, Dheeraj; Galwankar, Sagar
2017-08-15
India, home to almost 1.5 billion people, is in need of a country-specific, evidence-based, consensus approach for the emergency department (ED) evaluation and management of the febrile child. We held two consensus meetings, performed an exhaustive literature review, and held ongoing web-based discussions to arrive at a formal consensus on the proposed evaluation and management algorithm. The first meeting was held in Delhi in October 2015, under the auspices of the Pediatric Emergency Medicine (PEM) Section of the Academic College of Emergency Experts in India (ACEE-INDIA); the second meeting was conducted at Pune during Emergency Medical Pediatrics and Recent Trends (EMPART 2016) in March 2016, and was followed by further e-mail-based discussions to arrive at a formal consensus on the proposed algorithm. The goal was to develop an algorithmic approach for the evaluation and management of the febrile child that can be easily applied in the context of emergency care and modified based on local epidemiology and practice standards. We created an algorithm that can assist the clinician in the evaluation and management of the febrile child presenting to the ED, contextualized to health care in India. This guideline includes the following key components: triage and timely assessment; evaluation; and patient disposition from the ED. We urge the development and creation of a robust data repository of minimal standard data elements. This would provide a systematic measurement of the care processes and patient outcomes, and a better understanding of various etiologies of febrile illnesses in India, both of which can be used to further modify the proposed approach and algorithm.
The Sky This Week, 2016 March 8 - 15 - Naval Oceanography Portal
The Sky This Week, 2016 March 8 - 15. Springing forward in time this week, the Moon waxes to First Quarter on the 15th at 1:03 pm Eastern Daylight Time. She joins the stars of the
75 FR 10491 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
...: Computational Biology, Image Processing, and Data Mining. Date: March 18, 2010. Time: 8 a.m. to 6 p.m. Agenda... Science. Date: March 24, 2010. Time: 12 p.m. to 3:30 p.m. Agenda: To review and evaluate grant...; Fellowship: Biophysical and Biochemical Sciences. Date: March 25-26, 2010. Time: 8 a.m. to 5 p.m. Agenda: To...
NASA Technical Reports Server (NTRS)
King, Michael C.
2016-01-01
The National Aeronautics and Space Administration (NASA) has developed a system for remotely detecting the hazardous conditions leading to aircraft icing in flight, the NASA Icing Remote Sensing System (NIRSS). Newly developed, weather balloon-borne instruments have been used to obtain in-situ measurements of supercooled liquid water during March 2014 to validate the algorithms used in the NIRSS. A mathematical model and a processing method were developed to analyze the data obtained from the weather balloon soundings. The data from soundings obtained in March 2014 were analyzed and compared to the output from the NIRSS and pilot reports.
Peak, Corey M; Wesolowski, Amy; Zu Erbach-Schoenberg, Elisabeth; Tatem, Andrew J; Wetter, Erik; Lu, Xin; Power, Daniel; Weidman-Grunewald, Elaine; Ramos, Sergio; Moritz, Simon; Buckee, Caroline O; Bengtsson, Linus
2018-06-26
Travel restrictions were implemented on an unprecedented scale in 2015 in Sierra Leone to contain and eliminate Ebola virus disease. However, the impact of epidemic travel restrictions on mobility itself remains difficult to measure with traditional methods. New 'big data' approaches using mobile phone data can provide, in near real-time, the type of information needed to guide and evaluate control measures. We analysed anonymous mobile phone call detail records (CDRs) from a leading operator in Sierra Leone between 20 March and 1 July in 2015. We used an anomaly detection algorithm to assess changes in travel during a national 'stay at home' lockdown from 27 to 29 March. To measure the magnitude of these changes and to assess effect modification by region and historical Ebola burden, we performed a time series analysis and a crossover analysis. Routinely collected mobile phone data revealed a dramatic reduction in human mobility during the 3-day lockdown in Sierra Leone. The number of individuals relocating between chiefdoms decreased by 31% within 15 km, by 46% for 15-30 km, and by 76% for distances greater than 30 km. This effect was highly heterogeneous in space, with higher impact in regions with higher Ebola incidence. Travel quickly returned to normal patterns after the restrictions were lifted. The effects of travel restrictions on mobility can be large, targeted, and measurable in near real-time. With appropriate anonymization protocols, mobile phone data should play a central role in guiding and monitoring interventions for epidemic containment.
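As an illustration of detecting mobility anomalies in a daily trip-count series, the sketch below flags days that deviate from a trailing baseline by a z-score threshold; this is a generic stand-in, not the operator-data algorithm used in the study.

```python
# Illustrative anomaly detection on a daily trip-count series: flag days whose
# counts deviate from a trailing baseline by more than a z-score threshold.
import numpy as np

def flag_anomalies(daily_trips, window=14, z_thresh=3.0):
    trips = np.asarray(daily_trips, float)
    flags = np.zeros(trips.size, dtype=bool)
    for t in range(window, trips.size):
        baseline = trips[t - window:t]
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        if sigma > 0 and abs(trips[t] - mu) / sigma > z_thresh:
            flags[t] = True
    return flags
```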
Assessment of remotely sensed chlorophyll-a concentration in Guanabara Bay, Brazil
NASA Astrophysics Data System (ADS)
Oliveira, Eduardo N.; Fernandes, Alexandre M.; Kampel, Milton; Cordeiro, Renato C.; Brandini, Nilva; Vinzon, Susana B.; Grassi, Renata M.; Pinto, Fernando N.; Fillipo, Alessandro M.; Paranhos, Rodolfo
2016-04-01
The Guanabara Bay (GB) is an estuarine system in the metropolitan region of Rio de Janeiro (Brazil), with a surface area of ~346 km² threatened by anthropogenic pressure. Remote sensing can provide frequent data for studies and monitoring of water quality parameters, such as chlorophyll-a concentration (Chl-a). Different combinations of Medium Resolution Imaging Spectrometer (MERIS) remote sensing reflectance band ratios were used to estimate Chl-a. Standard algorithms such as Ocean Color 3-band, Ocean Color 4-band, fluorescence line height, and maximum chlorophyll index were also tested. The MERIS Chl-a estimates were statistically compared with a dataset of in situ Chl-a (2002 to 2012). Good correlations were obtained with the use of green, red, and near-infrared bands. The best-performing algorithm was based on the red (665 nm) to green (560 nm) band ratio, named the "RG3" algorithm (r² = 0.71, Chl-a = 62.565 x^1.6118). The RG3 algorithm was applied to a time series of MERIS images (2003 to 2012). The GB has a high temporal and spatial variability of Chl-a, with the highest values found in the wet season (October to March) and in some of the most internal regions of the estuary. The lowest concentrations are found in the central circulation channel due to the flushing of ocean water masses promoted by tidal pumping.
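A one-line application of the quoted RG3 power law is sketched below; the coefficients are as reconstructed from the abstract (reading the decimal comma as a point), and the interpretation of x as the 665/560 nm reflectance ratio is an assumption to verify against the original paper.

```python
# Sketch of applying the red/green band-ratio ("RG3") regression quoted above.
# Coefficients reconstructed from the abstract; verify before use.
import numpy as np

def chl_rg3(rrs_665, rrs_560):
    """Chl-a estimate from the (assumed) 665/560 nm reflectance ratio."""
    ratio = np.asarray(rrs_665, float) / np.asarray(rrs_560, float)
    return 62.565 * ratio ** 1.6118
```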
2006-03-01
...have been the concentration of many literature compositions [12, 30, 38, 39, 49, 53]. Van Veldhuizen et al. [53] improved the geometries of wire...
Golden Rays - March 2017 | Solar Research | NREL
, test and deploy a data enhanced hierarchical control architecture that adopts a hybrid approach to grid control. A centralized control layer will be complemented by distributed control algorithms for solar inverters and autonomous control of grid edge devices. The other NREL project will develop a novel control
Chiappini, Elena; Camaioni, Angelo; Benazzo, Marco; Biondi, Andrea; Bottero, Sergio; De Masi, Salvatore; Di Mauro, Giuseppe; Doria, Mattia; Esposito, Susanna; Felisati, Giovanni; Felisati, Dino; Festini, Filippo; Gaini, Renato Maria; Galli, Luisa; Gambini, Claudio; Gianelli, Umberto; Landi, Massimo; Lucioni, Marco; Mansi, Nicola; Mazzantini, Rachele; Marchisio, Paola; Marseglia, Gian Luigi; Miniello, Vito Leonardo; Nicola, Marta; Novelli, Andrea; Paulli, Marco; Picca, Marina; Pillon, Marta; Pisani, Paolo; Pipolo, Carlotta; Principi, Nicola; Sardi, Iacopo; Succo, Giovanni; Tomà, Paolo; Tortoli, Enrico; Tucci, Filippo; Varricchio, Attilio; de Martino, Maurizio; Italian Guideline Panel For Management Of Cervical Lymphadenopathy In Children
2015-01-01
Cervical lymphadenopathy is a common condition in children, caused by a wide spectrum of disorders. On the basis of a complete history and physical examination, paediatricians have to select, among the vast majority of children with a benign self-limiting condition, those at risk for other, more complex diseases requiring laboratory tests, imaging and, finally, tissue sampling. At the same time, they should avoid expensive and invasive examinations when unnecessary. The Italian Society of Preventive and Social Pediatrics, jointly with the Italian Society of Pediatric Infectious Diseases, the Italian Society of Pediatric Otorhinolaryngology, and other Scientific Societies, issued a National Consensus document, based on the most recent literature findings, including an algorithm for the management of cervical lymphadenopathy in children. The Consensus Conference method was used, following the Italian National Plan Guidelines. Relevant publications in English were identified through a systematic review of MEDLINE and the Cochrane Database of Systematic Reviews from their inception through March 21, 2014. Based on the literature results, an algorithm was developed, including several possible clinical scenarios. Situations requiring a watchful waiting strategy, those requiring empiric antibiotic therapy, and those necessitating a prompt diagnostic workup, considering the risk for a severe underlying disease, have been identified. The present algorithm is a practical tool for the management of pediatric cervical lymphadenopathy in the hospital and ambulatory settings. A multidisciplinary approach is paramount. Further studies are required for its validation in the clinical field.
Numerical Procedures for Inlet/Diffuser/Nozzle Flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.
1998-01-01
Two primitive variable, pressure based, flux-split, RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. Considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, which are provided herein.
A New Architecture for Extending the Capabilities of the Copernicus Trajectory Optimization Program
NASA Technical Reports Server (NTRS)
Williams, Jacob
2015-01-01
This paper describes a new plugin architecture developed for the Copernicus spacecraft trajectory optimization program. Details of the software architecture design and development are described, as well as examples of how the capability can be used to extend the tool in order to expand the type of trajectory optimization problems that can be solved. The inclusion of plugins is a significant update to Copernicus, allowing user-created algorithms to be incorporated into the tool for the first time. The initial version of the new capability was released to the Copernicus user community with version 4.1 in March 2015, and additional refinements and improvements were included in the recent 4.2 release. It is proving quite useful, enabling Copernicus to solve problems that it was not able to solve before.
Nonlinear flutter analysis of composite panels
NASA Astrophysics Data System (ADS)
An, Xiaomin; Wang, Yan
2018-05-01
Nonlinear panel flutter is an interesting subject in fluid-structure interaction. In this paper, the nonlinear flutter characteristics of curved composite panels are studied in very low supersonic flow. The composite panel with geometric nonlinearity is modeled by a nonlinear finite element method, and the responses are computed by the nonlinear Newmark algorithm. An unsteady aerodynamic solver, which contains a flux splitting scheme and dual time marching technology, is employed to calculate the unsteady pressure induced by the motion of the panel. Based on a half-step staggered coupled solution, the aeroelastic responses of two composite panels with different radii of R = 5 and R = 2.5 are computed and compared with each other at different dynamic pressures for Ma = 1.05. The nonlinear flutter characteristics, comprising limit cycle oscillations and chaos, are analyzed and discussed.
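A sketch of the (average-acceleration) Newmark time-marching scheme for a linear structural system M a + C v + K u = f(t) follows; the paper couples a nonlinear finite-element panel model to an unsteady aerodynamic solver, which is not reproduced here.

```python
# Newmark (average acceleration: beta=1/4, gamma=1/2) time marching for the
# linear system M a + C v + K u = f(t). Linear-system stand-in only.
import numpy as np

def newmark(M, C, K, force, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, force(0.0) - C @ v - K @ u)   # consistent initial accel.
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [u.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (force(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)
```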
78 FR 12769 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-25
... Pathogenesis Study Section. Date: March 18, 2013. Time: 8:00 a.m. to 5:00 p.m. Agenda: To review and evaluate... Young Adulthood. Date: March 18, 2013. Time: 3:00 p.m. to 4:00 p.m. Agenda: To review and evaluate grant... Panel; RFA Panel: CounterACT U54 Centers of Excellence. Date: March 19, 2013. Time: 8:00 a.m. to 12:00 p...
COMOC: Three dimensional boundary region variant, programmer's manual
NASA Technical Reports Server (NTRS)
Orzechowski, J. A.; Baker, A. J.
1974-01-01
The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description on the construction and operation of the program, and instructions on utilization of the many features of the code are presented.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010340 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010263 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010207 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010313 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010333 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010317 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Soyuz TMA-08M/34S Launch seen from ISS
2013-03-28
ISS035-E-010345 (28 March 2013) --- One of the Expedition 35 crew members aboard the Earth-orbiting International Space Station took this photo which was part of a series documenting the launch of the "other half" of the Expedition 35 crew. The Soyuz TMA-08M rocket launched from the Baikonur Cosmodrome in Kazakhstan on March 29, 2013 (Kazakh time) carrying Expedition 35 Soyuz Commander Pavel Vinogradov, NASA Flight Engineer Chris Cassidy and Russian Flight Engineer Alexander Misurkin to the International Space Station. Their Soyuz rocket launched at 2:43 a.m., March 29, local time, while it was still March 28 in GMT and USA time zones.
Computerized Liver Volumetry on MRI by Using 3D Geodesic Active Contour Segmentation
Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji
2014-01-01
OBJECTIVE Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. SUBJECTS AND METHODS Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. RESULTS The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). CONCLUSION The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time. PMID:24370139
Computerized liver volumetry on MRI by using 3D geodesic active contour segmentation.
Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji
2014-01-01
Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time.
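A rough sketch of the active-contour refinement idea using scikit-image's morphological geodesic active contour as a stand-in; the paper's exact pipeline (3D fast marching initialization followed by a level-set geodesic active contour on MRI) and its parameters are not reproduced, and the arguments below are illustrative.

```python
# Rough sketch: build an edge-based feature image and refine a coarse initial
# region with a morphological geodesic active contour (scikit-image stand-in
# for the level-set pipeline described above). Parameters are illustrative.
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_liver_like(volume, seed_mask, n_iter=150):
    """volume: 3D intensity array; seed_mask: coarse binary initial region."""
    feature = inverse_gaussian_gradient(volume.astype(float))  # low values at edges
    refined = morphological_geodesic_active_contour(
        feature, n_iter, init_level_set=seed_mask.astype(np.int8), smoothing=2)
    return refined.astype(bool)
```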
Finite-volume WENO scheme for viscous compressible multicomponent flows
Coralic, Vedran; Colonius, Tim
2014-01-01
We develop a shock- and interface-capturing numerical method that is suitable for the simulation of multicomponent flows governed by the compressible Navier-Stokes equations. The numerical method is high-order accurate in smooth regions of the flow, discretely conserves the mass of each component, as well as the total momentum and energy, and is oscillation-free, i.e. it does not introduce spurious oscillations at the locations of shockwaves and/or material interfaces. The method is of Godunov-type and utilizes a fifth-order, finite-volume, weighted essentially non-oscillatory (WENO) scheme for the spatial reconstruction and a Harten-Lax-van Leer contact (HLLC) approximate Riemann solver to upwind the fluxes. A third-order total variation diminishing (TVD) Runge-Kutta (RK) algorithm is employed to march the solution in time. The derivation is generalized to three dimensions and nonuniform Cartesian grids. A two-point, fourth-order, Gaussian quadrature rule is utilized to build the spatial averages of the reconstructed variables inside the cells, as well as at cell boundaries. The algorithm is therefore fourth-order accurate in space and third-order accurate in time in smooth regions of the flow. We corroborate the properties of our numerical method by considering several challenging one-, two- and three-dimensional test cases, the most complex of which is the asymmetric collapse of an air bubble submerged in a cylindrical water cavity that is embedded in 10% gelatin. PMID:25110358
Finite-volume WENO scheme for viscous compressible multicomponent flows.
Coralic, Vedran; Colonius, Tim
2014-10-01
We develop a shock- and interface-capturing numerical method that is suitable for the simulation of multicomponent flows governed by the compressible Navier-Stokes equations. The numerical method is high-order accurate in smooth regions of the flow, discretely conserves the mass of each component, as well as the total momentum and energy, and is oscillation-free, i.e. it does not introduce spurious oscillations at the locations of shockwaves and/or material interfaces. The method is of Godunov-type and utilizes a fifth-order, finite-volume, weighted essentially non-oscillatory (WENO) scheme for the spatial reconstruction and a Harten-Lax-van Leer contact (HLLC) approximate Riemann solver to upwind the fluxes. A third-order total variation diminishing (TVD) Runge-Kutta (RK) algorithm is employed to march the solution in time. The derivation is generalized to three dimensions and nonuniform Cartesian grids. A two-point, fourth-order, Gaussian quadrature rule is utilized to build the spatial averages of the reconstructed variables inside the cells, as well as at cell boundaries. The algorithm is therefore fourth-order accurate in space and third-order accurate in time in smooth regions of the flow. We corroborate the properties of our numerical method by considering several challenging one-, two- and three-dimensional test cases, the most complex of which is the asymmetric collapse of an air bubble submerged in a cylindrical water cavity that is embedded in 10% gelatin.
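A sketch of the three-stage, third-order TVD (SSP) Runge-Kutta update used to march the semi-discrete system du/dt = L(u) in time; the WENO/HLLC spatial operator is the expensive part and is replaced here by a trivial upwind placeholder.

```python
# Shu-Osher three-stage, third-order TVD (SSP) Runge-Kutta time marching for
# the semi-discrete system du/dt = L(u).
import numpy as np

def ssp_rk3_step(u, dt, L):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Usage with a trivial linear-advection placeholder operator on a periodic grid.
nx = 256
dx = 1.0 / nx
L = lambda q: -(q - np.roll(q, 1)) / dx     # first-order upwind, advection speed 1
u = np.exp(-200.0 * (np.linspace(0, 1, nx, endpoint=False) - 0.5) ** 2)
for _ in range(100):
    u = ssp_rk3_step(u, 0.4 * dx, L)
```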
Automated Cloud Observation for Ground Telescope Optimization
NASA Astrophysics Data System (ADS)
Lane, B.; Jeffries, M. W., Jr.; Therien, W.; Nguyen, H.
As the number of man-made objects placed in space each year increases with advancements in the commercial, academic, and industrial sectors, the number of objects that must be detected, tracked, and characterized continues to grow at an exponential rate. Commercial companies, such as ExoAnalytic Solutions, have deployed ground based sensors to maintain track custody of these objects. For the ExoAnalytic Global Telescope Network (EGTN), observations of such objects are collected at a rate of over 10 million unique observations per month (as of September 2017). Currently, the EGTN does not optimally collect data on nights with significant cloud levels. However, a majority of these nights prove to be partially cloudy, providing clear portions of the sky for EGTN sensors to observe. It proves useful for a telescope to utilize these clear areas to continue resident space object (RSO) observation. By dynamically updating the tasking with the varying cloud positions, the number of observations could potentially increase dramatically due to increased persistence, cadence, and revisit. This paper will discuss the recent algorithms being implemented within the EGTN, including the motivation, need, and general design. The use of automated image processing as well as various edge detection methods, including Canny, Sobel, and Marching Squares, on real-time large-FOV images of the sky to enhance the tasking and scheduling of a ground based telescope is discussed in Section 2. Implementations of these algorithms on a single telescope, and their expansion to multiple telescopes, are explored. Results of applying these algorithms to the EGTN in real time and a comparison to non-optimized EGTN tasking are presented in Section 3. Finally, in Section 4 we explore future work in applying these methods throughout the EGTN as well as to other optical telescopes.
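A brief sketch of the named edge/contour operators applied to a sky image with scikit-image; the iso-level and any cloud/clear decision logic are illustrative assumptions, not the EGTN implementation.

```python
# Sketch of the edge/contour methods named above applied to a sky image:
# Canny and Sobel edge maps plus marching-squares contours (scikit-image).
import numpy as np
from skimage import feature, filters, measure

def cloud_edges(sky_image):
    img = np.asarray(sky_image, float)
    canny_edges = feature.canny(img, sigma=2.0)          # binary edge map
    sobel_mag = filters.sobel(img)                       # gradient magnitude
    level = img.mean() + img.std()                       # illustrative iso-level
    contours = measure.find_contours(img, level)         # marching squares
    return canny_edges, sobel_mag, contours
```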
Comparison of satellite precipitation products with Q3 over the CONUS
NASA Astrophysics Data System (ADS)
Wang, J.; Petersen, W. A.; Wolff, D. B.; Kirstetter, P. E.
2016-12-01
The Global Precipitation Measurement (GPM) is an international satellite mission that provides a new generation of global precipitation observations. A wealth of precipitation products have been generated since the launch of the GPM Core Observatory in February of 2014. However, the accuracy of the satellite-based precipitation products is affected by discrete temporal sampling and remote spaceborne retrieval algorithms. The GPM Ground Validation (GV) program is currently underway to independently verify the satellite precipitation products, which can be carried out by comparing satellite products with ground measurements. This study compares four Day-1 GPM surface precipitation products derived from the GPM Microwave Imager (GMI), Ku-band Precipitation Radar (KU), Dual-Frequency Precipitation Radar (DPR) and DPR-GMI CoMBined (CMB) algorithms, as well as the near-real-time Integrated Multi-satellitE Retrievals for GPM (IMERG) Late Run product and precipitation retrievals from Microwave Humidity Sounders (MHS) flown on NOAA and METOPS satellites, with the NOAA Multi-Radar Multi-Sensor suite (MRMS; now called "Q3"). The comparisons are conducted over the conterminous United States (CONUS) at various spatial and temporal scales with respect to different precipitation intensities, and filtered with radar quality index (RQI) thresholds and precipitation types. Various versions of GPM products are evaluated against Q3. The latest Version-04A GPM products are in reasonably good overall agreement with Q3. Based on the mission-to-date (March 2014 - May 2016) data from all GPM overpasses, the biases relative to Q3 for GMI and DPR precipitation estimates at 0.5° resolution are negative, whereas the biases for CMB and KU precipitation estimates are positive. Based on all available data (March 2015 - April 2016 at this writing), the CONUS-averaged near-real-time IMERG Late Run hourly precipitation estimate is about 46% higher than Q3. Preliminary comparison of 1-year (2015) MHS precipitation estimates with Q3 shows the MHS is about 30% lower than Q3. Detailed comparison results are available at http://wallops-prf.gsfc.nasa.gov/NMQ/.
Recognition of military-specific physical activities with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2010-11-01
The purpose of this study was to develop and validate an algorithm for recognizing military-specific, physically demanding activities using body-fixed sensors. To develop the algorithm, the first group of study participants (n = 15) wore body-fixed sensors capable of measuring acceleration, step frequency, and heart rate while completing six military-specific activities: walking, marching with a backpack, lifting and lowering loads, lifting and carrying loads, digging, and running. The accuracy of the algorithm was tested in these isolated activities in a laboratory setting (n = 18) and in the context of the daily military training routine (n = 24). The overall recognition rates during isolated activities and during daily military routine activities were 87.5% and 85.5%, respectively. We conclude that the algorithm adequately recognized six military-specific physical activities based on sensor data alone, both in a laboratory setting and in the military training environment. By recognizing the type of physical activity, this objective method provides additional information on military job descriptions.
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge
2008-01-01
This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
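A minimal sketch of the isosurface-extraction step described above, using scikit-image's marching cubes on a synthetic stand-in for a CT sub-volume (the volume, threshold, and voxel spacing are illustrative, not the system's actual values):

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a CT sub-volume (values in Hounsfield-like units).
vol = np.random.normal(0.0, 50.0, size=(64, 64, 64))
vol[20:44, 20:44, 20:44] += 700.0          # dense "cortical bone" block

# Extract the bone isosurface at a chosen threshold; spacing carries voxel size in mm.
verts, faces, normals, values = measure.marching_cubes(vol, level=300.0,
                                                       spacing=(0.5, 0.5, 0.5))
print(verts.shape, faces.shape)            # polygonal mesh ready for rendering (e.g. VTK)
```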
Gabel, Eilon; Hofer, Ira S; Satou, Nancy; Grogan, Tristan; Shemin, Richard; Mahajan, Aman; Cannesson, Maxime
2017-05-01
In medical practice today, clinical data registries have become a powerful tool for measuring and driving quality improvement, especially among multicenter projects. Registries face the known problem of trying to create dependable and clear metrics from electronic medical records data, which are typically scattered and often based on unreliable data sources. The Society for Thoracic Surgery (STS) is one such example, and it supports manually collected data by trained clinical staff in an effort to obtain the highest-fidelity data possible. As a possible alternative, our team designed an algorithm to test the feasibility of producing computer-derived data for the case of postoperative mechanical ventilation hours. In this article, we study and compare the accuracy of algorithm-derived mechanical ventilation data with manual data extraction. We created a novel algorithm that is able to calculate mechanical ventilation duration for any postoperative patient using raw data from our EPIC electronic medical record. Utilizing nursing documentation of airway devices, documentation of lines, drains, and airways, and respiratory therapist ventilator settings, the algorithm produced results that were then validated against the STS registry. This enabled us to compare our algorithm results with data collected by human chart review. Any discrepancies were then resolved with manual calculation by a research team member. The STS registry contained a total of 439 University of California Los Angeles cardiac cases from April 1, 2013, to March 31, 2014. After excluding 201 patients for not remaining intubated, tracheostomy use, or for having 2 surgeries on the same day, 238 cases met inclusion criteria. Comparing the postoperative ventilation durations between the 2 data sources resulted in 158 (66%) ventilation durations agreeing within 1 hour, indicating a probable correct value for both sources. Among the discrepant cases, the algorithm yielded results that were exclusively correct in 75 (93.8%) cases, whereas the STS results were exclusively correct once (1.3%). The remaining 4 cases had inconclusive results after manual review because of a prolonged documentation gap between mechanical and spontaneous ventilation. In these cases, STS and algorithm results were different from one another but were both within the transition timespan. This yields an overall accuracy of 99.6% (95% confidence interval, 98.7%-100%) for the algorithm when compared with 68.5% (95% confidence interval, 62.6%-74.4%) for the STS data (P < .001). There is a significant appeal to having a computer algorithm capable of calculating metrics such as total ventilator times, especially because it is labor intensive and prone to human error. By incorporating 3 different sources into our algorithm and by using preprogrammed clinical judgment to overcome common errors with data entry, our results proved to be more comprehensive and more accurate, and they required a fraction of the computation time compared with manual review.
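A toy sketch of the duration calculation described above; the event labels and the simplifying rule (ventilation ends at the last mechanical-ventilation record before a spontaneous-breathing record) are assumptions for illustration and do not reproduce the authors' EMR-specific logic:

```python
from datetime import datetime

def postop_vent_hours(events):
    """Estimate postoperative ventilation hours from timestamped EMR events.

    `events` is a list of (timestamp, label) tuples with hypothetical labels
    'mechanical' (ventilator settings / airway documented) and 'spontaneous'
    (extubated, breathing independently); real EMR field names will differ.
    """
    events = sorted(events)
    start = next(t for t, lbl in events if lbl == "mechanical")
    end = start
    for t, lbl in events:
        if lbl == "mechanical":
            end = t                      # last mechanical record so far
        elif lbl == "spontaneous" and t > start:
            break                        # first spontaneous record ends the interval
    return (end - start).total_seconds() / 3600.0

example = [(datetime(2014, 1, 5, 14, 0), "mechanical"),
           (datetime(2014, 1, 5, 20, 30), "mechanical"),
           (datetime(2014, 1, 6, 2, 15), "spontaneous")]
print(round(postop_vent_hours(example), 1))  # 6.5 hours
```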
2015-03-21
Media document Expedition 43 NASA Astronaut Scott Kelly as he plays billiards during media day, Saturday, March 21, 2015, Baikonur, Kazakhstan. Kelly, and Russian Cosmonauts Gennady Padalka, and Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos) are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
2015-03-21
Expedition 43 NASA Astronaut Scott Kelly waters a tree planted in his honor during media day, Saturday, March 21, 2015, Baikonur, Kazakhstan. Kelly, and Russian Cosmonauts Gennady Padalka, and Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos) are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
A novel approach to model the transient behavior of solid-oxide fuel cell stacks
NASA Astrophysics Data System (ADS)
Menon, Vikram; Janardhanan, Vinod M.; Tischer, Steffen; Deutschmann, Olaf
2012-09-01
This paper presents a novel approach to model the transient behavior of solid-oxide fuel cell (SOFC) stacks in two and three dimensions. A hierarchical model is developed by decoupling the temperature of the solid phase from the fluid phase. The solution of the temperature field is considered as an elliptic problem, while each channel within the stack is modeled as a marching problem. This paper presents the numerical model and cluster algorithm for coupling between the solid phase and fluid phase. For demonstration purposes, results are presented for a stack operated on pre-reformed hydrocarbon fuel. Transient response to load changes is studied by introducing step changes in cell potential and current. Furthermore, the effect of boundary conditions and stack materials on response time and internal temperature distribution is investigated.
Epidemiology of health concerns among collegiate student musicians participating in marching band.
Hatheway, Melissa; Chesky, Kris
2013-12-01
Participation in marching band involves intense physical and mental demands, altered and potentially elevated biomechanical loads from performing musical instruments while marching, routine exposure to elevated noise levels and at times hazardous weather conditions, and substantial time commitments for practice and travel. Unfortunately, there are no known epidemiologic studies that systematically examine the perception of health-related consequences among college students participating in a collegiate marching band. There are also no known studies that attempt to understand whether the perceived consequences of marching band differ for students majoring in music compared to non-music majors. In response to this deficiency, this study collected and characterized occupational health patterns and concerns associated with participation in a collegiate marching band. Members of a large collegiate marching band (n=246/310, 76%) responded to a 70-item epidemiologic survey. Results reveal patterns of health concerns and how they differ when compared across music majors vs non-music majors and instrument groups.
A modified Dodge algorithm for the parabolized Navier-Stokes equations and compressible duct flows
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Dwoyer, D. M.
1983-01-01
A revised version of Dodge's split-velocity method for numerical calculation of compressible duct flow has been developed. The revision incorporates balancing of mass flow rates on each marching step in order to maintain front-to-back continuity during the calculation. Qualitative agreement with analytical predictions and experimental results has been obtained for some flows with well-known solutions.
The Center for Nonlinear Phenomena and Magnetic Materials
1992-09-30
[Garbled report front matter: Howard University (ComSERC), Washington, DC 20059; AFOSR sponsoring/monitoring agency; report and agency numbers illegible.] Seminar listings recoverable from the fragment: "Visualization - Improved Marching Cubes"; January 27, 1992, Dr. Gerald Chachere, Math Dept., Howard University, "An algorithm for box..." (title truncated); Dr. James Gates, Physics Department, Howard University, "Introduction to Strings Part I"; February 5, 1992, Dr. James Gates, Physics Department, Howard University (title truncated).
NASA Astrophysics Data System (ADS)
Chander, Shard; Ganguly, Debojyoti
2017-01-01
Water level was estimated, using the AltiKa radar altimeter onboard the SARAL satellite, over the Ukai reservoir using algorithms modified specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range-correction algorithms. The 40-Hz waveforms were classified based on linear discriminant analysis and a Bayesian classifier. Waveforms were retracked using the Brown, Ice-2, threshold, and offset center of gravity methods. Retracking algorithms were implemented on full waveforms and on subwaveforms (only one leading edge) to estimate the improvement in the retrieved range. European Centre for Medium-Range Weather Forecasts (ECMWF) operational fields, ECMWF re-analysis pressure fields, and global ionosphere maps were used to estimate the range corrections precisely. Microwave and optical images were used to estimate the extent of the water body and the altimeter track location. Four global positioning system (GPS) field trips were conducted on the same day as the SARAL pass using two dual-frequency GPS receivers. One GPS was mounted close to the dam in static mode and the other was used on a moving vehicle within the reservoir in kinematic mode. The in situ gauge dataset was provided by the Ukai dam authority for the period January 1972 to March 2015. The altimeter-retrieved water level results were then validated against the GPS survey and the in situ gauge dataset. With good selection of the virtual station (waveform classification, backscattering coefficient), the Ice-2 and subwaveform retrackers both perform better, with an overall root-mean-square error <15 cm. The results support the conclusion that the AltiKa dataset, due to the smaller footprint and sharp trailing edge of the Ka-band waveform, can be utilized for more accurate water level information over inland water bodies.
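Of the retrackers listed above, the threshold method is the simplest to illustrate. A minimal sketch (the 50% threshold, noise-gate count, and synthetic waveform are illustrative assumptions):

```python
import numpy as np

def threshold_retrack(waveform, fraction=0.5, noise_gates=5):
    """Return the (sub-)gate of the leading edge by simple threshold retracking.

    The threshold is `fraction` of the peak amplitude above the noise floor
    estimated from the first `noise_gates` gates; the gate index is refined by
    linear interpolation between the bracketing samples.
    """
    w = np.asarray(waveform, dtype=float)
    noise = w[:noise_gates].mean()
    thr = noise + fraction * (w.max() - noise)
    i = int(np.argmax(w >= thr))          # first gate at/above the threshold
    if i == 0:
        return 0.0
    return (i - 1) + (thr - w[i - 1]) / (w[i] - w[i - 1])

# The offset of the retracked gate from the nominal tracking gate converts to a
# range correction via the gate width in meters, applied before the geophysical
# (dry/wet troposphere, ionosphere) corrections.
gate = threshold_retrack(np.concatenate([np.full(20, 1.0),
                                         np.linspace(1.0, 10.0, 10),
                                         np.full(30, 9.0)]))
print(gate)
```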
NASA Astrophysics Data System (ADS)
Tilstone, Gavin H.; Lotliker, Aneesh A.; Miller, Peter I.; Ashraf, P. Muhamed; Kumar, T. Srinivasa; Suresh, T.; Ragavan, B. R.; Menon, Harilal B.
2013-08-01
The use of ocean colour remote sensing to facilitate the monitoring of phytoplankton biomass in coastal waters is hampered by the high variability in absorption and scattering from substances other than phytoplankton. The eastern Arabian Sea coastal shelf is influenced by river run-off, winter convection and monsoon upwelling. Bio-optical parameters were measured along this coast from March 2009 to June 2011, to characterise the optical water type and validate three Chlorophyll-a (Chla) algorithms applied to Moderate Resolution Imaging Spectroradiometer on Aqua (MODIS-Aqua) data against in situ measurements. Ocean Colour 3 band ratio (OC3M), Garver-Siegel-Maritorena Model (GSM) and Generalized Inherent Optical Property (GIOP) Chla algorithms were evaluated. OC3M performed better than GSM and GIOP in all regions and overall, was within 11% of in situ Chla. GSM was within 24% of in situ Chla and GIOP on average was 55% lower. OC3M was less affected by errors in remote sensing reflectance Rrs(λ) and by spectral variations in absorption coefficient (aCDOM(λ)) of coloured dissolved organic material (CDOM) and total suspended matter (TSM) compared to the other algorithms. A nine year Chla time series from 2002 to 2011 was generated to assess regional differences between OC3M and GSM. This showed that in the north eastern shelf, maximum Chla occurred during the winter monsoon from December to February, where GSM consistently gave higher Chla compared to OC3M. In the south eastern shelf, maximum Chla occurred in June to July during the summer monsoon upwelling, and OC3M yielded higher Chla compared to GSM. OC3M currently provides the most accurate Chla estimates for the eastern Arabian Sea coastal waters.
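For reference, the OC3M form evaluated above is a fourth-order polynomial in the log of a maximum band ratio. The sketch below uses the commonly quoted OC3M coefficients, which should be verified against the current NASA OBPG algorithm tables before any quantitative use:

```python
import numpy as np

def oc3m_chla(rrs443, rrs488, rrs547):
    """OC3M-style chlorophyll-a (mg m^-3) from MODIS remote-sensing reflectances.

    Fourth-order polynomial in the log of the maximum band ratio; coefficients
    below are the commonly quoted OC3M values and should be checked against the
    current NASA OBPG tables before use.
    """
    a = [0.2424, -2.7423, 1.8017, 0.0015, -1.2280]
    r = np.log10(np.maximum(rrs443, rrs488) / rrs547)
    return 10.0 ** (a[0] + a[1] * r + a[2] * r**2 + a[3] * r**3 + a[4] * r**4)

print(oc3m_chla(0.004, 0.005, 0.003))  # illustrative reflectances, ~0.5 mg m^-3
```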
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
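A condensed sketch of the coarse-to-fine idea in Python, substituting readily available library routines (a scipy range filter, scikit-image find_contours for marching squares, Chan-Vese for active contours without edges, and a scikit-learn SVC); the parameters, synthetic demo image, and feature list are illustrative and do not reproduce the authors' implementation:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import measure, segmentation
from sklearn import svm

def coarse_then_fine(image):
    """Two-step segmentation sketch for a 2-D grayscale image: a range filter
    plus marching squares locates cells coarsely, then Chan-Vese refines the
    boundary inside each cropped region."""
    # Coarse step: local intensity range highlights textured (cellular) regions.
    rng = ndi.maximum_filter(image, size=7) - ndi.minimum_filter(image, size=7)
    mask = rng > rng.mean() + rng.std()
    coarse_contours = measure.find_contours(mask.astype(float), 0.5)

    refined = []
    for c in coarse_contours:
        rmin, cmin = np.floor(c.min(axis=0)).astype(int)
        rmax, cmax = np.ceil(c.max(axis=0)).astype(int)
        crop = image[max(rmin - 5, 0):rmax + 5, max(cmin - 5, 0):cmax + 5]
        if crop.size > 100:
            refined.append(segmentation.chan_vese(crop))
    return coarse_contours, refined

# Classification step: an SVM on simple morphological features
# (mean interior intensity, boundary-intensity variance, boundary length).
clf = svm.SVC(kernel="rbf")
# features.shape == (n_cells, 3); labels: 1 = apoptotic, 0 = normal (training data assumed)
# clf.fit(features, labels); clf.predict(new_features)

# Tiny synthetic demo: a bright square in a noisy background.
demo = np.zeros((80, 80))
demo[30:50, 30:50] = 1.0
demo += 0.05 * np.random.rand(80, 80)
contours, masks = coarse_then_fine(demo)
```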
Nabeta, Pamela; Havumaki, Joshua; Ha, Dang Thi Minh; Caceres, Tatiana; Hang, Pham Thu; Collantes, Jimena; Thi Ngoc Lan, Nguyen; Gotuzzo, Eduardo; Denkinger, Claudia M
2017-01-01
Improved and affordable diagnostic or triage tests are urgently needed at the microscopy centre level. Automated digital microscopy has the potential to overcome issues related to conventional microscopy, including training time requirement and inconsistencies in results interpretation. For this blinded prospective study, sputum samples were collected from adults with presumptive pulmonary tuberculosis in Lima, Peru and Ho Chi Minh City, Vietnam. TBDx performance was evaluated as a stand-alone and as a triage test against conventional microscopy and Xpert, with culture as the reference standard. Xpert was used to confirm positive cases. A total of 613 subjects were enrolled between October 2014 and March 2015, with 539 included in the final analysis. The sensitivity of TBDx was 62·2% (95% CI 56·6-67·4) and specificity was 90·7% (95% CI 85·9-94·2) compared to culture. The algorithm assessing TBDx as a triage test achieved a specificity of 100% while maintaining sensitivity. While the diagnostic performance of TBDx did not reach the levels obtained by experienced microscopists in reference laboratories, it is conceivable that it would exceed the performance of less experienced microscopists. In the absence of highly sensitive and specific molecular tests at the microscopy centre level, TBDx in a triage-testing algorithm would optimize specificity and limit overall cost without compromising the number of patients receiving up-front drug susceptibility testing for rifampicin. However, the algorithm would miss over one third of patients compared to Xpert alone.
Molecular surface mesh generation by filtering electron density map.
Giard, Joachim; Macq, Benoît
2010-01-01
Bioinformatics applied to macromolecules is now widespread and in continuous expansion. In this context, representing external molecular surfaces such as the van der Waals surface or the solvent-excluded surface can be useful for several applications. We propose a fast and parameterizable algorithm that produces good visual-quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
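A compact sketch of the pipeline described above (atom positions, Gaussian width, cut-off frequency, and iso-level are illustrative assumptions):

```python
import numpy as np
from skimage import measure

# Hypothetical atom centers (x, y, z) on a 0.5-unit grid; the density is the
# maximum of Gaussians centred on the atoms, as in the paper.
atoms = np.array([[8.0, 8.0, 8.0], [11.0, 9.0, 8.0], [9.5, 11.5, 8.5]])
grid = np.indices((32, 32, 32)).astype(float) * 0.5
density = np.zeros((32, 32, 32))
for a in atoms:
    d2 = sum((grid[i] - a[i]) ** 2 for i in range(3))
    density = np.maximum(density, np.exp(-d2 / (2.0 * 1.2 ** 2)))

# Ideal low-pass filter: zero out Fourier components above a cut-off frequency.
F = np.fft.fftn(density)
freqs = np.meshgrid(*[np.fft.fftfreq(32, d=0.5)] * 3, indexing="ij")
F[np.sqrt(sum(f ** 2 for f in freqs)) > 0.3] = 0.0   # cut-off is illustrative
smoothed = np.real(np.fft.ifftn(F))

# Isosurface of the filtered map via marching cubes gives the surface mesh.
verts, faces, _, _ = measure.marching_cubes(smoothed, level=0.5 * smoothed.max())
```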
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be of better quality than that of the MC algorithm. Experiments show it to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
An efficient iteration strategy for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Walters, R. W.; Dwoyer, D. L.
1985-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two-dimensions is described. The basic algorithm has the property that convergence to the steady-state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves which confirm the efficiency of the iteration strategy.
Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Dwoyer, Douglas L.
1987-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.
The West Midlands breast cancer screening status algorithm - methodology and use as an audit tool.
Lawrence, Gill; Kearins, Olive; O'Sullivan, Emma; Tappenden, Nancy; Wallis, Matthew; Walton, Jackie
2005-01-01
To illustrate the ability of the West Midlands breast screening status algorithm to assign a screening status to women with malignant breast cancer, and its uses as a quality assurance and audit tool. Breast cancers diagnosed between the introduction of the National Health Service [NHS] Breast Screening Programme and 31 March 2001 were obtained from the West Midlands Cancer Intelligence Unit (WMCIU). Screen-detected tumours were identified via breast screening units, and the remaining cancers were assigned to one of eight screening status categories. Multiple primaries and recurrences were excluded. A screening status was assigned to 14,680 women (96% of the cohort examined), 110 cancers were not registered at the WMCIU and the cohort included 120 screen-detected recurrences. The West Midlands breast screening status algorithm is a robust simple tool which can be used to derive data to evaluate the efficacy and impact of the NHS Breast Screening Programme.
2015-03-21
Media wait to be escorted to the next event during the Expedition 43 prime and backup crew media day on Saturday, March 21, 2015 at the Cosmonaut Hotel in Baikonur, Kazakhstan. Expedition 43 NASA Astronaut Scott Kelly, and Russian Cosmonauts Gennady Padalka, and Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos) are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
2015-03-21
Media document Expedition 43 Russian Cosmonaut Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos), left, and NASA Astronaut Scott Kelly, right, as they play billiards during media day, Saturday, March 21, 2015, Baikonur, Kazakhstan. Kelly, and Russian Cosmonauts Gennady Padalka, and Mikhail Kornienko of Roscosmos are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
Expedition 43 Press Conference
2015-03-26
Expedition 43 NASA Astronaut Scott Kelly waves hello to family and friends as he and, Russian cosmonauts Gennady Padalka, and Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos) participate in a crew press conference, Thursday, March 26, 2015, at the Cosmonaut Hotel in Baikonur, Kazakhstan. Kelly, Kornienko, and Padalka launched to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit (NASA/Bill Ingalls)
Observational evidence of seasonality in the timing of loop current eddy separation
NASA Astrophysics Data System (ADS)
Hall, Cody A.; Leben, Robert R.
2016-12-01
Observational datasets, reports and analyses over the time period from 1978 through 1992 are reviewed to derive pre-altimetry Loop Current (LC) eddy separation dates. The reanalysis identified 20 separation events in the 15-year record. Separation dates are estimated to be accurate to approximately ± 1.5 months and sufficient to detect statistically significant LC eddy separation seasonality, which was not the case for previously published records because of the misidentification of separation events and their timing. The reanalysis indicates that previously reported LC eddy separation dates, determined for the time period before the advent of continuous altimetric monitoring in the early 1990s, are inaccurate because of extensive reliance on satellite sea surface temperature (SST) imagery. Automated LC tracking techniques are used to derive LC eddy separation dates in three different altimetry-based sea surface height (SSH) datasets over the time period from 1993 through 2012. A total of 28-30 LC eddy separation events were identified in the 20-year record. Variations in the number and dates of eddy separation events are attributed to the different mean sea surfaces and objective-analysis smoothing procedures used to produce the SSH datasets. Significance tests on various altimetry and pre-altimetry/altimetry combined date lists consistently show that the seasonal distribution of separation events is not uniform at the 95% confidence level. Randomization tests further show that the seasonal peak in LC eddy separation events in August and September is highly unlikely to have occurred by chance. The other seasonal peak in February and March is less significant, but possibly indicates two seasons of enhanced probability of eddy separation centered near the spring and fall equinoxes. This is further quantified by objectively dividing the seasonal distribution into two seasons using circular statistical techniques and a k-means clustering algorithm. The estimated spring and fall centers are March 2nd and August 23rd, respectively, with season boundaries in May and December.
NASA Technical Reports Server (NTRS)
Fairall, C. W.; Hare, J. E.; Snider, Jack B.
1990-01-01
As part of the FIRE/Extended Time Observations (ETO) program, extended time observations were made at San Nicolas Island (SNI) from March to October, 1987. Hourly averages of air temperature, relative humidity, wind speed and direction, solar irradiance, and downward longwave irradiance were recorded. The radiation sensors were standard Eppley pyranometers (shortwave) and pyrgeometers (longwave). The SNI data were processed in several ways to deduce properties of the stratocumulus covered marine boundary layer (MBL). For example, from the temperature and humidity the lifting condensation level, which is an estimate of the height of the cloud bottom, can be computed. A combination of longwave irradiance statistics can be used to estimate fractional cloud cover. An analysis technique used to estimate the integrated cloud liquid water content (W) and the cloud albedo from the measured solar irradiance is also described. In this approach, the cloud transmittance is computed by dividing the irradiance measured at some time by a clear sky value obtained at the same hour on a cloudless day. From the transmittance and the zenith angle, values of cloud albedo and W are computed using the radiative transfer parameterizations of Stephens (1978). These analysis algorithms were evaluated with 17 days of simultaneous and colocated mm-wave (20.6 and 31.65 GHz) radiometer measurements of W and lidar ceilometer measurements of cloud fraction and cloudbase height made during the FIRE IFO. The algorithms are then applied to the entire data set to produce a climatology of these cloud properties for the eight month period.
Implementation of national practice guidelines to reduce waste and optimize patient value.
Langell, John T; Bledsoe, Amber; Vijaykumar, Sathya; Anderson, Terry; Zawalski, Ivy; Zimmerman, Joshua
2016-06-15
The financial health care crisis has provided the platform to drive operational improvements at US health care facilities. This has led to adoption of lean operation principles by many health care organizations as a means of eliminating waste and improving operational efficiencies and overall value to patients. We believe that standardized implementation of national practice guidelines can provide the framework to help to reduce financial waste. We analyzed our institutional preoperative electrocardiogram (ECG) ordering practices for patients undergoing elective surgery at our institution from February-March, 2012 to identify utilization and review compliance with American Heart Association guidelines. We then implemented an ECG ordering algorithm based on these guidelines and studied changes in ordering patterns, associated cost savings and hospital billing for the same period in 2013. From February-March 2012, 677 noncardiac surgical procedures were performed at our institution, and 312 (46.1%) had a preoperative ECG. After implementation of our evidence-based ECG ordering algorithm for the same period in 2013, 707 noncardiac surgical cases were performed, and 120 (16.9%) had a preoperative ECG. Preoperative ECG utilization dropped 63% with an annual institutional cost savings of $72,906 and $291,618 in total annual health care savings. Based on our data, US-wide implementation of our evidence-based ECG ordering algorithm could save the US health care system >$1,868,800,000 per year. Here, we demonstrate that standardized application of a national practice guideline can be used to eliminate nearly $2 billion per year in waste from the US health care system. Copyright © 2016 Elsevier Inc. All rights reserved.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze Atmospheric InfraRed Sounder/Advanced Microwave Sounding Unit/Humidity Sounder Brazil (AIRS/AMSU/HSB) data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small, and the RMS accuracy of lower tropospheric temperature retrieved with 80 percent cloud cover is about 0.5 K poorer than for clear cases. HSB failed in February 2003, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC (Distributed Active Archive Center) in April 2003 and is being used to analyze near-real-time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
Schuh-Renner, Anna; Grier, Tyson L; Canham-Chervak, Michelle; Hauschild, Veronique D; Roy, Tanja C; Fletcher, Jeremy; Jones, Bruce H
2017-11-01
Road marching is an important physical training activity that prepares soldiers for a common occupational task. Continued exploration of risk factors for road marching-related injuries is needed. This analysis has assessed the association between modifiable characteristics of physical training and injury risk. Injuries in the previous 6 months were captured by survey from 831 U.S. Army infantry soldiers. Road marching-related injuries were reported as those attributed to road marching on foot for specified distances while carrying equipment. Frequencies, means, and relative risk ratios (RR) for road marching-related injury with 95% confidence intervals (CI) were calculated. Adjusted odds ratios (OR) and 95% CI were calculated for leading risk factors using multivariable logistic regression. Retrospective cohort study. Half (50%) of reported injuries were attributed to road marching or running. When miles of exposure were considered, injury risk during road marching was higher than during running (RR, road marching vs. running = 1.8, 95% CI: 1.38-2.37). A higher product of road marching distance and weight worn (pound-miles per month) resulted in greater injury risk (RR, ≥1473 vs. <1472 pound-miles = 1.92, 95% CI: 1.17-2.41). Road marching-related injuries were associated with carrying a load >25% of one's body weight (OR, >25% vs. 1-20% of body weight = 2.09, 95% CI: 1.08-4.05), having high occupational lifting demands (OR, 50-100+ lbs vs. 25-50 lbs = 3.43, 95% CI: 1.50-7.85), road marching ≥5 times per month (OR, ≥5 vs. 4 times per month = 2.11, 95% CI: 1.14-3.91), and running <4 miles per week during personal physical training (OR, 0 vs. ≥10 miles/week = 3.56, 95% CI: 1.49-8.54; OR, 1-4 vs. ≥10 miles/week = 4.14, 95% CI: 1.85-9.25). Ideally, attempts should be made to decrease the percentage of body weight carried to reduce road marching-related injuries. Since this is not always operationally feasible, reducing the cumulative overloading from both physical training and occupational tasks may help prevent injury. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Irani Rahaghi, Abolfazl; Lemmin, Ulrich; Bouffard, Damien; Riffler, Michael; Wunderle, Stefan; Barry, Andrew
2017-04-01
Lake surface water temperature (LSWT), which varies spatially and temporally, reflects meteorological and climatological forcing more than any other physical lake parameter. There are different data sources for LSWT mapping, including remote sensing and in situ measurements. Depending on cloud cover, satellite data can depict large-scale thermal patterns, but not the meso- or small-scale processes. Meso-scale thermography allows complementing (and hence ground-truthing) satellite imagery at the sub-pixel scale. A Balloon Launched Imaging and Monitoring Platform (BLIMP) was used to measure the LSWT at the meso-scale. The BLIMP consists of a small balloon tethered to a boat and is equipped with thermal and RGB cameras, as well as other instrumentation for geo-location and communication. A feature matching-based algorithm was implemented to create composite thermal images. Simultaneous ground-truthing of the BLIMP data was achieved using an autonomous craft measuring, among other variables, in situ surface/near-surface temperatures, radiation, and meteorological data. Latent and sensible surface heat fluxes were calculated using the bulk parameterization algorithm based on similarity theory. Results are presented for daytime, stratified, low-wind-speed (up to 3 m s-1) conditions over Lake Geneva for two field campaigns, each of 6 h, on 18 March and 19 July 2016. The meso-scale temperature field (1-m pixel resolution) had a range and standard deviation of 2.4°C and 0.3°C, respectively, over a 1-km2 area (typical satellite pixel size). Interestingly, at the sub-pixel scale, various temporal and spatial thermal structures are evident, an obvious example being streaks in the along-wind direction during March, which we hypothesize are caused by the steady 3-h wind conditions. The results also show that the spatial variability of the estimated total heat flux is due to the corresponding variability of the longwave cooling from the water surface and the latent heat flux.
NASA Technical Reports Server (NTRS)
Martin, S.; Cavalieri, D. J.; Gloersen, P.; Mcnutt, S. L.
1982-01-01
During March 1979, field operations were carried out in the Marginal Ice Zone (MIZ) of the Bering Sea. The field measurements which included oceanographic, meteorological and sea ice observations were made nearly coincident with a number of Nimbus-7 and Tiros-N satellite observations. The results of a comparison between surface and aircraft observations, and images from the Tiros-N satellite, with ice concentrations derived from the microwave radiances of the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) are given. Following a brief discussion of the field operations, including a summary of the meteorological conditions during the experiment, the satellite data is described with emphasis on the Nimbus-7 SMMR and the physical basis of the algorithm used to retrieve ice concentrations.
NASA Astrophysics Data System (ADS)
Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.; Song, Chul H.; Lim, Jae-Hyun; Song, Chang-Keun
2016-04-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGON-NE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Aerosol optical properties retrieved by the GOCI YAER algorithm include aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Ångström exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Network (AERONET) inversion data, and cover a broad range of size distributions and absorptivities, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 × AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better agreement with MODIS DB than MODIS DT. The other GOCI YAER products (AE, FMF, and SSA) show lower correlation with AERONET than AOD, but still show some skill for qualitative use.
Distribution majorization of corner points by reinforcement learning for moving object detection
NASA Astrophysics Data System (ADS)
Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang
2018-04-01
Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate the corner points; however, the information provided by the previous and subsequent frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks that are separated from the original whole image. Experimentally, we select a conventional method, which uses matching and the Random Sample Consensus algorithm to obtain objects, as the main framework and utilize our algorithm to improve the result. The comparison between the conventional method and the same method with our algorithm shows that our algorithm reduces false detections by 70%.
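A baseline sketch of the conventional corner-plus-RANSAC pipeline that the paper builds on, using OpenCV; parameter values are illustrative, and the reinforcement-learning block selection itself is not reproduced here:

```python
import cv2
import numpy as np

def moving_corner_candidates(prev_gray, curr_gray):
    """Corner-based moving-object cue for a free-moving camera (baseline sketch).

    Corners detected in the previous 8-bit grayscale frame are tracked into the
    current frame with pyramidal Lucas-Kanade; a RANSAC homography models the
    dominant (camera/background) motion, and corners that disagree with it are
    candidate moving-object points."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    H, inliers = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
    # Outliers of the background model are the moving-object candidates.
    return good_next[inliers.ravel() == 0]
```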
Seeking out SARI: an automated search of electronic health records.
O'Horo, John C; Dziadzko, Mikhail; Sakusic, Amra; Ali, Rashid; Sohail, M Rizwan; Kor, Daryl J; Gajic, Ognjen
2018-06-01
The definition of severe acute respiratory infection (SARI) - a respiratory illness with fever and cough, occurring within the past 10 days and requiring hospital admission - has not been evaluated for critically ill patients. Using integrated electronic health records data, we developed an automated search algorithm to identify SARI cases in a large cohort of critical care patients and evaluate patient outcomes. We conducted a retrospective cohort study of all admissions to a medical intensive care unit from August 2009 through March 2016. Subsets were randomly selected for deriving and validating a search algorithm, which was compared with temporal trends in laboratory-confirmed influenza to ensure that SARI was correlated with influenza. The algorithm was applied to the cohort to identify clinical differences for patients with and without SARI. For identifying SARI, the algorithm (sensitivity, 86.9%; specificity, 95.6%) outperformed billing-based searching (sensitivity, 73.8%; specificity, 78.8%). Automated searching correlated with peaks in laboratory-confirmed influenza. Adjusted for severity of illness, SARI was associated with more hospital, intensive care unit and ventilator days but not with death or dismissal to home. The search algorithm accurately identified SARI for epidemiologic study and surveillance.
Seismic velocity estimation from time migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, Maria Kourkina
2007-01-01
This is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. The construction of substructure models from these data is less and less reliable as the earth becomes horizontally nonconstant. Even mild lateral velocity variations can significantly distort subsurface structures on the time migrated images. Conversely, depth migration provides the potential for more accurate reconstructions, since it can handle significant lateral variations. However, this approach requires good input data, known as a 'velocity model'. We address the problem of estimating seismic velocities inside the earth, i.e., the problem of constructing a velocity model, which is necessary for obtaining seismic images in regular Cartesian coordinates. The main goals are to develop algorithms to convert time-migration velocities to true seismic velocities, and to convert time-migrated images to depth images in regular Cartesian coordinates. Our main results are three-fold. First, we establish a theoretical relation between the true seismic velocities and the 'time migration velocities' using paraxial ray tracing. Second, we formulate an appropriate inverse problem describing the relation between time migration velocities and depth velocities, and show that this problem is mathematically ill-posed, i.e., unstable to small perturbations. Third, we develop numerical algorithms to solve regularized versions of these equations which can be used to recover smoothed velocity variations. Our algorithms consist of efficient time-to-depth conversion algorithms, based on Dijkstra-like Fast Marching Methods, as well as level set and ray tracing algorithms for transforming Dix velocities into seismic velocities.
Our algorithms are applied to both two-dimensional and three-dimensional problems, and we test them on a collection of both synthetic examples and field data.
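To make the "Dijkstra-like" forward solver concrete, here is a coarse first-arrival sketch on a 2-D grid; it uses plain graph shortest paths rather than a true fast-marching update, so it is only a stand-in for the solvers developed in the work (grid, velocities, and source location are illustrative):

```python
import heapq
import numpy as np

def dijkstra_travel_time(slowness, src, dx=1.0):
    """First-arrival travel times from a point source on a 2-D grid.

    Graph shortest paths (Dijkstra) with 4-neighbour edges, each edge costing
    dx times the average slowness of its endpoints; a coarse stand-in for the
    fast-marching eikonal solvers discussed in the text."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        tt, (i, j) = heapq.heappop(heap)
        if tt > t[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                cand = tt + dx * 0.5 * (slowness[i, j] + slowness[ni, nj])
                if cand < t[ni, nj]:
                    t[ni, nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return t

# Two-layer medium: 2 km/s over 4 km/s; times from a surface source.
vel = np.full((60, 100), 2.0)
vel[30:, :] = 4.0
times = dijkstra_travel_time(1.0 / vel, src=(0, 50), dx=0.1)  # dx in km, times in s
```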
NASA Astrophysics Data System (ADS)
Miladinovich, D.; Datta-Barua, S.; Bust, G. S.; Ramirez, U.
2017-12-01
Understanding physical processes during storm time in the ionosphere-thermosphere (IT) system is limited, in part, due to the inability to obtain accurate estimates of IT states on a global scale. One reason for this inability is the sparsity of spatially distributed high quality data sets. Data assimilation is showing promise toward enabling global estimates by blending high quality observational data sets with established climate models. We are continuing development of an algorithm called Estimating Model Parameters for Ionospheric Reverse Engineering (EMPIRE) to enable assimilation of global datasets for storm time estimates of IT drivers. EMPIRE is a data assimilation algorithm that uses a Kalman filtering routine to ingest model and observational data. The EMPIRE algorithm is based on spherical harmonics which provide a spherically symmetric, smooth, continuous, and orthonormal set of basis functions suitable for a spherical domain such as Earth's IT region (200-600 km altitude). Once the basis function coefficients are determined, the newly fitted function represents the disagreement between observational measurements and models. We apply spherical harmonics to study the March 17, 2015 storm. Data sources include Fabry-Perot interferometer neutral wind measurements and global Ionospheric Data Assimilation 4 Dimensional (IDA4D) assimilated total electron content (TEC). Models include Weimer 2000 electric potential, International Geomagnetic Reference Field (IGRF) magnetic field, and Horizontal Wind Model 2014 (HWM14) neutral winds. We present the EMPIRE assimilation results of Earth's electric potential and thermospheric winds. We also compare EMPIRE storm time E cross B ion drift estimates to measured drifts produced from the Super Dual Auroral Radar Network (SuperDARN) and Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) measurement datasets. The analysis from these results will enable the generation of globally assimilated storm time IT state estimates for future studies. In particular, the ability to provide data assimilated estimation of the drivers of the IT system from high to low latitudes is a critical step toward forecasting the influence of geomagnetic storms on the near Earth space environment.
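A minimal sketch of fitting a smooth spherical-harmonic representation to scattered data of the kind described above, using scipy's sph_harm; the degree cut-off and the plain least-squares fit (rather than EMPIRE's Kalman filtering) are illustrative simplifications:

```python
import numpy as np
from scipy.special import sph_harm

def fit_sph_harm(lon_deg, lat_deg, values, lmax=6):
    """Least-squares spherical-harmonic fit to scattered geographic data.

    Returns complex coefficients c_{l,m} and the (l, m) list for degree <= lmax.
    The number of observations should exceed (lmax + 1)**2 basis functions."""
    theta = np.deg2rad(np.asarray(lon_deg))              # azimuth
    phi = np.deg2rad(90.0 - np.asarray(lat_deg))         # colatitude
    lm = [(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
    A = np.column_stack([sph_harm(m, l, theta, phi) for l, m in lm])
    coeff, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=complex), rcond=None)
    return coeff, lm

def eval_sph_harm(coeff, lm, lon_deg, lat_deg):
    theta = np.deg2rad(np.asarray(lon_deg))
    phi = np.deg2rad(90.0 - np.asarray(lat_deg))
    A = np.column_stack([sph_harm(m, l, theta, phi) for l, m in lm])
    return np.real(A @ coeff)

# Demo: 300 random stations sampling a pure degree-2 harmonic, recovered by the fit.
lon = np.random.uniform(-180.0, 180.0, 300)
lat = np.random.uniform(-90.0, 90.0, 300)
obs = np.cos(np.deg2rad(lat)) ** 2 * np.sin(2.0 * np.deg2rad(lon))
c, lm = fit_sph_harm(lon, lat, obs)
print(np.abs(eval_sph_harm(c, lm, lon, lat) - obs).max())
```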
Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm
2009-03-10
pp. 594-606. 8. Inverse Approaches to Drying of Thin Bodies With Significant Shrinkage Effects (with G. H. Kanevce, L. P. Kanevce, V. B. Mitrevski ...Kanevce, L. Kanevce, V. Mitrevski), ICCES: International Conference on Computational & Experimental Engineering and Sciences, Honolulu, Hawaii, March 17...Miami Beach, FL, April 16-18, 2007. 16. Inverse Approaches to Drying of Sliced Foods (with Kanevce, G. H., Kanevce, Lj. P., and Mitrevski, V. B
2007-03-24
ISS014-E-17880 (24 March 2007) --- This medium close-up view shows three bowling-ball-sized free-flying satellites called Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) in the Destiny laboratory of the International Space Station. SPHERES were designed to test control algorithms for spacecraft by performing autonomous rendezvous and docking maneuvers inside the station. The results are important for multi-body control and in designing constellation and array spacecraft configurations.
Three-Dimensional Shallow Water Acoustics
2015-09-30
converts the Helmholtz wave equation of elliptic type to a one-way wave equation of parabolic type. The conversion allows efficient marching solution ...algorithms for solving the boundary value problem posed by the Helmholtz equation. This can reduce significantly the requirement for computational...Fourier parabolic-equation sound propagation solution scheme," J. Acoust. Soc. Am, vol. 132, pp. EL61-EL67 (2012). [6] Y.-T. Lin, J.M. Collis and T.F
2015-03-25
A Security team walks the railroad tracks ahead of the Soyuz TMA-16M spacecraft as it is rolled out by train to the launch pad at the Baikonur Cosmodrome, Kazakhstan, Wednesday, March 25, 2015. NASA Astronaut Scott Kelly, and Russian Cosmonauts Mikhail Kornienko, and Gennady Padalka of the Russian Federal Space Agency (Roscosmos) are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit (NASA/Bill Ingalls)
Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model
NASA Astrophysics Data System (ADS)
Wawrzynczak, A.; Kopka, P.
2017-12-01
Realistic modeling of complicated phenomena such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time, and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for complex dynamic problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm, scanning the space of the diffusion coefficient parameters. The proposed algorithm is applied to create a model of the Forbush decrease observed by neutron monitors at Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
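The paper uses a Sequential Monte Carlo variant of ABC; the sketch below shows only the simpler rejection form of the same idea, with a cheap placeholder standing in for the Fokker-Planck forward model and an invented observation series. Everything named here is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(diffusion_coeff, t):
    """Placeholder stand-in for the (expensive) Fokker-Planck solver:
    an exponential recovery profile controlled by one parameter."""
    return 1.0 - 0.1 * np.exp(-diffusion_coeff * t)

t = np.linspace(0.0, 10.0, 50)
observed = forward_model(0.5, t) + 0.002 * rng.standard_normal(t.size)

def abc_rejection(n_samples, epsilon):
    """Basic rejection ABC: keep parameter draws whose simulated data
    lie within epsilon of the observations (RMSE as the distance)."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(0.01, 2.0)      # prior draw for the diffusion coefficient
        simulated = forward_model(theta, t)
        distance = np.sqrt(np.mean((simulated - observed) ** 2))
        if distance < epsilon:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection(n_samples=200, epsilon=0.005)
print(posterior.mean(), posterior.std())
```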
75 FR 6676 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
... Special Emphasis Panel; Small Business: Genes, Genomes, and Genetics. Date: March 4, 2010. Time: 8 a.m. to... Special Emphasis Panel; Small Business: Diabetes, Obesity and Nutrition. Date: March 4-5, 2010. Time: 10 a...
NASA Astrophysics Data System (ADS)
Churilova, T.; Suslin, V.; Berseneva, G.; Georgieva, L.
At present, chlorophyll and primary production models based on optical satellite data are widely used for the analysis and prediction of marine ecosystem state. However, the SeaWiFS algorithms providing the transformation of color images to chlorophyll maps give inaccurate estimates of chlorophyll "a" (Chl "a") concentration in the Black Sea: an overestimation of approximately two times in summer and an underestimation of roughly 1.5 times during the large diatom bloom in winter-spring. Development of a regional Chl "a" algorithm requires an estimation of the spectral characteristics of all light-absorbing components and their relationships with Chl "a" concentration. With this aim, bio-optical monitoring was organized at two fixed stations: in the deep-water central western part of the Black Sea and in shelf waters near the Crimea. Weekly monitoring in the deep-water region allowed determination of phytoplankton community succession: seasonal dynamics of size and taxonomic structure, with development of a large diatom bloom in March and of coccolithophores in June. The significant variability in pigment concentration and species content of phytoplankton is accompanied by high variability in the shape of the phytoplankton absorption spectra and in the values of Chl a-specific absorption coefficients. This variability had a seasonal character, depending mostly on the optical status of phytoplankton cells and partly on the taxonomic structure of phytoplankton. The pigment packaging parameter fluctuated from 0.64-0.68 (October-December) to 0.95-0.97 (April-May). The package effect depended on intracellular pigment concentration and on the size and geometry of cells, which change significantly over the year because of extremely different environmental conditions. The relationships between phytoplankton specific absorption coefficients (at 412, 443, 490, 510, 555, 678 nm) and Chl "a" concentration have been described by power functions. The contribution of detritus to total particulate absorption varied significantly and correlated with Chl "a" concentration. The main light-absorbing component in the Black Sea is colored dissolved organic matter (CDOM); its absorption at 443 nm is 50-70% of the total particulate and CDOM absorption. Special attention should be given to shelf regions. The comparison of bio-optical data for the open part with those for the shelf region showed pronounced differences: a) the relationships between phytoplankton specific absorption coefficients and Chl "a" concentrations (at 412, 443, 490, 510, 555 nm) are different; b) in the shelf waters, relative absorption by detritus was higher and weakly correlated with Chl "a" in comparison with the deep-water part of the Sea. The obtained relationships have been used to develop regional algorithms for estimating Chl "a" concentration. The new regional algorithm yields more accurate values of Chl "a" than the standard SeaWiFS algorithm.
Using multiplets to track volcanic processes at Kilauea Volcano, Hawaii
NASA Astrophysics Data System (ADS)
Thelen, W. A.
2011-12-01
Multiplets, or repeating earthquakes, are commonly observed at volcanoes, particularly those exhibiting unrest. At Kilauea, multiplets have been observed as part of long period (LP) earthquake swarms [Battaglia et al., 2003] and as volcano-tectonic (VT) earthquakes associated with dike intrusion [Rubin et al., 1998]. The focus of most previous studies has been on the precise location of the multiplets based on reviewed absolute locations, a process that can require extensive human intervention and post-processing. Conversely, the detection of multiplets and measurement of multiplet parameters can be done in real-time without human interaction with locations approximated by the stations that best record the multiplet. The Hawaiian Volcano Observatory (HVO) is in the process of implementing and testing an algorithm to detect multiplets in near-real time and to analyze certain metrics to provide enhanced interpretive insights into ongoing volcanic processes. Metrics such as multiplet percent of total seismicity, multiplet event recurrence interval, multiplet lifespan, average event amplitude, and multiplet event amplitude variability have been shown to be valuable in understanding volcanic processes at Bezymianny Volcano, Russia and Mount St. Helens, Washington and thus are tracked as part of the algorithm. The near real-time implementation of the algorithm can be triggered from an earthworm subnet trigger or other triggering algorithm and employs a MySQL database to store results, similar to an algorithm implemented by Got et al. [2002]. Initial results using this algorithm to analyze VT earthquakes along Kilauea's Upper East Rift Zone between September 2010 and August 2011 show that periods of summit pressurization coincide with ample multiplet development. Summit pressurization is loosely defined by high rates of seismicity within the summit and Upper East Rift areas, coincident with lava high stands in the Halema`uma`u lava lake. High percentages, up to 100%, of earthquakes occurring during summit pressurization were part of a multiplet. Percentages were particularly high immediately prior to the March 5 Kamoamoa eruption. Interestingly, many multiplets that were present prior to the Kamoamoa eruption were reactivated during summit pressurization occurring in late July 2011. At a correlation coefficient of 0.7, 90% of the multiplets during the study period had populations of 10 or fewer earthquakes. Between periods of summit pressurization, earthquakes that belong to multiplets rarely occur, even though magma is flowing through the Upper East Rift Zone. Battaglia, J., Got, J. L. and Okubo, P., 2003. Location of long-period events below Kilauea Volcano using seismic amplitudes and accurate relative relocation. Journal of Geophysical Research-Solid Earth, v.108 (B12) 2553. Got, J. L., P. Okubo, R. Machenbaum, and W. Tanigawa (2002), A real-time procedure for progressive multiplet relative relocation at the Hawaiian Volcano Observatory, Bulletin of the Seismological Society of America, 92(5), 2019. Rubin, A. M., D. Gillard, and J. L. Got (1998), A reinterpretation of seismicity associated with the January 1983 dike intrusion at Kilauea Volcano, Hawaii, Journal of Geophysical Research-Solid Earth, 103(B5), 10003.
76 FR 12980 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
... Emphasis Panel; Program Project: Cell Biology. Date: March 29-30, 2011. Time: 8 a.m. to 5 p.m. Agenda: To... Review Special Emphasis Panel; Program Project: NeuroAIDS Applications. Date: March 30-31, 2011. Time: 8...
78 FR 11212 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
... Integrated Review Group; AIDS Molecular and Cellular Biology Study Section. Date: March 11, 2013. Time: 8:00... Panel; Member Conflict: Cancer Biology. Date: March 11, 2013. Time: 1:00 p.m. to 2:00 p.m. Agenda: To...
Lundquist, J.D.; Flint, A.L.
2006-01-01
Historic streamflow records show that the onset of snowfed streamflow in the western United States has shifted earlier over the past 50 yr, and March 2004 was one of the earliest onsets on record. Record high temperatures occurred throughout the western United States during the second week of March, and U.S. Geological Survey (USGS) stream gauges throughout the area recorded early onsets of streamflow at this time. However, a set of nested subbasins in Yosemite National Park, California, told a more complicated story. In spite of high air temperatures, many streams draining high-elevation basins did not start flowing until later in the spring. Temperatures during early March 2004 were as high as temperatures in late March 2002, when streams at all of the monitored Yosemite basins began flowing at the same time. However, the March 2004 onset occurred before the spring equinox, when the sun was lower in the sky. Thus, shading and solar radiation differences played a much more important role in 2004, leading to differences in streamflow timing. These results suggest that as temperatures warm and spring melt shifts earlier in the season, topographic effects will play an even more important role than at present in determining snowmelt timing. © 2006 American Meteorological Society.
Capital market based warning indicators of bank runs
NASA Astrophysics Data System (ADS)
Vakhtina, Elena; Wosnitza, Jan Henrik
2015-01-01
In this investigation, we examine the univariate as well as the multivariate capabilities of the log-periodic [super-exponential] power law (LPPL) for the prediction of bank runs. The research is built upon daily CDS spreads of 40 international banks for the period from June 2007 to March 2010, i.e. at the heart of the global financial crisis. For this time period, 20 of the financial institutions received federal bailouts and are labeled as defaults while the remaining institutions are categorized as non-defaults. The employed multivariate pattern recognition approach represents a modification of the CORA3 algorithm. The approach is found to be robust regardless of reasonable changes of its inputs. Despite the fact that distinct alarm indices for banks do not clearly demonstrate predictive capabilities of the LPPL, the synchronized alarm indices confirm the multivariate discriminative power of LPPL patterns in CDS spread developments acknowledged by bootstrap intervals with 70% confidence level.
A Prototype MODIS-SSM/I Snow Mapping Algorithm
NASA Technical Reports Server (NTRS)
Tait, Andrew B.; Barton, Jonathan S.; Hall, Dorothy K.
1999-01-01
Data in the wavelength range 0.545 - 1.652 microns from the Moderate Resolution Imaging Spectroradiometer (MODIS), to be launched aboard the Earth Observing System (EOS) Terra in the fall of 1999, will be used to map daily global snow cover at 500m resolution. However, during darkness, or when the satellite's view of the surface is obscured by cloud, snow cover cannot be mapped using MODIS data. We show that during these conditions, it is possible to supplement the MODIS product by mapping the snow cover using passive microwave data from the Special Sensor Microwave Imager (SSM/I), albeit with much poorer resolution. For a 7-day time period in March 1999, a prototype MODIS snow-cover product was compared with a prototype MODIS-SSM/I product for the same area in the mid-western United States. The combined MODIS-SSM/I product mapped 9% more snow cover than the MODIS-only product.
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
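A strong-stability-preserving Runge-Kutta step of the kind mentioned above can be written compactly; the sketch below shows the common two-stage SSP-RK2 scheme applied to a toy first-order upwind advection right-hand side on a periodic grid. The flux choice and grid are illustrative assumptions, not the paper's Godunov/Riemann-solver machinery.

```python
import numpy as np

def ssp_rk2_step(u, dt, rhs):
    """One step of the two-stage strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form) commonly used for explicit time marching in
    finite-volume solvers."""
    u1 = u + dt * rhs(u)
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))

# illustrative RHS: linear advection with a first-order upwind flux on a ring
def upwind_advection_rhs(u, a=1.0, dx=0.02):
    return -a * (u - np.roll(u, 1)) / dx

x = np.linspace(0.0, 1.0, 50, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial bump
dt = 0.5 * 0.02                       # CFL number 0.5
for _ in range(100):
    u = ssp_rk2_step(u, dt, upwind_advection_rhs)
```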
A Viscoelastic Hybrid Shell Finite Element
NASA Technical Reports Server (NTRS)
Johnson, Arthur
1999-01-01
An elastic large-displacement thick-shell hybrid finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First-order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. Nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to numerically simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
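The internal strain variables evolve according to first-order ODEs marched with a predictor-corrector scheme; the sketch below shows a generic Heun (predictor-corrector) march for a single scalar internal variable driven by a prescribed elastic strain history. The relaxation form dq/dt = (eps - q)/tau is a stand-in assumption, not the element's actual constitutive equations.

```python
import numpy as np

def march_internal_variable(eps_of_t, tau, dt, n_steps, q0=0.0):
    """Heun (predictor-corrector) marching of a single internal strain
    variable q obeying dq/dt = (eps(t) - q) / tau, a generic stand-in for
    the element-level evolution equations described above."""
    q = q0
    history = [q]
    for k in range(n_steps):
        t = k * dt
        f0 = (eps_of_t(t) - q) / tau
        q_pred = q + dt * f0                       # predictor (explicit Euler)
        f1 = (eps_of_t(t + dt) - q_pred) / tau
        q = q + 0.5 * dt * (f0 + f1)               # corrector (trapezoidal)
        history.append(q)
    return np.array(history)

# ramp-and-hold elastic strain, loosely mimicking a stair-step loading test
eps = lambda t: min(t, 1.0)
q_hist = march_internal_variable(eps, tau=0.5, dt=0.01, n_steps=300)
```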
Finite-difference time-domain simulation of GPR data
NASA Astrophysics Data System (ADS)
Chen, How-Wei; Huang, Tai-Min
1998-10-01
Simulation of digital ground penetrating radar (GPR) wave propagation in two-dimensional (2-D) media is developed, tested, implemented, and applied using a time-domain staggered-grid finite-difference (FD) numerical method. Three types of numerical algorithms for constructing synthetic common-shot and constant-offset radar profiles, based on an actual transmitter-to-receiver configuration and on the exploding reflector concept, are demonstrated to mimic different types of radar survey geometries. Frequency-dependent attenuation is also incorporated to account for amplitude decay and time shift in the recorded responses. The algorithms are based on an explicit FD solution to Maxwell's curl equations. In addition, the first-order TE mode responses of wave propagation phenomena are considered because of the operating frequency of current GPR instruments. The staggered-grid technique is used to sample the fields and approximate the spatial derivatives with fourth-order FDs. The temporal derivatives are approximated by an explicit second-order difference time-marching scheme. By combining a paraxial approximation of the one-way wave equation (A2) with damping mechanisms (sponge filter), we propose a new composite absorbing boundary condition (ABC) algorithm that effectively absorbs both incoming and outgoing waves. To overcome the angle- and frequency-dependent characteristics of the absorbing behavior, each ABC has two types of absorption mechanism. The first ABC uses a modified Clayton and Enquist A2 condition. Moreover, a fixed and a floating A2 ABC that operate at one grid point are proposed. The second ABC uses a damping mechanism. By superimposing artificial damping and by altering the physical attenuation properties and impedance contrast of the media within the absorbing region, waves impinging on the boundary can be effectively attenuated, preventing them from reflecting back into the grid. The frequency-dependent characteristic of the damping mechanism can be used to adjust the width of the absorbing zone around the computational domain. By applying any combination of absorbing mechanisms, non-physical reflections from the computation domain boundary can be effectively minimized. The algorithm enables us to use very thin absorbing boundaries. The model can be parameterized through velocity, relative electrical permittivity (dielectric constants), electrical conductivity, magnetic permeability, loss tangent, Q values, and attenuation. According to this scheme, widely varying electrical properties of near-surface earth materials can be modeled. The capability of simulating common-source, constant-offset, and zero-offset gathers is also demonstrated through various synthetic examples. The synthetic cases for typical GPR applications include buried objects such as pipes of different materials, AVO analysis for groundwater exploration, archaeological site investigation, and stratigraphy studies. The algorithms are also applied to iterative modeling of GPR data acquired over a gymnasium construction site on the NCCU campus.
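To make the leapfrog time-marching idea concrete, here is a minimal one-dimensional staggered-grid FDTD update in vacuum. It illustrates only the explicit second-order time march; the paper's 2D TE-mode code, frequency-dependent attenuation, and composite absorbing boundaries are not reproduced.

```python
import numpy as np

# Minimal 1D staggered-grid FDTD update (Ez/Hy pair) in vacuum; a sketch of
# the leapfrog time marching only, with simple fixed (reflecting) ends.
c0, dx = 3e8, 1.0
dt = 0.5 * dx / c0                 # Courant number 0.5: within the 1D stability limit
mu0, eps0 = 4e-7 * np.pi, 8.854e-12
n_cells, n_steps = 400, 600
ez = np.zeros(n_cells)
hy = np.zeros(n_cells - 1)

for n in range(n_steps):
    hy += (dt / (mu0 * dx)) * (ez[1:] - ez[:-1])          # update H from curl E
    ez[1:-1] += (dt / (eps0 * dx)) * (hy[1:] - hy[:-1])   # update E from curl H
    ez[n_cells // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
```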
NASA Technical Reports Server (NTRS)
Murman, E. M. (Editor); Abarbanel, S. S. (Editor)
1985-01-01
Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.
Rhythmic Extended Kalman Filter for Gait Rehabilitation Motion Estimation and Segmentation.
Joukov, Vladimir; Bonnet, Vincent; Karg, Michelle; Venture, Gentiane; Kulic, Dana
2018-02-01
This paper proposes a method to enable the use of non-intrusive, small, wearable, and wireless sensors to estimate the pose of the lower body during gait and other periodic motions and to extract objective performance measures useful for physiotherapy. The Rhythmic Extended Kalman Filter (Rhythmic-EKF) algorithm is developed to estimate the pose, learn an individualized model of periodic movement over time, and use the learned model to improve pose estimation. The proposed approach learns a canonical dynamical system model of the movement during online observation, which is used to accurately model the acceleration during pose estimation. The canonical dynamical system models the motion as a periodic signal. The estimated phase and frequency of the motion also allow the proposed approach to segment the motion into repetitions and extract useful features, such as gait symmetry, step length, and mean joint movement and variance. The algorithm is shown to outperform the extended Kalman filter in simulation, on healthy participant data, and stroke patient data. For the healthy participant marching dataset, the Rhythmic-EKF improves joint acceleration and velocity estimates over regular EKF by 40% and 37%, respectively, estimates joint angles with 2.4° root mean squared error, and segments the motion into repetitions with 96% accuracy.
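For reference, a single predict/update cycle of a textbook extended Kalman filter looks like the sketch below; the Rhythmic-EKF augments this with a learned canonical dynamical system model of the periodic motion, which is not reproduced here. The toy constant-velocity usage at the end is purely illustrative.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a generic extended Kalman filter."""
    # predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy usage: track a joint angle with a constant-velocity model, observing the angle
dt = 0.01
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
F_jac = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: x[:1]
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([0.05]), f, F_jac, h, H_jac,
                1e-4 * np.eye(2), np.array([[1e-2]]))
```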
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-18
... DEPARTMENT OF THE TREASURY United States Mint Public Meeting ACTION: Notification of Citizens Coinage Advisory Committee March 1, 2011, Public Meeting. SUMMARY: Pursuant to United States Code, Title... (CCAC) public meeting scheduled for March 1, 2011. Date: March 1, 2011. Time: 10 a.m. to 1 p.m. Location...
75 FR 4831 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... Emphasis Panel Molecular Biology. Date: March 2, 2010. Time: 12 p.m. to 2 p.m. Agenda: To review and...; Special Topic: Diet and Physical Activity Methodologies. Date: March 3-4, 2010. Time: 8 a.m. to 5 p.m...
78 FR 14099 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... Pathobiology of Cardiovascular and Respiratory Systems. Date: March 26-27, 2013. Time: 8:00 a.m. to 5:00 p.m... Special Emphasis Panel; Member Conflict: Respiratory Diseases. Date: March 26-27, 2013. Time: 9:00 a.m. to...
Consistent satellite XCO2 retrievals from SCIAMACHY and GOSAT using the BESD algorithm
Heymann, J.; Reuter, M.; Hilker, M.; ...
2015-02-13
Consistent and accurate long-term data sets of global atmospheric concentrations of carbon dioxide (CO2) are required for carbon cycle and climate related research. However, global data sets based on satellite observations may suffer from inconsistencies originating from the use of products derived from different satellites as needed to cover a long enough time period. One reason for inconsistencies can be the use of different retrieval algorithms. We address this potential issue by applying the same algorithm, the Bremen Optimal Estimation DOAS (BESD) algorithm, to different satellite instruments, SCIAMACHY on-board ENVISAT (March 2002–April 2012) and TANSO-FTS on-board GOSAT (launched in January 2009), to retrieve XCO2, the column-averaged dry-air mole fraction of CO2. BESD was initially developed for SCIAMACHY XCO2 retrievals. Here, we present the first detailed assessment of the new GOSAT BESD XCO2 product. GOSAT BESD XCO2 is a product generated and delivered to the MACC project for assimilation into ECMWF's Integrated Forecasting System (IFS). We describe the modifications of the BESD algorithm needed in order to retrieve XCO2 from GOSAT and present detailed comparisons with ground-based observations of XCO2 from the Total Carbon Column Observing Network (TCCON). We discuss detailed comparison results between all three XCO2 data sets (SCIAMACHY, GOSAT and TCCON). The comparison results demonstrate the good consistency between the SCIAMACHY and the GOSAT XCO2. For example, we found a mean difference for daily averages of −0.60 ± 1.56 ppm (mean difference ± standard deviation) for GOSAT-SCIAMACHY (linear correlation coefficient r = 0.82), −0.34 ± 1.37 ppm (r = 0.86) for GOSAT-TCCON and 0.10 ± 1.79 ppm (r = 0.75) for SCIAMACHY-TCCON. The remaining differences between GOSAT and SCIAMACHY are likely due to non-perfect collocation (±2 h, 10° × 10° around TCCON sites), i.e., the observed air masses are not exactly identical, but likely also due to a still non-perfect BESD retrieval algorithm, which will be continuously improved in the future. Our overarching goal is to generate a satellite-derived XCO2 data set appropriate for climate and carbon cycle research covering the longest possible time period. We therefore also plan to extend the existing SCIAMACHY and GOSAT data set discussed here by using data from other missions (e.g., OCO-2, GOSAT-2, CarbonSat) in the future.
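The comparison statistics quoted above (mean difference, its standard deviation, and the linear correlation coefficient) are straightforward to reproduce for any pair of collocated daily averages; the numbers in the example below are invented and are not the actual GOSAT/SCIAMACHY values.

```python
import numpy as np

def compare_collocated(xco2_a, xco2_b):
    """Mean difference, its standard deviation, and the linear correlation
    coefficient for two collocated XCO2 series (same units, same days)."""
    a, b = np.asarray(xco2_a, float), np.asarray(xco2_b, float)
    diff = a - b
    r = np.corrcoef(a, b)[0, 1]
    return diff.mean(), diff.std(ddof=1), r

# illustrative numbers only (ppm), not the actual satellite records
gosat = np.array([394.1, 395.0, 393.7, 396.2, 394.8])
sciamachy = np.array([394.9, 395.3, 394.5, 396.8, 395.6])
print(compare_collocated(gosat, sciamachy))
```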
2015-03-21
Gagarin Cosmonaut Training Center (GCTC) Chief Epidemiologist Sergei Savin stands in the Cosmonaut Hotel lobby and instructs the media on how their access to the Expedition 43 prime and backup crews will be organized during media day, Saturday, March 21, 2015, Baikonur, Kazakhstan. Expedition 43 NASA Astronaut Scott Kelly, and Russian Cosmonauts Gennady Padalka, and Mikhail Kornienko of the Russian Federal Space Agency (Roscosmos) are scheduled to launch to the International Space Station in the Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
Late Summer Frazil Ice-Associated Algal Blooms around Antarctica
NASA Astrophysics Data System (ADS)
DeJong, Hans B.; Dunbar, Robert B.; Lyons, Evan A.
2018-01-01
Antarctic continental shelf waters are the most biologically productive in the Southern Ocean. Although satellite-derived algorithms report peak productivity during the austral spring/early summer, recent studies provide evidence for substantial late summer productivity that is associated with green colored frazil ice. Here we analyze daily Moderate Resolution Imaging Spectroradiometer satellite images for February and March from 2003 to 2017 to identify green colored frazil ice hot spots. Green frazil ice is concentrated in 11 of the 13 major sea ice production polynyas, with the greenest frazil ice in the Terra Nova Bay and Cape Darnley polynyas. While there is substantial interannual variability, green frazil ice is present over greater than 300,000 km2 during March. Late summer frazil ice-associated algal productivity may be a major phenomenon around Antarctica that is not considered in regional carbon and ecosystem models.
Palleschi, Giovanni; Mosiello, Giovanni; Iacovelli, Valerio; Musco, Stefania; Del Popolo, Giulio; Giannantoni, Antonella; Carbone, Antonio; Carone, Roberto; Tubaro, Andrea; De Gennaro, Mario; Marte, Antonio; Finazzi Agrò, Enrico
2018-03-01
OnabotulinumtoxinA (onaBNTa) for treating neurogenic detrusor overactivity (NDO) is widely used after its regulatory approval in adults. Although the administration of onaBNTa is still considered off-label in children, data have already been reported on its efficacy and safety. Nowadays, there is a lack of standardized protocols for the treatment of NDO with onaBNTa in adolescent patients in their transition from childhood to adulthood. To address this issue, a consensus panel was convened. A panel of leading urologists and urogynaecologists skilled in functional urology, neuro-urology, urogynaecology, and pediatric urology participated in a consensus-forming project using a Delphi method to reach national consensus on NDO-onaBNTa treatment in adolescence transitional care. In total, 11 experts participated. All panelists participated in the four phases of the consensus process. Consensus was reached if ≥70% of the experts agreed on recommendations. To facilitate a common understanding among all experts, a face-to-face consensus meeting was held in Rome in March 2015, followed by a teleconference in March 2017. By the end of the Delphi process, formal consensus was achieved for 100% of the items, and an algorithm was then developed. This manuscript represents the first report on onaBNTa in adolescents. Young adults should be treated as a distinct sub-population in policy, planning, programming, and research, as strongly sustained by national public health care. This consensus and the algorithm could support multidisciplinary communication, reduce the extent of variations in clinical practice, and optimize clinical decision making. © 2017 Wiley Periodicals, Inc.
75 FR 13344 - Revised Meeting Time for Citizens Coinage Advisory Committee March 2010 Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-19
... DEPARTMENT OF THE TREASURY United States Mint Revised Meeting Time for Citizens Coinage Advisory...: March 23, 2010. Time: 2:30 p.m. to 5 p.m. Location: 8th Floor Boardroom, United States Mint, 801 9th... call 202-354-7502 for the latest update on meeting time and room location. In accordance with 31 U.S.C...
45 CFR 162.920 - Availability of implementation specifications and operating rules.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: Eligibility and Benefit Real Time Companion Guide Rule, version 1.1.0, March 2011, as referenced in § 162.1203...: Eligibility and Benefits Real Time Response Time Rule, version 1.1.0, March 2011, as referenced in § 162.1203... operating rules. 162.920 Section 162.920 Public Welfare Department of Health and Human Services...
45 CFR 162.920 - Availability of implementation specifications and operating rules.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Eligibility and Benefit Real Time Companion Guide Rule, version 1.1.0, March 2011, as referenced in § 162.1203...: Eligibility and Benefits Real Time Response Time Rule, version 1.1.0, March 2011, as referenced in § 162.1203... operating rules. 162.920 Section 162.920 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES...
75 FR 17137 - Combined Notice of Filings No. 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
...-0132. Comment Date: 5 p.m. Eastern Time on Monday, March 29, 2010. Docket Numbers: RP10-490-000... effective 3/9/10. Filed Date: 03/12/2010. Accession Number: 20100315-0133. Comment Date: 5 p.m. Eastern Time.... Comment Date: 5 p.m. Eastern Time on Wednesday, March 24, 2010. Docket Numbers: RP10-493-000. Applicants...
75 FR 11159 - Combined Notice of Filings No. 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-10
... Volume No. 1. Filed Date: 02/26/2010. Accession Number: 20100226-0038. Comment Date: 5 p.m. Eastern Time... Number: 20100301-5225. Comment Date: 5 p.m. Eastern Time on Monday, March 15, 2010. Docket Numbers: RP10.... Accession Number: 20100301-5226. Comment Date: 5 p.m. Eastern Time on Monday, March 15, 2010. Docket Numbers...
Asteroid 2014 EC Flyby of Earth on March 6, 2014
2014-03-06
This graphic depicts the passage of asteroid 2014 EC past Earth on March 6, 2014. The asteroid's closest approach is at a distance equivalent to about one-sixth of the distance between Earth and the moon. The indicated times are in Universal Time.
75 FR 8977 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-26
... Panel; Translational Diabetes and Obesity. Date: March 16-17, 2010. Time: 8 a.m. to 5 p.m. Agenda: To... Special Emphasis Panel; UKGD Member Conflict SEP. Date: March 25, 2010. Time: 11 a.m. to 1 p.m. Agenda: To...
75 FR 6044 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
..., Member Conflicts: Lung Physiology. Date: March 9-10, 2010. Time: 9 a.m. to 6 p.m. Agenda: To review and..., Stress and Aging. Date: March 12, 2010. Time: 8 a.m. to 6 p.m. Agenda: To review and evaluate grant...
77 FR 6812 - Center For Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-09
...: Tools for Zebrafish Research. Date: March 6-7, 2012. Time: 9 a.m. to 6 p.m. Agenda: To review and... Panel; Neurobiology of Brain Disease and Aging. Date: March 6-7, 2012. Time: 10 a.m. to 5 p.m. Agenda...
75 FR 7486 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... Business: Cell Biology and Molecular Imaging. Date: March 11, 2010. Time: 8 a.m. to 6 p.m. Agenda: To... Panel, Genetics and Epigenetics in Health and Disease. Date: March 12-13, 2010. Time: 9 a.m. to 11 a.m...
A refined orbit for the satellite of asteroid (107) Camilla
NASA Astrophysics Data System (ADS)
Pajuelo, Myriam Virginia; Carry, Benoit; Vachier, Frederic; Berthier, Jerome; Descamp, Pascal; Merline, William J.; Tamblyn, Peter M.; Conrad, Al; Storrs, Alex; Margot, Jean-Luc; Marchis, Frank; Kervella, Pierre; Girard, Julien H.
2015-11-01
The satellite of the Cybele asteroid (107) Camilla was discovered in March 2001 using the Hubble Space Telescope (Storrs et al., 2001, IAUC 7599). From a set of 23 positions derived from adaptive optics observations obtained over three years with the ESO VLT, Keck-II and Gemini-North telescopes, Marchis et al. (2008, Icarus 196) determined its orbit to be nearly circular.In the new work reported here, we compiled, reduced, and analyzed observations at 39 epochs (including the 23 positions previously analyzed) by adding additional observations taken from data archives: HST in 2001; Keck in 2002, 2003, and 2009; Gemini in 2010; and VLT in 2011. The present dataset hence contains twice as many epochs as the prior analysis and covers a time span that is three times longer (more than a decade).We use our orbit determination algorithm Genoid (GENetic Orbit IDentification), a genetic based algorithm that relies on a metaheuristic method and a dynamical model of the Solar System (Vachier et al., 2012, A&A 543). The method uses two models: a simple Keplerian model to minimize the search-time for an orbital solution, exploring a wide space of solutions; and a full N-body problem that includes the gravitational field of the primary asteroid up to 4th order.The orbit we derive fits all 39 observed positions of the satellite with an RMS residual of only milli-arcseconds, which corresponds to sub-pixel accuracy. We found the orbit of the satellite to be circular and roughly aligned with the equatorial plane of Camilla. The refined mass of the system is (12 ± 1) x 10^18 kg, for an orbital period of 3.71 days.We will present this improved orbital solution of the satellite of Camilla, as well as predictions for upcoming stellar occultation events.
Full three-dimensional investigation of structural contact interactions in turbomachines
NASA Astrophysics Data System (ADS)
Legrand, Mathias; Batailly, Alain; Magnain, Benoît; Cartraud, Patrice; Pierre, Christophe
2012-05-01
Minimizing the operating clearance between rotating bladed-disks and stationary surrounding casings is a primary concern in the design of modern turbomachines since it may advantageously affect their energy efficiency. This technical choice possibly leads to interactions between elastic structural components through direct unilateral contact and dry friction, events which are now accepted as normal operating conditions. Subsequent nonlinear dynamical behaviors of such systems are commonly investigated with simplified academic models mainly due to theoretical difficulties and numerical challenges involved in non-smooth large-scale realistic models. In this context, the present paper introduces an adaptation of a full three-dimensional contact strategy for the prediction of potentially damaging motions that would imply highly demanding computational efforts for the targeted aerospace application in an industrial context. It combines a smoothing procedure including bicubic B-spline patches together with a Lagrange multiplier based contact strategy within an explicit time-marching integration procedure preferred for its versatility. The proposed algorithm is first compared on a benchmark configuration against the more elaborated bi-potential formulation and the commercial software Ansys. The consistency of the provided results and the low energy fluctuations of the introduced approach underlines its reliable numerical properties. A case study featuring blade-tip/casing contact on industrial finite element models is then proposed: it incorporates component mode synthesis and the developed three-dimensional contact algorithm for investigating structural interactions occurring within a turbomachine compressor stage. Both time results and frequency-domain analysis emphasize the practical use of such a numerical tool: detection of severe operating conditions and critical rotational velocities, time-dependent maps of stresses acting within the structures, parameter studies and blade design tests.
Suprathermal O(+) and H(+) ion behavior during the March 22, 1979 (CDAW 6), substorms
NASA Technical Reports Server (NTRS)
Ipavich, F. M.; Galvin, A. B.; Gloeckler, G.; Scholer, M.; Hovestadt, D.; Klecker, B.
1985-01-01
The present investigation has the objective to report on the behavior of energetic (approximately 130 keV) O(+) ions in the earth's plasma sheet, taking into account observations by the ISEE 1 spacecraft during a magnetically active time interval encompassing two major substorms on March 22, 1979. Attention is also given to suprathermal H(+) and He(++) ions. ISEE 1 plasma sheet observations of the proton and alpha particle phase space densities as a function of energy per charge during the time interval 0933-1000 UT on March 22, 1979 are considered along with the proton phase space density versus energy in the energy interval approximately 10 to 70 keV for the selected time periods 0933-1000 UT (presubstorm) and 1230-1243 UT (recovery phase) during the 1055 substorm on March 22, 1979. A table listing the proton energy density for presubstorm and recovery periods is also provided.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... the enlargement of time. On March 16, 2011, respondents filed a joint motion for an enlargement of the... order granting respondents' motion for an enlargement of time and making responses due on March 28, 2011...
Time series clustering analysis of health-promoting behavior
NASA Astrophysics Data System (ADS)
Yang, Chi-Ta; Hung, Yu-Shiang; Deng, Guang-Feng
2013-10-01
Health promotion must be emphasized to achieve the World Health Organization goal of health for all. Since the global population is aging rapidly, ComCare elder health-promoting service was developed by the Taiwan Institute for Information Industry in 2011. Based on the Pender health promotion model, ComCare service offers five categories of health-promoting functions to address the everyday needs of seniors: nutrition management, social support, exercise management, health responsibility, stress management. To assess the overall ComCare service and to improve understanding of the health-promoting behavior of elders, this study analyzed health-promoting behavioral data automatically collected by the ComCare monitoring system. In the 30638 session records collected for 249 elders from January, 2012 to March, 2013, behavior patterns were identified by fuzzy c-mean time series clustering algorithm combined with autocorrelation-based representation schemes. The analysis showed that time series data for elder health-promoting behavior can be classified into four different clusters. Each type reveals different health-promoting needs, frequencies, function numbers and behaviors. The data analysis result can assist policymakers, health-care providers, and experts in medicine, public health, nursing and psychology and has been provided to Taiwan National Health Insurance Administration to assess the elder health-promoting behavior.
Validation of classification algorithms for childhood diabetes identified from administrative data.
Vanderloo, Saskia E; Johnson, Jeffrey A; Reimer, Kim; McCrea, Patrick; Nuernberger, Kimberly; Krueger, Hans; Aydede, Sema K; Collet, Jean-Paul; Amed, Shazhan
2012-05-01
Type 1 diabetes is the most common form of diabetes among children; however, the proportion of cases of childhood type 2 diabetes is increasing. In Canada, the National Diabetes Surveillance System (NDSS) uses administrative health data to describe trends in the epidemiology of diabetes, but does not specify diabetes type. The objective of this study was to validate algorithms to classify diabetes type in children <20 yr identified using the NDSS methodology. We applied the NDSS case definition to children living in British Columbia between 1 April 1996 and 31 March 2007. Through an iterative process, four potential classification algorithms were developed based on demographic characteristics and drug-utilization patterns. Each algorithm was then validated against a gold-standard clinical database. Algorithms based primarily on an age rule (i.e., age <10 at diagnosis categorized as type 1 diabetes) were most sensitive in the identification of type 1 diabetes; algorithms with restrictions on drug utilization (i.e., no prescriptions for insulin ± glucose monitoring strips categorized as type 2 diabetes) were most sensitive for identifying type 2 diabetes. One algorithm was identified as having the optimal balance of sensitivity (Sn) and specificity (Sp) for the identification of both type 1 (Sn: 98.6%; Sp: 78.2%; PPV: 97.8%) and type 2 diabetes (Sn: 83.2%; Sp: 97.5%; PPV: 73.7%). Demographic characteristics in combination with drug-utilization patterns can be used to differentiate diabetes type among cases of pediatric diabetes identified within administrative health databases. Validation of similar algorithms in other regions is warranted. © 2011 John Wiley & Sons A/S.
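A rule of the general shape described above, together with the reported validation metrics, might look like the following sketch; the age threshold and drug-utilization rule are paraphrased assumptions for illustration, not the exact NDSS classification algorithms.

```python
import numpy as np

def classify_diabetes_type(age_at_dx, any_insulin_rx):
    """Illustrative rule in the spirit of the abstract (not the validated
    algorithm itself): diagnosis before age 10 is labelled type 1;
    otherwise, cases with no insulin prescriptions are labelled type 2."""
    if age_at_dx < 10:
        return "type1"
    return "type2" if not any_insulin_rx else "type1"

def sensitivity_specificity_ppv(predicted, truth, positive="type1"):
    """Sensitivity, specificity, and positive predictive value against a
    gold-standard label."""
    predicted, truth = np.asarray(predicted), np.asarray(truth)
    tp = np.sum((predicted == positive) & (truth == positive))
    fn = np.sum((predicted != positive) & (truth == positive))
    fp = np.sum((predicted == positive) & (truth != positive))
    tn = np.sum((predicted != positive) & (truth != positive))
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# tiny made-up cohort for illustration
pred = [classify_diabetes_type(a, rx) for a, rx in [(6, True), (14, False), (17, True), (8, True)]]
print(sensitivity_specificity_ppv(pred, ["type1", "type2", "type1", "type1"]))
```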
Geochemical Evidence for Calcification from the Drake Passage Time-series
NASA Astrophysics Data System (ADS)
Munro, D. R.; Lovenduski, N. S.; Takahashi, T.; Stephens, B. B.; Newberger, T.; Dierssen, H. M.; Randolph, K. L.; Freeman, N. M.; Bushinsky, S. M.; Key, R. M.; Sarmiento, J. L.; Sweeney, C.
2016-12-01
Satellite imagery suggests high particulate inorganic carbon within a circumpolar region north of the Antarctic Polar Front (APF), but in situ evidence for calcification in this region is sparse. Given the geochemical relationship between calcification and total alkalinity (TA), seasonal changes in surface concentrations of potential alkalinity (PA), which accounts for changes in TA due to variability in salinity and nitrate, can be used as a means to evaluate satellite-based calcification algorithms. Here, we use surface carbonate system measurements collected from 2002 to 2016 for the Drake Passage Time-series (DPT) to quantify rates of calcification across the Antarctic Circumpolar Current. We also use vertical PA profiles collected during two cruises across the Drake Passage in March 2006 and September 2009 to estimate the calcium carbonate to organic carbon export ratio. We find geochemical evidence for calcification both north and south of the APF with the highest rates observed north of the APF. Calcification estimates from the DPT are compared to satellite-based estimates and estimates based on hydrographic data from other regions around the Southern Ocean.
Capabilities of a Global 3D MHD Model for Monitoring Extremely Fast CMEs
NASA Astrophysics Data System (ADS)
Wu, C. C.; Plunkett, S. P.; Liou, K.; Socker, D. G.; Wu, S. T.; Wang, Y. M.
2015-12-01
Since the start of the space era, spacecraft have recorded many extremely fast coronal mass ejections (CMEs) which have resulted in severe geomagnetic storms. Accurate and timely forecasting of the space weather effects of these events is important for protecting expensive space assets and astronauts and avoiding communications interruptions. Here, we will introduce a newly developed global, three-dimensional (3D) magnetohydrodynamic (MHD) model (G3DMHD). The model takes the solar magnetic field maps at 2.5 solar radii (Rs) and interpolates the solar wind plasma and field out to 18 Rs using the algorithm of Wang and Sheeley (1990, JGR). The output is used as the inner boundary condition for a 3D MHD model. The G3DMHD model is capable of simulating (i) extremely fast CME events with propagation speeds faster than 2500 km/s; and (ii) multiple CME events in sequence or simultaneously. We will demonstrate the simulation results (and comparison with in-situ observation) for the fastest CME on record on 23 July 2012, the shortest transit time in March 1976, and the well-known historic Carrington 1859 event.
Studies of the Antarctic Sea Ice Edges and Ice Extents from Satellite and Ship Observations
NASA Technical Reports Server (NTRS)
Worby, Anthony P.; Comiso, Josefino C.
2003-01-01
Passive-microwave derived ice edge locations in Antarctica are assessed against other satellite data as well as in situ observations of ice edge location made between 1989 and 2000. The passive microwave data generally agree with satellite and ship data but the ice concentration at the observed ice edge varies greatly with averages of 14% for the TEAM algorithm and 19% for the Bootstrap algorithm. The comparisons of passive microwave with the field data show that in the ice growth season (March - October) the agreement is extremely good, with r² values of 0.9967 and 0.9797 for the Bootstrap and TEAM algorithms respectively. In the melt season however (November - February) the passive microwave ice edge is typically 1-2 degrees south of the observations due to the low concentration and saturated nature of the ice. Sensitivity studies show that these results can have significant impact on trend and mass balance studies of the sea ice cover in the Southern Ocean.
Fast marching methods for the continuous traveling salesman problem.
Andrews, June; Sethian, J A
2007-01-23
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M · N log N), where M is the number of cities and N is the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
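A discrete analogue of the heuristic is easy to sketch: compute a shortest-path distance map from each city over the spatially varying cost (here with Dijkstra on a grid, standing in for the fast-marching eikonal solve) and then build a greedy tour from those maps. The cost field and city positions below are illustrative.

```python
import heapq
import numpy as np

def grid_distance_map(cost, source):
    """Dijkstra on a 4-connected grid: a discrete stand-in for the
    fast-marching solve of the underlying shortest-path (eikonal) problem."""
    ny, nx = cost.shape
    dist = np.full((ny, nx), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

def nearest_neighbour_tour(cities, cost):
    """Greedy tour over city-to-city travel costs; one distance map per city,
    matching the O(M N log N) flavour quoted in the abstract."""
    maps = [grid_distance_map(cost, c) for c in cities]
    unvisited, tour = set(range(1, len(cities))), [0]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda k: maps[here][cities[k]])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cost = np.ones((40, 40))
cost[15:25, 10:30] = 5.0                      # an expensive region to route around
cities = [(2, 2), (35, 5), (20, 38), (5, 30)]
print(nearest_neighbour_tour(cities, cost))
```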
Lyles, Courtney Rees; Godbehere, Andrew; Le, Gem; El Ghaoui, Laurent; Sarkar, Urmimala
2016-06-10
It is difficult to synthesize the vast amount of textual data available from social media websites. Capturing real-world discussions via social media could provide insights into individuals' opinions and the decision-making process. We conducted a sequential mixed methods study to determine the utility of sparse machine learning techniques in summarizing Twitter dialogues. We chose a narrowly defined topic for this approach: cervical cancer discussions over a 6-month time period surrounding a change in Pap smear screening guidelines. We applied statistical methodologies, known as sparse machine learning algorithms, to summarize Twitter messages about cervical cancer before and after the 2012 change in Pap smear screening guidelines by the US Preventive Services Task Force (USPSTF). All messages containing the search terms "cervical cancer," "Pap smear," and "Pap test" were analyzed during: (1) January 1-March 13, 2012, and (2) March 14-June 30, 2012. Topic modeling was used to discern the most common topics from each time period and to determine the singular value criterion for each topic. The results were then qualitatively coded from the top 10 relevant topics to determine the efficiency of the clustering method in grouping distinct ideas, and how the discussion differed before vs. after the change in guidelines. This machine learning method was effective in grouping the relevant discussion topics about cervical cancer during the respective time periods (~20% overall irrelevant content in both time periods). Qualitative analysis determined that a significant portion of the top discussion topics in the second time period directly reflected the USPSTF guideline change (e.g., "New Screening Guidelines for Cervical Cancer"), and many topics in both time periods addressed basic screening promotion and education (e.g., "It is Cervical Cancer Awareness Month! Click the link to see where you can receive a free or low cost Pap test."). It was demonstrated that machine learning tools can be useful in cervical cancer prevention and screening discussions on Twitter. This method allowed us to show that there is significant publicly available information about cervical cancer screening on social media sites. Moreover, we observed a direct impact of the guideline change within the Twitter messages.
NASA Astrophysics Data System (ADS)
Liu, Xiliang; Lu, Feng; Zhang, Hengcai; Qiu, Peiyuan
2013-06-01
It is a pressing task to estimate real-time travel time on road networks reliably in big cities, even though floating car data have been widely used to reflect real traffic. Currently, floating car data are mainly used to estimate real-time traffic conditions on road segments, and have done little for turn delay estimation. However, turn delays at road intersections contribute significantly to the overall travel time on road networks in modern cities. In this paper, we present a technical framework to calculate turn delays on road networks with floating car data. First, the original floating car data collected with GPS-equipped taxis were cleaned and matched to a street map with a distributed system based on Hadoop and MongoDB. Secondly, the refined trajectory data set was distributed among 96 time intervals (from 0:00 to 23:59). All of the intersections where the trajectories passed were connected with the trajectory segments and constituted an experiment sample, while the intersections on arterial streets were specially selected to form another experiment sample. Thirdly, a principal curve-based algorithm was presented to estimate the turn delays at the given intersections. The proposed algorithm not only fits the real traffic conditions statistically, but is also insensitive to data sparseness and missing data problems, which currently are almost inevitable with the widely used floating car data collecting technology. We adopted the floating car data collected from March to June 2011 in Beijing, which contain more than 2.6 million trajectories generated from about 20000 GPS-equipped taxicabs and account for about 600 GB in data volume. The result shows that the principal curve-based algorithm we presented outperforms traditional methods, such as mean- and median-based approaches, with higher estimation accuracy (about 10%-15% better in RMSE), and reflects the changing trend of traffic congestion. With the estimation result for the travel delay at intersections, we analyzed the spatio-temporal distribution of turn delays in three time scenarios (0:00-0:15, 8:15-8:30 and 12:00-12:15). It indicates that during a single trip in Beijing, on average 60% of the travel time on the road networks is wasted at intersections, and this situation is even worse in daytime. Although the 400 main intersections account for only 2.7% of all the intersections, they occupy about 18% of the travel time.
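The principal-curve estimator itself is not reproduced here, but the mean- and median-based baselines it is compared against amount to simple aggregation of observed turn delays by intersection and time interval, as in the hypothetical sketch below (pandas; all records invented).

```python
import pandas as pd

# Hypothetical per-traversal records extracted from map-matched trajectories:
# (intersection id, 15-minute interval index, observed turn delay in seconds).
records = pd.DataFrame({
    "intersection": [101, 101, 101, 102, 102, 101, 102],
    "interval":     [33, 33, 33, 33, 33, 34, 34],
    "delay_s":      [42.0, 55.0, 300.0, 12.0, 18.0, 61.0, 15.0],
})

# Mean- and median-based baselines; the long outlier (300 s) hints at why a
# more robust estimator matters for sparse, noisy floating car data.
baseline = (records.groupby(["intersection", "interval"])["delay_s"]
                   .agg(["mean", "median", "count"]))
print(baseline)
```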
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-21
... stating good cause for the enlargement of time. On March 16, 2011, respondents Effervescent and Double Diamond filed a joint motion for an enlargement of the time for filing petitions for review of the remand ID. On March 18, 2011, the Commission issued an order granting the motion for an enlargement of time...
Enhanced round robin CPU scheduling with burst time based time quantum
NASA Astrophysics Data System (ADS)
Indusree, J. R.; Prabadevi, B.
2017-11-01
Process scheduling is a very important functionality of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm, and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes which are already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ) process scheduling, which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
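A small simulation makes the role of the quantum visible; the sketch below runs a plain round-robin schedule with a fixed quantum and with a quantum derived from the burst times already in the ready queue. The mean-burst rule used here is one plausible choice for illustration, not necessarily the exact ERRBTQ formula.

```python
from collections import deque
from statistics import mean

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling and report the average waiting time.
    All processes are assumed to arrive at time zero."""
    remaining = list(burst_times)
    done_time = [0] * len(burst_times)
    ready, clock = deque(range(len(burst_times))), 0
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)        # not finished: back to the end of the queue
        else:
            done_time[p] = clock   # finished: record completion time
    waits = [done_time[i] - burst_times[i] for i in range(len(burst_times))]
    return mean(waits)

bursts = [24, 3, 3, 7, 12]
fixed_q = 4                        # conventional fixed quantum
adaptive_q = round(mean(bursts))   # one plausible burst-time-based quantum
print(round_robin(bursts, fixed_q), round_robin(bursts, adaptive_q))
```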
Long-term variability in Northern Hemisphere snow cover and associations with warmer winters
McCabe, Gregory J.; Wolock, David M.
2010-01-01
A monthly snow accumulation and melt model is used with gridded monthly temperature and precipitation data for the Northern Hemisphere to generate time series of March snow-covered area (SCA) for the period 1905 through 2002. The time series of estimated SCA for March is verified by comparison with previously published time series of SCA for the Northern Hemisphere. The time series of estimated Northern Hemisphere March SCA shows a substantial decrease since about 1970, and this decrease corresponds to an increase in mean winter Northern Hemisphere temperature. The increase in winter temperature has caused a decrease in the fraction of precipitation that occurs as snow and an increase in snowmelt for some parts of the Northern Hemisphere, particularly the mid-latitudes, thus reducing snow packs and March SCA. In addition, the increase in winter temperature and the decreases in SCA appear to be associated with a contraction of the circumpolar vortex and a poleward movement of storm tracks, resulting in decreased precipitation (and snow) in the low- to mid-latitudes and an increase in precipitation (and snow) in high latitudes. If Northern Hemisphere winter temperatures continue to warm as they have since the 1970s, then March SCA will likely continue to decrease.
Spring Arrives in Time for a March on the Monocacy | Poster
Clouds and rain finally gave way this week just in time for Occupational Health Services’ March on the Monocacy event. More than 20 people came out to enjoy the beautiful spring day and walk the 1.4-mile course at the Advanced Technology Research Facility.
78 FR 13364 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
...: Multidisciplinary Studies of HIV/AIDS and Aging. Date: March 21, 2013 Time: 9:00 a.m. to 5:00 p.m. Agenda: To review... Physiology, Pathology and Pharmacology. Date: March 21, 2013. Time: 2:00 p.m. to 4:00 p.m. Agenda: To review...
77 FR 9997 - NASA Advisory Council; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice (12-016)] NASA Advisory Council; Meeting... Space Administration announces a meeting of the NASA Advisory Council (NAC). DATES: Thursday, March 8, 2012, 8 a.m.-5 p.m., local time and Friday, March 9, 2012, 8 a.m.-12 p.m., local time. ADDRESSES: NASA...
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
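As a rough illustration of the workflow described above, the sketch below nests a coarse-to-fine multi-grid loop inside a loop over time windows, carrying the final model of one window forward as the starting model of the next. The callbacks (`split_into_windows`, `invert_on_mesh`, `refine`) are hypothetical placeholders, not part of tomoQuake.

```python
# Minimal sketch of the time-marching multi-grid loop described above.
# The callbacks are hypothetical stand-ins for the actual tomography machinery.

def time_marching_multigrid(data, starting_model, coarse_mesh, n_levels,
                            split_into_windows, invert_on_mesh, refine):
    model = starting_model
    for window in split_into_windows(data):      # temporal time-marching
        mesh = coarse_mesh
        for _ in range(n_levels):                # spatial multi-grid, coarse to fine
            # Update the model with this window's data, starting from the model
            # inherited from the previous window (or the previous, coarser level).
            model = invert_on_mesh(window, mesh, initial_model=model)
            mesh = refine(mesh)
        # `model` now serves as the high-accuracy starting model for the next window.
    return model
```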
Ariel, Gil; Ophir, Yotam; Levi, Sagi; Ben-Jacob, Eshel; Ayali, Amir
2014-01-01
The principal interactions leading to the emergence of order in swarms of marching locust nymphs were studied both experimentally, using small groups of marching locusts in the lab, and using computer simulations. We utilized a custom tracking algorithm to reveal the fundamental animal-animal interactions leading to collective motion. Uncovering this behavior introduced a new agent-based modeling approach in which pause-and-go motion is pivotal. The behavioral and modeling findings are largely based on motion-related visual sensory inputs obtained by the individual locust. Results suggest a generic principle, in which intermittent animal motion can be considered as a sequence of individual decisions as animals repeatedly reassess their situation and decide whether or not to swarm. This interpretation implies, among other things, some generic characteristics regarding the build-up and emergence of collective order in swarms: in particular, that order and disorder are generic meta-stable states of the system, suggesting that the emergence of order is kinetic and does not necessarily require external environmental changes. This work calls for further experimental as well as theoretical investigation of the neural mechanisms underlying locust coordinative behavior. PMID:24988464
Recent Developments in Grid Generation and Force Integration Technology for Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1994-01-01
Recent developments in algorithms and software tools for generating overset grids for complex configurations are described. These include the overset surface grid generation code SURGRD and version 2.0 of the hyperbolic volume grid generation code HYPGEN. The SURGRD code is in beta test mode; its new features include the capability to march over a collection of panel networks, a variety of ways to control the side boundaries and the marching step sizes and distance, a more robust projection scheme, and an interpolation option. New features in version 2.0 of HYPGEN include a wider range of boundary condition types. The code also allows the user to specify different marching step sizes and distances for each point on the surface grid. A scheme that takes into account the overlapped zones on the body surface for the purpose of force and moment computation is also briefly described. The process involves the following two software modules: MIXSUR, a composite grid generation module that produces a collection of quadrilaterals and triangles on which pressure and viscous stresses are to be integrated, and OVERINT, a forces and moments integration module.
The Canadian Ozone Watch and UV-B advisory programs
NASA Technical Reports Server (NTRS)
Kerr, J. B.; Mcelroy, C. T.; Tarasick, D. W.; Wardle, D. I.
1994-01-01
The Ozone Watch, initiated in March, 1992, is a weekly bulletin describing the state of the ozone layer over Canada. The UV-B advisory program, which started in May, 1992, produces daily forecasts of clear-sky UV-B radiation. The forecast procedures use daily ozone measurements from the eight-station monitoring network, the output from the Canadian operational forecast model and a UV-B algorithm based on three years of spectral UV-B measurements with the Brewer spectrophotometer.
MABEL Iceland 2012 Flight Report
NASA Technical Reports Server (NTRS)
Cook, William B.; Brunt, Kelly M.; De Marco, Eugenia L.; Reed, Daniel L.; Neumann, Thomas A.; Markus, Thorsten
2017-01-01
In March and April 2012, NASA conducted an airborne lidar campaign based out of Keflavik, Iceland, in support of Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) algorithm development. The survey targeted the Greenland Ice Sheet, Iceland ice caps, and sea ice in the Arctic Ocean during the winter season. Ultimately, the mission, MABEL Iceland 2012, including checkout and transit flights, conducted 14 science flights, for a total of over 80 flight hours over glaciers, icefields, and sea ice.
NASA Technical Reports Server (NTRS)
Luo, Yali; Xu, Kuan-Man; Wielicki, Bruce A.; Wong, Takmeng; Eitzen, Zachary A.
2007-01-01
The present study evaluates the ability of a cloud-resolving model (CRM) to simulate the physical properties of tropical deep convective cloud objects identified from a Clouds and the Earth's Radiant Energy System (CERES) data product. The emphasis of this study is the comparisons among the small-, medium- and large-size categories of cloud objects observed during March 1998 and between the large-size categories of cloud objects observed during March 1998 (strong El Niño) and March 2000 (weak La Niña). Results from the CRM simulations are analyzed in a way that is consistent with the CERES retrieval algorithm and they are averaged to match the scale of the CERES satellite footprints. Cloud physical properties are analyzed in terms of their summary histograms for each category. It is found that there is a general agreement in the overall shapes of all cloud physical properties between the simulated and observed distributions. Each cloud physical property produced by the CRM also exhibits different degrees of disagreement with observations over different ranges of the property. The simulated cloud tops are generally too high and cloud top temperatures are too low except for the large-size category of March 1998. The probability densities of the simulated top-of-the-atmosphere (TOA) albedos for all four categories are underestimated for high albedos, while those of cloud optical depth are overestimated at its lowest bin. These disagreements are mainly related to uncertainties in the cloud microphysics parameterization and inputs such as cloud ice effective size to the radiation calculation. Summary histograms of cloud optical depth and TOA albedo from the CRM simulations of the large-size category of cloud objects do not differ significantly between the March 1998 and 2000 periods, consistent with the CERES observations. However, the CRM is unable to reproduce the significant differences in the observed cloud top height while it overestimates the differences in the observed outgoing longwave radiation and cloud top temperature between the two periods. Comparisons between the CRM results and the observations for most parameters in March 1998 consistently show that both the simulations and observations have larger differences between the large- and small-size categories than between the large- and medium-size, or between the medium- and small-size categories. However, the simulated cloud properties do not change as much with size as observed. These disagreements are likely related to the spatial averaging of the forcing data and the mismatch in time and in space between the numerical weather prediction model from which the forcing data are produced and the CERES observed cloud systems.
El-Sayed, Abdulrahman; Hadley, Craig; Galea, Sandro
2008-01-01
To assess whether the incidence of adverse birth outcomes among Arab Americans in Michigan changed after September 11, 2001. Birth data were collected on all births in Michigan from September 11, 2000, to March 11, 2001, and from September 11, 2001, to March 11, 2002. Self-reported ancestry and a name algorithm were used to determine Arab American ethnicity. Unadjusted and adjusted logistic regression analysis was used to assess the relationship between birth before/after September 11 and birth outcomes. Main outcome measures were low birth weight (LBW), very low birth weight (VLBW), and preterm birth (PTB). We observed no association between birth before/after September 11 and risk of adverse birth outcomes among Arab Americans in Michigan by using either the name algorithm or self-reported ancestry to determine Arab American ethnicity. Arab name was significantly associated with lower risk of VLBW and PTB in adjusted and unadjusted models. Arab ancestry was significantly associated with lower risk of VLBW and PTB in adjusted and unadjusted models and significantly associated with lower risk of LBW in an unadjusted model. In contrast to previous findings in California, we observed no difference in adverse birth outcomes before and after the events of September 11, 2001, among Arab Americans in Michigan. Arab American ethnicity is associated with lower risk of adverse birth outcomes compared to other racial/ethnic groups.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
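For readers unfamiliar with the 'delta' form, the sketch below shows the generic defect-correction idea: iterate on a correction obtained by applying an approximate operator to the residual of the standard-form equations. It is a minimal, generic illustration (here with a simple diagonal approximation), not the authors' approximate-factorization scheme.

```python
import numpy as np

def incremental_iterative_solve(A, b, M_solve, x0, tol=1e-10, max_iter=200):
    """Defect-correction ('delta' form) iteration: each step obtains the
    correction dx from the residual r = b - A x using an approximate operator
    M, supplied here as a solver callback. Generic sketch only."""
    x = x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                  # residual of the *standard* form
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        dx = M_solve(r)                # approximate solve for the correction
        x = x + dx                     # incremental update
    return x

# Example: use the diagonal of A as the approximate operator (Jacobi-like).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = incremental_iterative_solve(A, b, lambda r: r / np.diag(A), np.zeros(2))
print(x, A @ x - b)
```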
Vessel network detection using contour evolution and color components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Medeiros, Fatima; Cuadros, Jorge
2011-06-22
Automated retinal screening relies on vasculature segmentation before the identification of other anatomical structures of the retina. Vasculature extraction can also be input to image quality ranking, neovascularization detection and image registration, among other applications. There is an extensive literature related to this problem, often excluding the inherent heterogeneity of ophthalmic clinical images. The contribution of this paper relies on an algorithm using front propagation to segment the vessel network. The algorithm includes a penalty in the wait queue on the fast marching heap to minimize leakage of the evolving interface. The method requires no manual labeling, a minimum number of parameters and it is capable of segmenting color ocular fundus images in real scenarios, where multi-ethnicity and brightness variations are parts of the problem.
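The sketch below illustrates the idea of front propagation with a penalized priority queue on a simplified, Dijkstra-like 4-connected grid rather than a true fast marching solver; the penalty rule and threshold are assumptions for illustration, not the paper's formulation.

```python
import heapq
import numpy as np

def penalized_front_propagation(speed, seeds, penalty):
    """Dijkstra-like front propagation on a 4-connected grid (a simplification
    of fast marching). `penalty` adds cost when a pixel's speed is low, loosely
    mimicking the wait-queue penalty used to limit leakage."""
    arrival = np.full(speed.shape, np.inf)
    heap = []
    for (i, j) in seeds:
        arrival[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        t, i, j = heapq.heappop(heap)
        if t > arrival[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                cost = 1.0 / max(speed[ni, nj], 1e-6)
                if speed[ni, nj] < 0.5:
                    cost += penalty          # discourage crossing weak-response pixels
                if t + cost < arrival[ni, nj]:
                    arrival[ni, nj] = t + cost
                    heapq.heappush(heap, (arrival[ni, nj], ni, nj))
    return arrival

speed = np.ones((5, 5)); speed[:, 2] = 0.1   # a weak column the front should avoid
print(penalized_front_propagation(speed, [(2, 0)], penalty=5.0))
```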
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
NASA Astrophysics Data System (ADS)
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), which was initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now. Using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in UK, S&P 100 in USA and Nikkei 225 in Japan. The test results indicate that the ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
Asteroid mass estimation using Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-11-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
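As an illustration of the MCMC idea in its simplest form, the sketch below is a random-walk Metropolis sampler for a single mass parameter, roughly analogous to the one-dimensional 'marching' simplification mentioned above; the log-likelihood callback and the toy numbers are assumptions, and the authors' sampler additionally fits the orbital elements.

```python
import numpy as np

def metropolis_mass_estimate(log_likelihood, m0, step, n_samples, rng=None):
    """Minimal random-walk Metropolis sampler for one mass parameter.
    `log_likelihood` maps a trial mass to the log-likelihood of the
    astrometric residuals (hypothetical callback)."""
    rng = rng or np.random.default_rng(0)
    samples = np.empty(n_samples)
    m, logp = m0, log_likelihood(m0)
    for k in range(n_samples):
        m_new = m + step * rng.standard_normal()
        logp_new = log_likelihood(m_new)
        if np.log(rng.random()) < logp_new - logp:   # accept/reject
            m, logp = m_new, logp_new
        samples[k] = m
    return samples

# Toy example: a Gaussian "likelihood" centered on an assumed true mass.
true_m, sigma = 1e-11, 2e-12
samples = metropolis_mass_estimate(lambda m: -0.5 * ((m - true_m) / sigma) ** 2,
                                   m0=5e-12, step=1e-12, n_samples=5000)
print(samples[1000:].mean(), samples[1000:].std())
```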
GOCI Yonsei Aerosol Retrieval (YAER) algorithm and validation during DRAGON-NE Asia 2012 campaign
NASA Astrophysics Data System (ADS)
Choi, M.; Kim, J.; Lee, J.; Kim, M.; Park, Y. Je; Jeong, U.; Kim, W.; Holben, B.; Eck, T. F.; Lim, J. H.; Song, C. K.
2015-09-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorology Satellites (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm over ocean and land together with validation results during the DRAGON-NE Asia 2012 campaign. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single scattering albedo (SSA) at 440 nm, Angstrom exponent (AE) between 440 and 860 nm, and aerosol type from selected aerosol models in calculating AOD. Assumed aerosol models are compiled from global Aerosol Robotic Networks (AERONET) inversion data, and categorized according to AOD, FMF, and SSA. Nonsphericity is considered, and unified aerosol models are used over land and ocean. Different assumptions for surface reflectance are applied over ocean and land. Surface reflectance over the ocean varies with geometry and wind speed, while surface reflectance over land is obtained from the 1-3 % darkest pixels in a 6 km × 6 km area during 30 days. In the East China Sea and Yellow Sea, significant area is covered persistently by turbid waters, for which the land algorithm is used for aerosol retrieval. To detect turbid water pixels, TOA reflectance difference at 660 nm is used. GOCI YAER products are validated using other aerosol products from AERONET and the MODIS Collection 6 aerosol data from "Dark Target (DT)" and "Deep Blue (DB)" algorithms during the DRAGON-NE Asia 2012 campaign from March to May 2012. Comparison of AOD from GOCI and AERONET gives a Pearson correlation coefficient of 0.885 and a linear regression equation with GOCI AOD =1.086 × AERONET AOD - 0.041. GOCI and MODIS AODs are more highly correlated over ocean than land. Over land, especially, GOCI AOD shows better agreement with MODIS DB than MODIS DT because of the choice of surface reflectance assumptions. Other GOCI YAER products show lower correlation with AERONET than AOD, but are still qualitatively useful.
Foreshocks and aftershocks of Pisagua 2014 earthquake: time and space evolution of megathrust event.
NASA Astrophysics Data System (ADS)
Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Wollam, Jack; Thomas, Reece; de Lima Neto, Oscar; Tavera, Hernando; Garth, Thomas; Ruiz, Sergio
2016-04-01
The 2014 Pisagua earthquake of magnitude 8.2 is the first case in Chile where a foreshock sequence was clearly recorded by a local network, along with the complete sequence including the mainshock and its aftershocks. The seismicity of the last year before the mainshock includes numerous clusters close to the epicentral zone (Ruiz et al., 2014), but it was on 16th March that this activity became stronger, with the Mw 6.7 precursory event taking place off the coast of Iquique at 12 km depth. The Pisagua earthquake arrived on 1st April 2014, breaking almost 120 km N-S, and two days later a magnitude 7.6 aftershock occurred south of the rupture, enlarging the zone affected by this sequence. In this work, we analyse the foreshock and aftershock sequences of the Pisagua earthquake, based on the spatial and temporal evolution of a total of 15,764 events recorded from 1st March to 31st May 2014. This event catalogue was obtained from the automatic analysis of raw seismic data from more than 50 stations installed in the north of Chile and the south of Peru. We used the STA/LTA algorithm for the detection of P and S arrival times on the vertical components and then a method of back propagation in a 1D velocity model for event association and preliminary location of the hypocenters, following the algorithm outlined by Rietbrock et al. (2012). These results were then improved by locating with the NonLinLoc software using a regional velocity model. We selected the larger events to analyse their moment tensor solutions by full waveform inversion using the ISOLA software. In order to understand the process of nucleation and propagation of the Pisagua earthquake, we also analysed the temporal evolution of the seismicity over the three months of data. The zone where the precursory events took place was strongly activated two weeks before the mainshock and remained very active until the end of the analysed period, with an important part of the seismicity located in the upper plate and showing variations in its focal mechanisms. The evolution of the Pisagua sequence points to a rupture in steps, which we suggest is related to the properties of the upper plate as well as of the subduction interface. The spatial distribution of seismicity was compared to the inter-seismic coupling from previous studies, the regional bathymetry, and the slip distributions of both the mainshock and the magnitude 7.6 event. The results show an important relation between the low-coupling zones and the areas lacking large-magnitude events.
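For readers unfamiliar with the detection step, the sketch below computes the classic short-term-average / long-term-average (STA/LTA) ratio on a synthetic trace and reports the first sample crossing a trigger threshold; window lengths and the threshold are illustrative choices, not the authors' settings.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Classic STA/LTA ratio used for automatic P/S picking (generic sketch)."""
    x = np.abs(trace).astype(float)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len      # short window average
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len      # long window average
    # Compare the STA ending at sample i with the LTA ending at the same sample.
    n = min(len(sta), len(lta))
    ratio = np.zeros(len(x))
    ratio[lta_len - 1:lta_len - 1 + n] = sta[-n:] / np.maximum(lta[:n], 1e-12)
    return ratio

rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 2000)
trace[1200:1400] += rng.normal(0, 8, 200)       # a synthetic "event"
ratio = sta_lta(trace, sta_len=20, lta_len=200)
print(int(np.argmax(ratio > 4.0)))              # first sample exceeding the trigger
```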
SEXTANT - Station Explorer for X-ray Timing and Navigation Technology
NASA Technical Reports Server (NTRS)
Mitchell, Jason W.; Hasouneh, Munther Abdel Hamid; Winternitz, Luke M. B.; Valdez, Jennifer E.; Price, Samuel R.; Semper, Sean R.; Yu, Wayne H.; Arzoumanian, Zaven; Ray, Paul S.; Wood, Kent S.;
2015-01-01
The Station Explorer for X-ray Timing and Navigation Technology (SEXTANT) is a technology demonstration enhancement to the Neutron-star Interior Composition Explorer (NICER) mission, which is scheduled to launch in late 2016 and will be hosted as an externally attached payload on the International Space Station (ISS) via the ExPRESS Logistics Carrier (ELC). During NICER's 18-month baseline science mission to understand ultra-dense matter through observations of neutron stars in the soft X-ray band, SEXTANT will, for the first time, demonstrate real-time, on-board X-ray pulsar navigation, which is a significant milestone in the quest to establish a GPS-like navigation capability that will be available throughout our Solar System and beyond. Along with NICER, SEXTANT has proceeded through Phase B, Mission Definition, and received numerous refinements in concept of operation, algorithms, flight software, ground system, and ground test capability. NICER/SEXTANT's Phase B work culminated in NASA's confirmation of NICER to Phase C, Design and Development, in March 2014. Recently, NICER/SEXTANT successfully passed its Critical Design Review and SEXTANT received continuation approval in September 2014. In this paper, we describe the X-ray pulsar navigation concept and provide a brief history of previous work, and then summarize the SEXTANT technology demonstration objective, hardware and software components, and development to date.
75 FR 11948 - Board of Governors; Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... POSTAL SERVICE Board of Governors; Sunshine Act Meeting DATES AND TIMES: Tuesday, March 23, 2010, at 10 a.m.; Wednesday, March 24, at 8:30 a.m. and 11 a.m. PLACE: Washington, DC at U.S. Postal Service Headquarters, 475 L'Enfant Plaza, SW., in the Benjamin Franklin Room. STATUS: March 23 at 10 a.m...
76 FR 14004 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... Number: 20110228-5001. Comment Date: 5 p.m. Eastern Time on Monday, March 21, 2011. Docket Numbers: ER10...: 03/08/2011. Accession Number: 20110308-5026. Comment Date: 5 p.m. Eastern Time on Tuesday, March 29, 2011. Docket Numbers: ER10-2288-003. Applicants: Optim Energy Marketing LLC. Description: Optim Energy...
2015-03-21
Expedition 43 Russian Cosmonauts Mikhail Kornienko, left, and Gennady Padalka of the Russian Federal Space Agency (Roscosmos), center, and NASA Astronaut Scott Kelly answer questions from the press while standing on the Avenue of the Cosmonauts where two long rows of trees are all marked with the name and year of the crew member who planted them starting from Yuri Gagarin's tree, Saturday, March 21, 2015 at the Cosmonaut Hotel in Baikonur, Kazakhstan. Kelly, Padalka, and Kornienko are preparing for launch to the International Space Station in their Soyuz TMA-16M spacecraft from the Baikonur Cosmodrome in Kazakhstan March 28, Kazakh time (March 27 Eastern time.) As the one-year crew, Kelly and Kornienko will return to Earth on Soyuz TMA-18M in March 2016. Photo Credit: (NASA/Bill Ingalls)
NASA Astrophysics Data System (ADS)
Chen, Naijin
2013-03-01
The Level Based Partitioning (LBP), Cluster Based Partitioning (CBP), and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency-matrix and adjacency-table representations, are designed and implemented in this paper. Partitioning time and memory occupation are also compared for the three algorithms. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; with respect to memory occupation and partitioning time, the algorithms based on the adjacency table require less partitioning time and less memory.
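To make the representation trade-off concrete, the small sketch below contrasts the two graph data structures the paper compares on a sparse task graph; the example graph is invented for illustration.

```python
# Adjacency matrix vs. adjacency list ("adjacent table") for a sparse graph.
n = 6
edges = [(0, 1), (0, 2), (1, 3), (2, 4), (4, 5)]

# Adjacency matrix: O(n^2) memory regardless of edge count.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1

# Adjacency list: O(n + |E|) memory.
adj_list = {u: [] for u in range(n)}
for u, v in edges:
    adj_list[u].append(v)

# 36 stored entries for the matrix vs. 11 (nodes + edges) for the list.
print(sum(map(len, matrix)), sum(len(vs) for vs in adj_list.values()) + n)
```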
Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design.
Rush, A John; Crismon, M Lynn; Kashner, T Michael; Toprac, Marcia G; Carmody, Thomas J; Trivedi, Madhukar H; Suppes, Trisha; Miller, Alexander L; Biggs, Melanie M; Shores-Wilson, Kathy; Witte, Bradley P; Shon, Steven P; Rago, William V; Altshuler, Kenneth Z
2003-04-01
Medication treatment algorithms may improve clinical outcomes, uniformity of treatment, quality of care, and efficiency. However, such benefits have never been evaluated for patients with severe, persistent mental illnesses. This study compared clinical and economic outcomes of an algorithm-driven disease management program (ALGO) with treatment-as-usual (TAU) for adults with DSM-IV schizophrenia (SCZ), bipolar disorder (BD), and major depressive disorder (MDD) treated in public mental health outpatient clinics in Texas. The disorder-specific intervention ALGO included a consensually derived and feasibility-tested medication algorithm, a patient/family educational program, ongoing physician training and consultation, a uniform medical documentation system with routine assessment of symptoms and side effects at each clinic visit to guide ALGO implementation, and prompting by on-site clinical coordinators. A total of 19 clinics from 7 local authorities were matched by authority and urban status, such that 4 clinics each offered ALGO for only 1 disorder (SCZ, BD, or MDD). The remaining 7 TAU clinics offered no ALGO and thus served as controls (TAUnonALGO). To determine if ALGO for one disorder impacted care for another disorder within the same clinic ("culture effect"), additional TAU subjects were selected from 4 of the ALGO clinics offering ALGO for another disorder (TAUinALGO). Patient entry occurred over 13 months, beginning March 1998 and concluding with the final active patient visit in April 2000. Research outcomes assessed at baseline and periodically for at least 1 year included (1) symptoms, (2) functioning, (3) cognitive functioning (for SCZ), (4) medication side effects, (5) patient satisfaction, (6) physician satisfaction, (7) quality of life, (8) frequency of contacts with criminal justice and state welfare system, (9) mental health and medical service utilization and cost, and (10) alcohol and substance abuse and supplemental substance use information. Analyses were based on hierarchical linear models designed to test for initial changes and growth in differences between ALGO and TAU patients over time in this matched clinic design.
Continuity of MODIS and VIIRS Snow-Cover Maps during Snowmelt in the Catskill Mountains in New York
NASA Astrophysics Data System (ADS)
Hall, D. K.; Riggs, G. A., Jr.; Roman, M. O.; DiGirolamo, N. E.
2015-12-01
We investigate the local and regional differences and possible biases between the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible-Infrared Imager Radiometer Suite (VIIRS) snow-cover maps in the winter of 2012 during snowmelt conditions in the Catskill Mountains in New York using a time series of cloud-gap filled daily snow-cover maps. The MODIS Terra instrument has been providing daily global snow-cover maps since February 2000 (Riggs and Hall, 2015). Using the VIIRS instrument, launched in 2011, NASA snow products are being developed based on the heritage MODIS snow-mapping algorithms, and will soon be available to the science community. Continuity of the standard NASA MODIS and VIIRS snow-cover maps is essential to enable environmental-data records (EDR) to be developed for analysis of snow-cover trends using a consistent data record. For this work, we compare daily MODIS and VIIRS snow-cover maps of the Catskill Mountains from 29 February through 14 March 2012. The entire region was snow covered on 29 February and by 14 March the snow had melted; we therefore have a daily time series available to compare normalized difference snow index (NDSI), as an indicator of snow-cover fraction. The MODIS and VIIRS snow-cover maps have different spatial resolutions (500 m for MODIS and 375 m for VIIRS) and different nominal overpass times (10:30 AM for MODIS Terra and 2:30 PM for VIIRS) as well as different cloud masks. The results of this work will provide a quantitative assessment of the continuity of the snow-cover data records for use in development of an EDR of snow cover. http://modis-snow-ice.gsfc.nasa.gov/ Riggs, G.A. and D.K. Hall, 2015: MODIS Snow Products User Guide to Collection 6, http://modis-snow-ice.gsfc.nasa.gov/?c=userguides
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornado activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data provided to selected National Weather Service forecast offices in Southern and Eastern Region are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications.
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William; Petersen, Walt; Buechler, Dennis; Krehbiel, Paul; Gatlin, Patrick; Zubrick, Steven
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and Nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight units is expected to begin in the latter part of the year. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2B algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data provided to selected National Weather Service forecast offices in Southern and Eastern Region are also improving our understanding of the application of these data in the severe storm warning process and help to accelerate the development of the pre-launch algorithms and Nowcasting applications. Abstract for the 3rd Conference on Meteorological
A feature-preserving hair removal algorithm for dermoscopy images.
Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar
2013-02-01
Accurate segmentation and repair of hair-occluded information from dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Currently, many hair-restoration algorithms have been developed, but most of these fail to identify hairs accurately and their removal technique is slow and disturbs the lesion's pattern. In this article, a novel hair-restoration algorithm is presented, which preserves skin lesion features such as color and texture and is able to segment both dark and light hairs. Our algorithm is based on three major steps: rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) and thresholding, which generates strong responses for both dark and light hairs; the hair masks are refined by morphological edge-based techniques; and the hair pixels are repaired through a fast marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics are utilized based on dermatologist-drawn manual hair masks that were used as a ground truth to evaluate the performance of the system. The hair-restoration algorithm is tested on 100 dermoscopy images. The comparisons have been done among (i) linear interpolation, inpainting by (ii) non-linear partial differential equation (PDE), and (iii) exemplar-based repairing techniques. Among different hair detection and removal techniques, our proposed algorithm obtained the highest value of DA: 93.3% and TQM: 90%. The experimental results indicate that the proposed algorithm is highly accurate, robust and able to restore hair pixels without damaging the lesion texture. This method is fully automatic and can be easily integrated into a CAD system. © 2011 John Wiley & Sons A/S.
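As a minimal sketch of the final repair step, the snippet below applies OpenCV's fast-marching (Telea) inpainting to a synthetic patch and hair mask; in the paper the mask would come from the MF-FDOG and morphological steps, and the synthetic image, mask width, and inpainting radius here are illustrative assumptions.

```python
import cv2
import numpy as np

image = np.full((64, 64, 3), 180, dtype=np.uint8)       # stand-in dermoscopy patch
cv2.line(image, (0, 10), (63, 50), (30, 30, 30), 2)      # a synthetic dark "hair"

hair_mask = np.zeros((64, 64), dtype=np.uint8)
cv2.line(hair_mask, (0, 10), (63, 50), 255, 3)           # mask slightly wider than the hair

# Fast-marching-based (Telea) inpainting: args are image, mask, radius, method flag.
repaired = cv2.inpaint(image, hair_mask, 5, cv2.INPAINT_TELEA)
print(repaired[30, 32], image[30, 32])                   # repaired pixel vs. original
```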
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
NASA Astrophysics Data System (ADS)
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needed to be determined by using a combined Fourier analysis and gradient-based search algorithm.
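For context, the sketch below builds the dense matrix that couples all time instances in the time-spectral method, in the form commonly quoted for an odd number of instances; it is a generic illustration of the operator the ST-LU-SGS scheme must handle implicitly, not code from the paper, so signs and constants should be checked against your own reference.

```python
import numpy as np

def time_spectral_operator(N, T):
    """Spectral time-derivative matrix for N (odd) equally spaced time instances
    over period T (commonly quoted form; verify constants before reuse)."""
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D[i, j] = (np.pi / T) * (-1.0) ** (i - j) / np.sin(np.pi * (i - j) / N)
    return D

# Sanity check: the operator should differentiate a resolved harmonic exactly.
N, T = 5, 2 * np.pi
t = np.arange(N) * T / N
D = time_spectral_operator(N, T)
print(np.max(np.abs(D @ np.sin(t) - np.cos(t))))   # ~1e-15 if the operator is right
```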
Management of Central Venous Access Device-Associated Skin Impairment: An Evidence-Based Algorithm.
Broadhurst, Daphne; Moureau, Nancy; Ullman, Amanda J
Patients relying on central venous access devices (CVADs) for treatment are frequently complex. Many have multiple comorbid conditions, including renal impairment, nutritional deficiencies, hematologic disorders, or cancer. These conditions can impair the skin surrounding the CVAD insertion site, resulting in an increased likelihood of skin damage when standard CVAD management practices are employed. Supported by the World Congress of Vascular Access (WoCoVA), an evidence- and consensus-based algorithm was developed to improve CVAD-associated skin impairment (CASI) identification and diagnosis, guide clinical decision-making, and improve clinician confidence in managing CASI. A scoping review of relevant literature surrounding CASI management was undertaken in March 2014, and results were distributed to an international advisory panel. A CASI algorithm was developed by an international advisory panel of clinicians with expertise in wounds, vascular access, pediatrics, geriatric care, home care, intensive care, infection control and acute care, using a 2-phase, modified Delphi technique. The algorithm focuses on identification and treatment of skin injury, exit site infection, noninfectious exudate, and skin irritation/contact dermatitis. It comprised 3 domains: assessment, skin protection, and patient comfort. External validation of the algorithm was achieved by prospective pre- and posttest design, using clinical scenarios and self-reported clinician confidence (Likert scale), and incorporating algorithm feasibility and face validity endpoints. The CASI algorithm was found to significantly increase participants' confidence in the assessment and management of skin injury (P = .002), skin irritation/contact dermatitis (P = .001), and noninfectious exudate (P < .01). A majority of participants reported the algorithm as easy to understand (24/25; 96%) and containing all necessary information (24/25; 96%). Twenty-four of 25 (96%) stated that they would recommend the tool to guide management of CASI.
NASA Astrophysics Data System (ADS)
Ortega Culaciati, F. H.; Simons, M.; Minson, S. E.; Owen, S. E.; Moore, A. W.; Hetland, E. A.
2011-12-01
We aim to quantify the spatial distribution of after-slip following the Great 11 March 2011 Tohoku-Oki (Mw 9.0) earthquake and its implications for the occurrence of a future Great Earthquake, particularly in the Ibaraki region of Japan. We use a Bayesian approach (CATMIP algorithm), constrained by on-land Geonet GPS time series, to infer models of after-slip to date on the Japan megathrust. Unlike traditional inverse methods, in which a single optimum model is found, the Bayesian approach allows a complete characterization of the model parameter space by providing a posteriori estimates of the range of plausible models. We use the Kullback-Leibler information divergence as a metric of the information gain on each subsurface slip patch, to quantify the extent to which land-based geodetic observations can constrain the upper parts of the megathrust, where the Great Tohoku-Oki earthquake took place. We aim to understand the relationship between the spatial distributions of fault slip behavior in the different stages of the seismic cycle. We compare our post-seismic slip distributions to inter- and co-seismic slip distributions obtained through a Bayesian methodology as well as through traditional (optimization) inverse estimates in the published literature. We discuss implications of these analyses for the occurrence of a large earthquake in the Japan megathrust regions adjacent to the Great Tohoku-Oki earthquake.
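For readers unfamiliar with the information-gain metric mentioned above, the sketch below computes the textbook discrete Kullback-Leibler divergence between two binned distributions (e.g. prior vs. posterior slip on one patch); it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) between two binned
    distributions (textbook formula; generic sketch)."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Example: a posterior concentrated relative to a broad prior carries more information.
prior = np.ones(50)                       # flat prior over 50 slip bins
posterior = np.exp(-0.5 * ((np.arange(50) - 20) / 3.0) ** 2)
print(kl_divergence(posterior, prior))    # > 0: the data informed this patch
```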
NASA Astrophysics Data System (ADS)
Xu, B.; Jing, L.; Qinhuo, L.; Zeng, Y.; Yin, G.; Fan, W.; Zhao, J.
2015-12-01
Leaf area index (LAI) is a key parameter in terrestrial ecosystem models, and a series of global LAI products have been derived from satellite data. To effectively apply these LAI products, it is necessary to evaluate their accuracy reasonably. The long-term LAI measurements from the global network sites are an important supplement to the product validation dataset. However, the spatial scale mismatch between the site measurement and the pixel grid hinders the utilization of these measurements in LAI product validation. In this study, a pragmatic approach based on Bayesian linear regression between long-term LAI measurements and high-resolution images is presented for upscaling the point-scale measurements to the pixel scale. The algorithm was evaluated using high-resolution LAI reference maps provided by the VALERI project at the Järvselja site and was implemented to upscale the long-term LAI measurements at the global network sites. Results indicate that the spatial scaling algorithm can reduce the root mean square error (RMSE) from 0.42 before upscaling to 0.21 after upscaling compared with the aggregated LAI reference maps at the pixel scale. Meanwhile, the algorithm shows better reliability and robustness than the ordinary least squares (OLS) method for upscaling some LAI measurements acquired at specific dates without high-resolution images. The upscaled LAI measurements were employed to validate three global LAI products, including MODIS, GLASS and GEOV1. Results indicate that (i) GLASS and GEOV1 show consistent temporal profiles over most sites, while MODIS exhibits temporal instability over a few forest sites. The RMSE of seasonality between products and upscaled LAI measurements is 0.25-1.72 for MODIS, 0.17-1.29 for GLASS and 0.36-1.35 for GEOV1 across the different sites. (ii) The uncertainty for products varies over different months. The lowest and highest uncertainty for MODIS are 0.67 in March and 1.53 in August, for GLASS are 0.67 in November and 0.99 in July, and for GEOV1 are 0.61 in March and 1.23 in August, respectively. (iii) The overall uncertainty for MODIS, GLASS and GEOV1 is 1.36, 0.90 and 0.99, respectively. According to this study, the long-term LAI measurements can be used to validate time series remote sensing products by spatial upscaling from point scale to pixel scale.
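The sketch below illustrates the upscaling idea with a Bayesian linear regression from scikit-learn: regress point-scale LAI on a co-located high-resolution predictor, then apply the fitted relation to every fine pixel inside a coarse product pixel and average. The predictor choice (NDVI), the synthetic numbers, and the variable names are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
ndvi_at_sites = rng.uniform(0.2, 0.9, size=(30, 1))          # high-res predictor at the plots
lai_at_sites = 6.0 * ndvi_at_sites[:, 0] - 0.5 + rng.normal(0, 0.2, 30)

# Fit the site-level relation between the predictor and measured LAI.
model = BayesianRidge().fit(ndvi_at_sites, lai_at_sites)

# Apply it to all fine pixels within one coarse product pixel and average.
ndvi_fine_pixels = rng.uniform(0.2, 0.9, size=(400, 1))
lai_fine, lai_std = model.predict(ndvi_fine_pixels, return_std=True)
print(lai_fine.mean(), lai_std.mean())                        # pixel-scale LAI and its uncertainty
```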
Computation of asymmetric supersonic flows around cones at large incidence
NASA Technical Reports Server (NTRS)
Degani, David
1987-01-01
The Schiff-Steger parabolized Navier-Stokes (PNS) code has been modified to allow computation of conical flowfields around cones at high incidence. The improved algorithm of Degani and Schiff has been incorporated with the PNS code. This algorithm adds the cross derivative and circumferential viscous terms to the original PNS code and modifies the algebraic eddy viscosity turbulence model to take into account regions of so-called cross-flow separation. Assuming the flowfield is conical (but not necessarily symmetric), a marching stepback procedure is used: the solution is marched one step downstream using the improved PNS code and the flow variables are then scaled to place the solution back at the original station. The process is repeated until no change in the flow variables is observed with further marching. The flow variables are then constant along rays of the flowfield. The experiments obtained by Bannik and Nebbeling were chosen as a test case. In these experiments a cone of 7.5 deg. half angle at Mach number 2.94 and Reynolds number 1.372 × 10^7 was tested up to 34 deg. angle of attack. At high angle of attack nonconical asymmetric leeward side vortex patterns were observed. In the first set of computations, using an earlier obtained solution of the above cone for angle of attack of 22.6 deg. and at station x=0.5 as a starting solution, the angle of attack was gradually increased up to 34 deg. During this procedure the grid was carefully adjusted to capture the bow shock. A stable, converged symmetric solution was obtained. Since the numerical code converged to a symmetric solution which is not the physical one, the stability was tested by a random perturbation at each point. The possible effect of surface roughness or imperfect body shape was also investigated. It was concluded that although the assumption of conical viscous flows can be very useful for certain cases, it cannot be used for the present case. Thus the second part of the investigation attempted to obtain a marching (in space) solution with the PNS method using the conical solution as initial data. Finally, the solution of the full Navier-Stokes equations was carried out.
Campaign gravity results From kilauea volcano, hawaii, 2009-2011
NASA Astrophysics Data System (ADS)
Wilkinson, S. K.; Poland, M. P.; Battaglia, M.
2011-12-01
The gravity and leveling networks at Kilauea's summit caldera consist of approximately 60 benchmarks that are measured with a gravimeter as well as leveled for elevation data. Gravity data were collected in December 2009, June 2010 and March 2011. Elevation data were collected in 2009 and 2010. For the gravity survey completed in March 2011, we use InSAR and GPS data to assess elevation changes at the time of the gravity survey. During December 2009-March 2011, Kilauea's summit was characterized by minor deflation, following trends established in mid-2007. In mid-2010, however, the summit began to inflate, with a rate that increased significantly in October 2010. This inflation was associated with a decrease in the effusion rate from the volcano's east rift zone eruptive vents, suggesting that Kilauea's magma plumbing system was backing up. On March 5, 2011, a 2-km-long fissure eruption began about 3 km west of Pu`u `O`o, causing rapid summit deflation as magma drained from beneath the summit to feed the new eruptive vents. The fissure eruption ended on March 9, at which time the summit began to reinflate. Preliminary analysis of gravity data collected before and after the fissure eruption indicates a complex pattern of mass flow beneath the summit caldera. Net summit deformation was negligible between December 2009 and June 2010, but there is a residual gravity high centered near Halema'uma'u Crater. For the December 2009 to March 2011 time period, the caldera shows net subsidence. A positive residual gravity anomaly is located southeast of Halema'uma'u Crater while a negative residual gravity anomaly exists north of Halema'uma'u Crater. These patterns are somewhat unexpected, given the sudden draining of magma from beneath the summit during the March 5-9 fissure eruption. We conclude that the campaign gravity data were not collected at the optimal times to "catch" this event. Nevertheless, the data can still be used to assess different aspects of Kilauea's magma system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Searles, D.B.
1993-03-01
The goal of the proposed work is the creation of a software system that will perform sophisticated pattern recognition and related functions at a level of abstraction and with expressive power beyond current general-purpose pattern-matching systems for biological sequences; and with a more uniform language, environment, and graphical user interface, and with greater flexibility, extensibility, embeddability, and ability to incorporate other algorithms, than current special-purpose analytic software.
ERIC Educational Resources Information Center
Agenbroad, James E.; And Others
Included in this volume of appendices to LI 000 979 are acquisitions flow charts; a current operations questionnaire; an algorithm for splitting the Library of Congress call number; analysis of the Machine-Readable Cataloging (MARC II) format; production problems and decisions; operating procedures for information transmittal in the New England…
2009-04-01
...at stress ratios of the order of R = -2, 7075-T6 aluminium alloys possessed better fatigue properties than the 2024-T3 series alloys. It was also possible ... flight-by-flight damage tracking algorithms (S J Houghton, S K Campbell [RNZAF]) ... CT-4E usage ... exponential crack growth behaviour of cracks in F/A-18 7050-T7451 aluminium alloy structure; the Safe Life limits of many discrete locations could be ...
76 FR 12744 - National Heart, Lung, and Blood Institute; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... Institute Special Emphasis Panel; Inflammation and Cardiovascular Disease. Date: March 24, 2011. Time: 9 a.m... Disease. Date: March 30, 2011. Time: 1 p.m. to 3 p.m. Agenda: To review and evaluate grant applications..., National Center for Sleep Disorders Research; 93.837, Heart and Vascular Diseases Research; 93.838, Lung...
75 FR 6674 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review... Scientific Review Special Emphasis Panel, Microvascular Interactions. Date: March 3, 2010. Time: 3 p.m. to 5... Social Consequences of HIV/AIDS Study Section. Date: March 15-16, 2010. Time: 8 a.m. to 5 p.m. Agenda: To...
NASA Satellite Image of Japan Captured March 11, 2011
2017-12-08
NASA's Aqua satellite passed over Japan one hour and 41 minutes before the quake hit. At the time Aqua passed overhead, the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument captured a visible-light image of Japan covered by clouds. The image was taken at 0405 UTC on March 11 (1:05 p.m. local Japan time / 11:05 p.m. EST March 10). The quake hit at 2:46 p.m. local Japan time. Satellite: Aqua Credit: NASA/GSFC/Aqua
Xia, Dan; Qu, Liujing; Li, Ge; Hongdu, Beiqi; Xu, Chentong; Lin, Xin; Lou, Yaxin; He, Qihua; Ma, Dalong; Chen, Yingyu
2016-09-01
MARCH2 (membrane-associated RING-CH protein 2), an E3 ubiquitin ligase, is mainly associated with vesicle trafficking. In the present study, for the first time, we demonstrated that MARCH2 negatively regulates autophagy. Our data indicated that overexpression of MARCH2 impaired autophagy, as evidenced by attenuated levels of LC3B-II and impaired degradation of endogenous and exogenous autophagic substrates. By contrast, loss of MARCH2 expression had the opposite effects. In vivo experiments demonstrate that MARCH2 knockout-mediated autophagy results in an inhibition of tumorigenicity. Further investigation revealed that the induction of autophagy by MARCH2 deficiency was mediated through the PIK3CA-AKT-MTOR signaling pathway. Additionally, we found that MARCH2 interacts with CFTR (cystic fibrosis transmembrane conductance regulator), promotes the ubiquitination and degradation of CFTR, and inhibits CFTR-mediated autophagy in tumor cells. The functional PDZ domain of MARCH2 is required for the association with CFTR. Thus, our study identified a novel negative regulator of autophagy and suggested that the physical and functional connection between MARCH2 and CFTR under different conditions should be elucidated in further experiments.
Applying data mining techniques to improve diagnosis in neonatal jaundice.
Ferreira, Duarte; Oliveira, Abílio; Freitas, Alberto
2012-12-07
Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improving the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa--EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in the Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. The application of different classification algorithms to the collected data allowed subsequent hyperbilirubinemia to be predicted with high accuracy. In particular, at 24 hours of life of newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron and simple logistic. The findings of our study support that new approaches, such as data mining, may support medical decisions, contributing to improved diagnosis in neonatal jaundice.
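To make the classification step concrete, the sketch below trains a decision tree on synthetic data and cross-validates its accuracy. The study itself used Weka (J48, multilayer perceptron, naive Bayes); scikit-learn's CART tree is used here only as a stand-in, and the feature names and label rule are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 227
bilirubin_24h = rng.uniform(2, 12, n)          # transcutaneous bilirubin at 24 h (mg/dL), synthetic
gestational_age = rng.uniform(35, 41, n)       # weeks, synthetic
X = np.column_stack([bilirubin_24h, gestational_age])
# Synthetic label: subsequent hyperbilirubinemia more likely with high early bilirubin.
y = (bilirubin_24h + rng.normal(0, 1.5, n) > 8).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # cross-validated accuracy
print(scores.mean())
```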
Theoretical and numerical studies of chaotic mixing
NASA Astrophysics Data System (ADS)
Kim, Ho Jun
Theoretical and numerical studies of chaotic mixing are performed to circumvent the difficulties of efficient mixing, which come from the lack of turbulence in microfluidic devices. In order to carry out efficient and accurate parametric studies and to identify a fully chaotic state, a spectral element algorithm for solution of the incompressible Navier-Stokes and species transport equations is developed. Using Taylor series expansions in time marching, the new algorithm employs an algebraic factorization scheme on multi-dimensional staggered spectral element grids, and extends classical conforming Galerkin formulations to nonconforming spectral elements. Lagrangian particle tracking methods are utilized to study particle dispersion in the mixing device using spectral element and fourth order Runge-Kutta discretizations in space and time, respectively. Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in microfluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. These are the stirring index based on the box counting method, Poincare sections, finite time Lyapunov exponents, the probability density function of the stretching field, and mixing index inverse, based on the standard deviation of scalar species distribution. Series of numerical simulations are performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing length (lm) is characterized as function of the Pe number, and lm ∝ ln(Pe) scaling is demonstrated for fully chaotic cases. Employing the aforementioned techniques, optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified in a zeta potential patterned straight micro channel, where a continuous flow is generated by superposition of a steady pressure driven flow and time periodic electroosmotic flow induced by a stream-wise AC electric field. Finally, it is shown that the invariant manifold of hyperbolic periodic point determines the geometry of fast mixing zones in oscillatory flows in two-dimensional cavity.
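As a small illustration of the Lagrangian particle tracking step, the sketch below advects passive tracers with a fourth-order Runge-Kutta integrator through a prescribed time-periodic flow. The double-gyre velocity field is a standard chaotic-mixing test case used here for convenience; the thesis evaluates velocities on spectral element grids rather than from an analytic formula.

```python
import numpy as np

def advect_rk4(positions, velocity, t0, dt, n_steps):
    """Fourth-order Runge-Kutta advection of passive tracers in a prescribed
    unsteady velocity field (generic sketch)."""
    x = np.array(positions, dtype=float)
    t = t0
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

def double_gyre(p, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Time-periodic double-gyre flow, a standard chaotic-mixing test case."""
    x, y = p[:, 0], p[:, 1]
    a, b = eps * np.sin(om * t), 1 - 2 * eps * np.sin(om * t)
    f = a * x ** 2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return np.column_stack([u, v])

# Two initially close particles; their separation after advection hints at stretching.
print(advect_rk4([[0.3, 0.4], [0.31, 0.4]], double_gyre, t0=0.0, dt=0.05, n_steps=200))
```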
Day 1 for the Integrated Multi-Satellite Retrievals for GPM (IMERG) Data Sets
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K. L.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.
2014-12-01
The Integrated Multi-satellitE Retrievals for GPM (IMERG) is designed to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG was developed to use GPM Core Observatory data as a reference for the international constellation of satellites of opportunity that constitute the GPM virtual constellation. Computationally, IMERG is a unified U.S. algorithm drawing on strengths in the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS). We review the IMERG design, development, testing, and current status. IMERG provides 0.1°x0.1° half-hourly data, and will be run at multiple times, providing successively more accurate estimates: 4 hours, 8 hours, and 2 months after observation time. In Day 1 the spatial extent is 60°N-S, for the period March 2014 to the present. In subsequent reprocessing the data will extend to fully global, covering the period 1998 to the present. Both the set of input data set retrievals and the IMERG system are substantially different than those used in previous U.S. products. The input passive microwave data are all being produced with GPROF2014, which is substantially upgraded compared to previous versions. For the first time, this includes microwave sounders. Accordingly, there is a strong need to carefully check the initial test data sets for performance. IMERG output will be illustrated using pre-operational test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates, and the precipitation type. Finally, we will summarize the expected release of various output products, and the subsequent reprocessing sequence.
Physiological and cognitive military related performances after 10-kilometer march.
Yanovich, Ran; Hadid, Amir; Erlich, Tomer; Moran, Daniel S; Heled, Yuval
2015-01-01
Prior operational activities, such as marching in diverse environments with heavy backloads, may cause early fatigue and reduce the unit's readiness. The purpose of this preliminary study was to evaluate the effect of a 10-kilometer (km) march on selected, military-oriented, physiological and cognitive performances. Eight healthy young males (age 25 ± 3 years) performed a series of cognitive and physiological tests, first without any prior physiological strain and then after a 10 km march under comfortable laboratory conditions (24°C, 50% RH), consisting of a 5 km/h speed and a 2-6% incline, with a backload weighing 30% of their body weight. We found that the subjects' time to exhaustion (TTE) after the march decreased by 27%, with no changes in anaerobic performance. Cognitive performance showed a significant (20%) reduction in accuracy and a tendency toward reduced reaction time after the march. We conclude that a moderate-intensity march under relatively comfortable environmental conditions may decrease selected military-related physical and cognitive abilities to different degrees. This phenomenon is probably associated with the type and intensity of the pre-mission physical activity and the magnitude of the associated mental fatigue. We suggest that quantifying these effects by adopting the practical scientific approach presented in this preliminary study would assist in preserving soldiers' performance and health during training and military operations.
Recent results of the Global Precipitation Measurement (GPM) mission in Japan
NASA Astrophysics Data System (ADS)
Kubota, Takuji; Oki, Riko; Furukawa, Kinji; Kaneko, Yuki; Yamaji, Moeka; Iguchi, Toshio; Takayabu, Yukari
2017-04-01
The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The GPM Core Observatory, launched in February 2014, carries the Dual-frequency Precipitation Radar (DPR) developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT). JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. The Japan Meteorological Agency (JMA) started DPR assimilation in its meso-scale Numerical Weather Prediction (NWP) system on March 24, 2016. This was regarded as the world's first "operational" assimilation of spaceborne radar data in the NWP system of a meteorological agency. JAXA also develops the Global Satellite Mapping of Precipitation (GSMaP) as a national product that distributes an hourly, 0.1-degree horizontal resolution rainfall map. The GSMaP near-real-time version (GSMaP_NRT) product has been available 4 hours after observation through the "JAXA Global Rainfall Watch" web site (http://sharaku.eorc.jaxa.jp/GSMaP) since 2008. The GSMaP_NRT product gives higher priority to data latency than to accuracy, and has been used by various users for various purposes, such as rainfall monitoring, flood alert and warning, drought monitoring, crop yield forecasting, and agricultural insurance. GSMaP users have, however, requested shorter data latency. To reduce data latency, JAXA has developed the GSMaP realtime version (GSMaP_NOW) product for the observation area of the geostationary satellite Himawari-8 operated by the JMA. The GSMaP_NOW product was released to the public on November 2, 2015 through the "JAXA Realtime Rainfall Watch" web site (http://sharaku.eorc.jaxa.jp/GSMaP_NOW/). All GPM standard products and the GPM-GSMaP product have been released to the public since September 2014 as Version 03. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp). In March 2016, the DPR, GMI, and DPR-GMI combined algorithms were updated and the first GPM latent heating product (within the TRMM coverage) was released. The GPM Version 04 standard products have therefore been provided since March 2016. Furthermore, the GPM-GSMaP algorithms were updated and the GPM-GSMaP Version 04 products have been provided since January 2017.
Evaluation of Droplet Splashing Algorithm in LEWICE 3.0
NASA Technical Reports Server (NTRS)
Homenko, Hilary N.
2004-01-01
The Icing Branch at NASA Glenn Research Center has developed a computer program to simulate ice formation on the leading edge of an aircraft wing during flight through cold, moist air. As part of the branch's current research, members have developed software known as LEWICE. This program is capable of predicting the formation of ice under designated weather conditions. The success of LEWICE is an asset to airplane manufacturers, ice protection system manufacturers, and the airline industry. Simulations of ice formation conducted in the tunnel and in flight are costly and time consuming. However, the danger of in-flight icing continues to be a concern for both commercial and military pilots. The LEWICE software is a step towards inexpensive and time-efficient prediction of ice collection. In its most recent version, LEWICE contains an algorithm for droplet splashing. Droplet splashing is a natural occurrence that affects the accumulation of ice on aircraft surfaces. At impingement, water droplets lose a portion of their mass to splashing. With part of each droplet joining the airflow and failing to freeze, early versions of LEWICE without the splashing algorithm over-predicted the collection of ice on the leading edge. The objective of my project was to determine whether the revised version of LEWICE accurately reflected the ice collection data obtained from the Icing Research Tunnel (IRT). The experimental data from the IRT were collected by Mark Potapczuk in January, March, and July of 2001 and April and December of 2002. Experimental data points were the result of ice tracings conducted shortly after testing in the tunnel. Run sheets, which included a record of velocity, temperature, liquid water content, and droplet diameter, served as the input of the LEWICE computer program. Parameters identical to the tunnel conditions were used to run LEWICE 2.0 and LEWICE 3.0. The results from the IRT and the versions of LEWICE were compared graphically. After entering the raw experimental data and computer output into a spreadsheet, I mapped each ice formation onto a clean airfoil. The LEWICE output provided the data points to graphically depict ice formations developed by the program. For the weather conditions of runs conducted in January 2001, it was evident that the splashing algorithm of LEWICE 3.0 predicts ice formations more accurately than LEWICE 2.0. Especially at conditions with droplet sizes between 80 and 160 microns, the splashing algorithm of the new LEWICE version compensated for the loss of droplet mass as a result of splashing. In contrast, LEWICE 2.0 consistently over-predicted the mass of the ice in conditions with droplet sizes exceeding 80 microns. This evidence confirms that changes made to the algorithms of LEWICE 3.0 have increased the accuracy of predicting ice collection.
Code of Federal Regulations, 2010 CFR
2010-04-01
... I.R.B. 18, (relating to transfers by wire to the Treasury). (2) In general: After March 31, 1991 and before January 1, 1993. In the case of a calendar month which begins after March 31, 1991, if, at a time... of accumulated employee tax withheld after March 31, 1991, under section 3202 and employer tax...
NASA Astrophysics Data System (ADS)
Amalia; Budiman, M. A.; Sitepu, R.
2018-03-01
Cryptography is one of the best methods to keep information safe from security attacks by unauthorized people. At present, many studies have been carried out to develop more robust cryptographic algorithms that provide high security for data communication. One way to strengthen data security is the hybrid cryptosystem method, which combines symmetric and asymmetric algorithms. In this study, we examine a hybrid cryptosystem that uses a modified 16x16 Playfair cipher as the symmetric algorithm and the Naccache-Stern knapsack cryptosystem as the asymmetric algorithm. We measure the running time of this hybrid algorithm in a series of experiments, testing messages of 10, 100, 1,000, 10,000, and 100,000 characters and key lengths of 10, 20, 30, and 40. Our results show that the processing time for encryption and decryption in each algorithm is linearly proportional to the message length: the longer the message, the more time is needed to encrypt and decrypt it. Encryption with the Naccache-Stern knapsack algorithm takes longer than its decryption, whereas encryption with the modified 16x16 Playfair cipher takes less time than its decryption.
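The experiments above are essentially running-time measurements as a function of message length. The sketch below shows one way such a timing harness can be set up; the XOR "cipher" is only a stand-in so the harness runs, and is not the modified Playfair or Naccache-Stern implementation studied in the paper.

```python
# Hedged sketch: timing harness for measuring encryption/decryption
# running time versus message length, as in the experiments described.
# The XOR "cipher" below is only a stand-in so the harness runs; it is
# not the modified 16x16 Playfair cipher or the Naccache-Stern knapsack.
import time

def toy_encrypt(message: bytes, key: bytes) -> bytes:
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def toy_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return toy_encrypt(ciphertext, key)  # XOR is its own inverse

key = b"0123456789"
for n_chars in (10, 100, 1_000, 10_000, 100_000):
    message = b"a" * n_chars
    t0 = time.perf_counter()
    ct = toy_encrypt(message, key)
    t1 = time.perf_counter()
    pt = toy_decrypt(ct, key)
    t2 = time.perf_counter()
    assert pt == message
    print(f"{n_chars:>7} chars  encrypt {t1 - t0:.6f} s  decrypt {t2 - t1:.6f} s")
```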
Real-time Automatic Detectors of P and S Waves Using Singular Values Decomposition
NASA Astrophysics Data System (ADS)
Kurzon, I.; Vernon, F.; Rosenberger, A.; Ben-Zion, Y.
2013-12-01
We implement a new method for the automatic detection of the primary P and S phases using Singular Value Decomposition (SVD) analysis. The method is based on the real-time iteration algorithm of Rosenberger (2010) for the SVD of three-component seismograms. Rosenberger's algorithm identifies the incidence angle by applying SVD and separates the waveforms into their P and S components. We have been using the same algorithm with the modification that we filter the waveforms prior to the SVD, and then apply SNR (signal-to-noise ratio) detectors for picking the P and S arrivals on the new filtered, SVD-separated channels. A recent deployment in the San Jacinto Fault Zone area provides a very dense seismic network that allows us to test the detection algorithm in diverse settings, such as events with different source mechanisms, stations with different site characteristics, and ray paths that diverge from the SVD approximation used in the algorithm (e.g., rays propagating within the fault and recorded on linear arrays crossing the fault). We have found that a Butterworth band-pass filter of 2-30 Hz, with four poles at each of the corner frequencies, shows the best performance for a large variety of events and stations within the SJFZ. Using the SVD detectors we obtain a similar number of P and S picks, which is rarely achieved with ordinary SNR detectors. For the actual real-time operation of the ANZA and SJFZ real-time seismic networks, this filter (2-30 Hz) also performs very well, as tested on many events and several aftershock sequences in the region, from the MW 5.2 of June 2005, through the MW 5.4 of July 2010, to the MW 4.7 of March 2013. Here we show the results of testing the detectors on the most complex and intense aftershock sequence, the MW 5.2 of June 2005, in which there were ~4 events a minute in the very first hour. This aftershock sequence was thoroughly reviewed by several analysts, who identified 294 events in the first hour, located in a condensed cluster around the main shock. We used this hour of events to fine-tune the automatic SVD detection, association, and location of the real-time system, reaching 37% automatic identification and location of events with a minimum of 10 stations per event; all events fall within the same condensed cluster, and there are no false events or large offsets in their locations. An ordinary SNR detector did not exceed 11% success with a minimum of 8 stations per event, produced 2 false events, and yielded a wider spread of events (not within the reviewed cluster). One of the main advantages of the SVD detectors for real-time operations is the actual separation between the P and S components, which significantly reduces the noise of picks detected by ordinary SNR detectors. The new method has been applied to a significant number of events within the SJFZ over the past 8 years, and is now in the final stage of real-time implementation at UCSD for the ANZA and SJFZ networks, tuned for automatic detection and location of local events.
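The processing chain described (band-pass filter, SVD-based polarization separation, SNR-style detector) can be sketched compactly. The following is a batch simplification on synthetic data, not Rosenberger's real-time iterative SVD or the deployed ANZA/SJFZ code; the filter parameters mirror those quoted in the abstract, and the STA/LTA window lengths are illustrative.

```python
# Hedged sketch of the processing chain described: band-pass filter the
# three-component record (Butterworth, 2-30 Hz, 4 poles), take the SVD of
# a data window to separate a P-like (rectilinear) component, then run a
# simple STA/LTA detector on it. This is a batch simplification applied
# to synthetic noise, not the real-time iterative algorithm.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                    # sampling rate (Hz)
rng = np.random.default_rng(0)
data = rng.normal(size=(3, 6000))             # synthetic Z, N, E traces
data[:, 3000:3100] += 5.0 * rng.normal(size=(3, 100))  # fake arrival

b, a = butter(4, [2.0, 30.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, data, axis=1)

# SVD of the 3 x N window: the first left singular vector gives the
# dominant polarization; projecting onto it yields a "P-like" channel.
U, s, Vt = np.linalg.svd(filtered, full_matrices=False)
p_channel = s[0] * Vt[0]

def sta_lta(x, n_sta=50, n_lta=500):
    e = x ** 2
    sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="same") + 1e-12
    return sta / lta

ratio = sta_lta(p_channel)
print("candidate pick sample:", int(np.argmax(ratio)))
```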
Gog, Simon; Bader, Martin
2008-10-01
The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
78 FR 15729 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-12
... Emphasis Panel; RFA Panel: Molecular and Cellular Substrates of Complex Brain Disorders. Date: March 29... Scientific Review Special Emphasis Panel; Member Conflict: Genetics of Disease. Date: March 29, 2013. Time: 1...
77 FR 12319 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-29
... Review Special Emphasis Panel; Topics in Bioengineering Sciences. Date: March 16, 2012. Time: 11 a.m. to... Related Research Integrated Review Group; AIDS Immunology and Pathogenesis Study Section. Date: March 19...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
.../Veterans Memorial Bridge across Point Pleasant Canal, at NJICW mile 3.0, in Point Pleasant, NJ. This..., 0, 0, and 4 times during the months of December 2008 to March 2009; and during the months of December 2009 to March 2010, the bridge opened for vessels 14, 0, 0, and 8 times, respectively. The Coast...
GPUs: An Emerging Platform for General-Purpose Computation
2007-08-01
Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures
NASA Technical Reports Server (NTRS)
Yungster, S.; Radhakrishnan, K.
2002-01-01
A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme, and a point implicit, first-order-accurate, time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low amplitude oscillation mode to a low frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.
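The abstract's numerical method rests on a total-variation-diminishing (TVD) spatial scheme with explicit time marching. As a hedged illustration of the TVD idea only, the sketch below applies a minmod-limited MUSCL update to 1-D linear advection; it is not the Euler solver with detailed hydrogen-air chemistry, and the CFL number and grid are arbitrary choices.

```python
# Hedged sketch: a second-order TVD (minmod-limited MUSCL) update for
# 1-D linear advection u_t + a u_x = 0 with explicit time marching.
# This only illustrates the TVD idea mentioned in the abstract; it is
# not the detonation solver itself.
import numpy as np

def minmod(p, q):
    return np.where(p * q > 0.0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)      # square pulse

for _ in range(100):
    up = np.roll(u, -1)        # u_{i+1}
    um = np.roll(u, 1)         # u_{i-1}
    slope = minmod(u - um, up - u)                 # limited slope in each cell
    u_face = u + 0.5 * slope                       # upwind (a > 0) face value at i+1/2
    flux = a * u_face
    u = u - dt / dx * (flux - np.roll(flux, 1))
print("total variation:", np.abs(np.diff(u)).sum())
```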
Nowcasting Ground Magnetic Perturbations with the Space Weather Modeling Framework
NASA Astrophysics Data System (ADS)
Welling, D. T.; Toth, G.; Singer, H. J.; Millward, G. H.; Gombosi, T. I.
2015-12-01
Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized dB/dt predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation describes the operational SWMF setup and summarizes the changes made to the code to enable R2O progress. The framework's algorithm for calculating ground-based magnetometer observations will be reviewed. Metrics from data-model comparisons will be reviewed to illustrate predictive capabilities. Early data products, such as the regional K index and grids of virtual magnetometer stations, will be presented. Finally, early successes will be shared, including the code's ability to reproduce the recent March 2015 St. Patrick's Day Storm.
Quasi-Static Viscoelastic Finite Element Model of an Aircraft Tire
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.
1999-01-01
An elastic large displacement thick-shell mixed finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. The nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the nonlinear elastic equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to computationally simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
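The time-marching structure described above, first-order ODEs for internal strain variables plus a predictor-corrector update, can be illustrated at the level of a single material point. The relaxation model, moduli, and strain history below are invented placeholders, not the shell finite element formulation of the paper.

```python
# Hedged sketch: predictor-corrector (Heun-type) time marching for one
# internal strain variable q obeying a first-order relaxation ODE, with a
# viscous stress proportional to (eps - q). This mirrors the structure
# described in the abstract at a single material point; it is not the
# mixed shell finite element model.
import numpy as np

E, E_v, tau = 10.0, 2.0, 0.5          # elastic modulus, viscous modulus, relaxation time

def q_rate(q, eps):
    return (eps - q) / tau            # dq/dt drives q toward the current strain

def stress(q, eps):
    return E * eps + E_v * (eps - q)  # elastic part + viscous overstress

dt, t_end = 0.01, 5.0
q, history = 0.0, []
for n in range(int(t_end / dt)):
    t = n * dt
    eps_n, eps_np1 = np.sin(t), np.sin(t + dt)   # prescribed strain history
    # predictor (explicit Euler), then corrector (trapezoidal average)
    q_pred = q + dt * q_rate(q, eps_n)
    q = q + 0.5 * dt * (q_rate(q, eps_n) + q_rate(q_pred, eps_np1))
    history.append(stress(q, eps_np1))
print("peak stress over the cycle:", max(history))
```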
Simulation of arthroscopic surgery using MRI data
NASA Technical Reports Server (NTRS)
Heller, Geoffrey; Genetti, Jon
1994-01-01
With the availability of Magnetic Resonance Imaging (MRI) technology in the medical field and the development of powerful graphics engines in the computer world, the possibility now exists for the simulation of surgery using data obtained from an actual patient. This paper describes a surgical simulation system which will allow a physician or a medical student to practice surgery on a patient without ever entering an operating room. This could substantially lower the cost of medical training by providing an alternative to the use of cadavers. This project involves the use of volume data acquired by MRI which are converted to polygonal form using a corrected marching cubes algorithm. The data are then colored, and a simulation of surface response based on springy structures is performed in real time. Control for the system is obtained through the use of an attached analog-to-digital unit. A remote electronic device is described which simulates an imaginary tool having features in common with both an arthroscope and a laparoscope.
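The volume-to-polygon conversion step mentioned above can be illustrated with the marching cubes implementation in scikit-image (a corrected, Lewiner-style variant). The analytic sphere is a placeholder: a real pipeline would start from MRI voxel data, and this sketch makes no claim about the paper's own corrected algorithm.

```python
# Hedged sketch: extract a triangulated surface from a synthetic scalar
# volume using the marching cubes implementation in scikit-image. A real
# pipeline would start from MRI voxel data rather than the analytic
# sphere used here.
import numpy as np
from skimage import measure

# synthetic volume: signed distance-like field of a sphere of radius 20
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2) - 20.0

verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print("vertices:", verts.shape[0], "triangles:", faces.shape[0])
```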
Dzudie, Anastase; Kane, Abdoul; Kramoh, Euloge; Anzouan-Kacou, Jean-Baptiste; Damourou, Jean Marie; Allawaye, Lucien; Nzisabira, Jolis; Mousse, Latif; Balde, Dadier; Nouhom, Ouane; Nkoa, Jean Louis; Kaki, Kimbally; Djomou, Armel; Menanga, Alain; Nganou, Christ Nadege; Mipinda, Jean Bruno; Nebie, Lucie; Kuate, Liliane Mfeukeu; Kingue, Samuel; Ba, Serigne Abdou
The fourth Pan-African Society of Cardiology (PASCAR) hypertension taskforce meeting was held at the Yaoundé Hilton Hotel on 16 March 2016. Its main goals were to update and facilitate understanding of the PASCAR roadmap for the control of hypertension on the continent, to refine the PASCAR hypertension algorithm, and to discuss the next steps of the PASCAR hypertension policy, including how the PASCAR initiative can be customised at country level. The formation of the PASCAR coalition against hypertension, the writing group and the current status of the PASCAR hypertension policy document as well as the algorithm were presented to delegates representing 12 French-speaking countries. The urgency to finalise the continental policy was recognised and consensus was achieved by discussion on the main points and strategy. Relevant scientific issues were discussed and comments were received on all points, including how the algorithm could be simplified and made more accessible for implementation at primary healthcare centres.
NASA Technical Reports Server (NTRS)
Ellsworth, Joel C.
2017-01-01
During flight-testing of the National Aeronautics and Space Administration (NASA) Gulfstream III (G-III) airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) SubsoniC Research Aircraft Testbed (SCRAT) between March 2013 and April 2015, it became evident that the sensor array used for stagnation point detection was not functioning as expected. The stagnation point detection system is a self-calibrating hot-film array; the calibration was unknown and varied between flights; however, the channel with the lowest power consumption was expected to correspond to the point of least surface shear. While individual channels showed the expected behavior for the hot-film sensors, more often than not the lowest power consumption occurred at a single sensor (despite in-flight maneuvering) in the array located far from the expected stagnation point. An algorithm was developed to process the available system output and determine the stagnation point location. After multiple updates and refinements, the final algorithm was not sensitive to the failure of a single sensor in the array, but adjacent failures beneath the stagnation point crippled the algorithm.
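One way such a power-minimum search can be made tolerant of a single failed channel is to reject outlier readings before locating the minimum. The thresholds, outlier rule, and parabolic refinement below are illustrative assumptions, not the algorithm actually flown on SCRAT.

```python
# Hedged sketch: estimate the stagnation location from a hot-film power
# array while tolerating one failed channel, by flagging outlier sensors
# and refining the minimum with a parabolic fit over its neighbors.
# Thresholds and the fit are illustrative choices, not the flight code.
import numpy as np

def stagnation_index(power, k=3.0):
    power = np.asarray(power, dtype=float)
    # flag channels whose power is implausibly far from the array median
    dev = np.abs(power - np.median(power))
    good = dev < k * (np.median(dev) + 1e-12)
    idx = np.arange(power.size)[good]
    i_min = idx[np.argmin(power[good])]
    # parabolic refinement over the neighbors of the minimum (if all healthy)
    neighbors = [i for i in (i_min - 1, i_min, i_min + 1) if i in idx]
    if len(neighbors) == 3:
        y = power[neighbors]
        denom = y[0] - 2 * y[1] + y[2]
        if abs(denom) > 1e-12:
            return i_min + 0.5 * (y[0] - y[2]) / denom
    return float(i_min)

powers = np.array([5.1, 4.7, 4.2, 3.9, 4.1, 4.6, 0.2, 5.3])  # channel 6 failed low
print("estimated stagnation channel:", stagnation_index(powers))
```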
Fast marching methods for the continuous traveling salesman problem
Andrews, June; Sethian, J. A.
2007-01-01
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points (“cities”) in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M·N log N), where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh. PMID:17220271
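The two-stage structure described, first computing position-dependent travel costs between cities and then assembling a tour, can be sketched as follows. Here Dijkstra's algorithm on the 4-connected grid graph stands in for the fast marching eikonal solver, and a greedy nearest-neighbor pass stands in for the paper's tour construction; this is an illustration, not the authors' algorithm or its complexity bound.

```python
# Hedged sketch of the two-stage idea: (1) approximate travel costs
# between cities over a spatially varying cost field with Dijkstra on the
# grid graph (a stand-in for the fast marching eikonal solver), then
# (2) build a closed tour with a greedy nearest-neighbor heuristic.
import heapq
import numpy as np

def grid_costs(cost, source):
    """Approximate minimal travel cost from `source` to every grid cell."""
    n, m = cost.shape
    dist = np.full((n, m), np.inf)
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                nd = d + 0.5 * (cost[i, j] + cost[a, b])   # edge cost between cells
                if nd < dist[a, b]:
                    dist[a, b] = nd
                    heapq.heappush(pq, (nd, (a, b)))
    return dist

rng = np.random.default_rng(0)
cost = 1.0 + rng.random((60, 60))                 # position-dependent cost field
cities = [tuple(c) for c in rng.integers(0, 60, size=(6, 2))]
cost_from = {c: grid_costs(cost, c) for c in cities}   # one "eikonal" solve per city

tour, left = [cities[0]], set(cities[1:])
while left:                                        # greedy nearest-neighbor tour
    nxt = min(left, key=lambda c: cost_from[tour[-1]][c])
    tour.append(nxt)
    left.remove(nxt)
length = sum(cost_from[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
print("tour order:", tour, "closed-tour cost:", round(length, 2))
```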
Fast marching methods for the continuous traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, J.; Sethian, J.A.
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ('cities') in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the Traveling Salesman Problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The order of the heuristic algorithm is at worst case M * N log N, where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest paths problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm
NASA Astrophysics Data System (ADS)
Ida, Kenichi; Osawa, Akira
In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and embed it in a genetic algorithm (GA). The purpose of the JSP is to find a schedule with the minimum makespan. We assume that reducing machine idle time is effective for improving the makespan. The left shift is a well-known existing algorithm for shortening idle time, but it cannot always move an operation into an idle interval, so some idle time is not shortened by the left shift. We propose two algorithms which shorten such idle time, and then combine these algorithms with the reversal of a schedule. We apply the GA with these algorithms to benchmark problems and show its effectiveness.
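A GA of this kind repeatedly evaluates candidate schedules by their makespan. The sketch below shows only that evaluation step, decoding an operation-based chromosome into a semi-active schedule for a tiny invented instance; the idle-time shortening and schedule-reversal operators proposed in the paper are not reproduced here.

```python
# Hedged sketch: decode an operation-based chromosome into a semi-active
# schedule for a tiny job-shop instance and compute its makespan, the
# objective a JSP genetic algorithm minimizes. The instance is invented.
# jobs[j] = list of (machine, processing_time) in technological order
jobs = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

def makespan(chromosome):
    """chromosome: job indices, each appearing once per operation of that job."""
    next_op = [0] * len(jobs)            # next operation index per job
    job_ready = [0] * len(jobs)          # time each job becomes available
    mach_ready = [0] * 3                 # time each machine becomes available
    for j in chromosome:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])   # semi-active placement
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)

print("makespan:", makespan([0, 1, 2, 0, 1, 2, 0, 1, 2]))
```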
78 FR 11658 - National Institute of General Medical Sciences; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... Institute of General Medical Sciences Special Emphasis Panel; Biomedical Instrumentation 1. Date: March 12... Sciences Special Emphasis Panel; Biomedical Instrumentation 2. Date: March 13, 2013. Time: 8:30 a.m. to 5...
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.
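The four-stage Runge-Kutta time-marching step mentioned above can be illustrated on a generic semi-discrete system du/dt = -R(u). The stage coefficients 1/4, 1/3, 1/2, 1 are a common choice in schemes of this family and are not claimed to be ADPAC's exact values; the residual below is plain upwind advection, and artificial dissipation and implicit residual smoothing are omitted.

```python
# Hedged sketch: a four-stage Runge-Kutta time-marching update for a
# semi-discrete system du/dt = -R(u). The residual below is first-order
# upwind advection standing in for a finite-volume flow residual.
import numpy as np

def residual(u, a=1.0, dx=1.0 / 200):
    return a * (u - np.roll(u, 1)) / dx      # upwind difference, periodic

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
dt = 0.5 * (1.0 / nx)                        # CFL = 0.5

alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)
for _ in range(200):
    u0 = u.copy()
    for alpha in alphas:                     # each stage restarts from u0
        u = u0 - alpha * dt * residual(u)
print("solution max after marching:", u.max())
```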
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-02-01
Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods to segment the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growth based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growth, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict, at an early stage, the response of ovarian cancer patients to chemotherapy.
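Two of the algorithm families mentioned, region growing and Canny edge detection, can be illustrated with off-the-shelf scikit-image routines on a synthetic image. This stands in for neither the clinical CT data nor the full eight-algorithm scheme, and the seed, tolerance, and smoothing values are arbitrary assumptions.

```python
# Hedged sketch: region growing (flood fill with an intensity tolerance
# from a seed) and Canny edge detection on a synthetic "lesion" image,
# using scikit-image. Parameters are illustrative only.
import numpy as np
from skimage.segmentation import flood
from skimage.feature import canny

# synthetic "tumor": a bright disk on a noisy background
yy, xx = np.mgrid[0:128, 0:128]
image = 0.2 + 0.05 * np.random.default_rng(0).normal(size=(128, 128))
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] += 0.6

seed = (64, 64)                                  # seed inside the lesion
region = flood(image, seed, tolerance=0.3)       # region growing
edges = canny(image, sigma=2.0)                  # Canny edge map

print("region-grown area (pixels):", int(region.sum()))
print("edge pixels:", int(edges.sum()))
```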
Asteroid mass estimation using Markov-Chain Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2016-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.
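The MCMC idea highlighted above can be illustrated with a one-parameter Metropolis-Hastings sampler on synthetic data. This toy keeps only the mass as a free parameter with a trivially linear "forward model" and Gaussian likelihood; the OpenOrb implementation works in 13 dimensions with a real dynamical model, so everything below is an invented stand-in.

```python
# Hedged sketch: a Metropolis-Hastings sampler for a single perturber-mass
# parameter with a Gaussian likelihood over synthetic residuals. A toy
# illustration of the MCMC idea only, not the 13-dimensional inversion.
import numpy as np

rng = np.random.default_rng(42)
true_mass, sigma = 3.0, 0.5
observations = true_mass + rng.normal(0.0, sigma, 50)   # toy "data"

def log_likelihood(mass):
    # toy forward model: the predicted signal equals the mass parameter
    return -0.5 * np.sum((observations - mass) ** 2) / sigma**2

chain, mass = [], 1.0
for _ in range(20000):
    proposal = mass + rng.normal(0.0, 0.1)            # random-walk proposal
    log_alpha = log_likelihood(proposal) - log_likelihood(mass)
    if np.log(rng.random()) < log_alpha:              # accept/reject step
        mass = proposal
    chain.append(mass)

samples = np.array(chain[5000:])                       # discard burn-in
print(f"posterior mass estimate: {samples.mean():.3f} +/- {samples.std():.3f}")
```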
Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E
2018-03-01
We assessed the validity of physician billing codes and hospital admission using International Classification of Diseases 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates for failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and specificity was 89% (87%, 90%). The algorithm capturing arteriovenous access placement and catheter insertion had a positive predictive value greater than 90% and arteriovenous access surgical revisions had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. Low positive predictive values for surgical revisions algorithm suggest that administrative data should only be used to rule out the occurrence of an event.
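The operating characteristics reported in validation studies of this kind come from a 2x2 comparison of the coding algorithm against the reference standard. The sketch below shows that calculation; the counts are invented, not VASPRO data.

```python
# Hedged sketch: computing sensitivity, specificity, and positive
# predictive value from a 2x2 comparison of an administrative-data
# algorithm against a reference standard. Counts are invented.
def operating_characteristics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # algorithm flags a true event
    specificity = tn / (tn + fp)   # algorithm stays silent when nothing happened
    ppv = tp / (tp + fp)           # a flagged event is actually real
    return sensitivity, specificity, ppv

sens, spec, ppv = operating_characteristics(tp=430, fp=40, fn=70, tn=460)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, PPV {ppv:.2%}")
```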
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple- block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modem turbomachinery configurations.
NASA Astrophysics Data System (ADS)
Chao, Kevin; Peng, Zhigang; Hsu, Ya-Ju; Obara, Kazushige; Wu, Chunquan; Ching, Kuo-En; van der Lee, Suzan; Pu, Hsin-Chieh; Leu, Peih-Lin; Wech, Aaron
2017-07-01
Deep tectonic tremor, which is extremely sensitive to small stress variations, could be used to monitor fault zone processes during large earthquake cycles and aseismic processes before large earthquakes. In this study, we develop an algorithm for the automatic detection and location of tectonic tremor beneath the southern Central Range of Taiwan and examine the spatiotemporal relationship between tremor and the 4 March 2010 ML 6.4 Jiashian earthquake, located about 20 km from active tremor sources. We find that tremor in this region has a relatively short duration, a short recurrence time, and no consistent correlation with surface GPS data. We find a short-term increase in the tremor rate 19 days before the Jiashian main shock, and around the time when the tremor rate began to rise, one GPS station recorded a flip in its direction of motion. We hypothesize that the tremor is driven by a slow-slip event that preceded the occurrence of the shallower Jiashian main shock, even though the inferred slip is too small to be observed by all GPS stations. Our study shows that tectonic tremor may reflect stress variation during the prenucleation process of a nearby earthquake.
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part I: numerical scheme
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres
2007-10-01
A two-time-level, semi-implicit, semi-Lagrangian (SISL) scheme is applied to the non-hydrostatic pressure coordinate equations, constituting a modified Miller-Pearce-White model, in a hybrid-coordinate framework. A neutral background is subtracted in the initial continuous dynamics, yielding modified equations for geopotential, temperature, and the logarithmic surface pressure fluctuation. Implicit Lagrangian marching formulae for a single time step are derived. A disclosure scheme is presented, which results in an uncoupled diagnostic system consisting of a 3-D Poisson equation for the omega velocity and a 2-D Helmholtz equation for the logarithmic pressure fluctuation. The model is discretized to create a non-hydrostatic extension to the numerical weather prediction model HIRLAM. The discretization schemes, trajectory computation algorithms, and interpolation routines, as well as the physical parametrization package, are maintained from the parent hydrostatic HIRLAM. For the stability investigation, the derived SISL model is linearized with respect to an initial, thermally non-equilibrium resting state. Explicit residuals of the linear model prove to be sensitive to the relative departures of temperature and static stability from the reference state. Based on the stability study, the semi-implicit term in the vertical momentum equation is replaced by an implicit term, which increases the stability of the model.
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
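The implicit-explicit (IMEX) splitting at the heart of such time-marching schemes can be shown on a 1-D advection-diffusion model: stiff diffusion is treated implicitly while advection stays explicit. The sketch below uses a first-order IMEX Euler step for brevity; the low-storage IMEX Runge-Kutta schemes in Diablo 2.0 are higher order, so this only illustrates the splitting idea.

```python
# Hedged sketch: first-order IMEX (implicit-explicit) Euler step on 1-D
# advection-diffusion -- diffusion implicit, advection explicit. This
# illustrates the splitting only, not the code's low-storage IMEX-RK.
import numpy as np

nx, nu, a = 128, 1e-2, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / a                                  # advective CFL only
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)

# periodic second-difference (diffusion) matrix, treated implicitly
D = np.eye(nx, k=1) + np.eye(nx, k=-1) - 2.0 * np.eye(nx)
D[0, -1] = D[-1, 0] = 1.0
D /= dx**2
A_impl = np.eye(nx) - dt * nu * D                  # (I - dt*nu*D) u^{n+1} = rhs

for _ in range(200):
    adv = a * (u - np.roll(u, 1)) / dx             # explicit upwind advection
    u = np.linalg.solve(A_impl, u - dt * adv)
print("mass conservation check:", u.sum() * dx)
```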
NASA Astrophysics Data System (ADS)
Debnath, M.; Santoni, C.; Leonardi, S.; Iungo, G. V.
2017-03-01
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm in which the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'.
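The reduced-order model described, a time-invariant linear operator marching the flow state forward, is what standard dynamic mode decomposition provides: a best-fit operator A with x_{k+1} ≈ A x_k estimated in a truncated POD basis. The sketch below shows that construction on random snapshot data standing in for the LES fields; the truncation rank is an arbitrary choice.

```python
# Hedged sketch: build a linear time-marching reduced-order model from
# snapshot data in the standard DMD way. Random data stand in for the
# actuator-line LES snapshots used in the paper.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(500, 101))    # columns are flow states x_0 ... x_100

X, Y = snapshots[:, :-1], snapshots[:, 1:]             # x_k and x_{k+1} pairs
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                                                  # POD truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
A_tilde = Ur.T @ Y @ Vr / sr                            # reduced operator (r x r)

eigvals, modes = np.linalg.eig(A_tilde)
print("leading DMD eigenvalue magnitudes:", np.sort(np.abs(eigvals))[::-1][:3])

# march the reduced state forward with the time-invariant operator
z = Ur.T @ snapshots[:, 0]
for _ in range(100):
    z = A_tilde @ z
```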
Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph
2017-11-01
The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference between the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.
Predicting marching capacity while carrying extremely heavy loads.
Koerhuis, Claudy L; Veenstra, Bertil J; van Dijk, Jos J; Delleman, Nico J
2009-12-01
The objective of this study was to establish the best prediction for endurance time of combat soldiers marching with extremely heavy loads. It was hypothesized that loads relative to individual characteristics (% maximal load carry capacity [MLCC], % body mass, % lean body mass) would better predict endurance time than load itself. Twenty-three male combat soldiers participated. MLCC was determined by increasing the load by 7.5 kg every 4 minutes until exhaustion. The marching velocity and gradient were 3 km.h(-1) and 5%, respectively. Endurance time was determined carrying 70, 80, and 90% of MLCC. MLCC was on average 102.6 kg +/- 11.6. Load expressed as % MLCC was the best predictor for endurance time (R2 = 0.45). Load expressed as % body mass, as % lean body mass, and absolute load predicted endurance time less well (R2 = 0.30, R2 = 0.24, and R2 = 0.23, respectively). On the basis of these results, it is recommended to assess the MLCC of individual combat soldiers.
Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien
2017-01-01
Background Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. Objective The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. Methods We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Results Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician’s ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. Conclusions AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. PMID:28951384
Acoustoelastic Lamb Wave Propagation in Biaxially Stressed Plates (Preprint)
2012-03-01
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
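The core idea, updating the value function and greedy policy only on a chosen subset of states in each iteration, can be shown on a small tabular MDP. This toy analogue is an assumption-laden illustration: the paper's ADP algorithm uses function approximation for continuous nonlinear systems, and the random MDP below is invented.

```python
# Hedged sketch: "local" value iteration on a small tabular MDP -- each
# sweep updates the value function only on a chosen subset of states
# rather than the whole state space. A toy analogue of the idea only.
import numpy as np

n_states, n_actions, gamma = 6, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # reward r(s, a)

V = np.zeros(n_states)
for it in range(200):
    subset = rng.choice(n_states, size=3, replace=False)   # states updated this sweep
    Q = R + gamma * P @ V                                   # Q[s, a]
    V[subset] = Q[subset].max(axis=1)                       # local update only
policy = np.argmax(R + gamma * P @ V, axis=1)
print("greedy policy:", policy, "values:", np.round(V, 3))
```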
NASA Technical Reports Server (NTRS)
Shakib, Farzin; Hughes, Thomas J. R.
1991-01-01
A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.
2018-03-24
Expedition 55 flight engineer Ricky Arnold of NASA is seen after the hatches were opened between the Soyuz MS-08 spacecraft and the International Space Station on screens at the Moscow Mission Control Center in Korolev, Russia, Saturday, March 24, 2018, a few hours after the Soyuz MS-08 docked to the International Space Station. Hatches were opened at 5:48 p.m. Eastern time on March 23 (12:48 a.m. Moscow time on March 24) and Arnold, Oleg Artemyev of Roscosmos, and Drew Feustel of NASA joined Expedition 55 Commander Anton Shkaplerov of Roscosmos, Scott Tingle of NASA, and Norishige Kanai of the Japan Aerospace Exploration Agency (JAXA) onboard the orbiting laboratory. Photo Credit: (NASA/Joel Kowsky)
2018-03-24
Expedition 55 flight engineer Drew Feustel of NASA is seen after the hatches were opened between the Soyuz MS-08 spacecraft and the International Space Station on screens at the Moscow Mission Control Center in Korolev, Russia, Saturday, March 24, 2018, a few hours after the Soyuz MS-08 docked to the International Space Station. Hatches were opened at 5:48 p.m. Eastern time on March 23 (12:48 a.m. Moscow time on March 24) and Feustel, Oleg Artemyev of Roscosmos, and Ricky Arnold of NASA joined Expedition 55 Commander Anton Shkaplerov of Roscosmos, Scott Tingle of NASA, and Norishige Kanai of the Japan Aerospace Exploration Agency (JAXA) onboard the orbiting laboratory. Photo Credit: (NASA/Joel Kowsky)
2013-03-21
At the Cosmonaut Hotel crew quarters in Baikonur, Kazakhstan, Expedition 35-36 Flight Engineer Chris Cassidy of NASA (left) displays a flight data file book titled “Fast Rendezvous” March 21 as he, Soyuz Commander Pavel Vinogradov (center) and Flight Engineer Alexander Misurkin (right) train for launch to the International Space Station March 29, Kazakh time, in their Soyuz TMA-08M spacecraft from the Baikonur Cosmodrome for a 5 ½ month mission. The “fast rendezvous” refers to the expedited four-orbit, six-hour trip from the launch pad to reach the International Space Station March 29 through an accelerated rendezvous burn plan, the first time this approach will be used for crews flying to the international complex. NASA/Victor Zelentsov
78 FR 14839 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-07
... LEGAL SERVICES CORPORATION Sunshine Act Meeting DATE AND TIME: The Legal Services Corporation's Institutional Advancement Committee will meet telephonically on March 12, 2013 and March 26, 2013. Each meeting... Committee's agenda. LOCATION: F. William McCalpin Conference Center, Legal Services Corporation Headquarters...
Tribal Pesticide Program Council Meeting - March 8-10, 2017
The Tribal Pesticide Program Council (TPPC) will hold its next semiannual meeting on March 8 and 9, 2017, from 8:30 a.m. to 5:00 p.m. (Eastern Standard Time) in Crystal City, Virginia, at One Potomac Yard
Godbehere, Andrew; Le, Gem; El Ghaoui, Laurent; Sarkar, Urmimala
2016-01-01
Background It is difficult to synthesize the vast amount of textual data available from social media websites. Capturing real-world discussions via social media could provide insights into individuals' opinions and the decision-making process. Objective We conducted a sequential mixed methods study to determine the utility of sparse machine learning techniques in summarizing Twitter dialogues. We chose a narrowly defined topic for this approach: cervical cancer discussions over a 6-month time period surrounding a change in Pap smear screening guidelines. Methods We applied statistical methodologies, known as sparse machine learning algorithms, to summarize Twitter messages about cervical cancer before and after the 2012 change in Pap smear screening guidelines by the US Preventive Services Task Force (USPSTF). All messages containing the search terms “cervical cancer,” “Pap smear,” and “Pap test” were analyzed during: (1) January 1–March 13, 2012, and (2) March 14–June 30, 2012. Topic modeling was used to discern the most common topics from each time period and to determine the singular value criterion for each topic. The top 10 relevant topics were then qualitatively coded to determine how efficiently the clustering method grouped distinct ideas, and how the discussion differed before vs. after the change in guidelines. Results This machine learning method was effective in grouping the relevant discussion topics about cervical cancer during the respective time periods (~20% overall irrelevant content in both time periods). Qualitative analysis determined that a significant portion of the top discussion topics in the second time period directly reflected the USPSTF guideline change (e.g., “New Screening Guidelines for Cervical Cancer”), and many topics in both time periods addressed basic screening promotion and education (e.g., “It is Cervical Cancer Awareness Month! Click the link to see where you can receive a free or low cost Pap test.”) Conclusions We demonstrated that machine learning tools can be useful in cervical cancer prevention and screening discussions on Twitter. This method allowed us to show that significant information about cervical cancer screening is publicly available on social media sites. Moreover, we observed a direct impact of the guideline change within the Twitter messages. PMID:27288093
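One common sparse machine-learning route to summarizing short texts is TF-IDF features followed by non-negative matrix factorization, reading off the top words per topic. The sketch below illustrates that route on a few placeholder messages; it is not necessarily the exact sparse algorithm used in the study, and the tiny message list is invented.

```python
# Hedged sketch: TF-IDF + non-negative matrix factorization as one sparse
# topic-extraction approach for short messages. Placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

messages = [
    "new screening guidelines for cervical cancer released",
    "get a free or low cost pap test this month",
    "cervical cancer awareness month free pap smear clinics",
    "task force changes pap smear screening interval",
    "ask your doctor about cervical cancer screening",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(messages)
nmf = NMF(n_components=2, random_state=0, max_iter=500)
W = nmf.fit_transform(X)

terms = tfidf.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}:", ", ".join(top))
```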
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real-time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm which resulted in the successful completion of the algorithm engine demonstration, is described.
2000-03-24
34, Proceedings of the 4th Int. Conf. on Computer-Aided Drafting, Design and Manufacturing Technology, Beijing, China, pp. 133-139, Aug. 1994. [4] C. J. Hsu ... transmission and reflection performance of a magnetized plasma slab ... warped gyro-frequency. The prewarping operation conserves the d.c. gain and the ... inverse NUDFT's are obtained efficiently by the NUFFT algorithms with O(N log2 N) arithmetic operations. Therefore the CG-NUFFT retains the
1987-03-31
processors. The symmetry-breaking algorithms give efficient ways to convert probabilistic algorithms to deterministic algorithms. Some of the ... techniques have been applied to construct several efficient linear-processor algorithms for graph problems, including an O(lg* n)-time algorithm for (Δ + 1 ... On n-node graphs, the algorithm works in O(log² n) time using only n processors, in contrast to the previous best algorithm, which used about n³
NASA Astrophysics Data System (ADS)
Brantut, Nicolas
2018-02-01
Acoustic emission and active ultrasonic wave velocity monitoring are often performed during laboratory rock deformation experiments, but are typically processed separately to yield homogenised wave velocity measurements and approximate source locations. Here I present a numerical method and its implementation in a free software to perform a joint inversion of acoustic emission locations together with the three-dimensional, anisotropic P-wave structure of laboratory samples. The data used are the P-wave first arrivals obtained from acoustic emissions and active ultrasonic measurements. The model parameters are the source locations and the P-wave velocity and anisotropy parameter (assuming transverse isotropy) at discrete points in the material. The forward problem is solved using the fast marching method, and the inverse problem is solved by the quasi-Newton method. The algorithms are implemented within an integrated free software package called FaATSO (Fast Marching Acoustic Emission Tomography using Standard Optimisation). The code is employed to study the formation of compaction bands in a porous sandstone. During deformation, a front of acoustic emissions progresses from one end of the sample, associated with the formation of a sequence of horizontal compaction bands. Behind the active front, only sparse acoustic emissions are observed, but the tomography reveals that the P-wave velocity has dropped by up to 15%, with an increase in anisotropy of up to 20%. Compaction bands in sandstones are therefore shown to produce sharp changes in seismic properties. This result highlights the potential of the methodology to image temporal variations of elastic properties in complex geomaterials, including the dramatic, localised changes associated with microcracking and damage generation.
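As a rough illustration of the forward problem solved inside such a tomography loop, the sketch below computes first-arrival travel times from a point source through a heterogeneous P-wave velocity model with the fast marching method. It uses the scikit-fmm package as a generic stand-in, not the FaATSO code, and the grid, velocities and source position are arbitrary.

    # Illustrative only: first-arrival travel times from a point source in a
    # heterogeneous velocity model, computed with the fast marching method.
    import numpy as np
    import skfmm

    nx, nz, dx = 101, 101, 1.0
    velocity = np.full((nz, nx), 3000.0)          # m/s, homogeneous background
    velocity[60:, :] = 2400.0                     # slower layer (e.g. damaged zone)

    phi = np.ones((nz, nx))
    phi[5, 50] = -1.0                             # acoustic-emission source location

    t = skfmm.travel_time(phi, velocity, dx=dx)   # seconds to every grid node
    print("arrival at a receiver on the far side:", t[95, 50])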
Pulmonary artery segmentation and quantification in sickle cell associated pulmonary hypertension
NASA Astrophysics Data System (ADS)
Linguraru, Marius George; Mukherjee, Nisha; Van Uitert, Robert L.; Summers, Ronald M.; Gladwin, Mark T.; Machado, Roberto F.; Wood, Bradford J.
2008-03-01
Pulmonary arterial hypertension is a known complication associated with sickle-cell disease; roughly 75% of sickle cell disease-afflicted patients have pulmonary arterial hypertension at the time of death. This prospective study investigates the potential of image analysis to act as a surrogate for presence and extent of disease, and whether the size change of the pulmonary arteries of sickle cell patients could be linked to sickle-cell associated pulmonary hypertension. Pulmonary CT-Angiography scans from sickle-cell patients were obtained and retrospectively analyzed. Randomly selected pulmonary CT-Angiography studies from patients without sickle-cell anemia were used as negative controls. First, images were smoothed using anisotropic diffusion. Then, a combination of fast marching and geodesic active contours level sets were employed to segment the pulmonary artery. An algorithm based on fast marching methods was used to compute the centerline of the segmented arteries. From the centerline, the diameters at the pulmonary trunk and first branch of the pulmonary arteries were measured automatically. Arterial diameters were normalized to the width of the thoracic cavity, patient weight and body surface. Results show that the pulmonary trunk and first right and left pulmonary arterial branches at the pulmonary trunk junction are significantly larger in diameter with increased blood flow in sickle-cell anemia patients as compared to controls (p values of 0.0278 for trunk and 0.0007 for branches). CT with image processing shows great potential as a surrogate indicator of pulmonary hemodynamics or response to therapy, which could be an important tool for drug discovery and noninvasive clinical surveillance.
Ming, Xing; Li, Anan; Wu, Jingpeng; Yan, Cheng; Ding, Wenxiang; Gong, Hui; Zeng, Shaoqun; Liu, Qian
2013-01-01
Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient 0(nlogn + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-Mckee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
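A hedged sketch of the simpler baseline mentioned in this abstract: SciPy's Reverse Cuthill-McKee routine applied to a random symmetric sparsity pattern, with the matrix bandwidth before and after the reordering used as a rough proxy for the envelope. This illustrates the RCM baseline only; it is not the Sloan or hybrid spectral algorithm, and the matrix is synthetic.

    # Reverse Cuthill-McKee reordering of a random symmetric sparsity pattern.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def bandwidth(A):
        coo = A.tocoo()
        return int(np.max(np.abs(coo.row - coo.col)))

    A = sp.random(200, 200, density=0.02, random_state=0)
    A = ((A + A.T) > 0).astype(float).tocsr()     # symmetric sparsity pattern

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    B = A[perm, :][:, perm]                       # apply the permutation symmetrically
    print("bandwidth before:", bandwidth(A), "after RCM:", bandwidth(B))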
Contour Connection Method for automated identification and classification of landslide deposits
NASA Astrophysics Data System (ADS)
Leshchinsky, Ben A.; Olsen, Michael J.; Tanyu, Burak F.
2015-01-01
Landslides are a common hazard worldwide that result in major economic, environmental and social impacts. Despite their devastating effects, inventorying existing landslides, often the regions at highest risk of reoccurrence, is challenging, time-consuming, and expensive. Current landslide mapping techniques include field inventorying, photogrammetric approaches, and use of bare-earth (BE) lidar digital terrain models (DTMs) to highlight regions of instability. However, many techniques do not have sufficient resolution, detail, and accuracy for mapping at landscape scale, with the exception of BE DTMs, which can reveal the landscape beneath vegetation and other obstructions, highlighting landslide features including scarps, deposits, fans and more. Current approaches to landslide inventorying with lidar-derived BE DTMs include manual digitizing, statistical or machine learning approaches, and use of alternate sensors (e.g., hyperspectral imaging) with lidar. This paper outlines a novel algorithm to automatically and consistently detect landslide deposits at a landscape scale. The proposed method is named the Contour Connection Method (CCM) and is primarily based on bare-earth lidar data, requiring minimal user input such as the landslide scarp and deposit gradients. The CCM algorithm functions by applying contours and nodes to a map, and using vectors connecting the nodes to evaluate gradient and associated landslide features based on the user-defined input criteria. In addition to its detection capabilities, CCM could potentially be used to classify different landscape features. This is possible because each landslide feature has a distinct set of metadata - specifically, the density of connection vectors on each contour - that provides a unique signature for each landslide. In this paper, demonstrations of CCM are presented by applying the algorithm to the region surrounding the Oso landslide in Washington (March 2014), as well as to two 14,000 ha DTMs in Oregon, which were used to compare CCM with manually delineated landslide deposits. The results show the capability of CCM with limited data requirements and its agreement with manual delineation, while achieving the results in much less time.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transformation (STFRFT) for the identification of targets with complicated movements. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
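One way to picture such deliberation scheduling is a toy greedy allocator: given assumed performance profiles (expected answer quality as a function of allocated time), each time slice goes to the anytime algorithm whose expected quality improves the most. The profiles, slice length and greedy rule below are illustrative assumptions, not the architecture described in the paper.

    # Toy greedy allocation of computation time across anytime algorithms.
    import math

    profiles = {                                   # hypothetical performance profiles
        "path_planner": lambda t: 1.0 - math.exp(-0.8 * t),
        "diagnoser":    lambda t: 1.0 - math.exp(-0.3 * t),
        "scheduler":    lambda t: 1.0 - math.exp(-1.5 * t),
    }

    def allocate(total_time, slice_len=0.1):
        alloc = {name: 0.0 for name in profiles}
        t = 0.0
        while t < total_time:
            # expected marginal gain of one more slice for each algorithm
            gains = {name: f(alloc[name] + slice_len) - f(alloc[name])
                     for name, f in profiles.items()}
            best = max(gains, key=gains.get)
            alloc[best] += slice_len
            t += slice_len
        return alloc

    print(allocate(total_time=3.0))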
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Silvestri, M.; Musacchio, M.
2017-12-01
In this work a complete processing chain, from detection of the onset of an eruption to estimation of lava flow temperature on active volcanoes using remote sensing data, is presented, showing results for the Mt. Etna eruption of March 2017. Early detection of a new eruption is based on geostationary, very low spatial resolution satellites (3x3 km at nadir); the hot spot/lava flow evolution is derived from Sentinel-2 polar medium/high spatial resolution data (20x20 m), while the surface temperature is estimated from polar medium/low spatial resolution sensors such as L8, ASTER and S3 (from 90 m up to 1 km). This approach merges two outcomes: results of monitoring activity performed within INGV R&D programmes, and results obtained through the ESA-funded Geohazards Exploitation Platform (GEP) project, which aims at developing a shared platform for providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal, multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel-2 (MSI) data, for which the algorithm used is based on the NIR-SWIR bands, we exploit the MIR-TIR channels of L8, ASTER, S3 and SEVIRI to generate the surface thermal state analysis automatically. The developed procedure produces time series data and allows information to be extracted from each co-registered pixel, to highlight temperature variations within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
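For readers unfamiliar with the underlying measure, the sketch below is a compact reference implementation of the classic Lempel-Ziv (1976) phrase-counting parse of a symbol string; it corresponds to the CLZ-style baseline, not to the authors' encoding LZ (ELZ) variant, and the test sequences are arbitrary.

    def lz76_complexity(s):
        """Number of phrases in the classic Lempel-Ziv (1976) parsing of s."""
        n, i, c = len(s), 0, 0
        while i < n:
            l = 1
            # extend the phrase while it can still be copied from the earlier text
            while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                l += 1
            c += 1
            i += l
        return c

    # a regular sequence should score lower than an irregular one
    print(lz76_complexity("01" * 16))                        # periodic
    print(lz76_complexity("0110100101111000110100101011"))   # irregular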
Gauster, A; Waddington, A; Jamieson, M A
2015-08-01
This study sought to analyze the effect of strategically timed local preventive education on reducing teen conception rates during known seasonal peaks in March and April. All teen conceptions (age ≤ 19) from March and April 2010, 2011, and 2012 were identified using medical records data. Teen conceptions occurring in January 2010, 2011, and 2012 were also identified to control for any new trends in the community. A city of 160,000 with 1 tertiary care centre. Pregnant adolescents (age ≤ 19). During the month of February 2012, preventive education and media awareness strategies were aimed at parents, teachers, and teens. Adolescent conceptions in March and April 2012. Conception rates in teens ≤18 years old were significantly reduced in March and April 2012 compared to March and April 2010 and 2011 (RR = 0.53, 95% CI = 0.32 - 0.88, P = .0132). There was an increase in conceptions in March and April 2012 compared to 2010 and 2011 among 19-year-olds (RR = 1.57, 95% CI = 0.84-2.9, P = .1500). Effect modification revealed our ≤18-year-old group and our 19-year-old group were distinct groups with different risk estimates (P = .0075). Educational sessions were poorly attended and contraception clinic volume was static. We propose increased parental supervision in response to media reminders as a possible explanation for the reduction in adolescent conceptions (≤18 years old) seen in March 2012. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Far-field radiation patterns of aperture antennas by the Winograd Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Heisler, R.
1978-01-01
A more time-efficient algorithm for computing the discrete Fourier transform, the Winograd Fourier transform (WFT), is described. The WFT algorithm is compared with other transform algorithms. Results indicate that application of the WFT algorithm to antenna analysis is very successful. Significant savings in CPU time will improve computer turnaround time and circumvent the need to resort to weekend runs.
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
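A generic discrete-space, continuous-time sketch of MFPT estimation is given below: a walker hops on a 1D lattice with rates derived from a linear potential and exponential waiting times, and the first-passage times to an absorbing site are averaged. This only illustrates the class of algorithms considered; it is not the exact scheme of the paper, and the drift, diffusivity and lattice spacing are arbitrary.

    # Discrete-space, continuous-time estimate of the mean first-passage time for
    # an overdamped walker drifting toward an absorbing boundary.
    import math
    import random

    def mfpt_estimate(f=1.0, D=1.0, dx=0.05, x_abs=1.0, n_walkers=2000, seed=1):
        random.seed(seed)
        r_plus = (D / dx**2) * math.exp(+f * dx / 2.0)   # hop toward the target
        r_minus = (D / dx**2) * math.exp(-f * dx / 2.0)  # hop away from the target
        rate = r_plus + r_minus
        p_plus = r_plus / rate
        total = 0.0
        for _ in range(n_walkers):
            x, t = 0.0, 0.0
            while x < x_abs:
                t += random.expovariate(rate)            # exponential waiting time
                x += dx if random.random() < p_plus else -dx
            total += t
        return total / n_walkers

    # for small dx this should approach x_abs / (D * f) = 1
    print("estimated MFPT:", mfpt_estimate())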
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √Nβ/Z and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^(3/2)), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, and whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
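The physical time scale discussed above can be sketched for a rejection-free (BKL-type) kinetic Monte Carlo step: an event is picked with probability proportional to its rate, and physical time advances by an exponential waiting time whose mean is the inverse of the total rate. The rate table below is purely illustrative.

    # Minimal rejection-free (BKL-style) kinetic Monte Carlo with physical time.
    import random

    def kmc(rates, n_steps=5, seed=0):
        random.seed(seed)
        t = 0.0
        for _ in range(n_steps):
            total = sum(rates)
            # pick an event with probability proportional to its rate
            r = random.random() * total
            cumulative = 0.0
            for event, k in enumerate(rates):
                cumulative += k
                if r < cumulative:
                    break
            # physical time advances by an exponential waiting time, mean 1/total
            t += random.expovariate(total)
            print(f"event {event} fired, t = {t:.4e} s")
        return t

    kmc([1.0e3, 2.5e2, 5.0e1])   # three competing processes, rates in 1/s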
Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li
2014-01-01
Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases for selecting pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%), and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms including Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) model, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day in accuracy and the percentage within 20% exceeded 45% in all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385
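The two accuracy measures used in this study are straightforward to reproduce; the sketch below computes the mean absolute error and the percentage of patients whose predicted dose falls within 20% of the observed dose, using made-up doses for illustration.

    # Mean absolute error and "percentage within 20%" for dose predictions.
    def mae(predicted, observed):
        return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

    def percent_within_20(predicted, observed):
        hits = sum(abs(p - o) <= 0.2 * o for p, o in zip(predicted, observed))
        return 100.0 * hits / len(observed)

    pred = [2.5, 3.1, 1.7, 4.0]   # mg/day, hypothetical algorithm output
    obs  = [2.8, 3.0, 2.4, 3.8]   # mg/day, hypothetical observed stable doses
    print(mae(pred, obs), percent_within_20(pred, obs))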
Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li
2014-01-01
To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases for selecting pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%), and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms including Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) model, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day in accuracy and the percentage within 20% exceeded 45% in all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.
Aissa, J; Thomas, C; Sawicki, L M; Caspers, J; Kröpil, P; Antoch, G; Boos, J
2017-05-01
To investigate the value of dedicated computed tomography (CT) iterative metal artefact reduction (iMAR) algorithms in patients after spinal instrumentation. Post-surgical spinal CT examinations of 24 patients, performed between March 2015 and July 2016, were retrospectively included. Images were reconstructed with standard weighted filtered back projection (WFBP) and with two dedicated iMAR algorithms (iMAR-Algo1, adjusted to spinal instrumentation, and iMAR-Algo2, adjusted to large metallic hip implants) using a medium smooth kernel (B30f) and a sharp kernel (B70f). Frequencies of density changes were quantified to assess objective image quality. Image quality was rated subjectively by evaluating the visibility of critical anatomical structures including the central canal, the spinal cord, neural foramina, and vertebral bone. Both iMAR algorithms significantly reduced artefacts from metal compared with WFBP (p<0.0001). Results of subjective image analysis showed that both iMAR algorithms led to an improvement in visualisation of soft-tissue structures (median iMAR-Algo1=3; interquartile range [IQR]:1.5-3; iMAR-Algo2=4; IQR: 3.5-4) and bone structures (iMAR-Algo1=3; IQR:3-4; iMAR-Algo2=4; IQR:4-5) compared to WFBP (soft tissue: median 2; IQR: 0.5-2 and bone structures: median 2; IQR: 1-3; p<0.0001). Compared with iMAR-Algo1, objective artefact reduction and subjective visualisation of soft-tissue and bone structures were improved with iMAR-Algo2 (p<0.0001). Both iMAR algorithms reduced artefacts compared with WFBP; however, the iMAR algorithm with dedicated settings for large metallic implants was superior to the algorithm specifically adjusted to spinal implants. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
75 FR 11894 - National Cancer Institute; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Cancer Institute.... App.), notice is hereby given of a meeting of the National Cancer Institute Director's Consumer... Committee: National Cancer Institute Director's Consumer Liaison Group. Date: March 24-26, 2010. Time: March...
9. Historic American Buildings Survey, Photographed by Daniel Cathcart March ...
9. Historic American Buildings Survey, Photographed by Daniel Cathcart March 8th, 1934 INTERIOR VIEW OF WINDOW IN GARAGE LEFT UNFINISHED AT THE TIME OF RENOVATION SHOWING ORIGINAL ADOBE CONSTRUCTION. - Casa de los Cerritos, 4600 American Avenue, Long Beach, Los Angeles County, CA
Education--An Annotated Bibliography of Current Issues, January-March 1990.
ERIC Educational Resources Information Center
Wardell, David
This annotated bibliography lists articles about educational issues that were published in Japanese periodicals from January through March 1990. The vast majority of articles are taken from the "Japan Times,""Daily Yomiuri,""Mainichi Daily," and "Asahi Evening." However, articles are also taken from…
HerMES: point source catalogues from Herschel-SPIRE observations II
NASA Astrophysics Data System (ADS)
Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.
2014-11-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding cake survey strategy, it consists of nested fields with varying depth and area totalling ˜380 deg2. In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg2 HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A sub-set of these catalogues, which consists of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ˜74 deg2) were released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues, the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2) and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as the completeness, reliability, photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).
An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.
ERIC Educational Resources Information Center
Gonzales, Michael G.
1984-01-01
Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
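For reference, a minimal empirical timing of bubble sort is sketched below; doubling the list size should roughly quadruple the measured run time, reflecting the O(n^2) behaviour the article derives. The list sizes are arbitrary.

    # Empirical run-time measurement of bubble sort on random lists.
    import random
    import time

    def bubble_sort(a):
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:          # early exit if already sorted
                break
        return a

    for n in (500, 1000, 2000):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        bubble_sort(data)
        print(n, round(time.perf_counter() - start, 4), "seconds")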
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction carries inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria for choosing nonterminals. Our algorithm refines a grammar based on one nonterminal in each step; since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Kreider, K. L.
1996-01-01
An explicit finite difference iteration scheme is developed to study harmonic sound propagation in ducts. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1996-01-01
An explicit finite difference iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
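The time-marching idea in the two abstracts above can be illustrated with a toy model: introduce pseudo-time into a steady 1D problem and march an explicit scheme until the field stops changing. The model equation u_t = u_xx + f with fixed ends is only a stand-in for the transformed acoustic potential equation, and all grid parameters are assumptions.

    # Explicit pseudo-time marching of a 1D model problem to its steady state.
    import numpy as np

    nx = 101
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx**2                  # explicit stability limit dt <= dx^2 / 2
    x = np.linspace(0.0, 1.0, nx)
    f = np.sin(np.pi * x)             # stand-in harmonic source term
    u = np.zeros(nx)                  # quiescent initial field, u(0)=u(1)=0

    for step in range(200000):
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + dt * ((u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 + f[1:-1])
        if np.max(np.abs(u_new - u)) < 1e-10:   # field no longer changing
            break
        u = u_new

    # steady solution of u_xx = -sin(pi x) has peak sin(pi x)/pi^2 ~ 0.1013
    print("converged after", step, "steps; peak amplitude", u.max())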
Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan
2017-10-01
Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet period. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter at the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculate the lifetime exposure of antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter in January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be obtained. The algorithm provides a systematic and repeatable approach to estimating the antimicrobial exposure throughout the rearing period, independent of rearing site for finisher batches, as a lifetime exposure measurement. Copyright © 2017 Elsevier B.V. All rights reserved.
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6m/s. The five algorithms were then implemented and estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data. Copyright © 2014 Elsevier B.V. All rights reserved.
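The reference measurement against which such kinematic algorithms are judged can be sketched simply: footstrike, toe-off and contact time taken from a vertical ground reaction force trace with a force threshold. The 20 N threshold, 1000 Hz sampling rate and synthetic force curve below are assumptions, not values from the study.

    # Footstrike, toe-off and contact time from a vertical ground reaction force.
    import numpy as np

    def contact_events(fz, fs=1000.0, threshold=20.0):
        on_ground = fz > threshold
        idx = np.flatnonzero(np.diff(on_ground.astype(int)))
        footstrike = idx[0] + 1            # first sample above threshold
        toe_off = idx[1] + 1               # first sample back below threshold
        return footstrike / fs, toe_off / fs, (toe_off - footstrike) / fs

    # synthetic single-stance force trace: ~0.25 s of contact in a 0.5 s window
    t = np.arange(0, 0.5, 0.001)
    fz = np.where((t > 0.1) & (t < 0.35), 1500.0 * np.sin(np.pi * (t - 0.1) / 0.25), 0.0)
    print(contact_events(fz))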
Marzocchini, Manrico; Tatàno, Fabio; Moretti, Michela Simona; Antinori, Caterina; Orilisi, Stefano
2018-06-05
A possible approach for determining soil and groundwater quality criteria for contaminated sites is the comparative risk assessment. Originating from but not limited to Italian interest in a decentralised (regional) implementation of comparative risk assessment, this paper first addresses the proposal of an original methodology called CORIAN REG-M , which was created with initial attention to the context of potentially contaminated sites in the Marche Region (Central Italy). To deepen the technical-scientific knowledge and applicability of the comparative risk assessment, the following characteristics of the CORIAN REG-M methodology appear to be relevant: the simplified but logical assumption of three categories of factors (source and transfer/transport of potential contamination, and impacted receptors) within each exposure pathway; the adaptation to quality and quantity of data that are available or derivable at the given scale of concern; the attention to a reliable but unsophisticated modelling; the achievement of a conceptual linkage to the absolute risk assessment approach; and the potential for easy updating and/or refining of the methodology. Further, the application of the CORIAN REG-M methodology to some case-study sites located in the Marche Region indicated the following: a positive correlation can be expected between air and direct contact pathway scores, as well as between individual pathway scores and the overall site scores based on a root-mean-square algorithm; the exposure pathway, which presents the highest variability of scores, tends to be dominant at sites with the highest computed overall site scores; and the adoption of a root-mean-square algorithm can be expected to emphasise the overall site scoring.
Spread F in the Midlatitude Ionosphere According to DPS-4 Ionosonde Data
NASA Astrophysics Data System (ADS)
Panchenko, V. A.; Telegin, V. A.; Vorob'ev, V. G.; Zhbankov, G. A.; Yagodkina, O. I.; Rozhdestvenskaya, V. I.
2018-03-01
The results of studying spread F obtained from the DPS-4 ionosonde data at the observatory of the Pushkov Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation (Moscow) are presented. The methodical questions that arise during the study of a spread F phenomenon in the ionosphere are considered; the current results of terrestrial observations are compared with previously published data and the results of sounding onboard an Earth-satellite vehicle. The automated algorithm for estimation of the intensity of frequency spread F, which was developed by the authors and was successfully verified via comparison of the data of the digisonde DPS-4 and the results of manual processing, is described. The algorithm makes it possible to quantify the intensity of spread F in megahertz (the dFs parameter) and in the number of points (0, 1, 2, 3). The strongest spread (3 points) is shown to be most likely around midnight, while the weakest spread (0 points) is highly likely to occur during the daytime. The diurnal distribution of a 1-2 point spread F in the winter indicates the presence of additional maxima at 0300-0600 UT and 1400-1700 UT, which may appear due to the terminator. Despite the large volume of processed data, we can not definitively state that the appearance of spread F depends on the magnetic activity indices Kp, Dst, and AL, although the values of the dFs frequency spread interval strongly increased both at day and night during the magnetic storm of March 17-22, 2015, especially in the phase of storm recovery on March 20-22.
75 FR 3243 - National Cancer Institute; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-20
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Cancer Institute.... App.), notice is hereby given of a meeting of the National Cancer Institute Board of Scientific... Committee: National Cancer Institute Board of Scientific Advisors. Date: March 8-9, 2010. Time: March 8...
NASA Astrophysics Data System (ADS)
Swaraj Pati, Mythili N.; Korde, Pranav; Dey, Pallav
2017-11-01
The purpose of this paper is to introduce an optimised variant of the round robin scheduling algorithm. Every algorithm works in its own way and has its own merits and demerits. The proposed algorithm overcomes the shortfalls of existing scheduling algorithms in terms of waiting time, turnaround time, throughput and number of context switches. The algorithm is pre-emptive and works on the priority of the associated processes. The priority is decided on the basis of the remaining burst time of a process: the lower the burst time, the higher the priority, and the higher the burst time, the lower the priority. A time quantum is specified at the start of execution. If the burst time of a process is less than twice the specified time quantum but more than the quantum itself, the process is given high priority and is allowed to run to completion. Such processes do not have to wait for their next burst cycle.
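One possible reading of the rule described above is sketched below: ready processes are served in order of remaining burst time, and a process whose remaining burst is below twice the time quantum is allowed to run to completion instead of being preempted. Arrival times are assumed to be zero, and the burst times and quantum are illustrative, not values from the paper.

    # A toy simulation of the burst-time-priority round robin variant.
    def modified_round_robin(bursts, quantum):
        remaining = dict(enumerate(bursts))           # pid -> remaining burst time
        finish, clock = {}, 0
        while remaining:
            pid = min(remaining, key=remaining.get)   # lower burst -> higher priority
            if remaining[pid] < 2 * quantum:
                clock += remaining[pid]               # within 2x quantum: run to completion
                finish[pid] = clock
                del remaining[pid]
            else:
                clock += quantum                      # otherwise take a normal slice
                remaining[pid] -= quantum
        turnaround = [finish[p] for p in sorted(finish)]
        waiting = [finish[p] - bursts[p] for p in sorted(finish)]
        return turnaround, waiting

    print(modified_round_robin([5, 12, 3, 7], quantum=4))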
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy. PMID:29795600
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it could extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve the health level of people. Considering data scale and time shifts of time series, in this paper, we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. For recruiting Single-Pass and Online patterns, our algorithms could handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms select DTW to measure distance of pair-wise time series and encourage higher clustering accuracy because DTW could determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches could yield high quality clusters and were better than all the competitors in terms of clustering accuracy.
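The distance at the core of both records above can be written as a short dynamic program; the sketch below computes the Dynamic Time Warping (DTW) cost between two series by allowing segments to stretch or compress. The example series are arbitrary.

    # Basic Dynamic Time Warping distance between two 1D series.
    def dtw_distance(a, b):
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j],      # insertion
                                     D[i][j - 1],      # deletion
                                     D[i - 1][j - 1])  # match
        return D[n][m]

    print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))   # small: similar shapes
    print(dtw_distance([0, 1, 2, 3, 2, 1], [3, 3, 0, 0, 1, 3, 0]))   # larger: different shapes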
Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien; Hwang, Juey-Jen; Ho, Yi-Lwun
2017-09-26
Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician's ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. ©Ying-Hsien Chen, Chi-Sheng Hung, Ching-Chang Huang, Yu-Chien Hung, Juey-Jen Hwang, Yi-Lwun Ho. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 26.09.2017.
Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath
2010-03-01
This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, which enables interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normals. The final step estimates vessel centerlines using a ray casting and vote accumulation algorithm, which enables topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8x speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.
A Computerized Decision Support System for Depression in Primary Care
Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha
2009-01-01
Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065
A computerized decision support system for depression in primary care.
Kurian, Benji T; Trivedi, Madhukar H; Grannemann, Bruce D; Claassen, Cynthia A; Daly, Ella J; Sunderajan, Prabha
2009-01-01
In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS(17)) evaluated by an independent rater. Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS(17), than patients treated with usual care (P < .001). The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. clinicaltrials.gov Identifier: NCT00551083.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
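A hedged sketch of the binomial-model class of algorithms mentioned above: time is discretized, and in each interval the event occurs with a probability derived from the current hazard, which depends on a time-dependent covariate z(t). The baseline hazard, effect size and covariate process below are illustrative assumptions, not the simulation design of the paper.

    # Event-time generation conditional on a time-dependent covariate (binomial model).
    import math
    import random

    def simulate_event_time(beta=0.7, h0=0.05, dt=0.1, t_max=50.0, seed=None):
        rng = random.Random(seed)
        t = 0.0
        while t < t_max:
            z = 1.0 if t >= 10.0 else 0.0            # covariate switches on at t = 10
            hazard = h0 * math.exp(beta * z)         # Cox-type hazard with TDC
            if rng.random() < 1.0 - math.exp(-hazard * dt):
                return t + dt                        # event occurs in this interval
            t += dt
        return t_max                                 # administratively censored

    print([simulate_event_time(seed=i) for i in range(5)])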
A robustness test of the braided device foreshortening algorithm
NASA Astrophysics Data System (ADS)
Moyano, Raquel Kale; Fernandez, Hector; Macho, Juan M.; Blasco, Jordi; San Roman, Luis; Narata, Ana Paula; Larrabide, Ignacio
2017-11-01
Different computational methods have been recently proposed to simulate the virtual deployment of a braided stent inside a patient's vasculature. Those methods are primarily based on the segmentation of the region of interest to obtain the local vessel morphology descriptors. The goal of this work is to evaluate the influence of the segmentation quality on the method named "Braided Device Foreshortening" (BDF). METHODS: We used the 3DRA images of 10 aneurysmatic patients (cases). The cases were segmented by applying a marching cubes algorithm with a broad range of thresholds in order to generate 10 surface models each. We selected a braided device to apply the BDF algorithm to each surface model. The range of the computed flow diverter lengths for each case was obtained to calculate the variability of the method against the threshold segmentation values. RESULTS: An evaluation study over 10 clinical cases indicates that the final length of the deployed flow diverter in each vessel model is stable, yielding a maximum difference of 11.19% in vessel diameter and a maximum of 9.14% in the simulated stent length across the threshold values. The average coefficient of variation was found to be 4.08%. CONCLUSION: A study evaluating how the segmentation threshold affects the simulated length of the deployed FD was presented. The segmentation algorithm used to segment intracranial aneurysm 3D angiography images presents small variation in the resulting stent simulation.
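The threshold sensitivity being tested can be illustrated generically with scikit-image's marching cubes on a synthetic volume (this is not the BDF code; the data and names are made up): the iso-surface level plays the role of the segmentation threshold, and sweeping it changes the extracted vessel surface and hence any length computed on it.

import numpy as np
from skimage import measure

# Synthetic "vessel": intensity falls off with distance from a tube axis along z,
# standing in for a 3DRA volume (hypothetical data).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 8.0 ** 2))

# Segment the same volume with a range of thresholds, as in the robustness test.
for level in np.linspace(0.3, 0.7, 5):
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    extent = verts[:, 2].max() - verts[:, 2].min()   # crude surrogate for vessel diameter
    print(f"threshold {level:.2f}: {len(verts)} vertices, x-extent {extent:.1f} voxels")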
Syphilis testing in antenatal care: Policies and practices among laboratories in the Americas.
Luu, Minh; Ham, Cal; Kamb, Mary L; Caffe, Sonja; Hoover, Karen W; Perez, Freddy
2015-06-01
To assess laboratory syphilis testing policies and practices among laboratories in the Americas. Laboratory directors or designees from PAHO member countries were invited to participate in a structured, electronically-delivered survey between March and August, 2014. Data on syphilis tests, algorithms, and quality control (QC) practices were analyzed, focusing on laboratories receiving specimens from antenatal clinics (ANCs). Surveys were completed by 69 laboratories representing 30 (86%) countries. Participating laboratories included 36 (52%) national or regional reference labs and 33 (48%) lower-level laboratories. Most (94%) were public sector facilities and 71% reported existence of a national algorithm for syphilis testing in pregnancy, usually involving both treponemal and non-treponemal testing (72%). Less than half (41%) used rapid syphilis tests (RSTs), and only seven laboratories representing five countries reported RSTs were included in the national algorithm for pregnant women. Most (83%) laboratories serving ANCs reported using some type of QC system; 68% of laboratories reported participation in external QC. Only 36% of laboratories reported data to national/local surveillance. Half of all laboratories serving ANC settings reported a stockout of one or more essential supplies during the previous year (median duration, 30 days). Updating laboratory algorithms, improving testing standards, integrating data into existing surveillance, and improved procurement and distribution of commodities may be needed to ensure elimination of MTCT of syphilis in the Americas. Copyright © 2015. Published by Elsevier Ireland Ltd.
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
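For reference, Batcher's method is a data-independent network of compare-exchange operations, which is why it vectorizes so well; a compact sequential sketch of the odd-even merge sort (a generic textbook rendering for power-of-two lengths, not the STAR vector code) looks like this:

def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    # Merge the two sorted halves of a[lo:lo+n], comparing elements r apart.
    step = r * 2
    if step < n:
        oddeven_merge(a, lo, n, step)
        oddeven_merge(a, lo + r, n, step)
        for i in range(lo + r, lo + n - r, step):
            compare_exchange(a, i, i + r)
    else:
        compare_exchange(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, n=None):
    # Batcher's odd-even merge sort; n must be a power of two.
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        oddeven_merge_sort(a, lo, m)
        oddeven_merge_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)

data = [7, 3, 0, 5, 6, 2, 4, 1]
oddeven_merge_sort(data)
print(data)  # [0, 1, 2, 3, 4, 5, 6, 7]

The fixed comparison pattern costs O(N (log N)^2) comparisons, but every stage is a set of independent compare-exchanges that can be issued as vector operations, which is the trade-off discussed above.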
3D Numerical Simulation on the Rockslide Generated Tsunamis
NASA Astrophysics Data System (ADS)
Chuang, M.; Wu, T.; Wang, C.; Chu, C.
2013-12-01
The rockslide generated tsunami is one of the most devastating natural hazards. However, the involvement of the moving obstacle and the dynamic free-surface movement makes the numerical simulation a difficult task. To describe both the fluid motion and the solid movement at the same time, we newly developed a two-way fully-coupled moving solid algorithm with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method. The two-step projection method is adopted to solve the Navier-Stokes type governing equations. In the new moving solid algorithm, a fictitious body force is implicitly prescribed in the MAC correction step to make the cell-center velocity satisfy the obstacle velocity. We call this method the implicit velocity method (IVM). Because no extra terms are added to the pressure Poisson correction, the pressure field of the fluid part is stable, which is the key to the two-way fluid-solid coupling. Because no real solid material is present in the IVM, the time marching step is not restricted to the smallest effective grid size. Also, because the fictitious force is implicitly added in the correction step, the resulting velocity is accurate and fully coupled with the resulting pressure field. We validated the IVM by simulating a floating box moving up and down on the free surface. We presented the time-history obstacle trajectory and compared it with the experimental data. Very accurate results can be seen in terms of the oscillating amplitude and the period (Fig. 1). We also presented the free-surface comparison with the high-speed snapshots. At the end, the IVM was used to study the rockslide generated tsunamis (Liu et al., 2005). Good validations of the slide trajectory and the free-surface movement will be presented in the full paper. From the simulation results (Fig. 2), we observed that the rockslide generated waves are mainly caused by the rebounding waves from the two sides of the sliding rock after the water is dragged down by the downward motion of the solid. We also found that the turbulence has a minor effect on the main flow field. The rock size, rock density, and the steepness of the slope were analyzed to understand their effects on the maximum run-up height. The detailed algorithm of the IVM, the validation, and the simulation and analysis of the rockslide tsunami will be presented in the full paper. Figure 1. Time-history trajectory of the obstacle for the floating obstacle simulation. Figure 2. Snapshots of the free-surface elevation with streamlines for the rockslide tsunami simulation.
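A minimal sketch of the velocity-override idea described above (hypothetical array names; this is not the authors' two-way-coupled solver) is to take the cell-centred velocity after the pressure-correction step, force it to the obstacle velocity in solid-covered cells, and record the fictitious body force implied by that change:

import numpy as np

def ivm_correction(u, solid_mask, u_solid, dt):
    # u          : cell-centred velocity after the correction step, shape (2, ny, nx)
    # solid_mask : True where the moving solid covers a cell
    # u_solid    : rigid-body velocity of the solid, shape (2,)
    f_fict = np.zeros_like(u)
    for k in range(2):
        f_fict[k][solid_mask] = (u_solid[k] - u[k][solid_mask]) / dt  # implied body force
        u[k][solid_mask] = u_solid[k]                                 # enforce solid velocity
    return u, f_fict

In a two-way coupling of the kind described, the negative of this force integrated over the solid cells would feed back into the rigid-body equations of motion of the sliding mass.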
2018-03-21
A firefighter is seen in front of the Soyuz rocket as teams await the arrival of Expedition 55 crew members Oleg Artemyev of Roscosmos, Ricky Arnold of NASA, and Drew Feustel of NASA, Wednesday, March 21, 2018 at the Baikonur Cosmodrome in Kazakhstan. Arnold, Artemyev, and Feustel launched aboard the Soyuz MS-08 spacecraft at 1:44 p.m. Eastern time (11:44 p.m. Baikonur time) on March 21 to begin their journey to the International Space Station. Photo Credit: (NASA/Joel Kowsky)
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
49 CFR Appendix A to Part 5 - Appendix A to Part 5
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Act of March 19, 1918, ch. 24, as amended (15 U.S.C. 261-264); the Uniform Time Act of 1966 (80... interstate or foreign commerce, and, under section 2 of the Act of March 19, 1918, ch. 24, as amended (15 U.S...
Talking Stick. Volume 29, Number 4, March-April 2012
ERIC Educational Resources Information Center
Baumann, James A., Ed.
2012-01-01
The "Talking Stick" is published bimonthly, six times a year in January/February, March/April, May/June, July/August, September/October, and November/December by the Association of College and University Housing Officers-International. Each issue is divided into three sections: Features, Columns, and Departments. These sections contain articles…
2011-03-11
At the Kremlin Wall in Moscow March 11, 2011, Russian cosmonaut Andrey Borisenko lays flowers in honor of fallen icons as part of the ceremonial activities leading to the scheduled launch of Expedition 27 to the International Space Station, scheduled for March 30 (Kazakhstan time) in the Soyuz TMA-21 spacecraft. Photo credit: NASA/Mark Polansky
2011-03-11
At the Kremlin Wall in Moscow March 11, 2011, Russian cosmonaut Alexander Samokutyaev lays flowers in honor of fallen icons as part of the ceremonial activities leading to the scheduled launch of Expedition 27 to the International Space Station, scheduled for March 30 (Kazakhstan time) in the Soyuz TMA-21 spacecraft. Photo credit: NASA/Mark Polansky
2011-03-11
At the Kremlin Wall in Moscow March 11, 2011, NASA astronaut Ron Garan lays flowers in honor of fallen icons as part of the ceremonial activities leading to the scheduled launch of Expedition 27 to the International Space Station, scheduled for March 30 (Kazakhstan time) in the Soyuz TMA-21 spacecraft. Photo credit: NASA/Mark Polansky
Platinum Publications, March 1–March 30, 2017 | Poster
Platinum Publications are selected from articles by NCI at Frederick scientists published in 42 prestigious science journals. This list represents articles published during the time period shown above, as generated from PubMed. Articles designated as Platinum Highlights are noteworthy articles selected from among the most recently published Platinum Publications.
75 FR 10795 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... FEDERAL RESERVE SYSTEM Sunshine Act Meeting AGENCY HOLDING THE MEETING: Board of Governors of the Federal Reserve System. TIME AND DATE: 12 p.m., Monday, March 15, 2010. PLACE: Marriner S. Eccles Federal... Federal Reserve System, March 5, 2010. Robert deV. Frierson, Deputy Secretary of the Board. [FR Doc. 2010...
Talking Stick. Volume 28, Number 4, March-April 2011
ERIC Educational Resources Information Center
Baumann, James A., Ed.
2011-01-01
The "Talking Stick" is published bimonthly, six times a year in January/February, March/April, May/June, July/August, September/October, and November/December by the Association of College and University Housing Officers-International. Each issue is divided into three sections, namely: Features, Columns, and Departments. These sections contain…
Apprentices & Trainees: Early Trend Estimates. March Quarter, 2012
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2012
2012-01-01
This publication presents early estimates of apprentice and trainee commencements for the March quarter 2012. Indicative information about this quarter is presented here; the most recent figures are estimated, taking into account reporting lags that occur at the time of data collection. The early trend estimates are derived from the National…
Talking Stick. Volume 27, Number 4, March-April 2010
ERIC Educational Resources Information Center
Baumann, James A., Ed.
2010-01-01
The "Talking Stick" is published bimonthly, six times a year in January/February, March/April, May/June, July/August, September/October, and November/December by the Association of College and University Housing Officers-International. Each issue is divided into three sections: Features, Columns, and Departments. These sections contain…
75 FR 10795 - Sunshine Act; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... Investment Performance Report. c. Legislative Report. Parts Closed to the Public 3. Proprietary Data. 4.... (Eastern Time) March 15, 2010. Place: 4th Floor Conference Room, 1250 H Street, NW., Washington, DC 20005... Affairs, (202) 942-1640. Dated: March 4, 2010. Thomas K. Emswiler, Secretary, Federal Retirement Thrift...
77 FR 13159 - Sunshine Act Meeting Notice: Board of Governors
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-05
... POSTAL SERVICE Sunshine Act Meeting Notice: Board of Governors Dates and Times: Wednesday, March 21, 2012, at 10 a.m. Place: Washington, DC, at U.S. Postal Service Headquarters, 475 L'Enfant Plaza SW., in the Benjamin Franklin Room. Status: Closed. Matters To Be Considered Wednesday, March 21, at...
Generalized Jaynes-Cummings model as a quantum search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanelli, A.
2009-07-15
We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.
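For orientation, in the Farhi-Gutmann-type analog search that this Grover-like behaviour mirrors (a standard reference model, not necessarily the authors' exact generalized Jaynes-Cummings Hamiltonian), the Hamiltonian H = E(|w><w| + |s><s|) with overlap <s|w> = 1/\sqrt{N} gives a success probability

P(t) = \sin^{2}\!\left(\frac{E\,t}{\hbar\sqrt{N}}\right) + \frac{1}{N}\,\cos^{2}\!\left(\frac{E\,t}{\hbar\sqrt{N}}\right),
\qquad t_{\mathrm{opt}} = \frac{\pi\hbar\sqrt{N}}{2E},

which exhibits exactly the two features quoted above: a periodic oscillation of the probability of finding the searched state and an optimal search time proportional to the square root of the size of the search set.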
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit the output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with efficiently by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in the GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
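The GVT computation referred to here is, conceptually, a lower bound over the whole simulation: the minimum of every logical process's local virtual time and of the timestamps of messages still in transit (which is why the transient-message problem matters). A minimal sketch with hypothetical inputs:

def compute_gvt(local_virtual_times, in_transit_timestamps):
    # local_virtual_times   : one clock value per logical process
    # in_transit_timestamps : timestamps of sent-but-unacknowledged messages
    candidates = list(local_virtual_times) + list(in_transit_timestamps)
    return min(candidates) if candidates else float("inf")

print(compute_gvt([105.0, 98.5, 120.0], [97.2]))  # -> 97.2; state older than GVT can be reclaimed

Synchronous barriers such as the tree and butterfly structures mentioned above are one way to organize when the processes stop to exchange these minima.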
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
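Only the linear/Gaussian substructure mentioned above is easy to sketch compactly: a two-state (offset, skew) Kalman filter driven by noisy offset measurements, with the non-Gaussian delay component that the DPM and particle stages handle left out (all names and numbers below are illustrative):

import numpy as np

def kalman_clock_tracker(z, T, q=1e-12, r=1e-8):
    # z: noisy clock-offset measurements taken every T seconds; state x = [offset, skew]
    F = np.array([[1.0, T], [0.0, 1.0]])     # offset grows by skew*T each step
    H = np.array([[1.0, 0.0]])               # only the offset is observed
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)               # update
        P = (np.eye(2) - K @ H) @ P
    return x

t = np.arange(200.0)
true_offset = 5e-4 + 1e-5 * t                              # 0.5 ms offset, 10 ppm skew
z = true_offset + 1e-4 * np.random.default_rng(1).standard_normal(t.size)
print(kalman_clock_tracker(z, T=1.0))                      # final [offset, skew] estimate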
Digital signal processing algorithms for automatic voice recognition
NASA Technical Reports Server (NTRS)
Botros, Nazeih M.
1987-01-01
The current digital signal analysis algorithms implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on digital signal analysis rather than linguistic analysis of the speech signal. Several digital signal processing algorithms are available for voice recognition, including Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these algorithms, LPC is the most widely used. This algorithm has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with fewer assumptions, but they are not widely implemented or investigated. However, with recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
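Of the three analyses named, the cepstrum is the quickest to illustrate; a minimal numpy sketch (framing and windowing details are simplified, and the test signal is synthetic) computes the real cepstrum as the inverse FFT of the log magnitude spectrum, whose low-quefrency part captures the spectral envelope and whose peak at higher quefrency marks the pitch period of voiced speech:

import numpy as np

def real_cepstrum(frame):
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small floor avoids log(0)
    return np.fft.irfft(log_mag)

fs = 8000
frame = np.zeros(400)
frame[::64] = 1.0                                # crude glottal-pulse train, period 64 samples
cep = real_cepstrum(frame)
pitch_bin = np.argmax(cep[20:200]) + 20          # search away from the envelope region
print(fs / pitch_bin, "Hz estimated pitch")      # expected near 8000/64 = 125 Hz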
Exact and Heuristic Algorithms for Runway Scheduling
NASA Technical Reports Server (NTRS)
Malik, Waqar A.; Jung, Yoon C.
2016-01-01
This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic based algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in real-time environment due to large computation times for moderate sized problems. We next propose a second algorithm that uses heuristics to restrict the search space for the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. Simulation conducted for the east side of Dallas/Fort Worth International Airport allows comparison of the three proposed algorithms and indicates that the ILS algorithm performs favorably in its ability to find efficient solutions and its computation times.
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different initial rough imputation methods.
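The DTW distance at the core of the candidate-selection step is a classic dynamic-programming recurrence; a minimal sketch (absolute difference as the local cost, no banding or normalisation, and not the paper's full two-pass procedure):

import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

profile_a = [0.1, 0.4, 0.9, 0.5]
profile_b = [0.1, 0.2, 0.5, 0.9, 0.4]
print(dtw_distance(profile_a, profile_b))

Because DTW allows profiles to be locally stretched or compressed in time, it tolerates the phase shifts between expression time series that defeat plain point-by-point distance measures.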
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This work studies a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the optimal multi-AUV task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize a numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution.
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.
Mercado-Crespo, Melissa C; Sumner, Steven A; Spelke, M Bridget; Sugerman, David E; Stanley, Christina
2014-06-20
During November 2013-March 2014, twice as many all-intent drug overdose deaths were reported in Rhode Island as were reported during the same period in previous years. Most deaths were among injection-drug users, and a large percentage involved fentanyl, a synthetic opioid that is 50-100 times more potent than morphine. Clusters of fentanyl-related deaths have been reported recently in several states. From April 2005 to March 2007, time-limited active surveillance from CDC and the Drug Enforcement Administration identified 1,013 deaths caused by illicit fentanyl use in New Jersey; Maryland; Chicago, Illinois; Detroit, Michigan; and Philadelphia, Pennsylvania. Acetyl fentanyl, an illegally produced fentanyl analog, caused a cluster of overdose deaths in northern Rhode Island in 2013.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
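A textbook illustration of the phenomenon studied here (a generic example, not one of the paper's CFD cases): explicit Euler time marching applied to du/dt = u(1 - u) is exactly a logistic-type map with growth parameter 1 + Δt, so for fixed time steps beyond the linearized stability limit the scheme settles on spurious period-2 "steady states" and eventually behaves chaotically, even though the ODE itself relaxes monotonically to u = 1.

def explicit_euler_logistic(u0, dt, steps=200):
    # March du/dt = u*(1 - u) with explicit Euler and a fixed time step.
    u = u0
    for _ in range(steps):
        u = u + dt * u * (1.0 - u)
    return u

for dt in (0.5, 1.5, 2.3, 2.7):
    print(dt, explicit_euler_logistic(0.1, dt))
# Below dt = 2 the iteration still reaches the true steady state u = 1; at dt = 2.3 it
# oscillates between two spurious values, and at dt = 2.7 the long-time behavior is chaotic.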
Validation of accelerometer wear and nonwear time classification algorithm.
Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S
2011-02-01
The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. The purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. We conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. The recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). The accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
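A simplified sketch of the classification rule (implementing only the 90-min zero-count window from the list above; the 2-min artifactual-movement allowance with its 30-min flanking windows is omitted, and the names are illustrative):

import numpy as np

def classify_wear(counts, window=90):
    # counts: minute-by-minute accelerometer counts; returns True for wear minutes.
    counts = np.asarray(counts)
    wear = np.ones(counts.size, dtype=bool)
    run_start = None
    for i, c in enumerate(np.append(counts, 1)):   # sentinel closes any trailing zero run
        if c == 0 and run_start is None:
            run_start = i
        elif c != 0 and run_start is not None:
            if i - run_start >= window:
                wear[run_start:i] = False          # long zero run -> nonwear
            run_start = None
    return wear

counts = [0] * 120 + [35, 102, 0, 0, 57] + [0] * 30
mask = classify_wear(counts)
print(mask.sum(), "wear minutes out of", len(counts))   # 35 of 155 under this simplified rule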
Desert Dust Satellite Retrieval Intercomparison
NASA Technical Reports Server (NTRS)
Carboni, E.; Thomas, G. E.; Sayer, A. M.; Siddans, R.; Poulsen, C. A.; Grainger, R. G.; Ahn, C.; Antoine, D.; Bevan, S.; Braak, R.;
2012-01-01
This work provides a comparison of satellite retrievals of Saharan desert dust aerosol optical depth (AOD) during a strong dust event through March 2006. In this event, a large dust plume was transported over desert, vegetated, and ocean surfaces. The aim is to identify and understand the differences between current algorithms, and hence improve future retrieval algorithms. The satellite instruments considered are AATSR, AIRS, MERIS, MISR, MODIS, OMI, POLDER, and SEVIRI. An interesting aspect is that the different algorithms make use of different instrument characteristics to obtain retrievals over bright surfaces. These include multi-angle approaches (MISR, AATSR), polarisation measurements (POLDER), single-view approaches using solar wavelengths (OMI, MODIS), and the thermal infrared spectral region (SEVIRI, AIRS). Differences between instruments, together with the comparison of different retrieval algorithms applied to measurements from the same instrument, provide a unique insight into the performance and characteristics of the various techniques employed. As well as the intercomparison between different satellite products, the AODs have also been compared to co-located AERONET data. Despite the fact that the agreement between satellite and AERONET AODs is reasonably good for all of the datasets, there are significant differences between them when compared to each other, especially over land. These differences are partially due to differences in the algorithms, such as assumptions about the aerosol model and surface properties. However, in this comparison of spatially and temporally averaged data, at least as significant as these differences are sampling issues related to the actual footprint of each instrument on the heterogeneous aerosol field, cloud identification and the quality control flags of each dataset.
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
Improvements on a privacy-protection algorithm for DNA sequences with generalization lattices.
Li, Guang; Wang, Yadong; Su, Xiaohong
2012-10-01
When developing personal DNA databases, there must be an appropriate guarantee of anonymity, which means that the data cannot be related back to individuals. DNA lattice anonymization (DNALA) is a successful method for making personal DNA sequences anonymous. However, it uses time-consuming multiple sequence alignment and a low-accuracy greedy clustering algorithm. Furthermore, DNALA is not an online algorithm, and so it cannot quickly return results when the database is updated. This study improves the DNALA method. Specifically, we replaced the multiple sequence alignment in DNALA with global pairwise sequence alignment to save time, and we designed a hybrid clustering algorithm comprised of a maximum weight matching (MWM)-based algorithm and an online algorithm. The MWM-based algorithm is more accurate than the greedy algorithm in DNALA and has the same time complexity. The online algorithm can process data quickly when the database is updated. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
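The pairing step can be sketched with a maximum-weight matching on negated distances, here using NetworkX and a plain Hamming distance as a stand-in for the global pairwise alignment distance the paper actually uses (the sequences and names are toy examples):

import networkx as nx

def hamming(s1, s2):
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

def pair_sequences(seqs):
    # Pair sequences so the total within-pair distance is minimised:
    # a maximum-weight matching on edges weighted by the negated distance.
    g = nx.Graph()
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            g.add_edge(i, j, weight=-hamming(seqs[i], seqs[j]))
    return nx.max_weight_matching(g, maxcardinality=True)

snps = ["ACGTAC", "ACGTAA", "TCGTTC", "TCGTTA"]
print(pair_sequences(snps))   # pairs (0, 1) and (2, 3), up to ordering

Each matched pair is then generalized to a shared, less specific record, which is why the matching accuracy matters for how much information the anonymized database retains.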
Minimum-Time Consensus-Based Approach for Power System Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Sun, Yannan
2016-02-01
This paper presents minimum-time consensus-based distributed algorithms for power system applications, such as load shedding and economic dispatch. The proposed algorithms are capable of solving these problems in a minimum number of time steps instead of asymptotically, as in most existing studies. Moreover, these algorithms are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be nonGaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.
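One concrete way to make the recursive idea tangible (a generic sketch under the assumption of a single noisy sinusoid, not necessarily the paper's exact estimator): a sinusoid satisfies s[n] = 2 cos(w) s[n-1] - s[n-2], so recursive least squares on the single coefficient a = 2 cos(w) refines the frequency estimate with every new sample and needs no stored batch of observations.

import numpy as np

def rls_frequency(x, lam=0.99):
    # Fit x[n] + x[n-2] = a * x[n-1] with exponentially weighted recursive least squares.
    a, p = 0.0, 1e3                            # coefficient estimate and scalar covariance
    estimates = []
    for n in range(2, len(x)):
        phi = x[n - 1]                         # regressor
        y = x[n] + x[n - 2]                    # equals 2*cos(w)*x[n-1] for a clean sinusoid
        k = p * phi / (lam + phi * p * phi)    # gain
        a += k * (y - phi * a)
        p = (p - k * phi * p) / lam
        estimates.append(np.arccos(np.clip(a / 2.0, -1.0, 1.0)))
    return np.array(estimates)

n = np.arange(500)
x = np.cos(0.3 * n + 0.7) + 0.05 * np.random.default_rng(2).standard_normal(n.size)
print(rls_frequency(x)[-1])   # close to the true 0.3 rad/sample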
Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis
2016-02-01
Due to heating in the pre-focal field, the delays between successive movements in high intensity focused ultrasound (HIFU) are sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms in order to reduce pre-focal heating and treatment time. A 1 MHz single element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose and lesion development in the pre-focal field. The simulated temperature history combined with the motion algorithms produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the induced pre-focal heating for each motion algorithm and to assess the accuracy of the simulation model. Three out of the six algorithms, having successive steps close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced with the algorithms having successive steps apart from each other (square, square spiral and random). The last three algorithms were improved further (with a small cost in time), thus completely eliminating the pre-focal heating and substantially reducing the treatment time compared to traditional algorithms. Out of the six algorithms, three were successful in eliminating the pre-focal heating completely. Because these three algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. Therefore, it will be possible in the future to achieve treatment times for focused ultrasound therapies shorter than 30 min. The rate of ablated volume achieved with one of the proposed algorithms was 71 cm(3)/h. The intention of this pilot study was to demonstrate that the navigation algorithms play the most important role in reducing pre-focal heating. By evaluating all commercially available geometries in the future, it will be possible to reduce the treatment time for thermal ablation protocols intended for oncological targets. Copyright © 2015 Elsevier B.V. All rights reserved.
Hu, Jianguo; Zhang, Luo; Mei, Zhiqiang; Jiang, Yuan; Yi, Yuan; Liu, Li; Meng, Ying; Zhou, Lili; Zeng, Jianhua; Wu, Huan; Jiang, Xingwei
2018-05-22
Ubiquitin E3 ligase MARCH7 plays an important role in T cell proliferation and neuronal development. But its role in ovarian cancer remains unclear. This study aimed to investigate the role of Ubiquitin E3 ligase MARCH7 in ovarian cancer. Real-time PCR, immunohistochemistry and western blotting analysis were performed to determine the expression of MARCH7, MALAT1 and ATG7 in ovarian cancer cell lines and clinical specimens. The role of MARCH7 in maintaining ovarian cancer malignant phenotype was examined by Wound healing assay, Matrigel invasion assays and Mouse orthotopic xenograft model. Luciferase reporter assay, western blot analysis and ChIP assay were used to determine whether MARCH7 activates TGF-β-smad2/3 pathway by interacting with TGFβR2. MARCH7 interacted with MALAT1 by miR-200a (microRNA-200a). MARCH7 may function as a competing endogenous RNA (ceRNA) to regulate the expression of ATG7 by competing with miR-200a. MARCH7 regulated TGF-β-smad2/3 pathway by interacting with TGFβR2. Inhibition of TGF-β-smad2/3 pathway downregulated MARCH7, MALAT1 and ATG7. MiR-200a regulated TGF-β induced autophagy, invasion and metastasis of SKOV3 cells by targeting MARCH7. MARCH7 silencing inhibited autophagy invasion and metastasis of SKOV3 cells both in vitro and in vivo. In contrast, MARCH7 overexpression promoted TGF-β induced autophagy, invasion and metastasis of A2780 cells in vitro by depending on MALAT1 and ATG7. We also found that TGF-β-smad2/3 pathway regulated MARCH7 and ATG7 through MALAT1. These findings suggested that TGFβR2-Smad2/3-MALAT1/MARCH7/ATG7 feedback loop mediated autophagy, migration and invasion in ovarian cancer. © 2018 The Author(s). Published by S. Karger AG, Basel.
Goldstone field test activities: Target search
NASA Technical Reports Server (NTRS)
Tarter, J.
1986-01-01
In March of this year prototype SETI equipment was installed at DSS13, the 26 meter research and development antenna at NASA's Goldstone complex of satellite tracking dishes. The SETI equipment will remain at this site at least through the end of the summer so that the hardware and software developed for signal detection and recognition can be fully tested in a dynamic observatory environment. The field tests are expected to help understand which strategies for observing and which signal recognition algorithms perform best in the presence of strong man-made interfering signals (RFI) and natural astronomical sources.
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Impedance computed tomography using an adaptive smoothing coefficient algorithm.
Suzuki, A; Uchiyama, A
2001-01-01
In impedance computed tomography, a fixed coefficient regularization algorithm has frequently been used to improve the ill-conditioning problem of the Newton-Raphson algorithm. However, a lot of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it has to be manually chosen from a number of candidates and remains constant across iterations. Thus, the fixed coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalue of the ill-conditioned matrix. Therefore, effective images can be obtained within a short computation time. The smoothing coefficient is also automatically adjusted using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed coefficient regularization algorithm. The image is thus obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
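The flavour of the update being tuned can be sketched generically (this is a standard Tikhonov-regularised Gauss-Newton step with illustrative numbers, not the authors' EIT code): the smoothing coefficient is derived from the spectrum of the ill-conditioned matrix J^T J rather than fixed by hand, so it adapts as the conditioning changes from iteration to iteration.

import numpy as np

def regularized_step(J, residual, alpha=1e-3):
    # One smoothed Newton-type update: (J^T J + lam I) delta = J^T r,
    # with lam tied to the largest eigenvalue of J^T J (adaptive smoothing coefficient).
    JtJ = J.T @ J
    lam = alpha * np.linalg.eigvalsh(JtJ).max()
    delta = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ residual)
    return delta, lam

rng = np.random.default_rng(3)
J = rng.standard_normal((40, 20)) @ np.diag(np.logspace(0, -6, 20))   # ill-conditioned Jacobian
residual = rng.standard_normal(40)
delta, lam = regularized_step(J, residual)
print(lam, np.linalg.norm(delta))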
2018-03-21
Expedition 55 Soyuz Commander Oleg Artemyev of Roscosmos takes a picture with a cell phone after having his Russian Sokol suit pressure checked in preparation for launch aboard the Soyuz MS-08 spacecraft, Wednesday, March 21, 2018 at the Baikonur Cosmodrome in Kazakhstan. Artemyev and flight engineers Ricky Arnold and Drew Feustel of NASA launched aboard the Soyuz MS-08 spacecraft at 1:44 p.m. Eastern time (11:44 p.m. Baikonur time) on March 21 to begin their journey to the International Space Station. Photo Credit: (NASA/Joel Kowsky)
LANDING - STS-3 - NORTHRUP STRIP, NM
1982-03-31
S82-28838 (30 March 1982) --- The space shuttle Columbia (STS-3) touches down on the Northrup Strip at White Sands Missile Range, New Mexico, marking the first time in its three-flight history that it has touched New Mexico soil. T-38 chase plane passenger, Mission Specialist-Astronaut Ronald E. McNair, who also shot some launch photography this flight, recorded a number of frames on 70mm film. Touchdown was shortly after 9 a.m. Mountain Standard Time, March 30, 1982. Photo credit: NASA
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of an earthquake's onset time is imperative for correctly determining the earthquake's location and the other parameters that are utilized for building seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even in the presence of very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, at an SNR of -12 dB. The proposed algorithm achieves an onset time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. Also, we compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms them.
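A rough sketch of the two stages described (smoothing with a Laplacian-of-Gaussian-type filter, then a dual-threshold comparator) can be built from SciPy's standard gaussian_laplace in place of the paper's modified mask; the trace, thresholds, and parameters below are all synthetic and illustrative.

import numpy as np
from scipy.ndimage import gaussian_laplace

def pick_onset(trace, sigma=5, low=2.0, high=8.0):
    # Envelope of the LoG-filtered trace, normalised by its median (a noise proxy);
    # trigger on the high threshold, then walk back to the last sample below the low one.
    env = np.abs(gaussian_laplace(trace.astype(float), sigma))
    env = env / (np.median(env) + 1e-12)
    above_high = np.flatnonzero(env > high)
    if above_high.size == 0:
        return None
    trigger = above_high[0]
    below_low = np.flatnonzero(env[:trigger] < low)
    return int(below_low[-1]) + 1 if below_low.size else int(trigger)

rng = np.random.default_rng(4)
signal = 0.2 * rng.standard_normal(3000)
signal[1500:] += np.sin(0.2 * np.arange(1500)) * np.exp(-np.arange(1500) / 600.0)
print(pick_onset(signal))   # expected to pick a sample close to 1500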
NASA Astrophysics Data System (ADS)
Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong
2018-01-01
An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in each moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are obtained using the proposed algorithm as compared with the ICBA.
2003-04-01
The detection of moving ground targets is one of the primary objectives of remote sensing of the earth. However, the target returns...Proc. EUSAR'96, 26-28 March 1996, Koenigswinter, Germany, pp. 49-52 (VDE Publishers) [13] Ender, J., "Detection and Estimation of Moving Target Signals by Multi-Channel SAR", Proc. EUSAR'96, 26-28 March 1996, Koenigswinter, Germany, pp. 411-417, (VDE Publishers). Also: AEU, Vol. 50, March 1996, pp
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function is convergent to a finite neighborhood of the optimal performance index function, if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
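To make the GPI structure concrete, a tiny exact sketch on a two-state, two-action MDP (a generic textbook illustration without the approximation errors analysed in the paper): each GPI cycle runs a finite number of evaluation sweeps, placing it between value iteration (one sweep) and policy iteration (evaluation to convergence), before the greedy improvement step.

import numpy as np

# Deterministic toy MDP: P[a, s] is the next state, R[a, s] the reward (made-up numbers).
P = np.array([[1, 0], [0, 1]])
R = np.array([[0.0, 2.0], [1.0, 0.0]])
gamma = 0.9

def generalized_policy_iteration(eval_sweeps=3, cycles=50):
    V = np.zeros(2)
    policy = np.zeros(2, dtype=int)
    for _ in range(cycles):
        for _ in range(eval_sweeps):                      # partial policy evaluation
            V = np.array([R[policy[s], s] + gamma * V[P[policy[s], s]] for s in range(2)])
        q = np.array([[R[a, s] + gamma * V[P[a, s]] for s in range(2)] for a in range(2)])
        policy = q.argmax(axis=0)                         # greedy policy improvement
    return V, policy

print(generalized_policy_iteration())   # V converges to about [10, 11], optimal policy [1, 0]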
Pressure spectra from single-snapshot tomographic PIV
NASA Astrophysics Data System (ADS)
Schneiders, Jan F. G.; Avallone, Francesco; Pröbsting, Stefan; Ragni, Daniele; Scarano, Fulvio
2018-03-01
The power spectral density and coherence of temporal pressure fluctuations are obtained from low-repetition-rate tomographic PIV measurements. This is achieved by extension of recent single-snapshot pressure evaluation techniques based upon the Taylor's hypothesis (TH) of frozen turbulence and vortex-in-cell (VIC) simulation. Finite time marching of the measured instantaneous velocity fields is performed using TH and VIC. Pressure is calculated from the resulting velocity time series. Because of the theoretical limitations, the finite time marching can be performed until the measured flow structures are convected out of the measurement volume. This provides a lower limit of resolvable frequency range. An upper limit is given by the spatial resolution of the measurements. Finite time-marching approaches are applied to low-repetition-rate tomographic PIV data of the flow past a straight trailing edge at 10 m/s. Reference results of the power spectral density and coherence are obtained from surface pressure transducers. In addition, the results are compared to state-of-the-art experimental data obtained from time-resolved tomographic PIV performed at 10 kHz. The time-resolved approach suffers from low spatial resolution and limited maximum acquisition frequency because of hardware limitations. Additionally, these approaches strongly depend upon the time kernel length chosen for pressure evaluation. On the other hand, the finite time-marching approaches make use of low-repetition-rate tomographic PIV measurements that offer higher spatial resolution. Consequently, increased accuracy of the power spectral density and coherence of pressure fluctuations are obtained in the high-frequency range, in comparison to the time-resolved measurements. The approaches based on TH and VIC are found to perform similarly in the high-frequency range. At lower frequencies, TH is found to underestimate coherence and intensity of the pressure fluctuations in comparison to time-resolved PIV and the microphone reference data. The VIC-based approach, on the other hand, returns results on the order of the reference.
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended to the frequency domain for array processing. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. The expensive experimental time can therefore be reduced substantially, since any defect in a test can be corrected instantaneously.
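For context, the conventional frequency-domain beamformer that the observer-based scheme is compared against can be sketched in a few lines (a generic textbook form with made-up names; array geometry and calibration details are ignored):

import numpy as np

def conventional_beamform(csm, mic_xyz, grid_xyz, freq, c=343.0):
    # csm      : cross-spectral matrix of the microphone signals at `freq`, shape (M, M)
    # mic_xyz  : microphone coordinates, shape (M, 3)
    # grid_xyz : scan-grid coordinates, shape (G, 3)
    k = 2.0 * np.pi * freq / c
    out = np.empty(len(grid_xyz))
    for g, point in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - point, axis=1)        # mic-to-point distances
        steer = np.exp(-1j * k * r) / r                    # monopole steering vector
        w = steer / np.vdot(steer, steer).real             # normalised weights
        out[g] = np.real(np.vdot(w, csm @ w))              # source power estimate w^H C w
    return out

Forming and storing the cross-spectral matrix over all frequency lines is part of what pushes this computation off-line, whereas an observer-style recursion updates its estimate block by block as samples arrive.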
Backfilling with guarantees granted upon job submission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Vitus Joseph; Bunde, David P.; Lindsay, Alexander M.
2011-01-01
In this paper, we present scheduling algorithms that simultaneously support guaranteed starting times and favor jobs with system desired traits. To achieve the first of these goals, our algorithms keep a profile with potential starting times for every unfinished job and never move these starting times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms can consistently yield a lower average waiting time than Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give a lower average waiting time than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternately, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, the reason for the existence of parallel systems. In this case, these jobs experience significantly lower waiting time than Conservative Backfilling with minimal impact on other jobs.
Queue and stack sorting algorithm optimization and performance analysis
NASA Astrophysics Data System (ADS)
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting is one of the basic operations in a wide range of software development, and data structures courses cover many kinds of sorting algorithms. The performance of the sorting algorithm is directly related to the efficiency of the software. Much research continues to optimize sorting algorithms for the best possible efficiency. Here, the authors further study sorting algorithms that combine queues and stacks; the algorithms mainly exploit the storage properties of queues and stacks through alternating operations, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, the algorithms are further improved and optimized, with the focus on reducing time complexity. The experimental results show that the improvement is effective; the time complexity, space complexity, and stability of the algorithm are also studied. The improved and optimized algorithm is more practical.
NASA Technical Reports Server (NTRS)
Chance, Kelly
2003-01-01
This grant is an extension to our previous NASA Grant NAG5-3461, providing incremental funding to continue GOME (Global Ozone Monitoring Experiment) and SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY) studies. This report summarizes research done under these grants through December 31, 2002. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and participation in initial SCIAMACHY validation studies. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and remains working in normal fashion. SCIAMACHY was launched March 1, 2002 on the ESA Envisat satellite. Three GOME-2 instruments are now scheduled to fly on the Metop series of operational meteorological satellites (Eumetsat). K. Chance is a member of the reconstituted GOME Scientific Advisory Group, which will guide the GOME-2 program as well as the continuing ERS-2 GOME program.
Short-term Forecasting Ground Magnetic Perturbations with the Space Weather Modeling Framework
NASA Astrophysics Data System (ADS)
Welling, Daniel; Toth, Gabor; Gombosi, Tamas; Singer, Howard; Millward, George
2016-04-01
Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized dB/dt predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation describes the operational SWMF setup and summarizes the changes made to the code to enable R2O progress. The framework's algorithm for calculating ground-based magnetometer observations will be reviewed. Metrics from data-model comparisons will be reviewed to illustrate predictive capabilities. Early data products, such as regional-K index and grids of virtual magnetometer stations, will be presented. Finally, early successes will be shared, including the code's ability to reproduce the recent March 2015 St. Patrick's Day Storm.
U.S. Participation in the GOME and SCIAMACHY Projects
NASA Technical Reports Server (NTRS)
Chance, K. V.
1996-01-01
This report summarizes research done under NASA Grant NAGW-2541 from April 1, 1996 through March 31, 1997. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and development of infrared line-by-line atmospheric modeling and retrieval capability for SCIAMACHY. SAO also continues to participate in GOME validation studies, to the limit that can be accomplished at the present level of funding. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and remains working in normal fashion. SCIAMACHY is currently in instrument characterization. The first two European ozone monitoring instruments (OMI), to fly on the Metop series of operational meteorological satellites being planned by Eumetsat, have been selected to be GOME-type instruments (the first, in fact, will be the refurbished GOME flight spare). K. Chance is the U.S. member of the OMI Users Advisory Group.
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.
Ozone determinations with the NOAA SBUV/2 system
NASA Technical Reports Server (NTRS)
Planet, Walter G.; Lienesch, James H.; Bowman, Harold D.; Miller, Alvin J.; Nagatani, Ronald M.
1994-01-01
The NOAA satellite ozone monitoring program was initiated by the National Environmental Satellite Data and Information Service (NESDIS) in December 1984, with the launch of the NOAA-9 spacecraft carrying the first operational Solar Backscatter Ultraviolet Spectrometer (SBUV/2). This instrument and its successor on NOAA-11, launched in 1988, are similar to the SBUV instrument launched by the NASA in 1978 on the Nimbus-7 research spacecraft. Measurements by the SBUV and SBUV/2 instruments overlap beginning in 1985. These instruments use measurements of the reflected ultraviolet solar radiation from the atmosphere to derive total ozone amounts and ozone vertical profiles. Since launch, the NOAA instruments and the derived products have been undergoing extensive evaluation by scientists of NOAA and NASA. Measurements obtained with these instruments are processed in real time by the NESDIS. These are reprocessed as the SBUV/2 instrument characterization is refined and as the retrieval algorithm for processing the data is improved. The NOAA-9 ozone data archive begins in March 1985 and continues through October 1990. The archive of NOAA-11 data begins in January 1989 and the data continues to be acquired in 1992.
PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting
Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to the efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets by improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream. PMID:23956693
PRESEE: an MDL/MML algorithm to time-series stream segmenting.
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to the efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets by improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream.
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high-speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset. Pre-equalization is placed before the carrier recovery loop, which improves the performance of the loop and removes most of the frequency offset. The post-equalization stage uses the MMA algorithm to remove the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results, presented as constellation diagrams and bit error rate curves, show that the proposed joint equalization algorithm outperforms the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
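As a rough illustration of the pre-equalization stage described above, the sketch below implements a basic constant modulus algorithm (CMA) tap update; the tap count, step size and modulus are illustrative values, and the MMA post-equalizer and carrier recovery loop are not shown.

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, R2=1.0):
    """Blind CMA equalizer: drives |y|^2 toward the constant modulus R2,
    which makes the update insensitive to carrier frequency offset."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                        # centre-spike initialisation
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]              # tap-delay-line contents
        y[n] = np.dot(w, xn)                    # equalizer output
        err = np.abs(y[n]) ** 2 - R2            # constant-modulus error
        w -= mu * err * y[n] * np.conj(xn)      # stochastic gradient update
    return y, w
```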
77 FR 22359 - Meetings of Humanities Panel
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-13
... Institutes grant program, submitted to the Division of Education Programs, at the March 1, 2012 deadline. 2... The Summer Seminars and Institutes grant program, submitted to the Division of Education Programs, at... Division of Education Programs, at the March 1, 2012 deadline. 4. Date: May 7, 2012. Time: 9 a.m. to 5 p.m...
76 FR 80338 - Secretarial India Infrastructure Business Development Mission, March 25-30, 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
.../ from consumers on a near real-time basis and improve system reliability Moving to a smart grid to... technologies in India. The real challenge in the power sector in India lies in managing the upgrading of the....export.gov/newsletter/march2008/initiatives.html for additional information). Expenses for travel...
43 CFR 2522.3 - Act of March 28, 1908.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Act of March 28, 1908. 2522.3 Section 2522.3 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Extensions of Time To Make...
43 CFR 2522.3 - Act of March 28, 1908.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Act of March 28, 1908. 2522.3 Section 2522.3 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Extensions of Time To Make...
43 CFR 2522.3 - Act of March 28, 1908.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Act of March 28, 1908. 2522.3 Section 2522.3 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Extensions of Time To Make...
43 CFR 2522.3 - Act of March 28, 1908.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false Act of March 28, 1908. 2522.3 Section 2522.3 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Extensions of Time To Make...
77 FR 13367 - Advisory Committee for International Science and Engineering; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-06
... NATIONAL SCIENCE FOUNDATION Advisory Committee for International Science and Engineering; Notice... Science and Engineering (25104). Date and Time: March 19, 2012, 8:30 a.m.-5 p.m. March 20, 2012, 8:30 a.m.... Type of Meeting: Open. Contact Person: Robert Webber, Office of International Science and Engineering...
78 FR 13384 - Advisory Committee for International Science and Engineering; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
... NATIONAL SCIENCE FOUNDATION Advisory Committee for International Science and Engineering; Notice... Science and Engineering (25104). Date/Time: March 14, 2013 9:30 a.m.-5:00 p.m. March 15, 2013 8:30 a.m.-12... of International Science and Engineering, National Science Foundation, 4201 Wilson Blvd., Arlington...
75 FR 6063 - Committee on Equal Opportunities in Science and Engineering (CEOSE); Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
... NATIONAL SCIENCE FOUNDATION Committee on Equal Opportunities in Science and Engineering (CEOSE... Opportunities in Science and Engineering (1173). Dates/Time: March 8, 2010, 8:30 a.m.-5:30 p.m.; March 9, 2010... concerning broadening participation in science and engineering. Agenda Primary Focus of This Meeting...
76 FR 8752 - National Heart, Lung, and Blood Institute; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
...; Mentored Career Development Award to Promote Faculty Diversity/Re-Entry in Biomedical Research. Date: March... Institute Special Emphasis Panel; Short-Term Research Education Program to Increase Diversity in Health-Related Research. Date: March 3, 2011. Time: 8:30 a.m. to 4 p.m. Agenda: To review and evaluate grant...
ERIC Educational Resources Information Center
Stuart, Reginald
2012-01-01
When fans of intercollegiate basketball see the month of March approach, they know it's time for the near-marathon round of March Madness when the best of the nation's college basketball teams square off in a battle to the finish for the NCAA Division I championship. It's the month when basketball powerhouses seek to reaffirm their status and…
75 FR 74071 - Sunshine Act Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... INTER-AMERICAN FOUNDATION BOARD MEETING Sunshine Act Meetings TIME AND DATE: December 13, 2010, 9... Considered [dec221] Approval of the Minutes of the March 29, 2010, Meeting of the Board of Directors. [dec221... Public [dec221] Approval of the Minutes of the March 29, 2010, Meeting of the Board of Directors. [dec221...
Mining User Dwell Time for Personalized Web Search Re-Ranking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
We propose a personalized re-ranking algorithm through mining user dwell times derived from a user's previous online reading or browsing activities. We acquire document level user dwell times via a customized web browser, from which we then infer concept word level user dwell times in order to understand a user's personal interest. According to the estimated concept word level user dwell times, our algorithm can estimate a user's potential dwell time over a new document, based on which personalized webpage re-ranking can be carried out. We compare the rankings produced by our algorithm with rankings generated by popular commercial search engines and a recently proposed personalized ranking algorithm. The results clearly show the superiority of our method. In this paper, we propose a new personalized webpage ranking algorithm through mining dwell times of a user. We introduce a quantitative model to derive concept word level user dwell times from the observed document level user dwell times. Once we have inferred a user's interest over the set of concept words the user has encountered in previous readings, we can then predict the user's potential dwell time over a new document. Such predicted user dwell time allows us to carry out personalized webpage re-ranking. To explore the effectiveness of our algorithm, we measured the performance of our algorithm under two conditions - one with a relatively limited amount of user dwell time data and the other with a doubled amount. Both evaluation cases put our algorithm for generating personalized webpage rankings to satisfy a user's personal preference ahead of those by Google, Yahoo!, and Bing, as well as a recent personalized webpage ranking algorithm.
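A rough sketch of this idea, assuming a bag-of-words document representation; the uniform split of a document's dwell time across its words is a simplification of the paper's quantitative model, and all function names are illustrative.

```python
from collections import defaultdict

def concept_word_dwell_times(history):
    """history: list of (document_words, dwell_seconds) pairs from past reading.
    Distribute each document's dwell time evenly over its distinct words."""
    totals, counts = defaultdict(float), defaultdict(int)
    for words, dwell in history:
        share = dwell / max(len(words), 1)
        for w in set(words):
            totals[w] += share
            counts[w] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predicted_dwell(doc_words, word_dwell):
    """Estimate a user's dwell time on a new document from word-level interests."""
    return sum(word_dwell.get(w, 0.0) for w in doc_words)

def rerank(results, word_dwell):
    """Re-rank search results (each a list of words) by predicted personal dwell time."""
    return sorted(results, key=lambda doc: predicted_dwell(doc, word_dwell), reverse=True)
```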
Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984
Sipkin, S.A.
1987-01-01
The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.
Debnath, M; Santoni, C; Leonardi, S; Iungo, G V
2017-04-13
The dynamics of the velocity field resulting from the interaction between the atmospheric boundary layer and a wind turbine array can significantly affect the performance of a wind power plant and the durability of wind turbines. In this work, dynamics in wind turbine wakes and instabilities of helicoidal tip vortices are detected and characterized through modal decomposition techniques. The dataset under examination consists of snapshots of the velocity field obtained from large-eddy simulations (LES) of an isolated wind turbine, for which the aerodynamic forcing exerted by the turbine blades on the atmospheric boundary layer is mimicked through the actuator line model. Particular attention is paid to the interaction between the downstream evolution of the helicoidal tip vortices and the alternate vortex shedding from the turbine tower. The LES dataset is interrogated through different modal decomposition techniques, such as proper orthogonal decomposition and dynamic mode decomposition. The dominant wake dynamics are selected for the formulation of a reduced order model, which consists of a linear time-marching algorithm where the temporal evolution of the flow dynamics is obtained from the previous temporal realization multiplied by a time-invariant operator. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Decadal predictability of winter windstorm frequency in Eastern Europe
NASA Astrophysics Data System (ADS)
Höschel, Ines; Grieger, Jens; Ulbrich, Uwe
2017-04-01
Winter windstorms are one of the most impact-relevant extreme weather events in Europe. This study focuses on windstorm frequency in Eastern Europe on multi-year time scales. Individual storms are identified using 6-hourly 10 m wind fields. The impact-oriented tracking algorithm is based on the exceedance of the local 98th percentile of wind speed and a minimum duration of 18 hours. Here, storm frequency is the number of 1000 km footprints of identified windstorms touching a location during the extended boreal winter from October to March. The temporal development of annual storm frequencies in Eastern Europe shows variations with a period of six to fifteen years. For example, higher than normal windstorm frequency occurred at the end of the 1950s and at the beginning of the 1970s, while lower than normal frequencies occurred around 1960 and in the 1940s. The correlation between bandpass-filtered storm frequency and North Atlantic sea surface temperature shows a significant pattern, with a positive correlation in the subtropical East Atlantic and significant negative correlations in the Gulf Stream region. The relationship between these multi-year variations and predictability on decadal time scales is discussed. The resulting skill for winter windstorms in the German decadal prediction system MiKlip, based on the numerical earth system model MPI-ESM, will be presented.
Arbelle, Assaf; Reyes, Jose; Chen, Jia-Yun; Lahav, Galit; Riklin Raviv, Tammy
2018-04-22
We present a novel computational framework for the analysis of high-throughput microscopy videos of living cells. The proposed framework is generally useful and can be applied to different datasets acquired in a variety of laboratory settings. This is accomplished by tying together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via a Bayesian inference of dynamic models. In contrast to most existing approaches, which aim to be general, no assumption of cell shape is made. Spatial, temporal, and cross-sectional variation of the analysed data are accommodated by two key contributions. First, time series analysis is exploited to estimate the temporal cell shape uncertainty in addition to cell trajectory. Second, a fast marching (FM) algorithm is used to integrate the inferred cell properties with the observed image measurements in order to obtain image likelihood for cell segmentation, and association. The proposed approach has been tested on eight different time-lapse microscopy data sets, some of which are high-throughput, demonstrating promising results for the detection, segmentation and association of planar cells. Our results surpass the state of the art for the Fluo-C2DL-MSC data set of the Cell Tracking Challenge (Maška et al., 2014). Copyright © 2018 Elsevier B.V. All rights reserved.
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
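For illustration, here is a minimal sketch of block-wise Chebyshev compression using NumPy's Chebyshev routines; the block length and polynomial degree are arbitrary choices for the example, not values from the article.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree=8):
    """Fit a Chebyshev series to one block of samples; keep only the coefficients."""
    x = np.linspace(-1.0, 1.0, len(block))      # map the fitting interval to [-1, 1]
    return C.chebfit(x, block, degree)

def decompress_block(coeffs, n):
    """Reconstruct n samples of the block from its Chebyshev coefficients."""
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

# Example: a 256-sample block stored as 9 coefficients (~28x fewer numbers).
t = np.linspace(0.0, 1.0, 256)
block = np.sin(2 * np.pi * 3 * t) + 0.1 * t
coeffs = compress_block(block, degree=8)
recovered = decompress_block(coeffs, len(block))
print(np.max(np.abs(recovered - block)))        # maximum reconstruction error
```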
NASA Astrophysics Data System (ADS)
Roberge, S.; Chokmani, K.; De Sève, D.
2012-04-01
The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based system techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability that a pixel is snow covered, together with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of the individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential-thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters. One hundred such versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the proportion of classifiers that assigned that label to the pixel. The overall performance of the ensemble-based mapping is compared to the overall performance of the chosen classifier alone, and also with ground observations at meteorological stations.
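A minimal sketch of the voting step, assuming each ensemble member is a callable that returns a per-pixel class label; the class encoding (0 = no snow, 1 = snow, 2 = cloud) is an assumption made for illustration.

```python
import numpy as np

def ensemble_snow_probability(classifiers, image):
    """Per-pixel class probability = fraction of ensemble members voting for that class.
    `classifiers` are parameter-perturbed variants of the same threshold algorithm."""
    votes = np.stack([clf(image) for clf in classifiers])   # shape (n_members, H, W)
    p_snow = (votes == 1).mean(axis=0)
    p_nosnow = (votes == 0).mean(axis=0)
    p_cloud = (votes == 2).mean(axis=0)
    return p_snow, p_nosnow, p_cloud
```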
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Kramarova, N. A.; Bhartia, P. K.; Degenstein, D. A.; Deland, M. T.
2016-01-01
Since October 2004 the Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) onboard the Aura satellite have provided over 11 years of continuous tropospheric ozone measurements. These OMI/MLS measurements have been used in many studies to evaluate dynamical and photochemical effects caused by ENSO, the Madden-Julian Oscillation (MJO) and shorter timescales, as well as long-term trends and the effects of deep convection on tropospheric ozone. Given that the OMI and MLS instruments have now extended well beyond their expected lifetimes, our goal is to continue their long record of tropospheric ozone using recent Ozone Mapping Profiler Suite (OMPS) measurements. The OMPS onboard the Suomi National Polar-orbiting Partnership (NPP) satellite was launched on October 28, 2011 and comprises three instruments: the nadir mapper, the nadir profiler, and the limb profiler. Our study combines total column ozone from the OMPS nadir mapper with stratospheric column ozone from the OMPS limb profiler to measure tropospheric ozone residual. The time period for the OMPS measurements is March 2012 to the present. For the OMPS limb profiler retrievals, the OMPS v2 algorithm from Goddard is tested against the University of Saskatchewan (USask) algorithm. The retrieved ozone profiles from each of these algorithms are evaluated with ozone profiles from both ozonesondes and the Aura Microwave Limb Sounder (MLS). Effects on derived OMPS tropospheric ozone caused by the 2015-2016 El Nino event are highlighted. This recent El Nino produced anomalies in tropospheric ozone throughout the tropical Pacific, involving increases of approximately 10 DU over Indonesia and decreases of approximately 5-10 DU in the eastern Pacific. These changes in ozone due to El Nino were predominantly dynamically induced, caused by the eastward shift in sea-surface temperature and convection from the western to the eastern Pacific.
Geostationary Lightning Mapper for GOES-R
NASA Technical Reports Server (NTRS)
Goodman, Steven; Blakeslee, Richard; Koshak, William
2007-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR optical detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch in 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 11 year data record of global lightning activity. Instrument formulation studies begun in January 2006 will be completed in March 2007, with implementation expected to begin in September 2007. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite, airborne science missions (e.g., African Monsoon Multi-disciplinary Analysis, AMMA), and regional test beds (e.g., Lightning Mapping Arrays) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data now being provided to selected forecast offices will lead to improved understanding of the application of these data in the severe storm warning process and accelerate the development of the pre-launch algorithms and nowcasting applications. Proxy data combined with MODIS and Meteosat Second Generation SEVIRI observations will also lead to new applications (e.g., multi-sensor precipitation algorithms blending the GLM with the Advanced Baseline Imager, convective cloud initiation and identification, early warnings of lightning threat, storm tracking, and data assimilation).
Geostationary Lightning Mapper for GOES-R and Beyond
NASA Technical Reports Server (NTRS)
Goodman, Steven J.; Blakeslee, R. J.; Koshak, W.
2008-01-01
The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk as part of a 3-axis stabilized, geostationary weather satellite system. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series with a planned launch readiness in December 2014 will carry a GLM that will provide continuous day and night observations of lightning from the west coast of Africa (GOES-E) to New Zealand (GOES-W) when the constellation is fully operational. The mission objectives for the GLM are to 1) provide continuous, full-disk lightning measurements for storm warning and nowcasting, 2) provide early warning of tornadic activity, and 3) accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-Present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. Instrument formulation studies were completed in March 2007 and the implementation phase to develop a prototype model and up to four flight models will be underway in the latter part of 2007. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama and the Washington DC Metropolitan area) are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. Real time lightning mapping data are being provided in an experimental mode to selected National Weather Service (NWS) forecast offices in Southern and Eastern Region. This effort is designed to help improve our understanding of the application of these data in operational settings.
Mao, Qingqing; Jay, Melissa; Calvert, Jacob; Barton, Christopher; Shimabukuro, David; Shieh, Lisa; Chettipally, Uli; Fletcher, Grant; Kerem, Yaniv; Zhou, Yifan; Das, Ritankar
2018-01-01
Objectives We validate a machine learning-based sepsis-prediction algorithm (InSight) for the detection and prediction of three sepsis-related gold standards, using only six vital signs. We evaluate robustness to missing data, customisation to site-specific data using transfer learning and generalisability to new settings. Design A machine-learning algorithm with gradient tree boosting. Features for prediction were created from combinations of six vital sign measurements and their changes over time. Setting A mixed-ward retrospective dataset from the University of California, San Francisco (UCSF) Medical Center (San Francisco, California, USA) as the primary source, an intensive care unit dataset from the Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA) as a transfer-learning source and four additional institutions’ datasets to evaluate generalisability. Participants 684 443 total encounters, with 90 353 encounters from June 2011 to March 2016 at UCSF. Interventions None. Primary and secondary outcome measures Area under the receiver operating characteristic (AUROC) curve for detection and prediction of sepsis, severe sepsis and septic shock. Results For detection of sepsis and severe sepsis, InSight achieves an AUROC curve of 0.92 (95% CI 0.90 to 0.93) and 0.87 (95% CI 0.86 to 0.88), respectively. Four hours before onset, InSight predicts septic shock with an AUROC of 0.96 (95% CI 0.94 to 0.98) and severe sepsis with an AUROC of 0.85 (95% CI 0.79 to 0.91). Conclusions InSight outperforms existing sepsis scoring systems in identifying and predicting sepsis, severe sepsis and septic shock. This is the first sepsis screening system to exceed an AUROC of 0.90 using only vital sign inputs. InSight is robust to missing data, can be customised to novel hospital data using a small fraction of site data and retains strong discrimination across all institutions. PMID:29374661
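A hedged sketch of the general approach (gradient tree boosting on six vital signs and their changes over time) using scikit-learn on synthetic data; the vital-sign set, feature construction and hyperparameters here are illustrative assumptions and are not the InSight implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Assumed vital-sign set, for illustration only.
VITALS = ["heart_rate", "resp_rate", "temperature", "sbp", "dbp", "spo2"]

def make_features(current, previous):
    """Current vital signs plus their change since the previous measurement."""
    return np.hstack([current, current - previous])

# Synthetic stand-in for encounter data: 500 encounters x (6 vitals + 6 deltas).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2 * len(VITALS)))
y = (X[:, 0] + X[:, 6] > 1.0).astype(int)       # toy onset label

model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)
risk = model.predict_proba(X[:3])[:, 1]          # per-encounter risk scores
print(risk)
```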
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach, using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video-streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typically the case in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, indicating the potential for use of the new algorithm in real-time systems.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Efficient Record Linkage Algorithms Using Complete Linkage Clustering.
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Data from different agencies often share records of the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
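A compact sketch of the blocking-plus-complete-linkage idea using SciPy; the blocking key, record fields and distance threshold are assumptions made for illustration rather than the authors' exact choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def block_key(record):
    """Blocking: only compare records that share a cheap key (here, assumed
    to be the first letter of the surname plus the birth year)."""
    return (record["surname"][:1].lower(), record["birth_year"])

def link_block(records, threshold=0.3):
    """Complete-linkage clustering of one block; records whose pairwise
    distances all fall below the threshold end up in the same cluster."""
    if len(records) < 2:
        return [1] * len(records)
    vectors = np.array([r["features"] for r in records])   # assumed numeric encoding
    dists = pdist(vectors, metric="cosine")
    tree = linkage(dists, method="complete")
    return fcluster(tree, t=threshold, criterion="distance")
```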
An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming
2017-02-01
In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new candidate solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The candidate core assignment is then accepted or rejected according to the simulated annealing acceptance criterion, and the optimum solution is finally obtained. We ran the test scheduling experiment on the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the state of the art at present.
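A generic simulated annealing loop of the kind the method builds on; the neighbour move (re-assigning cores among TAMs) and the cost (total testing time) are passed in as callables, and the temperature schedule here is an arbitrary illustrative choice.

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=100.0, cooling=0.95, steps=2000):
    """Generic SA: accept worse schedules with probability exp(-delta / T)."""
    current, c_cur = initial, cost(initial)
    best, c_best = current, c_cur
    T = t0
    for _ in range(steps):
        candidate = neighbour(current)          # e.g. move one core to another TAM
        c_cand = cost(candidate)                # e.g. resulting total testing time
        delta = c_cand - c_cur
        if delta < 0 or random.random() < math.exp(-delta / T):
            current, c_cur = candidate, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        T *= cooling                            # geometric cooling schedule
    return best, c_best
```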
Ukimura, Osamu; Magi-Galluzzi, Cristina; Gill, Inderbir S
2006-04-01
We evaluated whether intraoperative real-time TRUS navigation during LRP can decrease the incidence of positive surgical margins. Since March 2001, 294 patients with clinically organ confined prostate cancer undergoing LRP were retrospectively divided into 2 groups: group 1, 217 patients who underwent LRP without TRUS from March 2001 to February 2003, and group 2, 77 patients who have undergone LRP with TRUS since March 2003. Various baseline parameters were similar between the groups. Before March 2001 the senior surgeon had already performed more than 50 cases of LRP, thus gaining reasonable familiarity with the technique. Compared to group 1, group 2 had a significantly decreased rate of positive surgical margins in patients with pT3 disease (57% vs 18%, p = 0.002). Positive margin rates also decreased in our overall experience (29% vs 9%, p = 0.0002). Intraoperative TRUS correctly predicted pT2 and pT3 disease in 85% and 86% of patients, respectively. Of the 54 TRUS visualized hypoechoic lesions at sites corresponding to biopsy proven cancer, extracapsular extension was suspected in 31, leading to a real-time recommendation of a calibrated wider, site specific dissection to achieve negative surgical margins. Intraoperative TRUS monitoring during LRP allows individualized, precise dissection tailored to the specific prostate contour anatomy, thus compensating for the muted tactile feedback of laparoscopy. In what is, to our knowledge, the initial such experience, real-time TRUS guidance significantly decreased the incidence of positive surgical margins during LRP. In the future this concept of rectum based, intraoperative real-time navigation may facilitate a more sophisticated performance of radical prostatectomy.
van Holle, Lionel; Bauchau, Vincent
2014-01-01
Purpose Disproportionality methods measure how unexpected the observed number of adverse events is. Time-to-onset (TTO) methods measure how unexpected the TTO distribution of a vaccine-event pair is compared with what is expected from other vaccines and events. Our purpose is to compare the performance associated with each method. Methods For the disproportionality algorithms, we defined 336 combinations of stratification factors (sex, age, region and year) and threshold values of the multi-item gamma Poisson shrinker (MGPS). For the TTO algorithms, we defined 18 combinations of significance level and time windows. We used spontaneous reports of adverse events recorded for eight vaccines. The vaccine product labels were used as proxies for true safety signals. Algorithms were ranked according to their positive predictive value (PPV) for each vaccine separately; a median rank was attributed to each algorithm across vaccines. Results The algorithm with the highest median rank was based on TTO with a significance level of 0.01 and a time window of 60 days after immunisation. It had an overall PPV 2.5 times higher than for the highest-ranked MGPS algorithm, 16th rank overall, which was fully stratified and had a threshold value of 0.8. A TTO algorithm with roughly the same sensitivity as the highest-ranked MGPS had better specificity but longer time-to-detection. Conclusions Within the scope of this study, the majority of the TTO algorithms presented a higher PPV than for any MGPS algorithm. Considering the complementarity of TTO and disproportionality methods, a signal detection strategy combining them merits further investigation. PMID:24038719
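A simplified sketch of the time-to-onset idea, assuming a two-sample Kolmogorov-Smirnov comparison between a pair's TTO distribution and a reference distribution; the published algorithms' actual test statistic may differ, so this is illustrative only.

```python
from scipy.stats import ks_2samp

def tto_signal(tto_pair, tto_reference, window_days=60, alpha=0.01):
    """Flag a vaccine-event pair if its time-to-onset distribution within the
    window differs significantly from the reference (other vaccines/events)."""
    sample = [t for t in tto_pair if 0 <= t <= window_days]
    reference = [t for t in tto_reference if 0 <= t <= window_days]
    if len(sample) < 3:
        return False                       # too few reports to test
    stat, p_value = ks_2samp(sample, reference)
    return p_value < alpha
```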
NASA Astrophysics Data System (ADS)
Brantut, Nicolas
2018-06-01
Acoustic emission (AE) and active ultrasonic wave velocity monitoring are often performed during laboratory rock deformation experiments, but are typically processed separately to yield homogenized wave velocity measurements and approximate source locations. Here, I present a numerical method and its implementation as free software to perform a joint inversion of AE locations together with the 3-D, anisotropic P-wave structure of laboratory samples. The data used are the P-wave first arrivals obtained from AEs and active ultrasonic measurements. The model parameters are the source locations and the P-wave velocity and anisotropy parameter (assuming transverse isotropy) at discrete points in the material. The forward problem is solved using the fast marching method, and the inverse problem is solved by the quasi-Newton method. The algorithms are implemented within an integrated free software package called FaATSO (Fast Marching Acoustic Emission Tomography using Standard Optimisation). The code is employed to study the formation of compaction bands in a porous sandstone. During deformation, a front of AEs progresses from one end of the sample, associated with the formation of a sequence of horizontal compaction bands. Behind the active front, only sparse AEs are observed, but the tomography reveals that the P-wave velocity has dropped by up to 15 per cent, with an increase in anisotropy of up to 20 per cent. Compaction bands in sandstones are therefore shown to produce sharp changes in seismic properties. This result highlights the potential of the methodology to image temporal variations of elastic properties in complex geomaterials, including the dramatic, localized changes associated with microcracking and damage generation.
Transition zone structure beneath Ethiopia from 3-D fast marching pseudo-migration stacking
NASA Astrophysics Data System (ADS)
Benoit, M. H.; Lopez, A.; Levin, V.
2008-12-01
Several models for the origin of the Afar hotspot have been put forth over the last decade, but much ambiguity remains as to whether the hotspot tectonism found there is due to a shallow or deeply seated feature. Additionally, there has been much debate as to whether the hotspot owes its existence to a 'classic' mantle plume feature or if it is part of the African Superplume complex. To further understand the origin of the hotspot, we employ a new receiver function stacking method that incorporates a fast-marching three-dimensional ray tracing algorithm to improve upon existing studies of the mantle transition zone structure. Using teleseismic data from the Ethiopia Broadband Seismic Experiment and the EAGLE (Ethiopia Afar Grand Lithospheric Experiment) experiment, we stack receiver functions using a three-dimensional pseudo-migration technique to examine topography on the 410 and 660 km discontinuities. Previous methods of receiver function pseudo-migration incorporated ray tracing methods that were not able to trace rays through highly complicated 3-D structure, or the ray tracing techniques only produced 3-D time perturbations associated with 1-D rays in a 3-D velocity medium. These previous techniques yielded confusing and incomplete results when applied to the exceedingly complicated mantle structure beneath Ethiopia. Indeed, comparisons of the 1-D versus 3-D ray tracing techniques show that the 1-D technique mislocated structure laterally in the mantle by over 100 km. Preliminary results using our new technique show a shallower than average 410 km discontinuity and a deeper than average 660 km discontinuity over much of the region, suggesting that the hotspot has a deep seated origin.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
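A toy sketch of time-step subcycling, assuming a forward-Euler update for brevity; the paper's integrator is a high-order explicit scheme coupled with a collision detector, neither of which is reproduced here.

```python
import numpy as np

def integrate_with_subcycling(state, force, dt, fast_mask, n_sub=10):
    """Advance one step, taking n_sub smaller sub-steps for the 'fast' degrees
    of freedom (e.g. nodes near an imminent collision) and one full step for
    the rest. Illustrative only."""
    slow = ~fast_mask
    new = state.copy()
    # One full step for the slow degrees of freedom.
    new[slow] = state[slow] + dt * force(state)[slow]
    # Subcycled steps for the fast degrees of freedom.
    sub = state.copy()
    for _ in range(n_sub):
        sub[fast_mask] = sub[fast_mask] + (dt / n_sub) * force(sub)[fast_mask]
    new[fast_mask] = sub[fast_mask]
    return new
```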
A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times
NASA Astrophysics Data System (ADS)
Li, Xin; Fung, Richard Y. K.
2018-02-01
This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual wafer processing times must fall within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.
Two-dimensional fast marching for geometrical optics.
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore
2014-11-03
We develop an approach for the fast and accurate determination of geometrical optics solutions to Maxwell's equations in inhomogeneous 2D media and for TM polarized electric fields. The eikonal equation is solved by the fast marching method. Particular attention is paid to consistently discretizing the scatterers' boundaries and matching the discretization to that of the computational domain. The ray tracing is performed, in a direct and inverse way, by using a technique introduced in computer graphics for the fast and accurate generation of textured images from vector fields. The transport equation is solved by resorting only to its integral form, the transport of polarization being trivial for the considered geometry and polarization. Numerical results for the plane wave scattering of two perfectly conducting circular cylinders and for a Luneburg lens prove the accuracy of the algorithm. In particular, it is shown how the approach is capable of properly accounting for the multiple scattering occurring between the two metallic cylinders and how inverse ray tracing should be preferred to direct ray tracing in the case of the Luneburg lens.
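For readers unfamiliar with the fast marching method used in the work above, a minimal first-order, uniform-grid eikonal solver for |∇T| = 1/v is sketched below. It is a generic Dijkstra-like implementation (heap of trial points, one-sided or two-sided upwind updates), not the authors' code; all names and the test case are illustrative.

```python
import heapq
import numpy as np

def fast_march(speed, src, h=1.0):
    """First-order fast marching solution of |grad T| = 1/speed on a
    uniform 2-D grid with spacing h; src = (i, j) index of the point source."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]

    def nb(a, b):
        """Value of an accepted neighbour, or +inf if outside / not yet accepted."""
        return T[a, b] if 0 <= a < ny and 0 <= b < nx and accepted[a, b] else np.inf

    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or accepted[a, b]:
                continue
            tx = min(nb(a, b - 1), nb(a, b + 1))   # upwind value in x
            ty = min(nb(a - 1, b), nb(a + 1, b))   # upwind value in y
            f = h / speed[a, b]
            lo, hi = min(tx, ty), max(tx, ty)
            if hi - lo >= f:                        # only one direction contributes
                t_new = lo + f
            else:                                   # two-directional quadratic update
                t_new = 0.5 * (lo + hi + np.sqrt(2.0 * f * f - (hi - lo) ** 2))
            if t_new < T[a, b]:
                T[a, b] = t_new
                heapq.heappush(heap, (t_new, (a, b)))
    return T

# Homogeneous unit-speed medium: travel times approximate Euclidean distance.
T = fast_march(np.ones((101, 101)), (50, 50))
print(T[50, 100])   # close to 50.0
```

The ray-tracing and transport steps described in the abstract would then operate on the gradient field of the computed travel-time map.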
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harker, Brian J.; Pevtsov, Alexei A., E-mail: bharker@nso.edu, E-mail: apevtsov@nso.edu
NOAA 11429 was the source of an M7.9 X-ray flare at the western solar limb (N18° W63°) on 2012 March 13 at 17:12 UT. Observations of the line-of-sight magnetic flux and the Stokes I and V profiles from which it is derived were carried out by the Solar Dynamics Observatory Helioseismic and Magnetic Imager (SDO/HMI) with a 45 s cadence over the full disk, at a spatial sampling of 0.''5. During flare onset, a transient patch of negative flux can be observed in SDO/HMI magnetograms to rapidly appear within the positive polarity penumbra of NOAA 11429. We present here a detailed study of this magnetic transient and offer interpretations as to whether this highly debated phenomenon represents a 'real' change in the structure of the magnetic field at the site of the flare, or is instead a product of instrumental/algorithmic artifacts related to particular SDO/HMI data reduction techniques.
NASA Technical Reports Server (NTRS)
Frolov, A. D.; Thompson, A. M.; Hudson, R. D.; Browell, E. V.; Oltmans, S. J.; Witte, J. C.; Bhartia, P. K. (Technical Monitor)
2002-01-01
Over the past several years, we have developed two new tropospheric ozone retrievals from the TOMS (Total Ozone Mapping Spectrometer) satellite instrument that are of sufficient resolution to follow pollution episodes. The modified-residual technique uses v. 7 TOMS total ozone and is applicable to tropical regimes in which the wave-one pattern in total ozone is observed. The TOMS-direct method ('TDOT' = TOMS Direct Ozone in the Troposphere) represents a new algorithm that uses TOMS radiances directly to extract tropospheric ozone in regions of constant stratospheric ozone. It is not geographically restricted, using meteorological regimes as the basis for classifying TOMS radiances and for selecting appropriate comparison data. TDOT is useful where tropospheric ozone displays high mixing ratios and variability characteristic of pollution. Some of these episodes were observed downwind of Asian biomass burning during the TRACE-P (Transport and Atmospheric Chemical Evolution-Pacific) field experiment in March 2001. This paper features comparisons among TDOT tropospheric ozone column depth, integrated uv-DIAL measurements made from NASA's DC-8, and ozonesonde data.
NASA Astrophysics Data System (ADS)
Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng
2018-01-01
Real-time micro-vibration measurement is widely used in engineering applications. It is very difficult for traditional optical detection methods to achieve real-time, multi-spot synchronous measurement of a region at relatively high frequencies, especially at the nanoscale. Based on heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy this demand in engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibration velocity of the target from the recorded sequences of images. The different demodulation algorithms are analyzed, and the results show that the four algorithms suit different interference signals. Both the autocorrelation algorithm and the cross-correlation algorithm meet the needs of real-time measurement. The autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.
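As a toy illustration of the autocorrelation idea mentioned above (not the authors' demodulation code), the sketch below estimates the dominant frequency of a roughly single-tone beat signal from the first peak of its autocorrelation. The sampling rate, tone frequency, and noise level are placeholder assumptions.

```python
import numpy as np

def autocorr_frequency(x, fs):
    """Estimate the dominant frequency of a (roughly) single-tone signal
    from the first autocorrelation peak after the zero-lag maximum."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. N-1
    start = np.argmax(ac < 0)               # first lag where the autocorrelation dips negative
    period = start + np.argmax(ac[start:])  # first (and highest) peak after that dip
    return fs / period

fs = 50_000.0                                # frame rate of a high-speed camera (assumed)
t = np.arange(2048) / fs
beat = np.cos(2 * np.pi * 2_000.0 * t + 0.3) + 0.05 * np.random.randn(t.size)
print(autocorr_frequency(beat, fs))          # close to 2000 Hz
```

A cross-correlation variant would instead correlate the measured signal against a reference channel, which is why the abstract finds it better suited to amplitude recovery.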
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
NASA Astrophysics Data System (ADS)
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received considerable attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms are proposed for the PFSP and non-PFSP problems: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution in less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
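The iterated greedy framework referred to above follows a destruct-and-reconstruct loop. A bare-bones sketch for the ordinary permutation flow shop is given below, with the time-lag constraints omitted for brevity; the job data, the greedy-reinsertion rule, and the acceptance criterion are illustrative assumptions, not IGTLP/IGTLNP themselves.

```python
import random
import numpy as np

def makespan(seq, p):
    """Permutation flow-shop makespan; p[j, m] = processing time of job j on machine m."""
    n_m = p.shape[1]
    c = np.zeros(n_m)                                   # completion times on each machine
    for j in seq:
        for m in range(n_m):
            c[m] = max(c[m], c[m - 1] if m > 0 else 0) + p[j, m]
    return c[-1]

def iterated_greedy(p, d=2, iters=500, seed=0):
    """Basic iterated greedy: remove d random jobs, greedily reinsert each at
    its best position, keep the sequence if the makespan improves."""
    rng = random.Random(seed)
    n = p.shape[0]
    seq = sorted(range(n), key=lambda j: -p[j].sum())   # longest-total-time ordering as a seed
    best, best_c = seq[:], makespan(seq, p)
    for _ in range(iters):
        cur = best[:]
        removed = [cur.pop(rng.randrange(len(cur))) for _ in range(d)]
        for j in removed:                               # greedy reinsertion
            cands = [(makespan(cur[:k] + [j] + cur[k:], p), k) for k in range(len(cur) + 1)]
            _, k = min(cands)
            cur.insert(k, j)
        c = makespan(cur, p)
        if c < best_c:
            best, best_c = cur, c
    return best, best_c

p = np.random.default_rng(1).integers(1, 20, size=(10, 4))   # 10 jobs, 4 machines
print(iterated_greedy(p))
```

Handling time lags would add minimum/maximum waiting constraints between consecutive machines inside the makespan recursion; the destruct/reconstruct outer loop stays the same.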
AVE/VAS 3: 25-mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.
1982-01-01
The rawinsonde sounding program for the AVE/VAS 3 experiment is described. Tabulated data are presented at 25-mb intervals for the 24 National Weather Service stations and 14 special stations participating in the experiment. Soundings were taken at 3-hr intervals, beginning at 1200 GMT on March 27, 1982, and ending at 0600 GMT on March 28, 1982 (7 sounding times). An additional sounding was taken at the National Weather Service stations at 1200 GMT on March 28, 1982, at the normal synoptic observation time. The method of processing soundings is briefly discussed, estimates of the RMS errors in the data are presented, and an example of contact data is given. Termination pressures of soundings taken in the meso-beta-scale network are tabulated, as are observations of ground temperature at a depth of 2 cm.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and the beat classification algorithm is optimized with k-Nearest Neighbor (k-NN). To support high-performance beat classification on the system, the beat classification algorithm is parallelized with CUDA so that it executes on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while the algorithm runs 2.5 times faster than the CPU-only detection algorithm.
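A CPU-only sketch of the k-NN beat classification stage is shown below for orientation; the Pan-Tompkins QRS detector, the feature extraction, and the CUDA parallelization are outside this snippet, and the data and names are placeholders rather than the authors' pipeline.

```python
import numpy as np

def knn_classify(train_x, train_y, query_x, k=3):
    """Classify each query beat by majority vote among its k nearest
    training beats (Euclidean distance on fixed-length feature vectors)."""
    preds = []
    for q in query_x:
        d = np.linalg.norm(train_x - q, axis=1)      # distances to all training beats
        idx = np.argpartition(d, k)[:k]              # indices of the k nearest beats
        labels, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Placeholder data: 200 training beats and 5 queries, 32-sample feature windows.
rng = np.random.default_rng(0)
train_x = rng.normal(size=(200, 32))
train_y = rng.integers(0, 2, size=200)               # 0 = normal, 1 = arrhythmic (assumed labels)
print(knn_classify(train_x, train_y, rng.normal(size=(5, 32))))
```

The distance computations for different query beats are independent, which is what makes the classifier a natural fit for the CUDA parallelization described in the abstract.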
Encryption and decryption algorithm using algebraic matrix approach
NASA Astrophysics Data System (ADS)
Thiagarajan, K.; Balasubramanian, P.; Nagaraj, J.; Padmashree, J.
2018-04-01
Cryptographic algorithms provide security of data against attacks during encryption and decryption. However, they are computationally intensive processes that consume large amounts of CPU time and space during encryption and decryption. The goal of this paper is to study the encryption and decryption algorithm and to determine the space complexity of the encrypted and decrypted data produced by the algorithm. In this paper, we encrypt and decrypt messages using a key together with a cyclic square matrix, an approach applicable to messages with any number of words and arbitrarily long words. We also discuss the time complexity of the algorithm. The proposed algorithm is simple, yet its process is difficult to break.
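The abstract does not give the construction of the cyclic square matrix, so the sketch below should not be read as the authors' scheme; it only illustrates the general flavour of matrix-based (Hill-cipher-style) encryption with a circulant key matrix over a byte alphabet. The key row, modulus, and padding rule are all assumptions.

```python
import numpy as np

ALPHA = 256          # work modulo 256 so arbitrary byte strings can be encrypted

def circulant(row):
    """Cyclic (circulant) square matrix built from one key row."""
    return np.array([np.roll(row, i) for i in range(len(row))], dtype=np.int64)

def modinv_matrix(K, m=ALPHA):
    """Inverse of K modulo m via the adjugate; requires gcd(det K, m) == 1."""
    det = int(round(np.linalg.det(K))) % m
    det_inv = pow(det, -1, m)                                             # modular inverse of det
    adj = np.round(np.linalg.det(K) * np.linalg.inv(K)).astype(np.int64)  # adjugate matrix
    return (det_inv * adj) % m

def encrypt(msg, K):
    n = K.shape[0]
    data = list(msg.encode()) + [0] * (-len(msg) % n)    # zero-pad to a multiple of n
    blocks = np.array(data, dtype=np.int64).reshape(-1, n)
    return (blocks @ K) % ALPHA

def decrypt(cipher, K):
    plain = (cipher @ modinv_matrix(K)) % ALPHA
    return bytes(int(b) for b in plain.ravel()).rstrip(b"\x00").decode()

K = circulant([3, 5, 2, 7])       # example key row; this choice is invertible mod 256
c = encrypt("MARCHING", K)
print(decrypt(c, K))              # "MARCHING"
```

Block size and key-matrix structure dominate both the time and space costs, which is presumably why the paper analyzes the complexity of the encrypted and decrypted data.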
The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration
NASA Astrophysics Data System (ADS)
Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.
2017-03-01
In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and to study the key technology of the planned QiTai radio Telescope (QTT), the receiver group of XAO developed a GPU (Graphics Processing Unit) based real-time FRB search algorithm from the original CPU (Central Processing Unit) based FRB search algorithm, and built a real-time FRB search system. A comparison of the GPU and CPU systems shows that, while the search accuracy is preserved, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.
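The abstract does not detail the search pipeline; the core of a typical FRB search, however, is incoherent dedispersion over trial dispersion measures, sketched below on the CPU with synthetic data. The channel layout, sampling time, and DM value are placeholder assumptions, not the XAO system's parameters.

```python
import numpy as np

K_DM = 4.148808      # cold-plasma dispersion constant, ms GHz^2 cm^3 / pc

def dedisperse(block, freqs_ghz, dm, dt_ms):
    """Incoherent dedispersion of a filterbank block (n_chan x n_samp):
    shift each frequency channel by its dispersion delay, then sum channels."""
    f_ref = freqs_ghz.max()
    delays = K_DM * dm * (freqs_ghz**-2 - f_ref**-2)       # ms, relative to the top channel
    shifts = np.round(delays / dt_ms).astype(int)
    n_samp = block.shape[1]
    series = np.zeros(n_samp)
    for ch, s in enumerate(shifts):
        series[: n_samp - s] += block[ch, s:]
    return series

# Tiny synthetic example: noise plus a dispersed pulse at DM = 300 pc cm^-3.
rng = np.random.default_rng(2)
freqs = np.linspace(1.5, 1.1, 64)                           # GHz, high to low
dt_ms, n_samp, dm_true = 1.0, 4096, 300.0
block = rng.normal(size=(64, n_samp))
for ch, f in enumerate(freqs):
    t0 = 100 + int(round(K_DM * dm_true * (f**-2 - freqs.max()**-2) / dt_ms))
    block[ch, t0] += 10.0
series = dedisperse(block, freqs, dm_true, dt_ms)
print(np.argmax(series))                                    # near sample 100
```

Because every trial DM and every channel shift is independent, the per-channel shift-and-sum maps naturally onto GPU threads, which is the source of the reported 35-45× speed-up.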
NASA Astrophysics Data System (ADS)
Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.
2018-05-01
For App recommendation on mobile-phone software, an item-based collaborative filtering algorithm is combined with a weighted Slope One algorithm to address the cold-start and data-matrix-sparseness problems of the traditional collaborative filtering algorithm. The recommendation algorithm is parallelized on the Spark platform, and a real-time stream-computing framework is introduced to improve the timeliness of App recommendations.
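The abstract is terse; as a reference point, a minimal (non-Spark, non-streaming) weighted Slope One predictor is sketched below. The ratings dictionary and item names are placeholders; the combination with item-based collaborative filtering described above is not shown.

```python
from collections import defaultdict

def weighted_slope_one(ratings, user, target):
    """Predict `user`'s rating of `target` from the average rating difference
    between items, weighted by how many users co-rated each item pair."""
    dev, count = defaultdict(float), defaultdict(int)
    for r in ratings.values():                       # accumulate pairwise deviations
        if target not in r:
            continue
        for item, val in r.items():
            if item != target:
                dev[item] += r[target] - val
                count[item] += 1
    num = den = 0.0
    for item, val in ratings[user].items():          # combine over items the user has rated
        if item in dev and item != target:
            c = count[item]
            num += (dev[item] / c + val) * c         # (mean deviation + user's rating), weighted
            den += c
    return num / den if den else None

ratings = {                                          # placeholder user-item ratings
    "u1": {"appA": 5, "appB": 3, "appC": 2},
    "u2": {"appA": 3, "appB": 4},
    "u3": {"appB": 2, "appC": 5},
}
print(weighted_slope_one(ratings, "u2", "appC"))
```

The pairwise deviation table is a simple aggregation over the ratings matrix, which is why the method distributes well over Spark and can be refreshed incrementally as new ratings stream in.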
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapthy, E.
1984-01-01
A single level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps, which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Roadside IED detection using subsurface imaging radar and rotary UAV
NASA Astrophysics Data System (ADS)
Qin, Yexian; Twumasi, Jones O.; Le, Viet Q.; Ren, Yu-Jiun; Lai, C. P.; Yu, Tzuyang
2016-05-01
Modern improvised explosive device (IED) and mine detection sensors using microwave technology are based on ground penetrating radar operated from a ground vehicle. Vehicle size, road conditions, and obstacles along the troop marching direction limit the operation of such sensors. This paper presents a new conceptual design using a rotary unmanned aerial vehicle (UAV) to carry a subsurface imaging radar for roadside IED detection. We have built a UAV flight simulator with the subsurface imaging radar running in a laboratory environment and tested it with non-metallic and metallic IED-like targets. From the initial lab results, we can detect an IED-like target 10 cm below the road surface while the radar is carried by a UAV platform. One of the challenges is to design the radar and antenna system for a very small payload (less than 3 lb). The motion compensation algorithm is also critical to the imaging quality. In this paper, we also present algorithm simulations and experimental imaging results for different IED target materials, sizes, and clutter.
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method.
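For orientation, a toy version of the two-step idea discussed above is sketched below: first find 1-D peaks within each second-column (modulation) slice, then merge peaks from adjacent slices whose second-dimension retention indices nearly coincide. The thresholds, merge rule, and synthetic chromatogram are assumptions for illustration, not the algorithms evaluated in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def two_step_peaks(chrom, height=5.0, rt2_tol=3):
    """Toy two-step GCxGC peak detection: (1) 1-D peak picking per modulation
    slice, (2) greedy merge of peaks from adjacent slices within rt2_tol points."""
    slice_peaks = []
    for i, col in enumerate(chrom):                          # step 1: per-slice peaks
        idx, _ = find_peaks(col, height=height)
        slice_peaks.extend((i, int(j), float(col[j])) for j in idx)
    slice_peaks.sort()
    merged, used = [], [False] * len(slice_peaks)
    for a, (i, j, h) in enumerate(slice_peaks):              # step 2: merge across slices
        if used[a]:
            continue
        group = [(i, j, h)]
        for b in range(a + 1, len(slice_peaks)):
            i2, j2, h2 = slice_peaks[b]
            if i2 - group[-1][0] > 1:
                break
            if not used[b] and i2 == group[-1][0] + 1 and abs(j2 - group[-1][1]) <= rt2_tol:
                group.append((i2, j2, h2))
                used[b] = True
        merged.append(max(group, key=lambda p: p[2]))        # keep the 2-D apex
    return merged

# Synthetic chromatogram: 40 slices x 200 points with a single broad 2-D peak.
ii, jj = np.meshgrid(np.arange(40), np.arange(200), indexing="ij")
chrom = 50 * np.exp(-((ii - 20) ** 2) / 8 - ((jj - 80) ** 2) / 50)
print(two_step_peaks(chrom))   # one apex near slice 20, point 80
```

A watershed approach would instead segment the full 2-D image around local maxima, which is where retention-time shifts between slices can split one analyte peak into several regions.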
Optimizing Approximate Weighted Matching on Nvidia Kepler K40
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh
Matching is a fundamental graph problem with numerous applications in science and engineering. While algorithms for computing optimal matchings are difficult to parallelize, approximation algorithms on the other hand generally compute high quality solutions and are amenable to parallelization. In this paper, we present efficient implementations of the current best algorithm for half-approximate weighted matching, the Suitor algorithm, on the Nvidia Kepler K-40 platform. We develop four variants of the algorithm that exploit hardware features to address key challenges for a GPU implementation. We also experiment with different combinations of work assigned to a warp. Using an exhaustive set of 269 inputs, we demonstrate that the new implementation outperforms the previous best GPU algorithm by 10 to 100× for over 100 instances, and from 100 to 1000× for 15 instances. We also demonstrate up to 20× speedup relative to 2 threads, and up to 5× relative to 16 threads, on an Intel Xeon platform with 16 cores for the same algorithm. The new algorithms and implementations provided in this paper will have a direct impact on several applications that repeatedly use matching as a key compute kernel. Further, algorithm designs and insights provided in this paper will benefit other researchers implementing graph algorithms on modern GPU architectures.
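For readers unfamiliar with the Suitor algorithm named above, a serial sketch is given below: each vertex repeatedly proposes to its heaviest neighbour whose current best offer it can beat, and a displaced suitor immediately re-proposes elsewhere. This is a plain Python rendering of the general idea, not the GPU variants developed in the paper; the toy graph is an assumption.

```python
def suitor_matching(adj):
    """Serial sketch of the Suitor algorithm for half-approximate maximum
    weighted matching. adj[u] = list of (v, w) neighbours with edge weights."""
    suitor = {u: None for u in adj}       # current best proposer for each vertex
    ws = {u: 0.0 for u in adj}            # weight of that proposal
    for start in adj:
        u = start
        while u is not None:
            best, best_w = None, 0.0
            for v, w in adj[u]:           # heaviest neighbour u can still propose to
                if w > best_w and w > ws[v]:
                    best, best_w = v, w
            if best is None:
                break
            next_u = suitor[best]         # displaced previous suitor must re-propose
            suitor[best], ws[best] = u, best_w
            u = next_u
    # An edge (u, v) is matched when u and v are each other's suitors.
    return {tuple(sorted((u, v))) for u, v in suitor.items()
            if v is not None and suitor[v] == u}

graph = {                                  # toy weighted graph
    "a": [("b", 5.0), ("c", 2.0)],
    "b": [("a", 5.0), ("c", 3.0), ("d", 1.0)],
    "c": [("a", 2.0), ("b", 3.0)],
    "d": [("b", 1.0)],
}
print(suitor_matching(graph))              # {('a', 'b')}
```

Because proposals to different vertices are largely independent and only require atomic updates of each vertex's best offer, the algorithm parallelizes well on GPUs, which is what the four variants in the paper exploit.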
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1991-01-01
An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.
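For reference, the fourth-order, streamfunction-only form of the 2-D incompressible Navier-Stokes equations that such a scheme discretizes can be written as follows (standard nondimensional vorticity-transport form with the vorticity eliminated in favour of the streamfunction; this is the textbook formulation, not an equation reproduced from the paper):

```latex
\frac{\partial}{\partial t}\left(\nabla^{2}\psi\right)
  + \frac{\partial \psi}{\partial y}\,\frac{\partial}{\partial x}\left(\nabla^{2}\psi\right)
  - \frac{\partial \psi}{\partial x}\,\frac{\partial}{\partial y}\left(\nabla^{2}\psi\right)
  = \frac{1}{Re}\,\nabla^{4}\psi ,
\qquad
u = \frac{\partial \psi}{\partial y}, \quad v = -\frac{\partial \psi}{\partial x}.
```

Because the velocity components are derivatives of a single scalar field, continuity is satisfied identically, and only one unknown per grid point must be advanced in time, which is consistent with the memory efficiency claimed in the abstract.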