Science.gov

Sample records for "algorithm takes full"

  1. A parallel Full-CI algorithm

    NASA Astrophysics Data System (ADS)

    Ansaloni, Roberto; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Rossi, Elda

    2000-06-01

    A Full Configuration Interaction (Full-CI) algorithm is described. It is an integral-driven approach with on-the-fly computation of the string-excitation lists that realize the application of the Hamiltonian to the Full-CI vector. The algorithm has been implemented on vector and parallel architectures, of both shared- and distributed-memory type. This made it possible to perform large benchmark calculations, with a Full-CI space dimension of up to almost ten billion symmetry-adapted Slater determinants.
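
    The kernel of such an integral-driven scheme is a matrix-free application of the Hamiltonian: matrix elements are generated on the fly and never stored. A minimal Python sketch of that pattern (the toy element generator below is a purely illustrative stand-in for the string-excitation machinery):

        import numpy as np

        def nonzero_elements(n):
            # Illustrative stand-in for on-the-fly string-excitation lists:
            # yields (i, j, H_ij) triples of a sparse symmetric Hamiltonian
            # without ever materializing the full matrix.
            for i in range(n):
                yield i, i, 2.0 + 0.1 * i          # diagonal elements
                if i + 1 < n:
                    yield i, i + 1, -1.0           # off-diagonal couplings
                    yield i + 1, i, -1.0

        def apply_hamiltonian(c):
            # Matrix-free y = H @ c, the kernel of an iterative
            # (Davidson/Lanczos-type) diagonalization of the CI space.
            y = np.zeros_like(c)
            for i, j, h in nonzero_elements(len(c)):
                y[i] += h * c[j]
            return y

        c = np.random.rand(10)
        y = apply_hamiltonian(c)

    In a parallel setting, disjoint chunks of the generated elements can be assigned to different processors, which is why the full matrix never needs to be stored.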

  2. Full motion video geopositioning algorithm integrated test bed

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Braun, Aaron; Theiss, Henry; Gurson, Adam

    2015-05-01

    In order to better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. It is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy; optimal vs. suboptimal vs. divergent solutions; robust processing in the presence of poor or non-existent a priori estimates of sensor metadata; difficulty in the measurement of tie points between adjacent frames; poor imaging geometry, including small field-of-view and little vertical relief; and the absence of control points. The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest/generate real and simulated data and support evaluation of the corresponding performance based on module-internal metrics as well as comparisons to real or simulated "ground truth". Selection of the appropriate modules and algorithms must be either operator-specifiable or automatic. An FMV test bed with the above characteristics has been developed and continues to be improved. The paper describes its overall design as well as key underlying algorithms, including a recent update to "A matrix" generation, which allows for the computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration under a dynamic state-vector definition; this is necessary for rigorous error propagation when the contents/definition of the KF state vector change as tie points are added or dropped. Performance on a tested scenario is also presented.

  3. Full design of fuzzy controllers using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Homaifar, Abdollah; Mccormick, ED

    1992-01-01

    This paper examines the applicability of genetic algorithms (GA) to the complete design of fuzzy logic controllers. While GAs have been used before to develop rule sets or high-performance membership functions, the interdependence between these two components dictates that they should be designed simultaneously. A GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.
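
    To illustrate the idea of evolving membership functions and rule consequents in a single chromosome, here is a minimal GA sketch on a toy cart-regulation problem; the encoding, parameter counts, and plant model are illustrative assumptions, not the paper's actual design:

        import numpy as np

        rng = np.random.default_rng(0)
        N_SETS = 5                      # fuzzy sets per input (illustrative)
        GENES = 2 * N_SETS + N_SETS     # centers, widths, rule consequents

        def controller(chrom, err):
            # Toy single-input fuzzy controller: triangular memberships
            # over the error, one consequent value per rule.
            centers = chrom[:N_SETS]
            widths = np.abs(chrom[N_SETS:2 * N_SETS]) + 1e-3
            consequents = chrom[2 * N_SETS:]
            mu = np.maximum(0.0, 1.0 - np.abs(err - centers) / widths)
            return mu @ consequents / (mu.sum() + 1e-9)

        def fitness(chrom):
            # Simulate a cart (double integrator) regulated toward x = 0.
            x, v, cost = 1.0, 0.0, 0.0
            for _ in range(200):
                u = np.clip(controller(chrom, x), -5, 5)
                v += 0.05 * u
                x += 0.05 * v
                cost += x * x + 0.01 * u * u
            return -cost                # GA maximizes fitness

        pop = rng.normal(size=(40, GENES))
        for gen in range(60):
            scores = np.array([fitness(c) for c in pop])
            # tournament selection
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])
            parents = pop[winners]
            # one-point crossover and Gaussian mutation
            cut = rng.integers(1, GENES, size=len(pop) // 2)
            children = parents.copy()
            for k, c in enumerate(cut):
                children[2 * k, c:], children[2 * k + 1, c:] = (
                    parents[2 * k + 1, c:].copy(), parents[2 * k, c:].copy())
            children += rng.normal(scale=0.1, size=children.shape)
            pop = children
        print("best cost:", -max(fitness(c) for c in pop))

    Because the memberships and the consequents sit in one genome, crossover and mutation explore both components jointly, which is exactly the interdependence argument made above.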

  4. Calibration and imaging algorithms for full-Stokes optical interferometry

    NASA Astrophysics Data System (ADS)

    Elias, Nicholas M.; Mozurkewich, David; Schmidt, Luke M.; Jurgenson, Colby A.; Edel, Stanislav S.; Jones, Carol E.; Halonen, Robert J.; Schmitt, Henrique R.; Jorgensen, Anders M.; Hutter, Donald J.

    2012-07-01

    Optical interferometry and polarimetry have separately provided new insights into stellar astronomy, especially in the fields of fundamental parameters and atmospheric models. Optical interferometers will eventually add full-Stokes polarization measuring capabilities, thus combining both techniques. In this paper, we: 1) list the observables, calibration quantities, and data acquisition strategies for both limited and full optical interferometric polarimetry (OIP); 2) describe the masking interferometer AMASING and its polarization measuring enhancement called AMASING-POL; 3) show how a radio interferometry imaging package, CASA, can be used for optical interferometry data reduction; and 4) present imaging simulations for Be stars.

  5. Full tensor gravity gradiometry data inversion: Performance analysis of parallel computing algorithms

    NASA Astrophysics Data System (ADS)

    Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu

    2015-09-01

    We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between the model and real density data, verifying the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
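
    The standard metrics referred to here are speedup and parallel efficiency; a minimal sketch of how such a comparison is tabulated (the timings are invented for illustration):

        # speedup S_p = T_1 / T_p, parallel efficiency E_p = S_p / p
        timings = {1: 1820.0, 4: 492.0, 16: 141.0, 64: 48.0}   # seconds (illustrative)
        t1 = timings[1]
        for p, tp in sorted(timings.items()):
            s = t1 / tp
            print(f"p={p:3d}  T={tp:7.1f}s  speedup={s:6.2f}  efficiency={s / p:5.2f}")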

  6. The wavenumber algorithm for full-matrix imaging using an ultrasonic array.

    PubMed

    Hunter, Alan J; Drinkwater, Bruce W; Wilcox, Paul D

    2008-11-01

    Ultrasonic imaging using full-matrix capture, e.g., via the total focusing method (TFM), has been shown to increase angular inspection coverage and improve sensitivity to small defects in nondestructive evaluation. In this paper, we develop a Fourier-domain approach to full-matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension of the wavenumber algorithm to full-matrix data is described, and the performance of the new algorithm is compared with the TFM, which we use as a representative benchmark for time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility: the wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data. PMID:19049924
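
    For reference, the TFM benchmark named above is plain delay-and-sum over the full matrix of transmit-receive A-scans. A minimal numpy sketch with an assumed linear array and synthetic data:

        import numpy as np

        c = 6300.0                  # wave speed, m/s (illustrative)
        fs = 50e6                   # sampling rate, Hz
        elems = np.linspace(-0.01, 0.01, 16)   # element x-positions, m

        # fmc[tx, rx, :] : full-matrix capture A-scans (noise stands in for data)
        n_t = 2000
        rng = np.random.default_rng(0)
        fmc = rng.standard_normal((len(elems), len(elems), n_t))

        def tfm(fmc, xs, zs):
            # Delay-and-sum total focusing method on a pixel grid.
            img = np.zeros((len(zs), len(xs)))
            for iz, z in enumerate(zs):
                # distance from each element to each pixel of this image row
                d = np.sqrt((elems[:, None] - xs[None, :]) ** 2 + z ** 2)
                for it in range(len(elems)):
                    for ir in range(len(elems)):
                        tof = (d[it] + d[ir]) / c          # time of flight, s
                        idx = np.clip((tof * fs).astype(int), 0, n_t - 1)
                        img[iz] += fmc[it, ir, idx]
            return np.abs(img)

        image = tfm(fmc, xs=np.linspace(-0.01, 0.01, 64),
                    zs=np.linspace(0.005, 0.03, 64))

    The O(pixels x elements^2) cost of these nested loops is what the Fourier-domain wavenumber algorithm avoids.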

  7. Are we taking full advantage of the growing number of pharmacological treatment options for osteoporosis?

    PubMed Central

    Jepsen, Karl J.; Schlecht, Stephen H.; Kozloff, Kenneth M.

    2014-01-01

    We are becoming increasingly aware that the manner in which our skeleton ages is not uniform within and between populations. Pharmacological treatment options with the potential to combat age-related reductions in skeletal strength continue to become available on the market, notwithstanding our current inability to fully utilize these treatments by accounting for an individual’s unique biomechanical needs. Revealing new molecular mechanisms that improve the targeted delivery of pharmaceuticals is important; however, this only addresses one part of the solution for differential age-related bone loss. To improve current treatment regimes, we must also consider specific biomechanical mechanisms that define how these molecular pathways ultimately impact whole bone fracture resistance. By improving our understanding of the relationship between molecular and biomechanical mechanisms, clinicians will be better equipped to take full advantage of the mounting pharmacological treatments available. Ultimately this will enable us to reduce fracture risk among the elderly more strategically, more effectively, and more economically. In this interest, the following review summarizes the biomechanical basis of current treatment strategies while defining how different biomechanical mechanisms lead to reduced fracture resistance. It is hoped that this may serve as a template for the identification of new targets for pharmacological treatments that will enable clinicians to personalize care so that fracture incidence may be globally reduced. PMID:24747363

  8. A conservative implicit finite difference algorithm for the unsteady transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Caradonna, F. X.

    1980-01-01

    An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.

  9. Application of a Chimera Full Potential Algorithm for Solving Aerodynamic Problems

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1997-01-01

    A numerical scheme utilizing a chimera zonal grid approach for solving the three dimensional full potential equation is described. Special emphasis is placed on describing the spatial differencing algorithm around the chimera interface. Results from two spatial discretization variations are presented; one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The presentation is highlighted with a number of transonic wing flow field computations.

  10. Uniform convergence estimates for multigrid V-cycle algorithms with less than full elliptic regularity

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1992-03-01

    In this paper, we provide uniform estimates for V-cycle algorithms with one smoothing step on each level. This theory is based on some elliptic regularity but does not require a smoother interaction hypothesis (sometimes referred to as a strengthened Cauchy-Schwarz inequality) assumed in other theories. Thus, it is a natural extension of the full-regularity V-cycle estimates provided by Braess and Hackbusch.
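
    For context, a V-cycle with one pre- and one post-smoothing step per level has the following recursive shape, sketched here for a 1D Poisson model problem (illustrative; not the operators analyzed in the paper):

        import numpy as np

        def smooth(u, f, h):
            # One weighted-Jacobi sweep for -u'' = f on a uniform 1D grid.
            u_new = u.copy()
            u_new[1:-1] = (1 - 2/3) * u[1:-1] + (2/3) * 0.5 * (
                u[:-2] + u[2:] + h * h * f[1:-1])
            return u_new

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def v_cycle(u, f, h):
            u = smooth(u, f, h)                 # one pre-smoothing step
            if len(u) <= 3:
                return smooth(u, f, h)          # coarsest level
            r = residual(u, f, h)
            rc = r[::2].copy()                  # restriction by injection
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
            e = np.zeros_like(u)
            e[::2] = ec                         # prolongation: copy coarse points
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])  # ...and interpolate between them
            return smooth(u + e, f, h)          # one post-smoothing step

        n = 129
        h = 1.0 / (n - 1)
        f = np.ones(n)
        u = np.zeros(n)
        for _ in range(10):
            u = v_cycle(u, f, h)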

  11. An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.

    PubMed

    K, Manasa; Channappayya, Sumohana S

    2016-06-01

    We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical-flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, together with the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state of the art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow any robust FR spatial distortion metric to be used for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index. PMID:27093720
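
    A minimal sketch of the per-patch flow statistics described above, assuming each patch is an array of (u, v) flow vectors (the pooling heuristic and the MS-SSIM stage are omitted):

        import numpy as np

        def patch_features(flow_patch):
            # flow_patch: (N, 2) array of optical-flow vectors (u, v).
            mags = np.linalg.norm(flow_patch, axis=1)
            mean, std = mags.mean(), mags.std()
            cv = std / (mean + 1e-9)                 # coefficient of variation
            cov = np.cov(flow_patch.T)               # 2x2 covariance of (u, v)
            lam_min = np.linalg.eigvalsh(cov)[0]     # minimum eigenvalue
            return mean, std, cv, lam_min

        rng = np.random.default_rng(0)
        ref = rng.standard_normal((64, 2))
        dist = ref + 0.3 * rng.standard_normal((64, 2))
        # temporal-distortion proxy: change in CV between reference and
        # distorted patches, as described in the abstract
        cv_ref = patch_features(ref)[2]
        cv_dist = patch_features(dist)[2]
        delta_cv = abs(cv_dist - cv_ref)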

  12. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
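
    SciPy's newton_krylov conveys the inexact Newton-Krylov core of NKS on a toy nonlinear boundary-value problem; the two-level overlapping Schwarz preconditioner and the density upwinding of the paper are omitted from this generic sketch:

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            # Nonlinear residual of a toy 1D problem: -u'' + u**3 = 1,
            # homogeneous Dirichlet boundaries, grid spacing absorbed.
            r = np.zeros_like(u)
            r[1:-1] = -(u[:-2] - 2 * u[1:-1] + u[2:]) + u[1:-1] ** 3 - 1.0
            r[0], r[-1] = u[0], u[-1]       # boundary conditions
            return r

        u0 = np.zeros(101)
        # truncated inner Krylov iterations make this an *inexact* Newton method
        sol = newton_krylov(residual, u0, method='lgmres',
                            inner_maxiter=20, f_tol=1e-8)

    In NKS, the inner Krylov solves would additionally be preconditioned by the two-level overlapping Schwarz method, which is where the parallelism enters.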

  13. Newton-Krylov-Schwarz algorithms for the 2D full potential equation

    SciTech Connect

    Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.

    1996-12-31

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.

  14. Characterization of the effects of the FineView algorithm for full field digital mammography.

    PubMed

    Urbanczyk, H; McDonagh, E; Marshall, N W; Castellano, I

    2012-04-01

    The aim of this study was to characterize the effect of an image processing algorithm (FineView) on both quantitative image quality parameters and the threshold contrast-detail response of the GE Senographe DS full-field digital mammography system. The system was characterized using the signal transfer property, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE). An algorithmic modulation transfer function (MTF(a)) was calculated from images acquired at a reduced detector air kerma (DAK) and with the FineView algorithm enabled. Two sets of beam conditions were used: Mo/Mo/28 kV and Rh/Rh/29 kV, both with 2 mm added Al filtration at the x-ray tube. Images were acquired with and without FineView at four DAK levels from 14 to 378 µGy. The threshold contrast-detail response was assessed using the CDMAM contrast-detail test object, which was imaged under standard clinical conditions with and without FineView at three DAK levels from 24 to 243 µGy. The images were scored both by human observers and by automated scoring software. Results indicated an improvement of up to 125% at 5 mm⁻¹ in MTF(a) when FineView was activated, particularly at high DAK levels. A corresponding increase of up to 425% at 5 mm⁻¹ was also seen in the NNPS, again with the same DAK dependence. FineView did not influence the DQE, an indication that the signal-to-noise ratio transfer of the system remained unchanged. FineView did not affect the threshold contrast detectability of the system, a result that is consistent with the DQE results. PMID:22429938

  15. Full-vectorial finite element method based eigenvalue algorithm for the analysis of 2D photonic crystals with arbitrary 3D anisotropy.

    PubMed

    Hsu, Sen-Ming; Chang, Hung-Chun

    2007-11-26

    A full-vectorial finite element method based eigenvalue algorithm is developed to analyze the band structures of two-dimensional (2D) photonic crystals (PCs) with arbitrary 3D anisotropy for in-plane wave propagation, in which the simple transverse-electric (TE) or transverse-magnetic (TM) modes may not be clearly defined. By taking all the field components into consideration simultaneously, without decoupling the wave modes in 2D PCs into TE and TM modes, a full-vectorial matrix eigenvalue equation, with the square of the wavenumber as the eigenvalue, is derived. We examine the convergence behavior of this algorithm and analyze 2D PCs with arbitrary anisotropy, demonstrating its correctness and usefulness by explaining the numerical results theoretically. PMID:19550864

  16. Some algorithmic issues in full-waveform inversion of teleseismic data for high-resolution lithospheric imaging

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean

    2014-05-01

    The current development of dense seismic arrays and high-performance computing makes the application of full-waveform inversion (FWI) to teleseismic data for high-resolution lithospheric imaging feasible today. In the teleseismic configuration, the source is, to first order, a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and from reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is essential to mitigate as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. First, an abstraction level between the forward and inverse problems should allow different modeling engines to be interfaced with the inversion; this requires the subsurface meshes used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through back-and-forth projection processes. Second, the subsurface parameterization should be carefully chosen during multi-parameter FWI, as it controls the trade-off between parameters of different nature; a versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can easily be implemented. Third, the gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This requires, first, that the gradient be independent of the discretization method used to perform seismic modeling; second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem

  17. Chlorophyll fluorescence: implementation in the full physics RemoTeC algorithm

    NASA Astrophysics Data System (ADS)

    Hahne, Philipp; Frankenberg, Christian; Hasekamp, Otto; Landgraf, Jochen; Butz, André

    2014-05-01

    Several operating and future satellite missions are dedicated to enhancing our understanding of the carbon cycle. They infer the atmospheric concentrations of carbon dioxide and methane from shortwave-infrared absorption spectra of sunlight backscattered from Earth's atmosphere and surface. Exhibiting high spatial and temporal resolution, the inferred gas concentration databases provide valuable information for inverse modelling of source and sink processes at the Earth's surface. However, the inversion of sources and sinks requires highly accurate total-column CO2 (XCO2) and CH4 (XCH4) measurements, which remains a challenge. Recently, Frankenberg et al. (2012) showed that, besides XCO2 and XCH4, chlorophyll fluorescence can be retrieved from sounders such as GOSAT by exploiting Fraunhofer lines in the vicinity of the O2 A-band. This has two implications: a) chlorophyll fluorescence itself, being a proxy for photosynthetic activity, yields new information on carbon cycle processes, and b) neglect of the fluorescence signal can induce errors in the retrieved greenhouse gas concentrations. Our RemoTeC full physics algorithm iteratively retrieves the target gas concentrations XCO2 and XCH4 along with atmospheric scattering properties and other auxiliary parameters. The radiative transfer model (RTM) LINTRAN provides RemoTeC with the single- and multiple-scattered intensity field and its analytically calculated derivatives. Here, we report on the implementation of a fluorescence light source at the lower boundary of our RTM. Processing three years of GOSAT data, we evaluate the performance of the refined retrieval method. To this end, we compare different retrieval configurations, using the s- and p-polarization detectors independently and combined, and validate against independent data sources.

  18. Full Glowworm Swarm Optimization Algorithm for Whole-Set Orders Scheduling in Single Machine

    PubMed Central

    Yu, Zhang; Yang, Xiaomei

    2013-01-01

    By analyzing the characteristics of the whole-set orders problem and drawing on the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into this algorithm. Furthermore, experimental results prove its feasibility and efficiency. PMID:24294135
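
    A minimal glowworm swarm optimization loop showing the luciferin update, movement toward brighter neighbors, and a dynamically shrinking step; the parameters and the continuous test function are illustrative, and the paper's hybrid encoding for whole-set orders is not reproduced:

        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):
            return -np.sum(x ** 2, axis=-1)      # maximize: peak at the origin

        n, dim = 30, 2
        pos = rng.uniform(-3, 3, size=(n, dim))
        luciferin = np.full(n, 5.0)
        rho, gamma, radius = 0.4, 0.6, 2.0

        for t in range(100):
            luciferin = (1 - rho) * luciferin + gamma * objective(pos)
            step = 0.1 * (1 - t / 100)           # dynamically shrinking step
            new_pos = pos.copy()
            for i in range(n):
                d = np.linalg.norm(pos - pos[i], axis=1)
                nbrs = np.where((d < radius) & (luciferin > luciferin[i]))[0]
                if len(nbrs) > 0:
                    j = rng.choice(nbrs)         # move toward a brighter neighbor
                    new_pos[i] += step * (pos[j] - pos[i]) / (d[j] + 1e-12)
            pos = new_pos
        print("best:", pos[np.argmax(objective(pos))])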

  19. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  20. Design and analysis of an Euler transformation algorithm applied to full-polarimetric ISAR imagery

    NASA Astrophysics Data System (ADS)

    Baird, Christopher Stanford

    2007-12-01

    Use of an Inverse Synthetic Aperture Radar (ISAR) enables the construction of spatial images of an object's electromagnetic backscattering properties. A set of fully polarimetric ISAR images contains sufficient information to construct the coherent scattering matrix for each resolution cell in the image. A diagonalization of the scattering matrix is equivalent to a transformation to a common basis, which allows the extraction of phenomenological parameters. These phenomenological scattering parameters, referred to as Euler parameters, better quantify the physical scattering properties of the object than the original polarization parameters. The accuracy and meaning of the Euler parameters are shown to be degraded by transform ambiguities as well as by azimuthal nonpersistence. The transform ambiguities are shown to be removed by a case-wise characterization and redefinition of the Euler parameters. The azimuthal nonpersistence is shown to be a result of multiple scattering centers occupying the same cell. An optimized Euler transformation algorithm is presented that removes transform ambiguities and minimizes the impact of cells containing multiple scattering centers. The accuracy of the algorithm is analyzed by testing its effectiveness in Automatic Target Recognition (ATR) using polarimetric scattering signatures obtained at the University of Massachusetts Lowell Submillimeter-Wave Technology Laboratory and the U.S. Army National Ground Intelligence Center. Finally, a complete ATR algorithm is presented and analyzed which uses the optimized Euler transformation without any prior knowledge and without human intervention. The algorithm is shown to enable successful automatic target recognition.

  21. Full-Featured Search Algorithm for Negative Electron-Transfer Dissociation.

    PubMed

    Riley, Nicholas M; Bern, Marshall; Westphall, Michael S; Coon, Joshua J

    2016-08-01

    Negative electron-transfer dissociation (NETD) has emerged as a premier tool for peptide anion analysis, offering access to acidic post-translational modifications and regions of the proteome that are intractable with traditional positive-mode approaches. Whole-proteome scale characterization is now possible with NETD, but proper informatic tools are needed to capitalize on advances in instrumentation. Currently only one database search algorithm (OMSSA) can process NETD data. Here we implement NETD search capabilities into the Byonic platform to improve the sensitivity of negative-mode data analyses, and we benchmark these improvements using 90 min LC-MS/MS analyses of tryptic peptides from human embryonic stem cells. With this new algorithm for searching NETD data, we improved the number of successfully identified spectra by as much as 80% and identified 8665 unique peptides, 24,639 peptide spectral matches, and 1338 proteins in activated-ion NETD analyses, more than doubling identifications from previous negative-mode characterizations of the human proteome. Furthermore, we reanalyzed our recently published large-scale, multienzyme negative-mode yeast proteome data, improving peptide and peptide spectral match identifications and considerably increasing protein sequence coverage. In all, we show that new informatics tools, in combination with recent advances in data acquisition, can significantly improve proteome characterization in negative-mode approaches. PMID:27402189

  22. Analysis of full charge reconstruction algorithms for x-ray pixelated detectors

    SciTech Connect

    Baumbaugh, A.; Carini, G.; Deptuch, G.; Grybos, P.; Hoff, J.; Siddons, P., Maj.; Szczygiel, R.; Trimpl, M.; Yarema, R.; /Fermilab

    2011-11-01

    The natural diffusive spread of charge carriers in the course of their drift toward the collecting electrodes of planar, segmented detectors divides the original cloud of carriers between neighboring channels. This paper presents an analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation of the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from the uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.

  23. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen and acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy: the diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then properly connecting the relevant parts of their outlines. More specifically, the bottom surfaces of the lungs and heart, the spine borders, and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
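
    The final surface-fitting step can be sketched with a smoothing bivariate spline through scattered points, a generic stand-in for the B-spline filter described above (the dome-shaped data here are synthetic):

        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        # (x, y, z): scattered points, standing in for samples taken from the
        # lung/heart bases, the spine borders, and the ribs
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 400)
        y = rng.uniform(-1, 1, 400)
        z = 0.5 * (x ** 2 + y ** 2) + 0.02 * rng.standard_normal(400)  # dome-like

        # s controls smoothness: larger s gives a smoother fitted surface
        surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=1.0)

        # evaluate the fitted dome on a regular grid
        xg = np.linspace(-1, 1, 50)
        yg = np.linspace(-1, 1, 50)
        dome = surf(xg, yg)      # (50, 50) array of surface heights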

  24. Optimized MPPT algorithm for boost converters taking into account the environmental variables

    NASA Astrophysics Data System (ADS)

    Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel

    2016-07-01

    This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion from PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter's MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical dependency of the MPPT on parameters such as the input voltage, the output voltage, and the duty cycle of the PWM switching control. As a direct consequence, many converters connected together on the same load perturb each other because of the output voltage variations induced by fluctuations on the HVDC bus, essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable whose role is to compensate for local and external variations, taking the environmental variables into account.
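
    A classic perturb-and-observe MPPT update, extended with a feed-forward term for measured bus-voltage deviation, sketches the idea; the compensation gain k and its use here are hypothetical illustrations, not the authors' exact scheme:

        def mppt_step(duty, p_now, p_prev, step, v_bus, v_bus_ref=400.0, k=1e-4):
            # One perturb-and-observe update of the boost duty cycle.
            # 'step' is the signed previous perturbation; the k-term is an
            # illustrative feed-forward compensating HVDC-bus fluctuations.
            if p_now < p_prev:
                step = -step                     # power dropped: reverse direction
            duty = duty + step - k * (v_bus - v_bus_ref)
            return min(max(duty, 0.05), 0.95), step

        # toy PV curve: power peaks at duty = 0.55 (illustrative)
        p_of = lambda d: 100 - 400 * (d - 0.55) ** 2
        duty, step, p_prev = 0.3, 0.01, 0.0
        for _ in range(50):
            p_now = p_of(duty)
            duty, step = mppt_step(duty, p_now, p_prev, step, v_bus=400.0)
            p_prev = p_now
        print(round(duty, 3))    # oscillates near the 0.55 maximum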

  25. Full Waveform 3D Synthetic Seismic Algorithm for 1D Layered Anelastic Models

    NASA Astrophysics Data System (ADS)

    Schwaiger, H. F.; Aldridge, D. F.; Haney, M. M.

    2007-12-01

    Numerical calculation of synthetic seismograms for 1D layered earth models remains a significant aspect of amplitude-offset investigations, surface wave studies, microseismic event location approaches, and reflection interpretation or inversion processes. Compared to 3D finite-difference algorithms, memory demand and execution time are greatly reduced, enabling rapid generation of seismic data within workstation or laptop computational environments. We have developed a frequency-wavenumber forward modeling algorithm adapted to realistic 1D geologic media, for the purpose of calculating seismograms accurately and efficiently. The earth model consists of N layers bounded by two halfspaces. Each layer/halfspace is a homogeneous and isotropic anelastic (attenuative and dispersive) solid, characterized by a rectangular relaxation spectrum of absorption mechanisms. Compressional and shear phase speeds and quality factors are specified at a particular reference frequency. Solution methodology involves 3D Fourier transforming the three coupled, second- order, integro-differential equations for particle displacements to the frequency-horizontal wavenumber domain. An analytic solution of the resulting ordinary differential system is obtained. Imposition of welded interface conditions (continuity of displacement and stress) at all interfaces, as well as radiation conditions in the two halfspaces, yields a system of 6(N+1) linear algebraic equations for the coefficients in the ODE solution. An optimized inverse 2D Fourier transform to the space domain gives the seismic wavefield on a horizontal plane. Finally, three-component seismograms are obtained by accumulating frequency spectra at designated receiver positions on this plane, followed by a 1D inverse FFT from angular frequency ω to time. Stress-free conditions may be applied at the top or bottom interfaces, and seismic waves are initiated by force or moment density sources. Examples reveal that including attenuation

  26. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials, determining the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with a weighted sum of standard spectra of the individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm proposed by us. Since 2D spectrum fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. In order to demonstrate the feasibility of parallel computing for the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
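
    Per pixel, the fit is a small constrained linear least-squares problem: nonnegative weights that sum to about one. A generic SciPy sketch (not the beamline code) that imposes the sum softly via an appended, heavily weighted row:

        import numpy as np
        from scipy.optimize import nnls

        def fit_pixel(spectrum, refs, penalty=1e3):
            # Fit one pixel's absorption spectrum as a nonnegative combination
            # of standard reference spectra whose weights sum to ~1.
            # refs: (n_energies, n_components).
            A = np.vstack([refs, penalty * np.ones(refs.shape[1])])
            b = np.append(spectrum, penalty)
            w, _ = nnls(A, b)
            return w                  # component percentages for this pixel

        # toy example: two reference spectra, pixel is a 30/70 mixture
        e = np.linspace(0, 1, 100)
        refs = np.column_stack([np.exp(-(e - 0.3) ** 2 / 0.01),
                                np.exp(-(e - 0.6) ** 2 / 0.02)])
        pixel = 0.3 * refs[:, 0] + 0.7 * refs[:, 1]
        print(fit_pixel(pixel, refs))   # ~[0.3, 0.7]

    Because each pixel is fitted independently, the loop over pixels parallelizes trivially (e.g., with multiprocessing.Pool.map), which is the property exploited in the cluster implementation above.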

  27. Benchmark analysis of algorithms for determining and quantifying full-length mRNA splice forms from RNA-seq data

    PubMed Central

    Hayer, Katharina E.; Pizarro, Angel; Lahens, Nicholas F.; Hogenesch, John B.; Grant, Gregory R.

    2015-01-01

    Motivation: Because of the advantages of RNA sequencing (RNA-Seq) over microarrays, it is gaining widespread popularity for highly parallel gene expression analysis. For example, RNA-Seq is expected to be able to provide accurate identification and quantification of full-length splice forms. A number of informatics packages have been developed for this purpose, but short reads make it a difficult problem in principle. Sequencing error and polymorphisms add further complications. It has become necessary to perform studies to determine which algorithms perform best and which if any algorithms perform adequately. However, there is a dearth of independent and unbiased benchmarking studies. Here we take an approach using both simulated and experimental benchmark data to evaluate their accuracy. Results: We conclude that most methods are inaccurate even using idealized data, and that no method is highly accurate once multiple splice forms, polymorphisms, intron signal, sequencing errors, alignment errors, annotation errors and other complicating factors are present. These results point to the pressing need for further algorithm development. Availability and implementation: Simulated datasets and other supporting information can be found at http://bioinf.itmat.upenn.edu/BEERS/bp2 Supplementary information: Supplementary data are available at Bioinformatics online. Contact: hayer@upenn.edu PMID:26338770

  28. Evaluation of EIT systems and algorithms for handling full void fraction range in two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Jia, Jiabin; Wang, Mi; Faraj, Yousef

    2015-01-01

    In aqueous-based two-phase flow, if the void fraction of the dispersed phase exceeds 0.25, conventional electrical impedance tomography (EIT) produces a considerable error due to the linear approximation of the sensitivity back-projection (SBP) method, which limits EIT's wider application in the process industry. In this paper, an EIT sensing system which is able to handle the full void fraction range in two-phase flow is reported. This EIT system employs a voltage source, conducts true mutual impedance measurement, and reconstructs an online image with the modified sensitivity back-projection (MSBP) algorithm. The capability of the Maxwell relationship to convey the full void fraction range is investigated, and the limitation of the linear sensitivity back-projection method is analysed. The MSBP algorithm is used to derive the relative conductivity change in the evaluation. A series of static and dynamic experiments demonstrates that the mean void fraction obtained using this EIT system agrees well with reference void fractions over the range from 0 to 1. The combination of the new EIT system and the MSBP algorithm would significantly extend the applications of EIT in industrial process measurement.

  29. Full-wave algorithm to model effects of bedding slopes on the response of subsurface electromagnetic geophysical sensors near unconformities

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh; Teixeira, Fernando L.

    2016-05-01

    We propose a full-wave pseudo-analytical numerical electromagnetic (EM) algorithm to model subsurface induction sensors traversing planar-layered geological formations of arbitrary EM material anisotropy and loss, which are used, for example, in the exploration of hydrocarbon reserves. Unlike past pseudo-analytical planar-layered modeling algorithms that impose parallelism between the formation's bed junctions, our method employs Transformation Optics techniques to address the challenge of modeling relative slope (i.e., tilting) between junctions, including arbitrary azimuth orientation of each junction. The algorithm achieves this flexibility, with respect to loss and anisotropy in the formation layers as well as junction tilting, by employing special planar slabs that coat each "flattened" (i.e., originally tilted) planar interface, locally redirecting the incident wave within the coating slabs so that wave fronts interact with the flattened interfaces as if they were still tilted with a specific, user-defined orientation. Moreover, since the coating layers are homogeneous rather than continuously varying, only a minimal number of layers must be inserted, which limits the added simulation time and computational expense. As the coating layers are not reflectionless, however, they induce artificial field scattering that corrupts the legitimate field signatures due to the (effective) interface tilting. Numerical results for two half-spaces separated by a tilted interface quantify error trends versus effective interface tilting, material properties, transmitter/receiver spacing, sensor position, coating slab thickness, and transmitter and receiver orientation, helping to understand the spurious scattering's effect on the range of (effective) tilting this algorithm can reliably model. Under the effective tilting constraints suggested by this error study, we finally exhibit responses of sensors

  30. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the solution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the solution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multi-frequency inversion. These different inversion strategies will be illustrated in the companion paper, where the parallel efficiency and the scalability of the code will also be quantified.

  31. Quantum dot ternary-valued full-adder: Logic synthesis by a multiobjective design optimization based on a genetic algorithm

    SciTech Connect

    Klymenko, M. V.; Remacle, F.

    2014-10-28

    A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The search space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.

  32. Industrial experience of process identification and set-point decision algorithm in a full-scale treatment plant.

    PubMed

    Yoo, Changkyoo; Kim, Min Han

    2009-06-01

    This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods for proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that simultaneously estimates the respiration rate during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, using the control signal from the controller's set-point change was best for identifying low-frequency information and enhancing robustness to low-frequency disturbances. The combined automatic control and set-point decision method reduced total electricity consumption by 5% and electricity cost by 15% compared to the fixed-gain PID controller, when considering only the surface aerators. Moreover, as a result of the improved control performance, fluctuation of the effluent quality decreased and overall effluent water quality improved. PMID:19428173
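
    A discrete positional PID of the kind being autotuned, with the DO set point supplied by separate decision logic, can be sketched as follows; the gains, limits, and the load-based set-point rule are illustrative assumptions:

        class PID:
            # Discrete positional PID for dissolved-oxygen control (sketch).
            def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=100.0):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0
                self.u_min, self.u_max = u_min, u_max

            def update(self, setpoint, measured):
                err = setpoint - measured
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                u = self.kp * err + self.ki * self.integral + self.kd * deriv
                return min(max(u, self.u_min), self.u_max)  # aerator command, %

        def do_setpoint(influent_load, base=2.0):
            # Illustrative set-point decision rule: raise the DO target
            # under high influent load (mg/L).
            return base + 0.5 * (influent_load > 1.2)

        pid = PID(kp=40.0, ki=2.0, kd=0.0, dt=60.0)
        cmd = pid.update(do_setpoint(1.5), measured=1.8)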

  33. Full Monte Carlo and measurement-based overall performance assessment of improved clinical implementation of eMC algorithm with emphasis on lower energy range.

    PubMed

    Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo

    2016-06-01

    The new version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system includes a model for the 4 MeV electron beam and some general improvements to dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results for clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom, the dose differences against full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans, the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD and also in regions of large dose gradients; still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23. PMID:27189311

  34. [Validation of the modified algorithm for predicting host susceptibility to viruses taking into account susceptibility parameters of primary target cell cultures and natural immunity factors].

    PubMed

    Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A

    2010-01-01

    The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p. PMID:20608042

  35. A New Lidar Data Processing Algorithm Including Full Uncertainty Budget and Standardized Vertical Resolution for use Within the NDACC and GRUAN Networks

    NASA Astrophysics Data System (ADS)

    Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.

    2014-12-01

    A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written so that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated, taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, leading to maximum flexibility and to the production of tropospheric ozone, stratospheric ozone, middle-atmospheric temperature, and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies, and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.

  36. About the use of the Monte-Carlo-code-based tracing algorithm and the volume fraction method for Sn full-core calculations

    SciTech Connect

    Gurevich, M. I.; Oleynik, D. S.; Russkov, A. A.; Voloschenko, A. M.

    2006-07-01

    The tracing algorithm implemented in the geometrical module of the Monte-Carlo transport code MCU is applied to calculate the volume fractions of the original materials in the spatial cells of a mesh that overlays the problem geometry. In this way, the 3D combinatorial-geometry representation of the problem geometry used by the MCU code is transformed into a user-defined 2D or 3D bit-mapped one. Next, these data are used in the volume fraction (VF) method to approximate the problem geometry by introducing additional mixtures for spatial cells in which several original materials are present. We have found that in solving realistic 2D and 3D core problems, sufficiently fast convergence of the VF method takes place as the spatial mesh is refined. The proposed implementation of the VF method thus appears to be a suitable geometry interface between Monte-Carlo and Sn transport codes. (authors)

  37. A novel mosaicking algorithm for in vivo full-field thickness mapping of the human tympanic membrane using low coherence interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pande, Paritosh; Shelton, Ryan L.; Monroy, Guillermo L.; Nolan, Ryan M.; Boppart, Stephen A.

    2016-02-01

    Tympanic membrane (TM) thickness can provide crucial information for diagnosing several middle ear pathologies. An imaging system integrating low coherence interferometry (LCI) with the standard video otoscope has been shown to be a promising tool for quantitative assessment of in-vivo TM thickness. The small field-of-view (FOV) of TM surface images acquired by the combined LCI-otoscope system, however, makes spatial registration of the LCI imaging sites to their locations on the TM difficult. It is therefore desirable to have a tool that can map the imaged points onto an anatomically accurate full-field surface image of the TM. To this end, we propose a novel automated mosaicking algorithm for generating a full-field surface image of the TM with co-registered LCI imaging sites from a sequence of multiple small-FOV images and corresponding LCI data. Traditional image mosaicking techniques reported in the biomedical literature, mostly for retinal imaging, are not directly applicable to TM image mosaicking because, unlike retinal images, which have several distinctive features, TM images contain large homogeneous areas lacking sharp features. The proposed algorithm overcomes these challenges by following a two-step approach. In the first step, a coarse registration based on the correlation of gross image features is performed. Subsequently, in the second step, the coarsely registered images are used to perform a finer intensity-based co-registration. The proposed algorithm is used to generate, for the first time, full-field thickness distribution maps of in-vivo human TMs.
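
    The coarse step can be approximated by FFT-based phase correlation between overlapping frames; this generic sketch recovers an integer translation and is a stand-in for the gross-feature correlation described above:

        import numpy as np

        def phase_correlation_shift(a, b):
            # Estimate the integer (dy, dx) translation between images a and b
            # from the peak of the normalized cross-power spectrum.
            A, B = np.fft.fft2(a), np.fft.fft2(b)
            cross = A * np.conj(B)
            r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
            peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
            # map peaks past the midpoint to negative shifts
            return tuple(p if p <= s // 2 else p - s
                         for p, s in zip(peak, r.shape))

        a = np.zeros((64, 64)); a[20:30, 12:22] = 1.0
        b = np.roll(a, (5, -3), axis=(0, 1))
        print(phase_correlation_shift(a, b))  # (-5, 3): shift taking b back to a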

  2. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    NASA Astrophysics Data System (ADS)

    Kress, R. L.; Jansen, J. F.; Noakes, M. W.

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; describe the experimental evaluation of the controller, including robustness to payload length changes; explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.
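
    The ORNL controller is not specified in this record. As an illustrative sketch of damped-oscillation command shaping in general, a textbook two-impulse zero-vibration input shaper, with an assumed cable length and damping ratio:

```python
import numpy as np

# Illustrative sketch only (not the ORNL controller): a two-impulse
# zero-vibration (ZV) input shaper for a crane payload pendulum.
g = 9.81
L = 4.0                       # assumed cable length [m]
zeta = 0.01                   # assumed damping ratio of the pendulum mode
wn = np.sqrt(g / L)           # natural frequency of a simple pendulum
wd = wn * np.sqrt(1 - zeta ** 2)

K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
A1, A2 = 1 / (1 + K), K / (1 + K)   # impulse amplitudes (they sum to 1)
t2 = np.pi / wd                     # second impulse half a damped period later

dt = 0.01
t = np.arange(0, 10, dt)
raw = np.ones_like(t)               # raw operator command (unit step)
shift = int(round(t2 / dt))
shaped = A1 * raw + A2 * np.concatenate([np.zeros(shift), raw[:-shift]])
# Driving the trolley with `shaped` instead of `raw` ideally leaves no
# residual swing once the second impulse has been applied.
print(round(A1, 3), round(A2, 3), round(t2, 3))
```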

  3. Frequency domain full waveform elastic inversion of marine seismic data from the Alba field using a Bayesian trans-dimensional algorithm

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Sekar, Anusha; Hoversten, G. Michael; Albertin, Uwe

    2016-05-01

    We present an algorithm to recover the Bayesian posterior model probability density function of subsurface elastic parameters, as required by the full pressure field recorded at an ocean bottom cable due to an impulsive seismic source. Both the data noise and source wavelet are estimated by our algorithm, resulting in robust estimates of subsurface velocity and density. In contrast to purely gradient based approaches, our method avoids model regularization entirely and produces an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. Our algorithm is trans-dimensional and performs model selection, sampling over a wide range of model parametrizations. We follow a frequency domain approach and derive the corresponding likelihood in the frequency domain. We present first a synthetic example of a reservoir at 2 km depth with minimal acoustic impedance contrast, which is difficult to study with conventional seismic amplitude versus offset changes. Finally, we apply our methodology to survey data collected over the Alba field in the North Sea, an area which is known to show very little lateral heterogeneity but nevertheless presents challenges for conventional post migration seismic amplitude versus offset analysis.
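
    The paper's likelihood derivation is only summarized above. A minimal sketch of a Gaussian log-likelihood over complex frequency-domain residuals, assuming a diagonal noise covariance and toy data in place of the recorded pressure field:

```python
import numpy as np

# Minimal sketch: Gaussian log-likelihood for complex frequency-domain data
# with an assumed diagonal noise covariance. In the real problem d_obs is the
# recorded pressure field and d_pred comes from the forward model; here both
# are toy arrays.
def log_likelihood(d_obs, d_pred, sigma):
    r = d_obs - d_pred                        # complex residual per receiver
    # real and imaginary parts treated as independent Gaussians
    return (-0.5 * np.sum((r.real ** 2 + r.imag ** 2) / sigma ** 2)
            - d_obs.size * np.log(2 * np.pi * sigma ** 2))

rng = np.random.default_rng(2)
d_pred = rng.normal(size=8) + 1j * rng.normal(size=8)
sigma = 0.1
noise = sigma * (rng.normal(size=8) + 1j * rng.normal(size=8))
print(log_likelihood(d_pred + noise, d_pred, sigma))
```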

  4. Frequency domain full waveform elastic inversion of marine seismic data from the Alba field using a Bayesian trans-dimensional algorithm

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Sekar, Anusha; Hoversten, G. Michael; Albertin, Uwe

    2016-02-01

    We present an algorithm to recover the Bayesian posterior model probability density function of subsurface elastic parameters, as required by the full pressure field recorded at an ocean bottom cable due to an impulsive seismic source. Both the data noise and source wavelet are estimated by our algorithm, resulting in robust estimates of subsurface velocity and density. In contrast to purely gradient based approaches, our method avoids model regularization entirely and produces an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. Our algorithm is trans-dimensional and performs model selection, sampling over a wide range of model parametrizations. We follow a frequency domain approach, and derive the corresponding likelihood in the frequency domain. We present first a synthetic example of a reservoir at 2 km depth with minimal acoustic impedance contrast, which is difficult to study with conventional seismic amplitude vs offset changes. Finally, we apply our methodology to survey data collected over the Alba field in the North Sea, an area which is known to show very little lateral heterogeneity but nevertheless presents challenges for conventional post migration seismic amplitude vs offset analysis.

  5. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    SciTech Connect

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-05-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; describe the experimental evaluation of the controller, including robustness to payload length changes; explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.

  6. Synthetic aperture radar imaging algorithm customized for programmable optronic processor in the application of full-scene synthetic aperture radar image formation

    NASA Astrophysics Data System (ADS)

    Sheng, Hui; Gao, Yesheng; Zhu, Bingqi; Wang, Kaizhi; Liu, Xingzhao

    2015-01-01

    With the high programmability of a spatial light modulator (SLM), a newly developed synthetic aperture radar (SAR) optronic processor is capable of focusing SAR data with different parameters. The embedded SLM, which encodes the SAR data into a light signal in the processor, has a limited loading resolution of 1920×1080. When the dimension of the processed SAR data increases to tens of thousands of samples in either the range or azimuth direction, the SAR data must be input and focused block by block, and parts of the imaging results are then mosaicked to form a full-scene SAR image. In squint mode, however, the Doppler centroid shifts the signal spectrum in the azimuth direction and leaves the phase filters, loaded by another SLM, unable to cover the entire signal spectrum, which degrades the imaging result. Meanwhile, the imaging result, shifted away from the center of the light output, causes difficulties in the subsequent image mosaicking. We present an SAR image formation algorithm designed to solve these problems when processing SAR data of large volume in the low-squint case. It not only obtains high-quality imaging results but also streamlines the subsequent image mosaicking, with favorable system cost and efficiency. Experimental results validate the performance of the proposed algorithm in optical full-scene SAR imaging.
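
    As a toy illustration of the squint problem described above (the PRF, Doppler centroid and filter width are assumed values, not the processor's parameters), recentring the azimuth spectrum restores its overlap with a filter centred at zero Doppler:

```python
import numpy as np

# Toy sketch of the squint issue: a Doppler centroid f_dc shifts the azimuth
# spectrum, so a matched filter centred at zero Doppler no longer covers it;
# circularly shifting the spectrum by f_dc restores the overlap.
n = 1024
prf = 1000.0                              # assumed pulse repetition freq [Hz]
f = np.fft.fftfreq(n, d=1 / prf)
f_dc = 300.0                              # assumed Doppler centroid [Hz]

spectrum = np.exp(-(((f - f_dc) / 150.0) ** 2))  # azimuth signal spectrum
filt = (np.abs(f) < 200.0).astype(float)         # filter centred at 0 Hz

bins = int(round(f_dc / (prf / n)))              # centroid in FFT bins
recentred = np.roll(spectrum, -bins)

print("overlap before:", round(float(np.sum(spectrum * filt)), 1))
print("overlap after: ", round(float(np.sum(recentred * filt)), 1))
```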

  7. Full Genome Virus Detection in Fecal Samples Using Sensitive Nucleic Acid Preparation, Deep Sequencing, and a Novel Iterative Sequence Classification Algorithm

    PubMed Central

    Cotten, Matthew; Oude Munnink, Bas; Canuti, Marta; Deijs, Martin; Watson, Simon J.; Kellam, Paul; van der Hoek, Lia

    2014-01-01

    We have developed a full genome virus detection process that combines sensitive nucleic acid preparation optimised for virus identification in fecal material with Illumina MiSeq sequencing and a novel post-sequencing virus identification algorithm. Enriched viral nucleic acid was converted to double-stranded DNA and subjected to Illumina MiSeq sequencing. The resulting short reads were processed with a novel iterative Python algorithm, SLIM, for the identification of sequences with homology to known viruses. De novo assembly was then used to generate full viral genomes. The sensitivity of this process was demonstrated with a set of fecal samples from HIV-1 infected patients. A quantitative assessment of the mammalian, plant, and bacterial virus content of this compartment was generated, and the deep sequencing data were sufficient to assemble 12 complete viral genomes from 6 virus families. The method detected high levels of enteropathic viruses that are normally controlled in healthy adults but may be involved in the pathogenesis of HIV-1 infection, and it will provide a powerful tool for virus detection and for analyzing changes in the fecal virome associated with HIV-1 progression and pathogenesis. PMID:24695106
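
    SLIM itself is not reproduced here. A toy sketch of the iterative classification idea (recruit reads that share k-mers with the current reference set, extend the set, and repeat) with hypothetical sequences:

```python
# Toy sketch of iterative sequence classification (not the SLIM code):
# recruit reads sharing k-mers with the current reference set, add their
# k-mers to the set, and iterate until no new reads are classified.
def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

reference = "ATGCGTACGTTAGC"        # hypothetical known viral fragment
reads = ["CGTACGTTA", "GTTAGCAAT", "AATCCGGAA", "CCGGAATTT", "TTTTTTTTT"]

viral = set()
ref_kmers = kmers(reference)
changed = True
while changed:
    changed = False
    for read in reads:
        if read not in viral and kmers(read) & ref_kmers:
            viral.add(read)           # classify the read as viral
            ref_kmers |= kmers(read)  # extend the reference k-mer set
            changed = True

print(sorted(viral))  # reads recruited directly or via previously recruited reads
```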

  8. Calculations of electron stopping powers for 41 elemental solids over the 50 eV to 30 keV range with the full Penn algorithm

    NASA Astrophysics Data System (ADS)

    Shinotsuka, H.; Tanuma, S.; Powell, C. J.; Penn, D. R.

    2012-01-01

    We present mass collision electron stopping powers (SPs) for 41 elemental solids (Li, Be, graphite, diamond, glassy C, Na, Mg, Al, Si, K, Sc, Ti, V, Cr, Fe, Co, Ni, Cu, Ge, Y, Nb, Mo, Ru, Rh, Pd, Ag, In, Sn, Cs, Gd, Tb, Dy, Hf, Ta, W, Re, Os, Ir, Pt, Au, and Bi) that were calculated from experimental energy-loss-function data with the full Penn algorithm for electron energies between 50 eV and 30 keV. Improved sets of energy-loss functions were used for 19 solids. Comparisons were made of these SPs with SPs calculated with the single-pole approximation, previous SP calculations, and experimental SPs. Generally satisfactory agreement was found with SPs from the single-pole approximation for energies above 100 eV, with other calculated SPs, and with measured SPs.

  9. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

    A program which was written to simulate Real-Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.

  10. Taking Full Advantage of Children's Literature

    ERIC Educational Resources Information Center

    Serafini, Frank

    2012-01-01

    Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers make sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…

  11. A parallel algorithm for 2D visco-acoustic frequency-domain full-waveform inversion: application to a dense OBS data set

    NASA Astrophysics Data System (ADS)

    Sourbier, F.; Operto, S.; Virieux, J.

    2006-12-01

    We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization, and its algorithm is subdivided into three main steps. First, a symbolic analysis step performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
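
    The factor-once, solve-many pattern that makes a direct solver attractive for multi-source frequency-domain modeling can be sketched with SciPy's sparse LU standing in for the parallel MUMPS solver (toy Helmholtz-like operator and hypothetical shot positions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch of the factor-once / solve-many pattern, with SciPy's sparse LU
# standing in for the parallel MUMPS solver. The operator is a toy 2D
# Helmholtz-like matrix: 5-point Laplacian minus k^2 on the diagonal.
n = 50
k2 = 0.5
I = sp.identity(n)
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I) - k2 * sp.identity(n * n)).tocsc()

lu = spla.splu(A)                  # expensive factorization, done once

solutions = []
for shot in range(10):             # one right-hand side per (hypothetical) shot
    b = np.zeros(n * n)
    b[shot * n + n // 2] = 1.0     # impulsive source at the shot position
    solutions.append(lu.solve(b))  # cheap triangular solves per source

print(len(solutions), solutions[0].shape)
```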

  12. Taking antacids

    MedlinePlus

    ... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...

  13. Taking Time

    ERIC Educational Resources Information Center

    Perry, Tonya

    2004-01-01

    The opportunity for students to successfully complete the material increases when teachers take time and care about what they are reading. Students can read the contents of a text successfully if they keep their thoughts moving and ideas developing.

  14. Double Take

    ERIC Educational Resources Information Center

    Educational Leadership, 2011

    2011-01-01

    This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…

  15. Taking Turns

    ERIC Educational Resources Information Center

    Hopkins, Brian

    2010-01-01

    Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation; tools from algebraic combinatorics can then be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…

  16. A full field, 3-D velocimeter for microgravity crystallization experiments

    NASA Technical Reports Server (NTRS)

    Brodkey, Robert S.; Russ, Keith M.

    1991-01-01

    The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.

  17. Classification based on full decision trees

    NASA Astrophysics Data System (ADS)

    Genrikhov, I. E.; Djukova, E. V.

    2012-04-01

    The ideas underlying a series of the authors' studies dealing with the design of classification algorithms based on full decision trees are further developed. It is shown that the decision tree construction under consideration takes into account all the features satisfying a branching criterion. Full decision trees with an entropy branching criterion are studied as applied to precedent-based pattern recognition problems with real-valued data. Recognition procedures are constructed for solving problems with incomplete data (gaps in the feature descriptions of the objects) in the case when the learning objects are nonuniformly distributed over the classes. The authors' basic results previously obtained in this area are overviewed.
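
    As a sketch of the branching idea (the entropy-gain threshold and data are toy values, not the authors' setup): every feature whose gain satisfies the criterion becomes a branch, not just the single best one:

```python
import numpy as np

# Sketch of full-decision-tree branching: compute an entropy gain per feature
# and keep every feature that passes the criterion, instead of the argmax.
def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def qualifying_features(X, y, threshold=0.1):
    gains = []
    for j in range(X.shape[1]):
        g = entropy(y)
        for val in np.unique(X[:, j]):
            mask = X[:, j] == val
            g -= mask.mean() * entropy(y[mask])
        gains.append(g)
    # all features passing the criterion become branches, not just the best
    return [j for j, g in enumerate(gains) if g >= threshold], gains

X = np.array([[0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 1]])
y = np.array([0, 0, 1, 1, 1, 0])
features, gains = qualifying_features(X, y)
print(features, [round(g, 3) for g in gains])
```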

  18. Double Take

    ERIC Educational Resources Information Center

    Educational Leadership, 2011

    2011-01-01

    More than 1.5 million K-12 students in the United States engage in online or blended learning, according to a recent report. As of the end of 2010, 38 states had state virtual schools or state-led online initiatives; 27 states plus Washington, D.C., had full-time online schools; and 20 states offered both supplemental and full-time online learning…

  19. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  20. Unlimited full configuration interaction calculations

    NASA Astrophysics Data System (ADS)

    Knowles, Peter J.; Handy, Nicholas C.

    1989-08-01

    In very large full configuration interaction (full CI) calculations, nearly all of the CI coefficients are very small. Calculations on NH3 with a DZP basis, using a newly developed algorithm that exploits this fact, are reported, involving 2×10^8 Slater determinants. Such calculations are impossible with other existing full CI codes. The new algorithm opens up the opportunity of full CI calculations which are unlimited in size.

  1. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil-fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
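
    The advanced dispatch algorithm is not described in this record. As background, a standard economic dispatch by lambda iteration over quadratic unit costs (assumed cost data), the kind of baseline such an algorithm is compared against:

```python
# Background sketch: classic economic dispatch by lambda iteration, the kind
# of standard algorithm an advanced dispatch algorithm is compared against.
# Assumed quadratic costs C_i(P) = a_i + b_i P + c_i P^2, limits in MW.
units = [  # (b_i, c_i, P_min, P_max)
    (8.0, 0.004, 100.0, 500.0),
    (7.0, 0.006, 100.0, 400.0),
    (9.0, 0.008,  50.0, 300.0),
]
demand = 800.0

def dispatch(lam):
    # at incremental cost lam, each unit outputs P with b + 2cP = lam,
    # clipped to its limits
    out = []
    for b, c, pmin, pmax in units:
        p = (lam - b) / (2 * c)
        out.append(min(max(p, pmin), pmax))
    return out

lo, hi = 0.0, 50.0
for _ in range(60):                # bisection on the system incremental cost
    lam = 0.5 * (lo + hi)
    if sum(dispatch(lam)) > demand:
        hi = lam
    else:
        lo = lam

print([round(p, 1) for p in dispatch(lam)], round(sum(dispatch(lam)), 1))
```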

  2. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical rationale for the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, overcoming the shortcomings of any single algorithm and giving full play to the advantages of each. The method is validated on the standard benchmarks in current use, Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict protein structure. PMID:25069136
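
    The exact PGATS update rules are not given in the abstract. A minimal PSO loop with an added stochastic disturbance term, run on a toy sphere objective in place of the off-lattice protein energy:

```python
import numpy as np

# Minimal sketch of one PGATS ingredient: particle swarm optimization with a
# stochastic disturbance term for exploration. A toy sphere objective stands
# in for the AB off-lattice protein energy, which is omitted here.
def objective(x):
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(3)
n, dim = 30, 10
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.1     # c3 scales the disturbance factor
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    disturbance = c3 * rng.normal(size=(n, dim))   # stochastic disturbance
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + disturbance
    x = x + v
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(round(float(pbest_f.min()), 6))   # approaches 0 on the toy problem
```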

  3. Evaluation of TCP congestion control algorithms.

    SciTech Connect

    Long, Robert Michael

    2003-12-01

    Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, wide area network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
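
    The increase/decrease rules that distinguish the tested algorithms can be compared in a toy per-RTT simulation (the rules follow the published algorithm definitions; the link model and capacity are illustrative only):

```python
# Toy per-RTT comparison of congestion-window growth: standard TCP's AIMD
# versus Scalable TCP's multiplicative rules, on a high bandwidth-delay
# path. The loss model (loss whenever cwnd exceeds capacity) is illustrative.
capacity = 10000.0                 # path capacity in packets (high BDP)

def run(name, increase, decrease, rtts=4000):
    cwnd, losses = capacity * decrease, 0    # start just after a loss
    for _ in range(rtts):
        cwnd = increase(cwnd)
        if cwnd > capacity:                  # toy congestion signal
            cwnd *= decrease
            losses += 1
    print(f"{name}: final cwnd {cwnd:.0f} pkts, {losses} loss events")

run("standard TCP", lambda c: c + 1.0, 0.5)    # +1 packet per RTT, halve on loss
run("Scalable TCP", lambda c: c * 1.01, 0.875) # +1% per RTT, x0.875 on loss
```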

  4. Full Multigrid Flow Solver

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris

    2005-01-01

    FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
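
    A sketch of the full multigrid pattern on a 1D Poisson problem; the weighted-Jacobi smoother and linear prolongation are simplifications for illustration, not FMG3D's actual operators:

```python
import numpy as np

# 1D sketch of the full multigrid (FMG) pattern described above: solve the
# coarsest grid, prolong the solution as the initial guess on the next finer
# grid, and converge each level with V-cycles (full-weighting restriction,
# linear-interpolation prolongation).
def smooth(u, f, h, sweeps=3):          # weighted Jacobi for u'' = f
    for _ in range(sweeps):
        u[1:-1] += 0.67 * (0.5 * (u[2:] + u[:-2] - h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
    return r

def restrict(r):                        # full weighting to the coarser grid
    return np.concatenate(
        ([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def prolong(e, n_fine):                 # linear interpolation to the finer grid
    u = np.zeros(n_fine)
    u[2:-1:2] = e[1:-1]
    u[1::2] = 0.5 * (u[0:-1:2] + u[2::2])
    return u

def v_cycle(u, f, h):
    if u.size <= 3:
        u[1:-1] = -0.5 * h * h * f[1:-1]        # exact solve on 3 points
        return u
    u = smooth(u, f, h)
    e = v_cycle(np.zeros((u.size + 1) // 2), restrict(residual(u, f, h)), 2 * h)
    return smooth(u + prolong(e, u.size), f, h)

def fmg(f_func, levels=7):
    n, u = 3, np.zeros(3)
    u[1] = -0.5 * 0.25 * f_func(np.linspace(0, 1, 3))[1]  # coarsest solve, h=1/2
    for _ in range(levels - 1):
        n = 2 * n - 1
        x = np.linspace(0, 1, n)
        u = prolong(u, n)               # FMG: coarse solution as initial guess
        for _ in range(2):
            u = v_cycle(u, f_func(x), x[1] - x[0])
    return np.linspace(0, 1, n), u

x, u = fmg(lambda x: -np.pi ** 2 * np.sin(np.pi * x))  # exact solution sin(pi x)
print(float(np.max(np.abs(u - np.sin(np.pi * x)))))    # ~ discretization error
```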

  5. The Take Action Project

    ERIC Educational Resources Information Center

    Boudreau, Sue

    2010-01-01

    The Take Action Project (TAP) was created to help middle school students take informed and effective action on science-related issues. The seven steps of TAP ask students to (1) choose a science-related problem of interest to them, (2) research their problem, (3) select an action to take on the problem, (4) plan that action, (5) take action, (6)…

  6. Taking multiple medicines safely

    MedlinePlus

    ... medlineplus.gov/ency/patientinstructions/000883.htm Taking multiple medicines safely ... directed. Why you may Need More Than one Medicine You may take more than one medicine to ...

  7. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  8. Take Charge of Your Career

    ERIC Educational Resources Information Center

    Brown, Marshall A.

    2013-01-01

    Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…

  9. Personal Pronouns and Perspective Taking in Toddlers.

    ERIC Educational Resources Information Center

    Ricard, Marcelle; Girouard, Pascale C.; Gouin Decarie, Therese

    1999-01-01

    Examined the evolution of visual perspective-taking skills in relation to comprehension and production of first, second, and third person pronouns among French and English speaking toddlers. Some perceptual perspective-taking capacities were well developed by the time children acquired a full mastery of personal pronouns. Full pronoun acquisition…

  10. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the sensations closest to those of a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case-scenario method. This is based on trial and error and is affected by driver and programmer experience, making it the most significant obstacle to full motion platform utilisation. It leads to an inflexible structure, produces false cues, and makes the resulting simulator unable to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues, and the impact of different classical washout filter parameters on those cues, remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
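
    As a sketch of the classical washout behaviour being optimised (the filter order and cutoff are assumed values, not the paper's tuning), a high-pass filter passes the onset cue while keeping the platform excursion bounded:

```python
import numpy as np
from scipy import signal

# Sketch of the classical washout idea: high-pass filter the vehicle's
# translational acceleration so only the onset cue is reproduced and the
# platform "washes out". Filter order and cutoff are assumed values.
fs = 100.0                          # sample rate [Hz]
t = np.arange(0, 20, 1 / fs)
accel = np.where((t > 2) & (t < 12), 2.0, 0.0)   # sustained 2 m/s^2 demand

b, a = signal.butter(2, 0.3, btype="highpass", fs=fs)  # 0.3 Hz washout
platform_accel = signal.lfilter(b, a, accel)

# Double integration shows the platform excursion stays bounded.
vel = np.cumsum(platform_accel) / fs
pos = np.cumsum(vel) / fs
print(round(float(np.max(np.abs(pos))), 3), "m peak excursion")
```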

  11. A determinant based full configuration interaction program

    NASA Astrophysics Data System (ADS)

    Knowles, Peter J.; Handy, Nicholas C.

    1989-04-01

    The program FCI solves the Full Configuration Interaction (Full CI) problem of quantum chemistry, in which the electronic Schrödinger equation is solved exactly within a given one particle basis set. The Slater determinant based algorithm leads to highly efficient implementation on a vector computer, and has enabled Full CI calculations of dimension more than 10^7 to be performed.

  12. Taking the Long View

    ERIC Educational Resources Information Center

    Bennett, Robert B., Jr.

    2010-01-01

    Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a cliched piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…

  13. Give/Take

    2007-09-12

    Give and Take are a set of companion utilities that allow a secure transfer of files from one user to another without exposing the files to third parties. The named files are copied to a spool area. The receiver can retrieve the files by running the "take" program. Ownership of the files remains with the giver until they are taken. Certain users may be limited to taking files only from specific givers. For these users, files may only be taken from givers who are members of the gt-uid-group, where uid is the UNIX id of the limited user.

  14. Take Your Medicines Safely

    MedlinePlus Videos and Cool Tools

    ... better, the antibiotic is working in killing the bacteria, but it might not completely give what they call a "bactericidal effect." That means taking the bacteria completely out of the system. It might be ...

  15. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

    A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The use of low-cost vector array processors makes such calculations economically feasible. However, to utilize the new hardware fully, the algorithms developed must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  16. Very large full configuration interaction calculations

    NASA Astrophysics Data System (ADS)

    Knowles, Peter J.

    1989-03-01

    The extreme sparsity of the solution of the full configuration interaction (full CI) secular equations is exploited in a new algorithm. For very large problems, the high speed memory, disk storage, and CPU requirements are reduced considerably, compared to previous techniques. This allows the possibility of full CI calculations with more than 10^8 Slater determinants. The power of the method is demonstrated in preliminary full CI calculations for the NH molecule, including up to 27,901,690 determinants.

  17. Learning to take actions

    SciTech Connect

    Khardon, R.

    1996-12-31

    We formalize a model for supervised learning of action strategies in dynamic stochastic domains, and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a particularly useful bias for action strategies based on production rule systems. We show that a subset of production rule systems, including rules in predicate calculus style, small hidden state, and unobserved support predicates, is properly learnable. The bias we introduce enables the learning algorithm to invent the recursive support predicates which are used in the action strategy, and to reconstruct the internal state of the strategy. It is also shown that hierarchical strategies are learnable if a helpful teacher is available, but that otherwise the problem is computationally hard.

  18. Take Pride in America.

    ERIC Educational Resources Information Center

    Indiana State Dept. of Education, Indianapolis. Center for School Improvement and Performance.

    During the 1987-88 school year the Indiana Department of Education assisted the United States Department of the Interior and the Indiana Department of Natural Resources with a program which asked students to become involved in activities to maintain and manage public lands. The 1987 Take Pride in America (TPIA) school program encouraged volunteer…

  19. Teachers Taking Professional Abuse

    ERIC Educational Resources Information Center

    Normore, Anthony H.; Floyd, Andrea

    2005-01-01

    Preservice teachers get their first teaching position hoping to take the first step toward becoming professional educators and expecting support from experienced colleagues and administrators, who often serve as their mentors. In this article, the authors present the story of Kristine (a pseudonym), who works at a middle school in a large U.S.…

  20. Take a Bow

    ERIC Educational Resources Information Center

    Spitzer, Greg; Ogurek, Douglas J.

    2009-01-01

    Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…

  1. Take time for laughter.

    PubMed

    Huntley, Mary I

    2009-01-01

    Taking time for positive laughter in the workplace every day is energizing, health-promoting, and rewarding. Humor happenings and mirthful moments are all around us; we need to be receptive to them. Research provides evidence that laughter is a powerful tool when used appropriately in our personal and professional life journey. PMID:19343850

  2. Simulating Price-Taking

    ERIC Educational Resources Information Center

    Engelhardt, Lucas M.

    2015-01-01

    In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…

  3. Take action: influence diversity.

    PubMed

    Gomez, Norma J

    2013-01-01

    Increased diversity brings strength to nursing and ANNA. Being a more diverse association will require all of us working together. There is an old proverb that says: "one hand cannot cover the sky; it takes many hands." ANNA needs every one of its members to be a part of the diversity initiative. PMID:24579394

  4. Taking the thrombin "fork".

    PubMed

    Mann, Kenneth G

    2010-07-01

    The proverb that probably best exemplifies my career in research is attributable to Yogi Berra (http://www.yogiberra.com/), ie, "when you come to a fork in the road ... take it." My career is a consequence of chance interactions with great mentors and talented students and the opportunities provided by a succession of ground-breaking improvements in technology. PMID:20554951

  5. Taking Library Leadership Personally

    ERIC Educational Resources Information Center

    Davis, Heather; Macauley, Peter

    2011-01-01

    This paper outlines the emerging trends for leadership in the knowledge era. It discusses these within the context of leading, creating and sustaining the performance development cultures that libraries require. The first step is to recognise that we all need to take leadership personally no matter whether we see ourselves as leaders or followers.…

  6. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  7. Live a Full Life with Fibro

    MedlinePlus

    ... Live a Full Life with Fibro Fibromyalgia is a chronic pain condition that affects 10 ... family, you can live an active life with fibromyalgia. Talking with Your Physician Take the first step ...

  8. JWST Full Scale Model Being Built

    NASA Video Gallery

    The full-scale model of the James Webb Space Telescope is constructed for the 2010 World Science Festival in Battery Park, NY. The model takes about five days to construct. This video contains a ...

  9. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  10. Auditory perspective taking.

    PubMed

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  11. Take the "C" Train

    ERIC Educational Resources Information Center

    Lawton, Rebecca

    2008-01-01

    In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself to the water as a sea-anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the 'C' train." "Meaning?" the author asked her…

  12. Full-Service Schools.

    ERIC Educational Resources Information Center

    McChesney, Jim

    1996-01-01

    This research summary reviews six publications that explore the need for integrated school-based community services and describe ways in which challenges can be overcome to create effective full-time schools. The publications include the following: (1) "Full-Service Schools: A Revolution in Health and Social Services for Children, Youth, and…

  13. Physics Take-Outs

    NASA Astrophysics Data System (ADS)

    Riendeau, Diane; Hawkins, Stephanie; Beutlich, Scott

    2016-03-01

    Most teachers want students to think about their course content not only during class but also throughout their day. So, how do you get your students to see how what they learn in class applies to their lives outside of class? As physics teachers, we are fortunate that our students are continually surrounded by our content. How can we get them to notice the physics around them? How can we get them to make connections between the classroom content and their everyday lives? We would like to offer a few suggestions, Physics Take-Outs, to solve this problem.

  14. Computational evolution: taking liberties.

    PubMed

    Correia, Luís

    2010-09-01

    Evolution has, for a long time, inspired computer scientists to produce computer models mimicking its behavior. Evolutionary algorithm (EA) is one of the areas where this approach has flourished. EAs have been used to model and study evolution, but they have been especially developed for their aptitude as optimization tools for engineering. Developed models are quite simple in comparison with their natural sources of inspiration. However, since EAs run on computers, we have the freedom, especially in optimization models, to test approaches both realistic and outright speculative, from the biological point of view. In this article, we discuss different common evolutionary algorithm models, and then present some alternatives of interest. These include biologically inspired models, such as co-evolution and, in particular, symbiogenetics and outright artificial operators and representations. In each case, the advantages of the modifications to the standard model are identified. The other area of computational evolution, which has allowed us to study basic principles of evolution and ecology dynamics, is the development of artificial life platforms for open-ended evolution of artificial organisms. With these platforms, biologists can test theories by directly manipulating individuals and operators, observing the resulting effects in a realistic way. An overview of the most prominent of such environments is also presented. If instead of artificial platforms we use the real world for evolving artificial life, then we are dealing with evolutionary robotics (ERs). A brief description of this area is presented, analyzing its relations to biology. Finally, we present the conclusions and identify future research avenues in the frontier of computation and biology. Hopefully, this will help to draw the attention of more biologists and computer scientists to the benefits of such interdisciplinary research. PMID:20532997

  15. Full steam ahead

    NASA Astrophysics Data System (ADS)

    Heuer, Rolf-Dieter

    2008-03-01

    When the Economist recently reported the news of Rolf-Dieter Heuer's appointment as the next director-general of CERN, it depicted him sitting cross-legged in the middle of a circular track steering a model train around him - smiling. It was an apt cartoon for someone who is about to take charge of the world's most powerful particle accelerator: the 27 km-circumference Large Hadron Collider (LHC), which is nearing completion at the European laboratory just outside Geneva. What the cartoonist did not know is that model railways are one of Heuer's passions.

  16. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
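
    A direct sketch of the RNG construction over hypothetical node positions: nodes u and v are linked unless some witness node w is closer to both than they are to each other:

```python
import numpy as np

# Sketch of the construct the routing algorithm builds on: the
# relative-neighborhood graph (RNG). Nodes u and v are linked unless some
# witness w satisfies max(d(u, w), d(v, w)) < d(u, v).
def relative_neighborhood_graph(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            witness = any(
                max(d[u, w], d[v, w]) < d[u, v]
                for w in range(n) if w != u and w != v
            )
            if not witness:
                edges.append((u, v))
    return edges

rng = np.random.default_rng(4)
nodes = rng.random((12, 2))          # hypothetical node positions
print(relative_neighborhood_graph(nodes))
```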

  17. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  18. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshman students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  19. Take a Planet Walk

    ERIC Educational Resources Information Center

    Schuster, Dwight

    2008-01-01

    Physical models in the classroom "cannot be expected to represent the full-scale phenomenon with complete accuracy, not even in the limited set of characteristics being studied" (AAAS 1990). Therefore, by modifying a popular classroom activity called a "planet walk," teachers can explore upper elementary students' current understandings; create an…

  20. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  1. Full range resistive thermometers

    NASA Astrophysics Data System (ADS)

    Olivieri, E.; Rotter, M.; De Combarieu, M.; Forget, P.; Marrache-Kikuchi, C.; Pari, P.

    2015-12-01

    Resistive thermometers are widely used in low temperature physics thanks to their portability, simplicity of operation and reduced size. The possibility of precisely following the temperature from room temperature down to the mK region is of major interest for numerous applications, although no single thermometer can nowadays cover this entire temperature range. In this article we report on a method to realize a full range thermometer, capable of measuring, by itself, temperatures in the whole above-cited temperature range, with constant sensitivity and sufficient precision for typical cryogenic applications. We present here the first results for three different full range thermometer prototypes. A detailed description of the set-up used for measurements and characterization is also reported.

  2. Neptune - full ring system

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This pair of Voyager 2 images (FDS 11446.21 and 11448.10), two 591-s exposures obtained through the clear filter of the wide angle camera, show the full ring system with the highest sensitivity. Visible in this figure are the bright, narrow N53 and N63 rings, the diffuse N42 ring, and (faintly) the plateau outside of the N53 ring (with its slight brightening near 57,500 km).

  3. Full Scale Tunnel model

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Interior view of Full-Scale Tunnel (FST) model. (Small human figures have been added for scale.) On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow.

  4. Taking centre stage...

    NASA Astrophysics Data System (ADS)

    1998-11-01

    HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the `Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title `Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the `Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the

  5. Multiscale full waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Trampert, Jeannot; Cupillard, Paul; Saygin, Erdinc; Taymaz, Tuncay; Capdeville, Yann; Villaseñor, Antonio

    2013-07-01

    We develop and apply a full waveform inversion method that incorporates seismic data on a wide range of spatio-temporal scales, thereby constraining the details of both crustal and upper-mantle structure. This is intended to further our understanding of crust-mantle interactions that shape the nature of plate tectonics, and to be a step towards improved tomographic models of strongly scale-dependent earth properties, such as attenuation and anisotropy. The inversion for detailed regional earth structure consistently embedded within a large-scale model requires locally refined numerical meshes that allow us to (1) model regional wave propagation at high frequencies, and (2) capture the inferred fine-scale heterogeneities. The smallest local grid spacing sets the upper bound of the largest possible time step used to iteratively advance the seismic wave field. This limitation leads to extreme computational costs in the presence of fine-scale structure, and it inhibits the construction of full waveform tomographic models that describe earth structure on multiple scales. To reduce computational requirements to a feasible level, we design a multigrid approach based on the decomposition of a multiscale earth model with widely varying grid spacings into a family of single-scale models where the grid spacing is approximately uniform. Each of the single-scale models contains a tractable number of grid points, which ensures computational efficiency. The multi-to-single-scale decomposition is the foundation of iterative, gradient-based optimization schemes that simultaneously and consistently invert data on all scales for one multi-scale model. We demonstrate the applicability of our method in a full waveform inversion for Eurasia, with a special focus on Anatolia where coverage is particularly dense. Continental-scale structure is constrained by complete seismic waveforms in the 30-200 s period range. In addition to the well-known structural elements of the Eurasian mantle
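
    The time-step limitation that motivates the decomposition follows from a CFL-type stability bound; a back-of-envelope sketch with illustrative numbers:

```python
# Back-of-envelope sketch of the time-step bottleneck: with an explicit
# scheme the global step obeys a CFL-type bound dt <= C * h_min / v_max,
# so one regionally refined patch throttles the whole simulation.
# All numbers are illustrative, not the study's actual meshes.
C, v_max = 0.5, 8000.0                 # Courant number, max wave speed [m/s]
h_coarse, h_fine = 50e3, 5e3           # continental vs regional spacing [m]

dt_global = C * h_fine / v_max         # one multiscale mesh: h_min rules all
dt_coarse = C * h_coarse / v_max       # coarse single-scale model on its own

print(f"single multiscale mesh: dt = {dt_global:.3f} s everywhere")
print(f"decomposed single-scale models: dt = {dt_coarse:.3f} s (coarse), "
      f"{dt_global:.3f} s (fine)")
```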

  6. Taking Care of Your Vision

    MedlinePlus

    ... Taking Care of Your Vision ... are important parts of keeping your peepers perfect. Vision Basics One of the best things you can ...

  7. Why Take a Prenatal Supplement?

    MedlinePlus

    Why take a prenatal supplement? During pregnancy, your needs increase ...

  8. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  9. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).

  10. Taking bioethics personally.

    PubMed

    Chambers, Tod; Ahmad, Ayesha; Crow, Sheila; Davis, Dena S; Dresser, Rebecca; Harter, Thomas D; Jordan, Sara R; Kaposy, Chris; Lanoix, Monique; Lee, K Jane; Scully, Jackie Leach; Taylor, Katherine A; Watson, Katie

    2013-01-01

    This narrative symposium examines the relationship of bioethics practice to personal experiences of illness. A call for stories was developed by Tod Chambers, the symposium editor, and editorial staff and was sent to several commonly used bioethics listservs and posted on the Narrative Inquiry in Bioethics website. The call asked authors to relate a personal story of being ill or caring for a person who is ill, and to describe how this affected how they think about bioethical questions and the practice of medicine. Eighteen individuals were invited to submit full stories based on review of their proposals. Twelve stories are published in this symposium, and six supplemental stories are published online only through Project MUSE. Authors explore themes of vulnerability, suffering, communication, voluntariness, cultural barriers, and flaws in local healthcare systems through stories about their own illnesses or about caring for children, partners, parents and grandparents. Commentary articles by Arthur Frank, Bradley Lewis, and Carol Taylor follow the collection of personal narratives. PMID:24406989

  11. SR-71 Taking Off

    NASA Technical Reports Server (NTRS)

    1990-01-01

    One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms or overpressures that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of

  12. Full Jupiter Mosaic

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This image of Jupiter is produced from a 2x2 mosaic of photos taken by the New Horizons Long Range Reconnaissance Imager (LORRI), and assembled by the LORRI team at the Johns Hopkins University Applied Physics Laboratory. The telescopic camera snapped the images during a 3-minute, 35-second span on February 10, when the spacecraft was 29 million kilometers (18 million miles) from Jupiter. At this distance, Jupiter's diameter was 1,015 LORRI pixels -- nearly filling the imager's entire (1,024-by-1,024 pixel) field of view. Features as small as 290 kilometers (180 miles) are visible.

    Both the Great Red Spot and Little Red Spot are visible in the image, on the left and lower right, respectively. The apparent 'storm' on the planet's right limb is a section of the south tropical zone that has been detached from the region to its west (or left) by a 'disturbance' that scientists and amateur astronomers are watching closely.

    At the time LORRI took these images, New Horizons was 820 million kilometers (510 million miles) from home -- nearly 5½ times the distance between the Sun and Earth. This is the last full-disk image of Jupiter LORRI will produce, since Jupiter is appearing larger as New Horizons draws closer, and the imager will start to focus on specific areas of the planet for higher-resolution studies.

  13. Full Color Holographic Endoscopy

    NASA Astrophysics Data System (ADS)

    Osanlou, A.; Bjelkhagen, H.; Mirlis, E.; Crosby, P.; Shore, A.; Henderson, P.; Napier, P.

    2013-02-01

    The ability to produce color holograms from human tissue represents a major medical advance, specifically in the areas of diagnosis and teaching. This has been achieved at Glyndwr University. In cooperation with partners at Gooch & Housego, Moor Instruments, Vivid Components and Peninsula Medical School, Exeter, UK, for the first time, we have produced full color holograms of human cell samples in which the cell boundary and the nuclei inside the cells could be clearly focused at different depths - something impossible with a two-dimensional photographic image. This was the main objective set by the Peninsula Medical School at Exeter, UK. Achieving this objective means that clinically useful images essentially indistinguishable from the object human cells could be routinely recorded. This could potentially be done at the tip of a holo-endoscopic probe inside the body. Optimised recording exposure and development processes for the holograms were defined for bulk exposures. This included the optimisation of in-house recording emulsions for coating evaluation onto polymer substrates (rather than glass plates), a key step for large volume commercial exploitation. At Glyndwr University, we also developed a new version of our in-house holographic (world-leading resolution) emulsion.

  14. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
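
    The sampling-based stopping idea can be sketched on a toy problem. The Python snippet below uses a newsvendor-style maximization: averages of small-sample optimal values estimate an upper bound, re-evaluating a fixed candidate on fresh scenarios estimates a lower bound, and the run stops when the gap's confidence interval is tight. The grid-search "solver", the sample sizes, and the normal quantile are illustrative assumptions, not the paper's Benders machinery.

        import numpy as np

        rng = np.random.default_rng(0)
        c, p = 1.0, 1.5                     # toy newsvendor: order cost, sale price
        profit = lambda x, d: p * np.minimum(x, d) - c * x

        def saa_solve(demands):
            # coarse grid search stands in for a decomposition-based solve
            grid = np.linspace(0.0, 300.0, 601)
            vals = np.array([profit(x, demands).mean() for x in grid])
            return grid[vals.argmax()], vals.max()

        x_hat, _ = saa_solve(rng.exponential(100.0, 5000))   # candidate solution

        # For a maximization, small-sample SAA optima over-estimate the optimum
        # on average (upper bound); re-evaluating x_hat on fresh scenarios
        # under-estimates it (lower bound).
        ubs = np.array([saa_solve(rng.exponential(100.0, 200))[1] for _ in range(30)])
        lb = profit(x_hat, rng.exponential(100.0, 20000)).mean()
        gap = ubs.mean() - lb
        half = 1.96 * ubs.std(ddof=1) / np.sqrt(len(ubs))
        print(f"gap estimate {gap:.2f} +/- {half:.2f}; stop when gap + half < tolerance")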

  15. Optimal Consumption When Consumption Takes Time

    ERIC Educational Resources Information Center

    Miller, Norman C.

    2009-01-01

    A classic article by Gary Becker (1965) showed that when it takes time to consume, the first order conditions for optimal consumption require the marginal rate of substitution between any two goods to equal their relative full costs. These include the direct money price and the money value of the time needed to consume each good. This important…

  16. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E. ); Bohm, A.P.W. . Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena such as the effect that global communication time may have on the computation are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics: the Fast Fourier Transform, whose characteristics are computational parallelism and data dependences between the butterfly shuffles.

  17. Full Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Installation of Full Scale Tunnel (FST) power plant. Virginia Public Service Company could not supply adequate electricity to run the wind tunnels being built at Langley. (The Propeller Research Tunnel was powered by two submarine diesel engines.) This led to the consideration of a number of different ideas for generating electric power to drive the fan motors in the FST. The main proposition involved two 3000 hp and two 1000 hp diesel engines with directly connected generators. Another proposition suggested 30 Liberty motors driving 600 hp DC generators in pairs. For a month, engineers at Langley were hopeful they could secure additional diesel engines from decommissioned Navy T-boats but the Navy could not offer a firm commitment regarding the future status of the submarines. By mid-December 1929, Virginia Public Service Company had agreed to supply service to the field at the north end of the King Street Bridge connecting Hampton and Langley Field. Thus, new plans for FST powerplant and motors were made. Smith DeFrance described the motors in NACA TR No. 459: 'The most commonly used power plant for operating a wind tunnel is a direct-current motor and motor-generator set with Ward Leonard control system. For the FST it was found that alternating current slip-ring induction motors, together with satisfactory control equipment, could be purchased for approximately 30 percent less than the direct-current equipment. Two 4000-horsepower slip-ring induction motors with 24 steps of speed between 75 and 300 r.p.m. were therefore installed.'

  18. SOHO Resumes Full Operation

    NASA Astrophysics Data System (ADS)

    2003-07-01

    SOHO orbit (Credits: SOHO, ESA & NASA). Because of its static position, every three months the high-gain antenna loses sight of Earth. During this time, engineers will rotate the spacecraft by 180 degrees to regain full contact a few days later. Since 19 June 2003, SOHO's high-gain antenna (HGA), which transmits high-speed data to Earth, has been fixed in position following the discovery of a malfunction in its pointing mechanism. This resulted in a loss of signal through SOHO's usual 26-metre ground stations on 27 June 2003. However, 34-metre radio dishes continued to receive high-speed transmissions from the HGA until 1 July 2003. Since then, astronomers have been relying primarily on a slower transmission rate signal, sent through SOHO's backup antenna. It can be picked up whenever a 34-metre dish is available. However, this signal could not transmit all of SOHO's data. Some data was recorded on board, however, and downloaded using high-speed transmissions through the backup antenna when time on the largest, 70-metre dishes could be spared. SOHO itself orbits a point in space, 1.5 million kilometres closer to the Sun than the Earth, once every 6 months. To reorient the HGA for the next half of this orbit, engineers rolled the spacecraft through a half-circle on 8 July 2003. On 10 July, the 34-metre radio dish in Madrid re-established contact with SOHO's HGA. Then on the morning of 14 July 2003, normal operations with the spacecraft resumed through its usual 26-metre ground stations, as predicted. With the HGA now static, the blackouts, lasting between 9 and 16 days, will continue to occur every 3 months. Engineers will rotate SOHO by 180 degrees every time this occurs. This manoeuvre will minimise data losses. Stein Haugan, acting SOHO project scientist, says "It is good to welcome SOHO back to normal operations, as it proves that we have a good understanding of the situation and can confidently work around it."

  19. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  20. Vector processor algorithms for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Keller, J. D.; Hafez, M. M.

    1979-01-01

    This paper discusses a number of algorithms for solving the transonic full-potential equation in conservative form on a vector computer, such as the CDC STAR-100 or the CRAY-1. Recent research with the 'artificial density' method for transonics has led to development of some new iteration schemes which take advantage of vector-computer architecture without suffering significant loss of convergence rate. Several of these more promising schemes are described and 2-D and 3-D results are shown comparing the computational rates on the STAR and CRAY vector computers, and the CYBER-175 serial computer. Schemes included are: (1) Checkerboard SOR, (2) Checkerboard Leapfrog, (3) odd-even vertical line SOR, and (4) odd-even horizontal line SOR.

  1. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
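
    A compact sketch of the underlying iteration makes the discussion concrete. The Python code below is plain HITS plus a purely hypothetical trust weighting; it is not the TaN+BHITS method itself, and the adjacency matrix and trust scores are invented.

        import numpy as np

        def hits(adj, iters=50):
            # plain HITS power iteration; rows link to columns
            n = adj.shape[0]
            hub, auth = np.ones(n), np.ones(n)
            for _ in range(iters):
                auth = adj.T @ hub; auth /= np.linalg.norm(auth)
                hub = adj @ auth;   hub /= np.linalg.norm(hub)
            return hub, auth

        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        # hypothetical trust scores: down-weight links INTO a suspected linkfarm
        trust = np.array([1.0, 1.0, 0.1, 1.0])
        hub, auth = hits(adj * trust[None, :])   # scale columns (link targets)
        print(auth.round(3))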

  2. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
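
    As a rough illustration of the block-adaptive idea (not the authors' exact block-classification rule), the Python sketch below derives hysteresis thresholds from each block's own gradient-magnitude distribution; the block size, percentile rule, and smooth-block guard are assumptions.

        import numpy as np

        def block_thresholds(img, bs=64, hi_pct=90, lo_ratio=0.4):
            # per-block hysteresis thresholds from local gradient statistics,
            # instead of the frame-level statistics of the original Canny
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            H, W = img.shape
            hi = np.zeros((H // bs, W // bs))
            for bi in range(H // bs):
                for bj in range(W // bs):
                    m = mag[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs]
                    if m.std() < 1.0:        # smooth block: suppress spurious edges
                        hi[bi, bj] = np.inf
                    else:
                        hi[bi, bj] = np.percentile(m, hi_pct)
            return hi, lo_ratio * hi

        img = (np.indices((256, 256)).sum(0) % 32).astype(float)  # synthetic pattern
        hi, lo = block_thresholds(img)
        print(hi.shape, lo.shape)            # one threshold pair per 64x64 block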

  3. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASIC's to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented .

  4. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile user. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take the context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on the mobile user with the algorithm: we can classify the mobile user into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also get some rules about the mobile user. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and greater simplicity. PMID:24688389
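
    A minimal sketch of the genetic-algorithm-plus-decision-tree idea, assuming scikit-learn is available: the genome (two pruning hyperparameters), the fitness (cross-validated accuracy), and the operators are simplifications of what the paper describes, and the data are synthetic rather than real mobile-user records.

        import random
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=600, n_features=12,
                                   n_informative=6, random_state=0)

        def fitness(genome):                 # genome = (max_depth, min_samples_leaf)
            clf = DecisionTreeClassifier(max_depth=genome[0],
                                         min_samples_leaf=genome[1], random_state=0)
            return cross_val_score(clf, X, y, cv=5).mean()

        pop = [(random.randint(2, 12), random.randint(1, 20)) for _ in range(20)]
        for gen in range(15):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]
            children = []
            for _ in range(10):              # one-point crossover plus mutation
                a, b = random.sample(parents, 2)
                child = [a[0], b[1]]
                if random.random() < 0.3:
                    child[0] = max(2, child[0] + random.choice([-1, 1]))
                if random.random() < 0.3:
                    child[1] = max(1, child[1] + random.choice([-2, 2]))
                children.append(tuple(child))
            pop = parents + children
        print("best genome:", max(pop, key=fitness))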

  5. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile user. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take the context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on the mobile user with the algorithm: we can classify the mobile user into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also get some rules about the mobile user. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm we propose in this paper has higher accuracy and greater simplicity. PMID:24688389

  6. Routes Countries Can Take To Achieve Full Ownership Of Immunization Programs.

    PubMed

    McQuestion, Michael; Carlson, Andrew; Dari, Khongorzul; Gnawali, Devendra; Kamara, Clifford; Mambu-Ma-Disu, Helene; Mbwanque, Jonas; Kizza, Diana; Silver, Dana; Paatashvili, Eka

    2016-02-01

    A goal of the Global Vaccine Action Plan, led by the World Health Organization, is country ownership by 2020, defined here as the point when a country fully finances its routine immunization program with domestic resources. This article reports the progress made toward country ownership in twenty-two lower- and lower-middle-income countries engaged in the Sabin Vaccine Institute's Sustainable Immunization Financing Program. We focus on new practices developed in the key public institutions concerned with immunization financing, budget and resource tracking, and legislation, using case studies as examples. Our analysis found that many countries are undertaking new funding mechanisms to reach financing goals. However, budget transparency remains a problem, as only eleven of the twenty-two countries have performed sequential analyses of their immunization program budgets. Promisingly, six countries (Cameroon, the Republic of the Congo, Nepal, Nigeria, Senegal, and Uganda) are creating new national immunization funding sources that are backed by legislation. Seven countries already have laws regarding immunization, and new immunization legislative projects are under way in thirteen others. PMID:26858379

  7. Developing competencies and training to enable senior nurses to take on full responsibility for DNACPR processes.

    PubMed

    Booth, Michele; Courtnell, Tracey

    2012-04-01

    There is currently great interest and activity around the development of do not attempt cardiopulmonary resuscitation (DNACPR) policies in health and social care. This paper describes how South Central Strategic Health Authority (SHA) in the UK underwent a process of agreeing a competency framework and devising an accompanying training course to enable senior nurses to be decision makers and signatories for DNACPR forms. The competencies that were agreed are presented, along with an exploration of the benefits of nurses completing DNACPR forms, including a costing of apparent financial benefits. With the restructuring of SHAs on the horizon it is important to share practice development in order to avoid duplication of effort. PMID:22584390

  8. Intelligent decision support algorithm for distribution system restoration.

    PubMed

    Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod

    2016-01-01

    The distribution system is the means of revenue for an electric utility. It needs to be restored as quickly as possible if any feeder or the complete system trips out due to a fault or any other cause. Further, uncertainty in the loads results in variations in the distribution network's parameters. Thus, an intelligent algorithm incorporating hybrid fuzzy-grey relation, which can take the uncertainties into account and compare the sequences, is discussed to analyse and restore the distribution system. The simulation studies are carried out to show the utility of the method by ranking the restoration plans for a typical distribution system. This algorithm also meets the smart grid requirements in terms of an automated restoration plan for partial/full blackout of the network. PMID:27512634
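
    The grey-relation half of the hybrid can be sketched in a few lines of Python: candidate restoration plans are ranked by their grey relational grade against an ideal reference plan. The criteria values, equal weights, and distinguishing coefficient below are illustrative assumptions; the fuzzy part of the method is omitted.

        import numpy as np

        # rows: candidate restoration plans; columns: criteria oriented so that
        # larger is better (e.g. load restored, switching margin, 1/time)
        plans = np.array([[0.9, 0.6, 0.7],
                          [0.7, 0.9, 0.5],
                          [0.8, 0.7, 0.9]])

        norm = (plans - plans.min(0)) / np.ptp(plans, axis=0)  # criteria assumed non-constant
        ref = norm.max(0)                                      # ideal reference sequence
        delta = np.abs(norm - ref)
        rho = 0.5                                              # distinguishing coefficient
        grc = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        scores = grc.mean(1)                                   # equal criterion weights
        print("ranking, best first:", np.argsort(-scores))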

  9. Take Your Leadership Role Seriously.

    ERIC Educational Resources Information Center

    School Administrator, 1986

    1986-01-01

    The principal authors of a new book, "Profiling Excellence in America's Schools," state that leadership is the single most important element for effective schools. The generic skills of leaders are flexibility, autonomy, risk taking, innovation, and commitment. Exceptional principals and teachers take their leadership and management roles…

  10. Taking Over a Broken Program

    ERIC Educational Resources Information Center

    Grabowski, Carl

    2008-01-01

    Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…

  11. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
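
    The bang-off-bang parameterization reduces the evasion design to a small search, which a toy 1-D version makes concrete. This Python sketch is only an illustration: the real algorithm works in 3-D with precomputed look-up tables, and the closing geometry, keep-out radius, and thrust level here are invented.

        import numpy as np

        # another craft closes along x; we displace laterally in y with a
        # bang-off-bang profile (full thrust out, coast, full thrust to stop)
        x0, v_close, R, a_max = 2000.0, 10.0, 50.0, 0.05   # m, m/s, m, m/s^2
        t_ca = x0 / v_close                 # time of closest approach along x

        def lateral_offset(t_burn, t):
            # offset at time t for equal burn and brake legs of length t_burn
            t1, t2 = t_burn, t_ca - t_burn  # brake starts so drift stops by t_ca
            if t < t1:
                return 0.5 * a_max * t**2
            if t < t2:
                return 0.5 * a_max * t1**2 + a_max * t1 * (t - t1)
            dt = min(t, t_ca) - t2
            return (0.5 * a_max * t1**2 + a_max * t1 * (t2 - t1)
                    + a_max * t1 * dt - 0.5 * a_max * dt**2)

        # search the one-parameter family for the cheapest burn meeting the miss
        for t_burn in np.linspace(0.5, t_ca / 2, 400):
            if lateral_offset(t_burn, t_ca) >= R:
                print(f"burn {t_burn:.1f} s each way, "
                      f"delta-v {2 * a_max * t_burn:.2f} m/s")
                break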

  12. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  13. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
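
    The serial baseline that these parallel algorithms restructure is the classic stack algorithm: a single pass yields the stack distance of every reference, and hence hit counts for every cache size at once. A minimal list-based Python sketch:

        def stack_distances(trace):
            # distance of a reference = depth of its tag in the LRU stack, so a
            # reference hits in ANY cache of size C exactly when distance < C
            stack, dists = [], []
            for tag in trace:
                if tag in stack:
                    d = stack.index(tag)
                    stack.pop(d)
                else:
                    d = float("inf")         # cold miss in every cache size
                dists.append(d)
                stack.insert(0, tag)         # tag becomes most recently used
            return dists

        dists = stack_distances(["a", "b", "c", "a", "b", "a", "d", "c"])
        for C in (1, 2, 3):
            print(C, sum(d < C for d in dists), "hits")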

  14. Spatial search algorithms on Hanoi networks

    NASA Astrophysics Data System (ADS)

    Marquezino, Franklin de Lima; Portugal, Renato; Boettcher, Stefan

    2013-01-01

    We use the abstract search algorithm and its extension due to Tulsi to analyze a spatial quantum search algorithm that finds a marked vertex in Hanoi networks of degree 4 faster than classical algorithms. We also analyze the effect of using non-Groverian coins that take advantage of the small-world structure of the Hanoi networks. We obtain the scaling of the total cost of the algorithm as a function of the number of vertices. We show that Tulsi's technique plays an important role in speeding up the search algorithm. We can improve the algorithm's efficiency by choosing a non-Groverian coin if we do not implement Tulsi's method. Our conclusions are based on numerical implementations.

  15. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
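
    The reduction-object idea can be sketched as an interface that hides the locking strategy from the mining loop. The Python class below is an illustration only: the strategy names follow the abstract, but the hash-based lock pool and the demo workload are assumptions.

        import threading
        from collections import defaultdict

        class ReductionObject:
            # all updates go through accumulate(), so the runtime can swap
            # locking strategies without touching the mining code:
            #   "full"  = one lock per reduction element
            #   "fixed" = a small shared pool of locks
            def __init__(self, strategy="full", pool_size=16):
                self.counts = defaultdict(int)
                self.strategy = strategy
                self.pool = [threading.Lock() for _ in range(pool_size)]
                self.locks = defaultdict(threading.Lock)  # lazily created (sketch)

            def _lock_for(self, key):
                if self.strategy == "full":
                    return self.locks[key]
                return self.pool[hash(key) % len(self.pool)]

            def accumulate(self, key, value=1):
                with self._lock_for(key):
                    self.counts[key] += value

        ro = ReductionObject(strategy="fixed")
        items = [("milk",), ("milk", "bread"), ("bread",)] * 1000

        def worker(chunk):
            for it in chunk:                 # e.g. itemset counting
                ro.accumulate(it)

        threads = [threading.Thread(target=worker, args=(items[i::4],)) for i in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(ro.counts[("milk",)])          # 1000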

  16. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  17. Taking medicines to treat tuberculosis

    MedlinePlus

    ... drugs. This is called directly observed therapy. Side Effects and Other Problems Women who may be pregnant, who are pregnant, or who are breastfeeding should talk to their provider before taking these ...

  18. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...

  19. LRO Takes the Moon's Temperature

    NASA Video Gallery

    During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...

  20. Brazilian physicists take centre stage

    NASA Astrophysics Data System (ADS)

    Curtis, Susan

    2014-06-01

    With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.

  1. Taking America To New Heights

    NASA Video Gallery

    NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...

  2. Numerical Simulations of Light Bullets, Using The Full Vector, Time Dependent, Nonlinear Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)

    1995-01-01

    This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
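
    A heavily stripped-down 1-D sketch of this style of computation, assuming normalized units and an instantaneous Kerr term only (no Raman response, no vector fields): H and D are advanced from Maxwell's curl equations, then E is recovered from D = E + chi3*E^3 by fixed-point iteration. All sizes and the source are illustrative.

        import numpy as np

        nz, nt, chi3 = 2000, 1500, 0.02
        E = np.zeros(nz); H = np.zeros(nz - 1); D = np.zeros(nz)
        c = 0.5                                   # Courant number dt/dz

        for n in range(nt):
            H += c * np.diff(E)                   # update H from curl E
            D[1:-1] += c * np.diff(H)             # update D from curl H
            D[100] += np.exp(-((n - 90) / 30.0) ** 2)   # soft Gaussian source
            for _ in range(3):                    # solve E + chi3*E^3 = D
                E = D / (1.0 + chi3 * E * E)      # fixed-point iteration
        print("peak field:", np.abs(E).max())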

  3. Numerical Simulations of Light Bullets, Using the Full Vector, Time Dependent, Nonlinear Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)

    1994-01-01

    This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.

  5. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
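
    For reference, the Durbin recursion at the heart of the Levinson algorithm fits in a few lines of Python; this sketch handles the Yule-Walker right-hand side only (the general-RHS Levinson solver, and Bareiss's elimination-style variant, are longer) and checks itself against a dense solve.

        import numpy as np
        from scipy.linalg import toeplitz

        def durbin(r):
            # solves T y = -r in O(n^2) flops, where T is symmetric positive
            # definite Toeplitz with first column (1, r[0], ..., r[n-2])
            n = len(r)
            y = np.zeros(n)
            y[0] = -r[0]
            alpha, beta = -r[0], 1.0
            for k in range(n - 1):
                beta *= (1.0 - alpha * alpha)
                alpha = -(r[k + 1] + r[k::-1] @ y[:k + 1]) / beta
                y[:k + 1] += alpha * y[k::-1]
                y[k + 1] = alpha
            return y

        r = np.array([0.5, 0.2, 0.1, 0.05])
        T = toeplitz(np.r_[1.0, r[:-1]])
        print(np.allclose(T @ durbin(r), -r))     # compare with a dense solve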

  6. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms, which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
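
    For context, the algorithm the tool visualizes fits in a dozen lines; a standard heap-based Python sketch (the graph and weights are invented):

        import heapq

        def dijkstra(graph, src):
            # pop the closest unsettled vertex, relax its outgoing edges
            dist = {src: 0}
            pq = [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue                 # stale queue entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist

        graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)]}
        print(dijkstra(graph, "a"))          # {'a': 0, 'b': 2, 'c': 3, 'd': 4}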

  7. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regard to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information theory based image metrics are presented and compared to perception test results. The results show that an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.

  8. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) there is a need for event detection algorithms that can scale with the size of data; (ii) a need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) a need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop the parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
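
    Step (a)'s window-to-scalar reduction can be sketched as follows in Python; the subspace-angle score, window length, stride, and z-score detector are illustrative stand-ins for the incremental SVD/tensor machinery described above.

        import numpy as np

        def svd_change_scores(X, w=32):
            # one score per window: angle between the dominant right singular
            # vectors of successive windows (0 means the subspace is unchanged)
            scores, prev = [], None
            for s in range(0, X.shape[0] - w + 1, w):
                _, _, vt = np.linalg.svd(X[s:s + w], full_matrices=False)
                lead = vt[0]
                if prev is not None:
                    scores.append(1.0 - abs(prev @ lead))
                prev = lead
            return np.array(scores)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(512, 8))
        X[300:, 3] += 3.0                        # inject a shift in one channel
        s = svd_change_scores(X)
        flags = s > s.mean() + 3 * s.std()       # simple z-score style detector
        print(np.flatnonzero(flags))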

  9. A Resampling Based Clustering Algorithm for Replicated Gene Expression Data.

    PubMed

    Li, Han; Li, Chun; Hu, Jie; Fan, Xiaodan

    2015-01-01

    In gene expression data analysis, clustering is a fruitful exploratory technique to reveal the underlying molecular mechanism by identifying groups of co-expressed genes. To reduce the noise, usually multiple experimental replicates are performed. An integrative analysis of the full replicate data, instead of reducing the data to the mean profile, carries the promise of yielding more precise and robust clusters. In this paper, we propose a novel resampling based clustering algorithm for genes with replicated expression measurements. Assuming those replicates are exchangeable, we formulate the problem in the bootstrap framework, and aim to infer the consensus clustering based on the bootstrap samples of replicates. In our approach, we adopt the mixed effect model to accommodate the heterogeneous variances and implement a quasi-MCMC algorithm to conduct statistical inference. Experiments demonstrate that by taking advantage of the full replicate data, our algorithm produces more reliable clusters and has robust performance in diverse scenarios, especially when the data is subject to multiple sources of variance. PMID:26671802
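
    A simplified sketch of the resampling idea, substituting k-means plus hierarchical consensus for the paper's mixed-effect model and quasi-MCMC inference; the data are synthetic, all sizes are illustrative, and scikit-learn and SciPy are assumed to be available.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        genes, reps, times, k = 60, 4, 10, 3
        base = np.repeat(rng.normal(size=(k, times)), genes // k, axis=0)
        data = base[:, None, :] + rng.normal(scale=0.5, size=(genes, reps, times))

        co = np.zeros((genes, genes))
        B = 50
        for _ in range(B):                   # bootstrap the exchangeable replicates
            idx = rng.integers(0, reps, size=reps)
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
                data[:, idx, :].mean(axis=1))
            co += labels[:, None] == labels[None, :]
        co /= B                              # co-clustering frequency matrix

        # consensus clusters: treat 1 - co as a distance and cut the dendrogram
        cons = fcluster(linkage(squareform(1.0 - co, checks=False), "average"),
                        k, "maxclust")
        print(np.bincount(cons)[1:])         # cluster sizes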

  10. Take Steps to Prevent Type 2 Diabetes

    MedlinePlus

    ... I at Risk? 4 of 9 sections Take Action! Take Action: Talk to Your Doctor Take these steps to ... Previous section Signs 5 of 9 sections Take Action: Cost and Insurance What about cost? Thanks to ...

  11. Fever and Taking Your Child's Temperature

    MedlinePlus

    ... About Zika & Pregnancy Fever and Taking Your Child's Temperature KidsHealth > For Parents > Fever and Taking Your Child's ... a mercury thermometer.) previous continue Tips for Taking Temperatures As any parent knows, taking a squirming child's ...

  12. Sorting on STAR. [CDC computer algorithm timing comparison]

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
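
    Batcher's network can be sketched compactly; each inner pass below is one data-independent compare-exchange over the whole array, which is what makes it map so well onto vector hardware. The NumPy vectorization is illustrative (the array length must be a power of two), and the comparison count matches the N(log N)-squared complexity noted above.

        import numpy as np

        def bitonic_sort(a):
            a = a.copy(); n = len(a)         # n must be a power of two
            k = 2
            while k <= n:
                j = k // 2
                while j > 0:
                    i = np.arange(n)
                    l = i ^ j                # partner lane for this pass
                    mask = l > i
                    up = ((i & k) == 0)[mask]    # ascending or descending block
                    x, y = a[i[mask]], a[l[mask]]
                    swap = np.where(up, x > y, x < y)
                    a[i[mask]] = np.where(swap, y, x)
                    a[l[mask]] = np.where(swap, x, y)
                    j //= 2
                k *= 2
            return a

        v = np.random.default_rng(2).integers(0, 100, 16)
        s = bitonic_sort(v)
        print(s, bool(np.all(s == np.sort(v))))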

  13. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the growing network model have been studied using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of correlation length. The dynamic exponent of the cluster algorithm is found to be zero and therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the growing network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.

  14. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  15. Simultaneous stabilization using genetic algorithms

    SciTech Connect

    Benson, R.W.; Schmitendorf, W.E. (Dept. of Mechanical Engineering)

    1991-01-01

    This paper considers the problem of simultaneously stabilizing a set of plants using full state feedback. The problem is converted to a simple optimization problem which is solved by a genetic algorithm. Several examples demonstrate the utility of this method. 14 refs., 8 figs.
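
    A minimal sketch of the idea follows: a toy genetic algorithm searching for one state-feedback gain K that simultaneously stabilizes several plants, scoring candidates by the worst spectral abscissa of A_i - B_i K over all plants. The plants, encoding, and operators are illustrative assumptions; the paper's specifics may differ.

```python
# Toy GA for simultaneous stabilization by a single state-feedback gain K.
import numpy as np

rng = np.random.default_rng(0)
# Two open-loop-unstable toy plants (A_i, B_i); the paper's examples differ.
plants = [
    (np.array([[0.0, 1.0], [2.0, -1.0]]), np.array([[0.0], [1.0]])),
    (np.array([[0.0, 1.0], [3.0, 0.5]]), np.array([[0.0], [1.0]])),
]

def fitness(K):
    # Worst-case spectral abscissa over all plants; negative means
    # K simultaneously stabilizes every plant.
    return max(np.linalg.eigvals(A - B @ K).real.max() for A, B in plants)

pop = rng.normal(0.0, 5.0, size=(40, 1, 2))       # population of 1x2 gains
for _ in range(100):
    order = np.argsort([fitness(K) for K in pop])
    parents = pop[order[:10]]                     # truncation selection
    children = [parents[0].copy()]                # elitism
    while len(children) < len(pop):
        pa, pb = parents[rng.integers(10)], parents[rng.integers(10)]
        children.append(0.5 * (pa + pb) + rng.normal(0.0, 0.5, (1, 2)))
    pop = np.array(children)

best = min(pop, key=fitness)
print("K =", best, " worst spectral abscissa =", fitness(best))
```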

  16. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
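
    For reference, this is a minimal sketch of the conventional simulated-annealing loop described above (not the RBSA variant itself): random perturbation, Metropolis acceptance, geometric cooling, and a shrinking selection region. The objective and schedule constants are illustrative.

```python
# Minimal conventional simulated annealing on a toy 1-D objective.
import math
import random

def objective(x):
    return x * x + 10.0 * math.sin(3.0 * x)   # toy multimodal objective

x = random.uniform(-10.0, 10.0)    # random starting configuration
fx = objective(x)
temp, scale = 10.0, 5.0
for step in range(20000):
    cand = x + random.uniform(-scale, scale)
    fc = objective(cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
        x, fx = cand, fc
    temp *= 0.9995                     # lower the annealing temperature
    scale = max(0.01, scale * 0.9995)  # shrink the selection region
print(x, fx)
```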

  17. A Full Front End Chain for Drift Chambers

    NASA Astrophysics Data System (ADS)

    Chiarello, G.; Corvaglia, A.; Grancagnolo, F.; Panareo, M.; Pepino, A.; Primiceri, P.; Tassielli, G.

    2014-03-01

    We developed a high-performance full chain for drift chamber signal processing. The front-end electronics is a multistage amplifier board based on high-performance commercial devices. In addition, a fast readout algorithm for Cluster Counting and Timing purposes has been implemented on a Xilinx Virtex-4 FPGA. The algorithm analyzes and stores data coming from a helium-based drift tube and represents the outcome of balancing efficiency against high-speed performance.

  18. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  19. College Presidents Take on 21

    ERIC Educational Resources Information Center

    Fain, Paul

    2008-01-01

    College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…

  20. Synthesis Can Take Many Forms

    ERIC Educational Resources Information Center

    Darrow, Rob

    2005-01-01

    Synthesis can take many forms at the high school level and from a Big6 perspective. Synthesis means purposeful, valuable and interesting assignments. It is very important for a classroom teacher to recognize that students can synthesize information several times during a project and that there are many different ways to present information.

  1. Taking Stock and Standing down

    ERIC Educational Resources Information Center

    Peeler, Tom

    2009-01-01

    Standing down is an action the military takes to review, regroup, and reorganize. Unfortunately, it often comes after an accident or other tragic event. To stop losses, the military will "stand down" until they are confident they can resume safe operations. Standing down is good for everyone, not just the military. In today's fast-paced world,…

  2. Taking your carotid pulse (image)

    MedlinePlus

    The carotid arteries take oxygenated blood from the heart to the brain. The pulse from the carotids may be felt on either side of the front of the neck just below the angle of the jaw. This rhythmic "beat" is caused by varying volumes of blood being pushed out of the heart ...

  3. Aspiring Teachers Take up Residence

    ERIC Educational Resources Information Center

    Honawar, Vaishall

    2008-01-01

    The Boston Teacher Residency program is a yearlong, selective preparation route that trains aspiring teachers, many of them career-changers, to take on jobs in some of the city's highest-needs schools. The program, which fits neither of the two most common types of teacher preparation--alternative routes and traditional teacher education…

  4. Pair take top science posts

    NASA Astrophysics Data System (ADS)

    Pockley, Peter

    2008-11-01

    Australia's science minister Kim Carr has appointed physical scientists to key posts. Penny Sackett, an astronomer, takes over as the government's chief scientist this month, while in January geologist Megan Clark will become chief executive of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the country's largest research agency. Both five-year appointments have been welcomed by researchers.

  5. Taking Stands for Social Justice

    ERIC Educational Resources Information Center

    Lindley, Lorinda; Rios, Francisco

    2004-01-01

    In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of our work. The focus is on specific pedagogical strategies used…

  6. Four Takes on Tough Times

    ERIC Educational Resources Information Center

    Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.

    2012-01-01

    Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…

  7. Intuitive Risk Taking during Adolescence

    ERIC Educational Resources Information Center

    Holland, James D.; Klaczynski, Paul A.

    2009-01-01

    Adolescents frequently engage in risky behaviors that endanger both themselves and others. Critical to the development of effective interventions is an understanding of the processes adolescents go through when deciding to take risks. This article explores two information processing systems; a slow, deliberative, analytic system and a quick,…

  8. Professionalism: Teachers Taking the Reins

    ERIC Educational Resources Information Center

    Helterbran, Valeri R.

    2008-01-01

    It is essential that teachers take a proactive look at their profession and themselves to strengthen areas of professionalism over which they have control. In this article, the author suggests strategies that include collaborative planning, reflectivity, growth in the profession, and the examination of certain personal characteristics.

  9. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  10. When perspective taking increases taking: reactive egoism in social interaction.

    PubMed

    Epley, Nicholas; Caruso, Eugene; Bazerman, Max H

    2006-11-01

    Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members' thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others' thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others' perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects. PMID:17059307

  11. Full Duplex, Spread Spectrum Radio System

    NASA Technical Reports Server (NTRS)

    Harvey, Bruce A.

    2000-01-01

    The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
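
    The packet-timing arithmetic in the abstract can be checked directly; this small sketch just reproduces those numbers.

```python
# Worked packet-timing arithmetic from the description above.
samples_per_packet = 16
sample_rate = 8000                 # voice samples per second
packet_ms = 1000.0 * samples_per_packet / sample_rate
print(packet_ms)                   # 2.0 ms of voice per packet
# Full-loop voice feedback spans roughly two to three packet times:
print(2 * packet_ms, "to", 3 * packet_ms, "ms")   # 4.0 to 6.0 ms
```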

  12. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  13. Concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    In order to overcome the slow convergence rate and large steady-state mean square error of the constant modulus algorithm (CMA), a concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals is proposed, which makes full use of the fact that high-order QAM constellation points lie on several distinct moduli. The algorithm uses CMA as the base mode and the multi-modulus algorithm as the second mode, and the two modes operate concurrently. The efficiency of the method is demonstrated by computer simulations in underwater acoustic channels.
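
    A hedged sketch of the concurrent update follows: a standard CMA error term and a per-rail multi-modulus error term are formed from the same equalizer output, and both drive the tap update. The toy channel, constellation, and step sizes are assumptions for illustration.

```python
# Concurrent CMA + multi-modulus (MMA) tap update for 16-QAM (sketch).
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([-3, -1, 1, 3], dtype=float)
syms = (rng.choice(levels, 4000) + 1j * rng.choice(levels, 4000)) / np.sqrt(10)
chan = np.array([1.0, 0.25 + 0.1j, 0.1])          # toy multipath channel
x = np.convolve(syms, chan)[: len(syms)]
x += 0.01 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

ntap = 11
w = np.zeros(ntap, complex); w[ntap // 2] = 1.0   # center-spike initialization
R_cma = np.mean(np.abs(syms) ** 4) / np.mean(np.abs(syms) ** 2)
R_mma = np.mean(syms.real ** 4) / np.mean(syms.real ** 2)
mu_c, mu_m = 1e-3, 1e-3

for n in range(ntap, len(x)):
    u = x[n - ntap: n][::-1]                      # regressor (most recent first)
    y = w.conj() @ u                              # equalizer output
    e_cma = y * (R_cma - abs(y) ** 2)             # constant-modulus error
    e_mma = (y.real * (R_mma - y.real ** 2)       # per-rail multi-modulus error
             + 1j * y.imag * (R_mma - y.imag ** 2))
    w += (mu_c * e_cma + mu_m * e_mma).conj() * u # concurrent update
print(np.round(np.abs(w), 3))                     # converged tap magnitudes
```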

  14. Sleep Deprivation and Advice Taking.

    PubMed

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by - more or less qualified - advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) ×2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants. PMID:27109507

  15. Sleep Deprivation and Advice Taking

    PubMed Central

    Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie

    2016-01-01

    Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) ×2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants. PMID:27109507

  16. Full-Scale Tests of NACA Cowlings

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Brevoort, M J; Stickle, George W

    1937-01-01

    A comprehensive investigation has been carried on with full-scale models in the NACA 20-foot wind tunnel, the general purpose of which is to furnish information in regard to the physical functioning of the composite propeller-nacelle unit under all conditions of take-off, taxiing, and normal flight. This report deals exclusively with the cowling characteristics under condition of normal flight and includes the results of tests of numerous combinations of more than a dozen nose cowlings, about a dozen skirts, two propellers, two sizes of nacelle, as well as various types of spinners and other devices.

  17. Unifying parametrized VLSI Jacobi algorithms and architectures

    NASA Astrophysics Data System (ADS)

    Deprettere, Ed F. A.; Moonen, Marc

    1993-11-01

    Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a 'universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.

  18. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  19. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  20. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  1. An improved Camshift algorithm for target recognition

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Mao, Yusu

    2015-12-01

    The Camshift algorithm and the three-frame difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error, and it computes the color histogram only at initialization, so the color probability model cannot be updated continuously. The three-frame difference method, on the other hand, does not require manual initialization of a search window and can make full use of the target's motion information to determine its range of motion, but it cannot determine the contours of the object and makes no use of its color information. Therefore, an improved Camshift algorithm is proposed to overcome the disadvantages of the original: the three-frame difference operation combines the object's motion information and color information to identify the target. The improved Camshift algorithm has been implemented and shows better performance in recognition and tracking of the target.
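
    A minimal sketch of the three-frame difference step, which can seed the Camshift search window automatically instead of requiring manual initialization, is given below; the threshold is an illustrative assumption.

```python
# Three-frame-difference motion mask; frames are grayscale arrays of equal shape.
import numpy as np

def three_frame_mask(f_prev, f_curr, f_next, thresh=15):
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16))
    # A pixel counts as "moving" only if it changed in both consecutive
    # differences, which suppresses the ghosting of a single difference.
    return ((d1 > thresh) & (d2 > thresh)).astype(np.uint8)

def bounding_box(mask):
    # The box can seed the Camshift search window.
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
```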

  2. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms, which in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
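
    As an illustration of block-parallel hashing in general (this is not the SANDstorm construction, whose mode and compression function differ), a two-level tree hash over fixed-size blocks parallelizes trivially, as in this Python sketch using SHA-256 from the standard library.

```python
# Generic two-level tree hash: hash blocks in parallel, then hash the digests.
import hashlib
from concurrent.futures import ProcessPoolExecutor

BLOCK = 1 << 20  # 1 MiB blocks

def _leaf(args):
    index, chunk = args
    return index, hashlib.sha256(chunk).digest()

def tree_hash(data: bytes, workers: int = 4) -> bytes:
    chunks = [(i, data[off:off + BLOCK])
              for i, off in enumerate(range(0, len(data), BLOCK))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        leaves = sorted(pool.map(_leaf, chunks))   # keep block order stable
    top = hashlib.sha256()
    for _, digest in leaves:
        top.update(digest)
    return top.digest()

if __name__ == "__main__":
    print(tree_hash(b"x" * (8 * BLOCK)).hex())
```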

  3. On Approximate Factorization Schemes for Solving the Full Potential Equation

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1997-01-01

    An approximate factorization scheme based on the AF2 algorithm is presented for solving the three-dimensional full potential equation for the transonic flow about isolated wings. Two spatial discretization variations are presented, one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The present algorithm utilizes a C-H grid topology to map the flow field about the wing. One version of the AF2 iteration scheme is used on the upper wing surface and another slightly modified version is used on the lower surface. These two algorithm variations are then connected at the wing leading edge using a local iteration technique. The resulting scheme has improved linear stability characteristics and improved time-like damping characteristics relative to previous implementations of the AF2 algorithm. The presentation is highlighted with a grid refinement study and a number of numerical results.

  4. Risk taking among diabetic clients.

    PubMed

    Joseph, D H; Schwartz-Barcott, D; Patterson, B

    1992-01-01

    Diabetic clients must make daily decisions about their health care needs. Observational and anecdotal evidence suggests that vast differences exist between the kinds of choices diabetic clients make and the kinds of chances they are willing to take. The purpose of this investigation was to develop a diabetic risk-assessment tool. This instrument, which is based on subjective expected utility theory, measures risk-prone and risk-averse behavior. Initial findings from a pilot study of 18 women clients who are on insulin indicate that patterns of risk behavior exist in the areas of exercise, skin care, and diet. PMID:1729123

  5. Rover Takes a Sunday Drive

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation, made with images from the Mars Exploration Rover Spirit hazard-identification camera, shows the rover's perspective of its first post-egress drive on Mars Sunday. Engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack. The drive took approximately 30 minutes to complete, including time stopped to take images. Spirit first made a series of arcing turns totaling approximately 1 meter (3 feet). It then turned in place and made a series of short, straightforward movements totaling approximately 2 meters (6.5 feet).

  6. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. These contrast with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/ constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.

  7. Community detection based on modularity and an improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao

    2013-03-01

    Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which can simplify the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function, which can simplify the algorithm, and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA takes the simulated annealing method as the local search method, which can improve the ability of local search by adjusting the parameters. Compared with the state-of-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
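
    For reference, the modularity Q that MIGA maximizes is Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * delta(c_i, c_j); a direct Python evaluation over a toy graph is sketched below.

```python
# Direct evaluation of modularity Q for a given community partition.
import numpy as np

def modularity(A, communities):
    k = A.sum(axis=1)              # degrees
    two_m = k.sum()                # 2m (sum of all degrees)
    Q = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            if communities[i] == communities[j]:
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# Two triangles joined by one edge, split into their natural communities:
A = np.zeros((6, 6))
for u, v in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]:
    A[u, v] = A[v, u] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # about 0.357
```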

  8. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.

  9. Zero deadtime spectroscopy without full charge collection

    SciTech Connect

    Odell, D.M.C.; Bushart, B.S.; Harpring, L.J.; Moore, F.S.; Riley, T.N.

    1998-10-01

    The Savannah River Technology Center has built a remote gamma monitoring instrument which employs data sampling techniques rather than full charge collection to perform energy spectroscopy without instrument dead time. The raw, unamplified anode output of a photomultiplier tube is directly coupled to the instrument to generate many digital samples during the charge collection process, so that all pulse processing is done in the digital domain. The primary components are a free-running, 32 MSPS, 10-bit A/D, a field programmable gate array, FIFO buffers, and a digital signal processor (DSP). Algorithms for pulse integration, pile-up rejection, and other shape-based criteria are being developed in DSP code for migration into the gate array. Spectra taken with a two-inch NaI detector have been obtained at rates as high as 59,000 counts per second without dead time, with peak resolution at 662 keV measuring 7.3%.
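
    A hedged sketch of the sampled-pulse processing idea follows: threshold detection, fixed-window integration for energy, and a crude pile-up check, all on a free-running sample stream. The thresholds and window length are illustrative, not the instrument's actual parameters.

```python
# Digital pulse processing on a free-running ADC stream (illustrative sketch).
import numpy as np

def process(stream, baseline=0.0, thresh=20.0, window=8):
    energies, i = [], 0
    while i < len(stream) - window:
        if stream[i] > baseline + thresh:          # pulse start
            seg = stream[i:i + window]
            later = seg[np.argmax(seg) + 1:]       # samples after the peak
            # Crude pile-up rejection: a second fast rise inside the window.
            if not np.any(np.diff(later) > thresh / 2):
                energies.append(seg.sum() - window * baseline)
            i += window                            # keep streaming; no dead time
        else:
            i += 1
    return energies
```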

  10. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of proving correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the recurrence-equation representation of an algorithm. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  11. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  12. Take Care of Your Teeth and Gums

    MedlinePlus

    Brushing tips: follow these tips for a healthy mouth. Flossing tips: floss every day.

  13. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  14. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  15. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  16. Full-coverage film cooling

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.

    1980-01-01

    Program calculates coolant flow and wall temperatures of full-coverage film-cooled vanes or blades. Thermal barrier coatings may be specified on outer surfaces of blade. Program is written in FORTRAN IV for batch execution on UNIVAC 1100.

  17. Paraxial Full-Field Cloaking

    NASA Astrophysics Data System (ADS)

    Choi, Joseph; Howell, John

    2015-05-01

    Broadband, omnidirectional invisibility cloaking has been a goal of scientists since coordinate transformations were suggested for cloaking. The requirements for realizing such a cloak can be simplified by considering only the paraxial ('small-angle') regime. We recap the experimental demonstration of paraxial ray optics cloaking and theoretically complete its formalism by extending it to the full field of light. We then show how to build a full-field paraxial cloaking system.

  18. Optimal configuration algorithm of a satellite transponder

    NASA Astrophysics Data System (ADS)

    Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.

    2016-04-01

    This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is represented as a weighted directed graph, stored in the program as an interconnected structure. This paper considers an algorithm example for application with a typical transparent repeater scheme. In addition, the complexity of the algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports ranked in accordance with their priority. All described limitations allow a significant decrease in the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions quickly.

  19. SOM-based algorithms for qualitative variables.

    PubMed

    Cottrell, Marie; Ibbou, Smaïl; Letrémy, Patrick

    2004-01-01

    It is well known that the SOM algorithm achieves a clustering of data which can be interpreted as an extension of Principal Component Analysis, because of its topology-preserving property. But the SOM algorithm can only process real-valued data. In previous papers, we have proposed several methods based on the SOM algorithm to analyze categorical data, which is the case in survey data. In this paper, we present these methods in a unified manner. The first one (Kohonen Multiple Correspondence Analysis, KMCA) deals only with the modalities, while the two others (Kohonen Multiple Correspondence Analysis with individuals, KMCA_ind, Kohonen algorithm on DISJonctive table, KDISJ) can take into account the individuals, and the modalities simultaneously. PMID:15555858

  20. Taking charge: a personal responsibility.

    PubMed Central

    Newman, D M

    1987-01-01

    Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224

  1. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures, or partial solutions, can be generated by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  2. Algorithm validation using multicolor phantoms.

    PubMed

    Samarov, Daniel V; Clarke, Matthew L; Lee, Ji Youn; Allen, David W; Litorja, Maritoni; Hwang, Jeeseong

    2012-06-01

    We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well known statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundance fractions in a HSI scene, the "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The SPLASSO is a novel approach we introduce here for HSI analysis which takes the framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. In our work here we introduce the dye mixture platform as a new benchmark data set for hyperspectral biomedical image processing and show our algorithm's improvement over the standard LASSO. PMID:22741077
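
    A minimal sketch of the LASSO step for a single pixel follows, using scikit-learn's Lasso on synthetic endmember spectra; the estimated abundance vector comes out sparse, as the text describes. The SPLASSO's spatial coupling across neighboring pixels is omitted, and all data here are made up.

```python
# LASSO-based abundance estimation for one hyperspectral pixel (sketch):
# solve min ||y - E a||^2 + alpha * ||a||_1, where the columns of E are
# endmember spectra and a is the abundance vector.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
bands, n_end = 50, 4
E = np.abs(rng.normal(size=(bands, n_end)))      # endmember spectra (columns)
a_true = np.array([0.7, 0.0, 0.3, 0.0])          # sparse true abundances
y = E @ a_true + 0.01 * rng.normal(size=bands)   # measured pixel spectrum

model = Lasso(alpha=1e-3, positive=True, max_iter=10000)
model.fit(E, y)
print(np.round(model.coef_, 3))                  # sparse abundance estimate
```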

  3. Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks

    PubMed Central

    Zeng, Yali; Xu, Li; Chen, Zhide

    2015-01-01

    As wireless sensor network (WSN) is often deployed in a hostile environment, nodes in the networks are prone to large-scale failures, resulting in the network not working normally. In this case, an effective restoration scheme is needed to restore the faulty network timely. Most of existing restoration schemes consider more about the number of deployed nodes or fault tolerance alone, but fail to take into account the fact that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different aspects. F2CRA constructs the fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs the dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when the restoration cost is given the priority, and P3CRA is suitable when the network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault-tolerant function, larger coverage area and better balanced load after the restoration. PMID:26703616

  4. Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks.

    PubMed

    Zeng, Yali; Xu, Li; Chen, Zhide

    2015-01-01

    As wireless sensor network (WSN) is often deployed in a hostile environment, nodes in the networks are prone to large-scale failures, resulting in the network not working normally. In this case, an effective restoration scheme is needed to restore the faulty network timely. Most of existing restoration schemes consider more about the number of deployed nodes or fault tolerance alone, but fail to take into account the fact that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different aspects. F2CRA constructs the fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs the dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when the restoration cost is given the priority, and P3CRA is suitable when the network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault-tolerant function, larger coverage area and better balanced load after the restoration. PMID:26703616

  5. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
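
    The problem shape can be sketched as bounded least squares: minimize the mismatch between the commanded total force/torque and the total actually produced, subject to actuator limits. The sketch below uses SciPy's lsq_linear as a stand-in solver; the report's method is a two-stage semidefinite program, and the allocation matrix and limits here are illustrative.

```python
# Actuation allocation as bounded least squares: find commands u minimizing
# ||M u - c|| subject to per-actuator limits, where M maps actuator outputs
# to total body force/torque.
import numpy as np
from scipy.optimize import lsq_linear

M = np.array([[1.0,  0.0, 0.5, -0.5],    # rows: net force/torque components
              [0.0,  1.0, 0.5,  0.5],
              [0.2, -0.2, 1.0,  1.0]])
c = np.array([0.4, -0.1, 0.8])           # commanded total force/torque
lo = np.array([0.0, 0.0, -1.0, -1.0])    # e.g., thrusters cannot push negative
hi = np.array([1.0, 1.0,  1.0,  1.0])

res = lsq_linear(M, c, bounds=(lo, hi))
print(res.x, np.linalg.norm(M @ res.x - c))
```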

  6. Full and partial gauge fixing

    SciTech Connect

    Shirzad, A.

    2007-08-15

    Gauge fixing may be done in different ways. We show that using the chain structure to describe a constrained system enables us to use either a full gauge, in which all gauged degrees of freedom are determined, or a partial gauge, in which some first class constraints remain as subsidiary conditions to be imposed on the solutions of the equations of motion. We also show that the number of constants of motion depends on the level in a constraint chain in which the gauge fixing condition is imposed. The relativistic point particle, electromagnetism, and the Polyakov string are discussed as examples and full or partial gauges are distinguished.

  7. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
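
    For orientation, the least-squares core of registration between matched 3-D point sets (here, synthetic points standing in for localized rod positions) can be sketched with the standard SVD (Procrustes) solution, as below; the paper's weighting scheme and rod parameterization are omitted.

```python
# Least-squares rigid registration between matched 3-D point sets via SVD.
import numpy as np

def register(P, Q):
    """Find R, t minimizing sum ||R @ p_i + t - q_i||^2 (P, Q are Nx3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(3)
P = rng.normal(size=(12, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = register(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```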

  8. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  9. Taking Care of Your Diabetes Means Taking Care of Your Heart

    MedlinePlus

    ENGLISH Taking Care of Your Diabetes Means Taking Care of Your Heart Diabetes and Heart Disease For people with diabetes, heart ... such as a heart attack or stroke. Taking care of your diabetes can also help you take ...

  10. Full Orchestra in Elementary School.

    ERIC Educational Resources Information Center

    Press, Doreen; Edman, Steve

    1997-01-01

    Contends that starting a full orchestra in elementary school allows a music program to be visible to the community and garner support for budgetary requests. Discusses the process for organizing an elementary orchestra and problems that orchestra directors may encounter. Includes a list of orchestra music for elementary musicians. (DSK)

  11. Step up to Full Inquiry

    ERIC Educational Resources Information Center

    Jensen, Jill; Kindem, Cathy

    2011-01-01

    Elementary students make great scientists. They are natural questioners and observers. Capitalizing on this natural curiosity and wonderment, the authors have developed a method of doing inquiry investigations with students that many teachers have found practical and user friendly. Their belief is that full inquiry lessons serve as a vital method…

  12. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  13. A Cross Unequal Clustering Routing Algorithm for Sensor Network

    NASA Astrophysics Data System (ADS)

    Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles

    2013-08-01

    In routing protocols for wireless sensor networks, the cluster size is generally fixed in clustering routing algorithms, which can easily lead to the "hot spot" problem. Furthermore, the majority of routing algorithms barely consider the problem of long-distance communication between adjacent cluster heads, which brings high energy consumption. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To remedy the defects of EEUC, the calculation of the competition radius takes both the node's position and its remaining energy into account, making the load of cluster heads more balanced. At the same time, nodes adjacent to a cluster are used to relay data, reducing the energy loss of cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network, and improve the network lifetime.
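
    A hedged sketch of an unequal competition radius in this spirit follows: the radius shrinks for nodes near the base station (unequal clustering) and also as a node's remaining energy drops, as the abstract describes. The weights and the particular blend are illustrative assumptions, not the paper's formula.

```python
# EEUC-style unequal competition radius with an added remaining-energy term.
def competition_radius(d_to_bs, d_min, d_max, energy, e_max,
                       r0=80.0, c1=0.5, c2=0.3):
    distance_term = (d_max - d_to_bs) / (d_max - d_min)   # near 1 close to BS
    energy_term = 1.0 - energy / e_max                    # grows as energy drains
    return r0 * (1.0 - c1 * distance_term - c2 * energy_term)

# A node close to the base station with 40% energy gets a small radius:
print(competition_radius(d_to_bs=60, d_min=50, d_max=200, energy=0.4, e_max=1.0))
```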

  14. Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON

    NASA Astrophysics Data System (ADS)

    Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min

    2016-06-01

    Orthogonal frequency division multiple access passive optical network (OFDMA-PON) is being considered as a promising solution for next generation PONs due to its high spectral efficiency and flexible bandwidth allocation scheme. In order to take full advantage of these merits of OFDMA-PON, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms that act on two different stages of the resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the stage of physical layer mapping. Targeting the global quality of service (QoS) of OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the stage of bandwidth allocation arbitration. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 sub-carriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest priority CoS under high load conditions.

  15. Take-all or nothing.

    PubMed

    Hernández-Restrepo, M; Groenewald, J Z; Elliott, M L; Canning, G; McMillan, V E; Crous, P W

    2016-01-01

    Take-all disease of Poaceae is caused by Gaeumannomyces graminis (Magnaporthaceae). Four varieties are recognised in G. graminis based on ascospore size, hyphopodial morphology and host preference. The aim of the present study was to clarify boundaries among species and varieties in Gaeumannomyces by combining morphology and multi-locus phylogenetic analyses based on partial gene sequences of ITS, LSU, tef1 and rpb1. Two new genera, Falciphoriella and Gaeumannomycella were subsequently introduced in Magnaporthaceae. The resulting phylogeny revealed several cryptic species previously overlooked within Gaeumannomyces. Isolates of Gaeumannomyces were distributed in four main clades, from which 19 species could be delimited, 12 of which were new to science. Our results show that the former varieties Gaeumannomyces graminis var. avenae and Gaeumannomyces graminis var. tritici represent species phylogenetically distinct from G. graminis, for which the new combinations G. avenae and G. tritici are introduced. Based on molecular data, morphology and host preferences, Gaeumannomyces graminis var. maydis is proposed as a synonym of G. radicicola. Furthermore, an epitype for Gaeumannomyces graminis var. avenae was designated to help stabilise the application of that name. PMID:27504028

  16. Microgravity Smoldering Combustion Takes Flight

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The Microgravity Smoldering Combustion (MSC) experiment lifted off aboard the Space Shuttle Endeavour in September 1995 on the STS-69 mission. This experiment is part of a series of studies focused on the smolder characteristics of porous, combustible materials in a microgravity environment. Smoldering is a nonflaming form of combustion that takes place in the interior of combustible materials. Common examples of smoldering are nonflaming embers, charcoal briquettes, and cigarettes. The objective of the study is to provide a better understanding of the controlling mechanisms of smoldering, both in microgravity and Earth gravity. As with other forms of combustion, gravity affects the availability of air and the transport of heat, and therefore, the rate of combustion. Results of the microgravity experiments will be compared with identical experiments carried out in Earth's gravity. They also will be used to verify present theories of smoldering combustion and will provide new insights into the process of smoldering combustion, enhancing our fundamental understanding of this frequently encountered combustion process and guiding improvement in fire safety practices.

  17. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  18. Full-F gyrofluid model

    SciTech Connect

    Madsen, Jens

    2013-07-15

    A global electromagnetic gyrofluid model based on the full-F gyrokinetic model is derived. The gyrofluid moment variables are not split into fluctuating and equilibrium parts. Profiles are evolved freely, and gyro-averaging operators are not parametrized, but are functions of the gyrofluid moment variables. The fluid moment hierarchy is closed by approximating the gyrokinetic distribution function as a finite order Hermite-Laguerre polynomial and by determining closure approximations for terms involving the gyrokinetic gyro-averaging operator. The model exactly conserves the gyrokinetic full-F energy invariant evaluated using the Hermite-Laguerre decomposition. The model is suited for qualitative studies of the interplay between turbulence, flows, and dynamically evolving profiles in magnetically confined plasmas.

  19. The Kepler Full Frame Images

    NASA Astrophysics Data System (ADS)

    Dotson, Jessie L.; Batalha, N.; Bryson, S.; Caldwell, D. A.; Clarke, B.; Haas, M. R.; Jenkins, J.; Kolodziejczak, J.; Quintana, E.; Van Cleve, J.; Kepler Team

    2010-01-01

    NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10^5 preselected stellar targets. The majority of the Kepler field, comprising 4 x 10^6 m_v < 20 sources, is sampled at much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler Team employ these images for astrometric and photometric reference but make the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely-available long-cadence legacy of photometric variation across a swathe of the Galactic disk. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.

  20. The Kepler Full Frame Images

    NASA Technical Reports Server (NTRS)

    Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.

    2010-01-01

    NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10(exp 5) preselected stellar targets. The majority of the Kepler field, comprising 4 x 10(exp 6) m_v < 20 sources, is sampled at much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler Team employ these images for astrometric and photometric reference but make the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely-available long-cadence legacy of photometric variation across a swathe of the Galactic disk.

  1. Efficient 2d full waveform inversion using Fortran coarray

    NASA Astrophysics Data System (ADS)

    Ryu, Donghyun; Kim, Ahreum; Ha, Wansoo

    2016-04-01

    We developed a time-domain seismic inversion program using the coarray feature of the Fortran 2008 standard to parallelize the algorithm. We converted a 2D acoustic parallel full waveform inversion program with Message Passing Interface (MPI) to a coarray program and examined the performance of the two inversion programs. The results show that the waveform inversion program using the coarray is slightly faster than the MPI version. The standard coarray lacks features for collective communication; however, since it was introduced only recently, it can be improved in future revisions of the standard. The parallel algorithm can be applied to 3D seismic data processing.

  2. Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE

    SciTech Connect

    Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.

    2014-02-18

    Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography in general is that in many practical situations full 360° angular access is not possible, and with limited access the quality of the reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in the image Fourier-space data obtained from diffracted-field measurements and couples them to an error minimization technique based on constrained total variation (CTV) minimization. Initial results from simulated data are presented here to validate the approach.
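
    As a rough illustration of the idea (not the paper's CTV formulation), a TV-regularized recovery from incomplete Fourier data, with a random sampling mask standing in for limited angular coverage:

      # TV-regularized recovery from incomplete Fourier samples, using a
      # smoothed total-variation penalty and plain gradient descent.
      import numpy as np

      def tv_grad(x, eps=1e-6):
          """Gradient of a smoothed (isotropic) total-variation penalty."""
          dx = np.diff(x, axis=1, append=x[:, -1:])
          dy = np.diff(x, axis=0, append=x[-1:, :])
          mag = np.sqrt(dx**2 + dy**2 + eps)
          px, py = dx / mag, dy / mag
          return -(np.diff(px, axis=1, prepend=px[:, :1]) +
                   np.diff(py, axis=0, prepend=py[:1, :]))

      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0   # simple phantom
      mask = rng.random(truth.shape) < 0.3                    # incomplete Fourier data
      data = mask * np.fft.fft2(truth)

      x = np.zeros_like(truth)
      for _ in range(300):                                    # gradient descent
          resid = mask * (np.fft.fft2(x) - data)
          grad = np.real(np.fft.ifft2(resid)) + 0.05 * tv_grad(x)
          x -= 0.5 * grad
      print("relative recon error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))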

  3. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  4. A prescription of Winograd's discrete Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1979-01-01

    A detailed and complete description of Winograd's discrete Fourier transform (DFT) algorithm is presented, omitting all proofs and derivations. The algorithm begins with the transfer of data from the input vector array to a working array, where the actual transformation takes place; these data movements are known as input scrambling and output unscrambling. A third array holds constants required in the transformation stage, which are evaluated in a precomputation stage. The algorithm is made up of several FORTRAN subroutines that should not be confused with a practical software implementation, since they are designed for clarity rather than speed.

  5. Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements

    SciTech Connect

    Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel

    2010-11-10

    In this work, we have developed an algorithm different from the classical ones of phase-shifting interferometry. The classical algorithms typically use constant or homogeneous phase displacements, and they can be quite accurate and insensitive to detuning when appropriate weight factors are taken in the formula that recovers the wrapped phase. However, these algorithms have not been considered with variable or inhomogeneous displacements. We have generalized these formulas, obtaining expressions for an implementation with variable displacements, together with ways to obtain algorithms partially insensitive to these arbitrary error shifts.
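
    A minimal least-squares sketch for known but arbitrary (inhomogeneous) shifts, one standard way to handle nonconstant displacements (not necessarily the authors' formulas): per pixel, I_k = a + b*cos(phi + delta_k) is linear in (a, b*cos(phi), b*sin(phi)), so the wrapped phase follows from a linear solve:

      # Least-squares phase retrieval with arbitrary known shifts delta_k.
      # Model: I_k = a + c1*cos(delta_k) - c2*sin(delta_k),
      # with c1 = b*cos(phi), c2 = b*sin(phi); then phi = atan2(c2, c1).
      import numpy as np

      def recover_phase(frames, deltas):
          """frames: (K, H, W) interferograms; deltas: (K,) known shifts."""
          K = len(deltas)
          A = np.column_stack([np.ones(K), np.cos(deltas), -np.sin(deltas)])
          coef, *_ = np.linalg.lstsq(A, frames.reshape(K, -1), rcond=None)
          a, c1, c2 = coef
          return np.arctan2(c2, c1).reshape(frames.shape[1:])

      # Synthetic test with nonuniform shifts
      rng = np.random.default_rng(1)
      yy, xx = np.mgrid[0:32, 0:32]
      phi_true = 0.2 * xx + 0.1 * yy
      deltas = np.sort(rng.uniform(0, 2 * np.pi, 5))
      frames = np.stack([5 + 2 * np.cos(phi_true + d) for d in deltas])
      phi = recover_phase(frames, deltas)
      print(np.allclose(np.angle(np.exp(1j * (phi - phi_true))), 0, atol=1e-6))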

  6. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
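
    A small dynamic-programming checker in the spirit of the record (not the authors' generated code): working backwards through the trace, only the next step's subformula values are kept, so time is linear in the trace and memory is constant:

      # Finite-trace LTL checking by backward dynamic programming.
      def holds(formula, trace):
          nxt = {}                               # subformula values at step i+1
          cur = {}
          for i in range(len(trace) - 1, -1, -1):
              cur = {}
              for sub in subformulas(formula):   # children before parents
                  kind = sub[0]
                  if kind == "prop":
                      cur[sub] = sub[1] in trace[i]
                  elif kind == "not":
                      cur[sub] = not cur[sub[1]]
                  elif kind == "and":
                      cur[sub] = cur[sub[1]] and cur[sub[2]]
                  elif kind == "next":
                      cur[sub] = nxt.get(sub[1], False)
                  elif kind == "until":          # phi U psi (finite semantics)
                      cur[sub] = cur[sub[2]] or (cur[sub[1]] and nxt.get(sub, False))
              nxt = cur
          return cur[formula]

      def subformulas(f):
          """Postorder list, so children are evaluated before parents."""
          out = []
          if f[0] in ("not", "next"):
              out += subformulas(f[1])
          elif f[0] in ("and", "until"):
              out += subformulas(f[1]) + subformulas(f[2])
          out.append(f)
          return out

      # "p until q" on traces given as lists of event sets
      f = ("until", ("prop", "p"), ("prop", "q"))
      print(holds(f, [{"p"}, {"p"}, {"q"}]))   # True
      print(holds(f, [{"p"}, {"p"}, set()]))   # False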

  7. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
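
    A minimal veto-style sampler (illustrative, not the paper's formalism): the next emission scale for a rate f(t) is drawn from an analytically invertible overestimate g >= f, and candidates are vetoed with probability 1 - f/g; with a constant overestimate this is Poisson thinning:

      # Veto algorithm with a constant overestimate g >= f on the domain.
      import math, random

      def next_emission(t_start, f, g_const, t_cutoff):
          """Evolve downward from t_start; return accepted scale or None."""
          t = t_start
          while t > t_cutoff:
              # Solve exp(-g*(t - t_new)) = R  =>  t_new = t + ln(R)/g
              t = t + math.log(random.random()) / g_const
              if t <= t_cutoff:
                  return None
              if random.random() < f(t) / g_const:   # veto step
                  return t
          return None

      # Example: falling rate f(t) = 1/(1+t) <= 1, overestimated by g = 1
      random.seed(4)
      samples = [next_emission(10.0, lambda t: 1.0 / (1.0 + t), 1.0, 0.1)
                 for _ in range(5)]
      print(samples)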

  8. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1931-01-01

    Wing and nacelle set-up in Full-Scale Tunnel (FST). The NACA conducted drag tests in 1931 on a P3M-1 nacelle which were presented in a special report to the Navy. Smith DeFrance described this work in the report's introduction: 'Tests were conducted in the full-scale wind tunnel on a five to four geared Pratt and Whitney Wasp engine mounted in a P3M-1 nacelle. In order to simulate the flight conditions the nacelle was assembled on a 15-foot span of wing from the same airplane. The purpose of the tests was to improve the cooling of the engine and to reduce the drag of the nacelle combination. Thermocouples were installed at various points on the cylinders and temperature readings were obtained from these by the power plants division. These results will be reported in a memorandum by that division. The drag results, which are covered by this memorandum, were obtained with the original nacelle condition as received from the Navy with the tail of the nacelle modified, with the nose section of the nacelle modified, with a Curtiss anti-drag ring attached to the engine, with a Type G ring developed by the N.A.C.A., and with a Type D cowling which was also developed by the N.A.C.A.' (p. 1)

  9. Information filtering via weighted heat conduction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
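
    A compact sketch of the plain HC scoring on a 0/1 user-object adjacency matrix; the weighted variant of the record would replace the adjacency with edge weights such as ratings (names here are illustrative):

      # Heat conduction recommender: objects -> users -> objects averaging.
      import numpy as np

      def hc_scores(A, user):
          """A: (users, objects) adjacency; returns scores for one user."""
          k_user = np.maximum(A.sum(axis=1), 1)        # user degrees
          k_obj = np.maximum(A.sum(axis=0), 1)         # object degrees
          user_temp = (A * A[user]).sum(axis=1) / k_user
          scores = (A * user_temp[:, None]).sum(axis=0) / k_obj
          scores[A[user] > 0] = -np.inf                # do not re-recommend
          return scores

      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]], dtype=float)
      print(hc_scores(A, user=0))                      # scores for objects 2 and 3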

  10. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1929-01-01

    Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project

  11. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  12. Full-field vibrometry with digital Fresnel holography

    SciTech Connect

    Leval, Julien; Picart, Pascal; Boileau, Jean Pierre; Pascal, Jean Claude

    2005-09-20

    A setup that permits full-field vibration amplitude and phase retrieval with digital Fresnel holography is presented. Full reconstruction of the vibration is achieved with a three-step stroboscopic holographic recording, and an extraction algorithm is proposed. The finite temporal width of the illuminating light is considered in an investigation of the distortion of the measured amplitude and phase. In particular, a theoretical analysis is proposed and compared with numerical simulations that show good agreement. Experimental results are presented for a loudspeaker under sinusoidal excitation; the mean quadratic velocity extracted from amplitude evaluation under two different measuring conditions is presented. Comparison with time averaging validates the full-field vibrometer.

  13. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
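
    A toy version of the shift-and-mask idea described above (assumed details): search for a right shift and a contiguous bit mask under which all keys map to distinct values, giving collision-free constant-time membership tests:

      # Brute-force search for a (shift, mask) pair that perfectly hashes
      # a static key set; lookups then need no collision handling.
      def find_shift_mask(keys, max_shift=32, max_bits=16):
          for shift in range(max_shift):
              for bits in range(1, max_bits + 1):
                  mask = (1 << bits) - 1
                  hashed = {(k >> shift) & mask for k in keys}
                  if len(hashed) == len(keys):        # unique => perfect hash
                      return shift, mask
          return None

      keys = [1024, 2051, 4102, 8205, 16408]
      shift, mask = find_shift_mask(keys)
      table = {(k >> shift) & mask: k for k in keys}
      print(shift, bin(mask), table)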

  14. Modified Cholesky factorizations in interior-point algorithms for linear programming.

    SciTech Connect

    Wright, S.; Mathematics and Computer Science

    1999-01-01

    We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
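
    A common fallback of this kind, sketched for illustration (not the paper's exact routine): when a pivot becomes negligible or negative during factorization, it is replaced by a very large value, which effectively zeroes the corresponding step component:

      # Modified Cholesky with the large-pivot replacement used in many
      # interior-point codes; illustrative, dense, and unoptimized.
      import numpy as np

      def modified_cholesky(A, tiny=1e-12, big=1e64):
          A = A.astype(float)
          n = A.shape[0]
          L = np.zeros_like(A)
          for j in range(n):
              d = A[j, j] - L[j, :j] @ L[j, :j]
              if d <= tiny * abs(A[j, j]) or d <= 0:
                  d = big                    # skip this pivot direction
              L[j, j] = np.sqrt(d)
              for i in range(j + 1, n):
                  L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
          return L

      A = np.array([[4.0, 2.0, 0.0],
                    [2.0, 1.0, 3.0],     # second pivot cancels to ~0
                    [0.0, 3.0, 5.0]])
      print(np.round(modified_cholesky(A), 3))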

  15. Full-bridge capacitive extensometer

    NASA Astrophysics Data System (ADS)

    Peters, Randall D.

    1993-08-01

    Capacitive transducers have proven to be very effective sensors of small displacements, because of inherent stability and noninvasive high resolution. The most versatile ones have been those of a differential type, in which two elements are altered in opposite directions in response to change of the system parameter being monitored. Oftentimes, this differential pair has been incorporated into a bridge circuit, which is a useful means for employing synchronous detection to improve signal-to-noise ratios. Unlike previous differential capacitive dilatometers which used only two active capacitors, the present sensor is a full-bridge type, which is well suited to measuring low-level thermal expansions. This analog sensor is capable of 0.1 μm resolution anywhere within a range of several centimeters, with a linearity of 0.1%. Its user-friendly output can be put on a strip-chart recorder or directed to a computer for sophisticated data analysis.

  16. A full-scale STOVL ejector experiment

    NASA Technical Reports Server (NTRS)

    Barankiewicz, Wendy S.

    1993-01-01

    The design and development of thrust augmenting short take-off and vertical landing (STOVL) ejectors has typically been an iterative process. In this investigation, static performance tests of a full-scale vertical lift ejector were performed at primary flow temperatures up to 1560 R (1100 F). Flow visualization (smoke generators, yarn tufts and paint dots) was used to assess inlet flowfield characteristics, especially around the primary nozzle and end plates. Performance calculations are presented for ambient temperatures close to 480 R (20 F) and 535 R (75 F) which simulate 'seasonal' aircraft operating conditions. Resulting thrust augmentation ratios are presented as functions of nozzle pressure ratio and temperature. Full-scale experimental tests such as this are expensive, and difficult to implement at engine exhaust temperatures. For this reason the utility of using similarity principles -- in particular, the Munk and Prim similarity principle for isentropic flow -- was explored. At different primary temperatures, exit pressure contours are compared for similarity. A nondimensional flow parameter is then shown to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, then properly chosen performance parameters should be similar for both hot flow and cold flow model tests.

  17. Full-color holographic 3D printer

    NASA Astrophysics Data System (ADS)

    Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio

    2003-05-01

    A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelengths of red (λ=633nm), green (λ=533nm), and blue (λ=442nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing, in a 2D array, the multiple exposures made with these 3 wavelengths on each 250mm elementary hologram, moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take a digital processing approach based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation was found for single and multiple exposures of the three components. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.

  18. Integrated powerhead demonstration full flow cycle development

    NASA Astrophysics Data System (ADS)

    Jones, J. Mathew; Nichols, James T.; Sack, William F.; Boyce, William D.; Hayes, William A.

    1998-01-01

    The Integrated Powerhead Demonstration (IPD) is a 1,112,000 N (250,000 lbf) thrust (at sea level) LOX/LH2 demonstration of a full flow cycle in an integrated system configuration. Aerojet and Rocketdyne are on contract to the Air Force Research Laboratory to design, develop, and deliver the required components, and to provide test support to accomplish the demonstration. Rocketdyne is on contract to provide a fuel and oxygen turbopump, a gas-gas injector, and system engineering and integration. Aerojet is on contract to provide a fuel and oxygen preburner, a main combustion chamber, and a nozzle. The IPD components are being designed with Military Spaceplane (MSP) performance and operability requirements in mind. These requirements include: lifetime >=200 missions, mean time between overhauls >=100 cycles, and a capability to throttle from 20% to 100% of full power. These requirements bring new challenges both in designing and testing the components. This paper will provide some insight into these issues. Lessons learned from operating and supporting the space shuttle main engine (SSME) have been reviewed and incorporated where applicable. The IPD program will demonstrate phase I goals of the Integrated High Payoff Rocket Propulsion Technology (IHPRPT) program while demonstrating key propulsion technologies that will be available for MSP concepts. The demonstration will take place on Test Stand 2A at the Air Force Research Laboratory at Edwards AFB. The component tests will begin in 1999 and the integrated system tests will be completed in 2002.

  19. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).

  20. Full-Scale Tunnel (FST)

    NASA Technical Reports Server (NTRS)

    1930-01-01

    Construction of Full-Scale Tunnel (FST): 120-Foot Truss hoisting, one and two point suspension. In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)

  1. Take Care of Your Child's Teeth

    MedlinePlus

    ... Take Action: Use Fluoride. Protect your child’s teeth with fluoride. Fluoride is ... Take Action: Fluoride Supplements. Ask for supplements if your water doesn' ...

  2. Take-off of heavily loaded airplanes

    NASA Technical Reports Server (NTRS)

    Proll, A

    1928-01-01

    In the present article, several suggestions will be made for shortening the otherwise long take-off distance. For the numerical verification of the process, I will use a graphic method for determining the take-off distance of seaplanes.

  3. Men: Take Charge of Your Health

    MedlinePlus

    ... charge of your health. Make small changes every day. Small changes can add up to big results – ... screening . Ask your doctor about taking aspirin every day. If you are age 50 to 59, taking ...

  4. Guide for Patients Taking Nonsteroidal Immunosuppressive Drugs

    MedlinePlus

    ... taking adalimumab, etanercept, or infliximab: Check your temperature frequently, and report a fever to your physician ... Receptor Antagonists For patients taking basiliximab: Check your temperature frequently, and report a fever to your physician ...

  5. Taking medicines - what to ask your doctor

    MedlinePlus

    ... medicine you take. Know what medicines, vitamins, and herbal supplements you take. Make a list of your medicines ... Will this medicine change how any of my herbal or dietary supplements work? Ask if your new medicine interferes with ...

  6. Prostate Cancer: Take Time to Decide

    MedlinePlus

    ... Prostate Cancer: Take Time to Decide Infographic Language: English Español (Spanish) Most prostate cancers grow slowly, and ...

  7. Caregivers and Exercise -- Take Time for Yourself

    MedlinePlus

    ... nia.nih.gov/Go4Life Caregivers and Exercise—Take Time for Yourself Taking care of yourself is one ... you can do as a caregiver. Finding some time for regular exercise can be very important to ...

  8. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  9. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  10. Aerodynamics of a beetle in take-off flights

    NASA Astrophysics Data System (ADS)

    Lee, Boogeon; Park, Hyungmin; Kim, Sun-Tae

    2015-11-01

    In the present study, we investigate the aerodynamics of a beetle in its take-off flights based on the three-dimensional kinematics of the inner wing (hindwing) and outer wing (elytron), and body postures, which are measured with three high-speed cameras at 2000 fps. To track the highly deformable wing motions, we distribute 21 morphological markers and use the modified direct linear transformation (DLT) algorithm for the reconstruction of the measured wing motions. To realize different take-off conditions, we consider two types of take-off flights: one from flat ground and the other from a vertical rod mimicking a branch of a tree. It is first found that the elytron, which flaps passively due to the motion of the hindwing, also has non-negligible wing-kinematic parameters. With the ground, the flapping amplitude of the elytron is reduced and the hindwing changes its flapping angular velocity during the up- and downstrokes. On the other hand, the angle of attack on the elytron and the hindwing respectively increases and decreases due to the ground. These changes in the wing motion are critically related to the aerodynamic force generation, which will be discussed in detail. Supported by the grant to Bio-Mimetic Robot Research Center funded by Defense Acquisition Program Administration (UD130070ID).
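
    The reconstruction step rests on the direct linear transformation; a minimal DLT triangulation sketch (the paper's modification is not given in the record):

      # DLT triangulation: each calibrated view contributes two linear
      # equations; the 3D point is the null vector of the stacked system.
      import numpy as np

      def triangulate(projections, points_2d):
          """projections: list of 3x4 camera matrices; points_2d: (u, v) pairs."""
          rows = []
          for P, (u, v) in zip(projections, points_2d):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          _, _, vt = np.linalg.svd(np.asarray(rows))
          X = vt[-1]
          return X[:3] / X[3]                    # dehomogenize

      # Synthetic check with three cameras observing a known 3D point
      rng = np.random.default_rng(2)
      X_true = np.array([0.3, -0.2, 2.0, 1.0])
      cams = [np.hstack([np.eye(3), rng.normal(scale=0.1, size=(3, 1))])
              for _ in range(3)]
      pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in cams]
      print(np.round(triangulate(cams, pts), 6))   # ~ [0.3, -0.2, 2.0]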

  11. 50 CFR 216.11 - Prohibited taking.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...

  12. 50 CFR 216.11 - Prohibited taking.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...

  13. 50 CFR 216.11 - Prohibited taking.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...

  14. 50 CFR 216.11 - Prohibited taking.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...

  15. 50 CFR 216.11 - Prohibited taking.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...

  16. Teenagers and Risk-Taking at Camp.

    ERIC Educational Resources Information Center

    Woods, Ann

    2002-01-01

    Teen risk-taking is normal, healthy developmental behavior. Teens act out their fantasies--good and bad--at camp because it is a safe place away from parents. Signs of unhealthy risk-taking, camp staff responses, and how the September 11 tragedy might affect risk-taking are discussed. Sidebars describe tips for understanding adolescent behavior…

  17. Optimized hardware and software for fast full-chip simulation

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Lu, Yen-Wen; Chen, Luoqi; Ye, Jun

    2005-05-01

    Lithography simulation is an increasingly important part of semiconductor manufacturing due to the decreasing k1 value. It is not only required in lithography process development, but also in RET design, RET verification, and process latitude analysis, from library cells to full-chip. As the design complexity grows exponentially, pure software based simulation tools running on general-purpose computer clusters are facing increasing challenges in meeting today's requirements for cycle time, coverage, and modeling accuracy. We have developed a new lithography simulation platform (Tachyon™) which achieves orders of magnitude speedup as compared to traditional pure software simulation tools. The platform combines innovations in all levels of the system: algorithm, software architecture, cluster-level architecture, and proprietary acceleration hardware using application specific integrated circuits. The algorithm approach is based on image processing, fundamentally different from conventional edge-based analysis. The system achieves superior model accuracy than conventional full-chip simulation methods, owing to its ability to handle hundreds of TCC kernels, using either vector or scalar optical model, without impacting throughput. Thus first-principle aerial image simulation at the full-chip level can be carried out within minutes. We will describe the hardware, algorithms and models used in the system and demonstrate its application to full-chip verification.
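
    A toy sum-of-coherent-systems (SOCS) sketch of TCC-kernel imaging, the standard image-processing formulation such systems build on (kernels here are random stand-ins, not ones derived from a real illumination and pupil):

      # SOCS aerial image: weighted incoherent sum of coherent convolutions.
      import numpy as np

      def aerial_image(mask, kernels, weights):
          M = np.fft.fft2(mask)
          img = np.zeros(mask.shape)
          for h, s in zip(kernels, weights):
              field = np.fft.ifft2(M * np.fft.fft2(h, s=mask.shape))
              img += s * np.abs(field) ** 2     # |mask convolved with h_k|^2
          return img

      rng = np.random.default_rng(3)
      mask = np.zeros((64, 64)); mask[28:36, 16:48] = 1.0    # a simple line
      kernels = [rng.normal(size=(7, 7)) for _ in range(4)]  # stand-in kernels
      weights = [1.0, 0.5, 0.25, 0.125]
      print(aerial_image(mask, kernels, weights).max())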

  18. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 μm over 10 m. The first field integral must be below 40 x 10^-6 Tm. The second field integral must be below 50 x 10^-6 Tm^2. The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 x 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.

  19. Modeling a magnetostrictive transducer using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Almeida, L. A. L.; Deep, G. S.; Lima, A. M. N.; Neff, H.

    2001-05-01

    This work reports on the applicability of the genetic algorithm (GA) to the problem of parameter determination of magnetostrictive transducers. A combination of the Jiles-Atherton hysteresis model with a quadratic moment rotation model is simulated using known parameters of a sensor. The simulated sensor data are then used as input data for the GA parameter calculation method. Taking the previously known parameters, the accuracy of the GA parameter calculation method can be evaluated.
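
    A bare-bones real-coded GA for parameter fitting in the same spirit (the Jiles-Atherton model is replaced by a simple damped oscillation; all names and settings are illustrative assumptions):

      # Genetic algorithm fitting model parameters to simulated sensor data.
      import numpy as np

      rng = np.random.default_rng(5)

      def model(params, t):
          amp, decay, freq = params
          return amp * np.exp(-decay * t) * np.sin(freq * t)

      t = np.linspace(0, 5, 200)
      true = np.array([2.0, 0.7, 3.0])
      data = model(true, t)                              # "measured" data

      def fitness(p):
          return -np.mean((model(p, t) - data) ** 2)     # higher is better

      pop = rng.uniform([0, 0, 0], [5, 2, 6], size=(40, 3))
      for gen in range(200):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)][-20:]        # truncation selection
          children = 0.5 * parents[rng.integers(0, 20, 20)] + \
                     0.5 * parents[rng.integers(0, 20, 20)]   # blend crossover
          children += rng.normal(0, 0.05, children.shape)     # Gaussian mutation
          pop = np.vstack([parents, children])
      print(pop[np.argmax([fitness(p) for p in pop])])   # ~ [2.0, 0.7, 3.0]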

  20. Full L.A. treatment

    SciTech Connect

    Wahbeh, V.N.; Clark, J.H.; Naydo, W.R.; Horii, R.S.

    1993-09-01

    The high-purity-oxygen activated sludge process will be used to expand secondary treatment capacity and improve water quality in Santa Monica Bay. The facility is operated by the city of Los Angeles Department of Public Works' Bureau of Sanitation. The overall Hyperion Full Secondary Project is 30% complete, including a new headworks, a new primary clarifier battery, an electrical switch yard, and additional support facilities. The upgrading of secondary facilities is 50% complete, and construction of the digester facilities, the waste-activated sludge thickening facility, and the second phase of the three-phase modification to existing primary clarifier batteries has just begun. The expansion program will provide a maximum monthly design capacity of 19,723 L/s (450 mgd). Hyperion's expansion program uses industrial treatment techniques rarely attempted in a municipal facility, particularly on such a large scale, including: a user-friendly intermediate pumping station featuring 3.8-m Archimedes screw pumps with a capacity of 5479 L/s each; space-efficient, high-purity-oxygen reactors; a one-of-a-kind, 777-Mg/d oxygen-generating facility incorporating several innovative features that not only save money and energy, but reduce noise; design improvements in 36 new final clarifiers to enhance settling and provide high effluent quality; and egg-shaped digesters to respond to technical and aesthetic design parameters.

  1. Full-Scale Wind Tunnel

    NASA Technical Reports Server (NTRS)

    1931-01-01

    Construction of Full-Scale Tunnel (FST) balance. Smith DeFrance described the 6-component type balance in NACA TR No. 459 (which also includes a schematic diagram of the balance and its various parts). 'Ball and socket fittings at the top of each of the struts hold the axles of the airplane to be tested; the tail is attached to the triangular frame. These struts are secured to the turntable, which is attached to the floating frame. This frame rests on the struts (next to the concrete piers on all four corners), which transmit the lift forces to the scales (partially visible on the left). The drag linkage is attached to the floating frame on the center line and, working against a known counterweight, transmits the drag force to the scale (center, face out). The cross-wind force linkages are attached to the floating frame on the front and rear sides at the center line. These linkages, working against known counterweights, transmit the cross-wind force to scales (two front scales, face in). In the above manner the forces in three directions are measured and by combining the forces and the proper lever arms, the pitching, rolling, and yawing moments can be computed. The scales are of the dial type and are provided with solenoid-operated printing devices. When the proper test condition is obtained, a push-button switch is momentarily closed and the readings on all seven scales are recorded simultaneously, eliminating the possibility of personal errors.'

  2. Full Stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2011-10-01

    Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, i.e., including the fourth parameter S3, which is related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter, such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP), and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a division-of-time polarimeter using a custom ferroelectric liquid crystal cell. A description of the method used to calculate the Data Reduction Matrix (DRM) [5,9] linking intensity measurements and the Stokes parameters is given. The calibration was developed in order to maximize the condition number of the DRM. It also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of the standard polarization parameters is described. We further present the standard features of the dedicated software that was developed to operate the camera. It provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted are presented, including indoor laboratory and outdoor measurements. This new camera will be a useful tool for many applications such as biomedical imaging, remote sensing, metrology, and material studies.
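
    A small data-reduction sketch of the underlying algebra (the camera's actual analyzer states and calibrated DRM are not given in the record): with known analysis states W, intensities obey I = W s, and the Stokes vector and derived parameters follow from a pseudoinverse:

      # Stokes vector recovery from intensity measurements, plus DOLP/DOCP/AOP.
      import numpy as np

      W = np.array([[1,  1,  0,  0],     # four illustrative analyzer states
                    [1, -1,  0,  0],
                    [1,  0,  1,  0],
                    [1,  0,  0,  1]], dtype=float) / 2

      s_true = np.array([1.0, 0.3, 0.2, 0.4])
      I = W @ s_true                      # simulated detector intensities
      s = np.linalg.pinv(W) @ I           # data reduction matrix applied

      s0, s1, s2, s3 = s
      print("DOLP:", np.hypot(s1, s2) / s0)
      print("DOCP:", abs(s3) / s0)
      print("AOP :", 0.5 * np.arctan2(s2, s1))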

  3. Inertial Upper Stage navigation algorithms evaluation

    NASA Astrophysics Data System (ADS)

    Joldersma, T.; Winkel, D. J.; Goodstein, R.; Simmons, E. J., Jr.

    The Inertial Upper Stage is a Space Shuttle-deployed vehicle taking payloads from low earth orbit to geosynchronous and other orbits, and incorporates a redundant inertial measurement unit containing five gyros and five accelerometers in a strapped down, skewed orientation. The gyro and accelerometer outputs are provided to redundant, on board digital computers to conduct sensor motion compensation, failure detection and isolation, and navigation in an earth-centered inertial coordinate system. Two representations of the flight software algorithms are under evaluation in preparation for the first payload-carrying flight: a FORTRAN nonreal time version for a scientific computer, and a JOVIAL version compiled for the flight computer. Results to date on nominal and off-nominal simulation runs are meeting navigation algorithm error allocations and generating correct responses to sensor error simulations for the redundancy algorithms.

  4. Conjugate gradient algorithms using multiple recursions

    SciTech Connect

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.

  5. An MPEG-21-driven multimedia adaptation decision-taking engine based on constraint satisfaction problem

    NASA Astrophysics Data System (ADS)

    Feng, Xiao; Tang, Rui-chun; Zhai, Yi-li; Feng, Yu-qing; Hong, Bo-hai

    2013-07-01

    Multimedia adaptation decision-taking techniques based on context are considered, and a Constraint-Satisfaction-Problem-Based Content Adaptation Algorithm (CBCAA) is proposed. First, the algorithm obtains and classifies context information using MPEG-21; then it builds a constraint model according to the different types of context information, and a constraint satisfaction method is used to acquire the Media Description Decision Set (MDDS); finally, a bit-stream adaptation engine performs the multimedia transcoding. Simulation results show that the presented algorithm offers an efficient solution for personalized multimedia adaptation in heterogeneous environments.
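
    A toy illustration of the decision-taking step (assumed details): candidate media versions are filtered by hard device and network constraints, and the best remaining version is chosen by a simple utility:

      # Constraint filtering over candidate media versions, then selection.
      candidates = [
          {"width": 1920, "bitrate_kbps": 4000, "codec": "h264"},
          {"width": 1280, "bitrate_kbps": 2000, "codec": "h264"},
          {"width":  640, "bitrate_kbps":  600, "codec": "h263"},
      ]
      context = {"max_width": 1366, "bandwidth_kbps": 2500, "codecs": {"h264"}}

      def satisfies(c, ctx):
          return (c["width"] <= ctx["max_width"] and
                  c["bitrate_kbps"] <= ctx["bandwidth_kbps"] and
                  c["codec"] in ctx["codecs"])

      feasible = [c for c in candidates if satisfies(c, context)]  # the "MDDS"
      best = max(feasible, key=lambda c: c["bitrate_kbps"])        # utility: quality
      print(best)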

  6. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
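
    A compact diamond-square sketch (the standard algorithm; parameters are illustrative): corner seeds control the large-scale shape, and the roughness factor controls how quickly detail decays:

      # Diamond-square heightmap generation on a (2**n + 1)-sided grid.
      import numpy as np

      def diamond_square(n, roughness=0.6, seed=7):
          rng = np.random.default_rng(seed)
          size = 2**n + 1
          h = np.zeros((size, size))
          h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.normal(size=4)  # seeds
          step, scale = size - 1, 1.0
          while step > 1:
              half = step // 2
              # diamond step: square centers from the four corners
              for y in range(half, size, step):
                  for x in range(half, size, step):
                      avg = (h[y-half, x-half] + h[y-half, x+half] +
                             h[y+half, x-half] + h[y+half, x+half]) / 4
                      h[y, x] = avg + rng.normal() * scale
              # square step: edge midpoints from their diamond neighbours
              for y in range(0, size, half):
                  for x in range((y + half) % step, size, step):
                      nbrs = [h[y+dy, x+dx] for dy, dx in
                              ((-half, 0), (half, 0), (0, -half), (0, half))
                              if 0 <= y+dy < size and 0 <= x+dx < size]
                      h[y, x] = sum(nbrs) / len(nbrs) + rng.normal() * scale
              step, scale = half, scale * roughness
          return h

      print(diamond_square(4).shape)   # (17, 17)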

  7. 75 FR 6188 - Full-Service Community Schools

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ...The Secretary of Education proposes priorities, requirements, definitions, and selection criteria for the Full-Service Community Schools (FSCS) program. The Secretary may use these priorities, requirements, definitions, and selection criteria for competitions in fiscal year (FY) 2010 and later years. We take this action to focus Federal assistance on supporting collaboration among schools and......

  8. Primary Care Sports Medicine: A Full-Timer's Perspective.

    ERIC Educational Resources Information Center

    Moats, William E.

    1988-01-01

    This article describes the history and structure of a sports medicine facility, the patient care services it offers, and the types of injuries treated at the center. Opportunities and potentials for physicians who wish to enter the field of sports medicine on a full-time basis are described, as are steps to take to prepare to do so. (Author/JL)

  9. Accelerated ray tracing algorithm under urban macro cell

    NASA Astrophysics Data System (ADS)

    Liu, Z.-Y.; Guo, L.-X.; Guan, X.-W.

    2015-10-01

    In this study, a ray tracing propagation prediction model based on creating a virtual source tree is used because of its high efficiency and reliable prediction accuracy. In addition, several acceleration techniques are also adopted to improve the efficiency of ray-tracing-based prediction over large areas. However, in the process of employing the ray tracing method for coverage zone prediction, runtime is linearly proportional to the total number of prediction points, leading to large and sometimes prohibitive computation time requirements under complex geographical urban macrocell environments. In order to overcome this bottleneck, the compute unified device architecture (CUDA), which provides fine-grained data parallelism and thread parallelism, is implemented to accelerate the calculation. Taking full advantage of the tens of thousands of threads in a CUDA program, the coverage prediction problem is first decomposed by partitioning the image tree and the visible prediction points across different sources. Then, every thread calculates the electromagnetic field of one propagation path, and these results are collected. Comparing this parallel algorithm with the traditional sequential algorithm shows that the computational efficiency is greatly improved.

  10. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    Satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the Satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm, and avoids stagnation. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
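
    A small hybrid sketch in the record's spirit (assumed details): continuous DE vectors are thresholded to truth assignments, the number of unsatisfied clauses is minimized, and a greedy bit-flip hill climb refines each trial (the refined bits steer acceptance while the continuous vector is what survives):

      # Hybrid differential evolution + hill climbing for a toy CNF formula.
      import numpy as np

      clauses = [(1, -2, 3), (-1, 2), (2, -3), (-2, 3)]  # ints = vars, sign = polarity

      def unsat(bits):
          def lit(l):
              return bits[abs(l) - 1] if l > 0 else 1 - bits[abs(l) - 1]
          return sum(1 for c in clauses if not any(lit(l) for l in c))

      def hill_climb(bits):
          improved = True
          while improved:
              improved = False
              for i in range(len(bits)):
                  before = unsat(bits)
                  bits[i] ^= 1                  # try flipping one variable
                  if unsat(bits) < before:
                      improved = True           # keep the improving flip
                  else:
                      bits[i] ^= 1              # undo
          return bits

      rng = np.random.default_rng(6)
      n, NP, F, CR = 3, 12, 0.8, 0.9
      pop = rng.random((NP, n))                 # continuous DE population
      for _ in range(50):
          for i in range(NP):
              a, b, c = pop[rng.choice(NP, 3, replace=False)]
              trial = np.where(rng.random(n) < CR, a + F * (b - c), pop[i])
              t_bits = hill_climb([int(v > 0.5) for v in trial])
              if unsat(t_bits) <= unsat([int(v > 0.5) for v in pop[i]]):
                  pop[i] = trial
      best = min(([int(v > 0.5) for v in p] for p in pop), key=unsat)
      print(best, "unsatisfied clauses:", unsat(best))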

  11. Vectorizable multigrid algorithms for transonic-flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.

    1986-01-01

    The analysis and the incorporation into a multigrid scheme of several vectorizable algorithms are discussed. von Neumann analyses of vertical-line, horizontal-line, and alternating-direction ZEBRA algorithms were performed; and the results were used to predict their multigrid damping rates. The algorithms were then successfully implemented in a transonic conservative full-potential computer program. The convergence acceleration effect of multiple grids is shown, and the convergence rates of the vectorizable algorithms are compared with those of standard successive-line overrelaxation (SLOR) algorithms.

  12. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.

  13. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. In this way one qubit may be cooled repeatedly without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  14. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  15. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  16. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment under city traffic conditions, taking traffic lights into account. The algorithm is designed for a programmable logic controller (PLC), which is proposed to be installed in an electric vehicle to control its motion speed, taking traffic-light signals into account. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. The experimental results show high precision of public transport schedule fulfilment using the proposed algorithm.

  17. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea

  18. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
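
    The ERIC record does not reproduce the article's keystroke sequence, but one classic square-root-only scheme fits the description: iterate y <- sqrt(sqrt(x*y)), whose fixed point is x^(1/3) for positive x, and which needs only one multiply and two square-root presses per pass. The sketch below is offered as an assumption about the general approach, not the article's exact algorithm.

      import math

      def cbrt_via_sqrt(x, iters=20):
          # Fixed point of y = (x*y)**0.25 is y = x**(1/3); the error
          # shrinks by roughly a factor of 4 per iteration for x > 0.
          y = 1.0
          for _ in range(iters):
              y = math.sqrt(math.sqrt(x * y))
          return y

      print(cbrt_via_sqrt(27.0))  # ~3.0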

  19. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425

  20. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  1. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  2. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  3. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  4. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  5. Quantum algorithms and the finite element method

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley; Pallister, Sam

    2016-03-01

    The finite element method is used to approximately solve boundary value problems for differential equations. The method discretizes the parameter space and finds an approximate solution by solving a large system of linear equations. Here we investigate the extent to which the finite element method can be accelerated using an efficient quantum algorithm for solving linear equations. We consider the representative general question of approximately computing a linear functional of the solution to a boundary value problem and compare the quantum algorithm's theoretical performance with that of a standard classical algorithm—the conjugate gradient method. Prior work claimed that the quantum algorithm could be exponentially faster but did not determine the overall classical and quantum run times required to achieve a predetermined solution accuracy. Taking this into account, we find that the quantum algorithm can achieve a polynomial speedup, the extent of which grows with the dimension of the partial differential equation. In addition, we give evidence that no improvement of the quantum algorithm can lead to a superpolynomial speedup when the dimension is fixed and the solution satisfies certain smoothness properties.

  6. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391

  7. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first one the algorithm operates a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second one it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of the Periodic Sun-Synchronous Orbits.

  8. A full variational calculation based on a tensor product decomposition

    NASA Astrophysics Data System (ADS)

    Senese, Frederick A.; Beattie, Christopher A.; Schug, John C.; Viers, Jimmy W.; Watson, Layne T.

    1989-08-01

    A new direct full variational approach exploits a tensor (Kronecker) product decomposition of the Hamiltonian. Explicit assembly and storage of the Hamiltonian matrix is avoided by using the Kronecker product structure to form matrix-vector products directly from the molecular integrals. Computation-intensive integral transformations and formula tapes are unnecessary. The wavefunction is expanded in terms of spin-free primitive kets rather than Slater determinants or configuration state functions, and the expansion is equivalent to a full configuration interaction expansion. The approach suggests compact storage schemes and algorithms which are naturally suited to parallel and pipelined machines.
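
    The central trick described here, applying a Kronecker-product operator to a vector without assembling the full matrix, can be sketched in a few lines; A and B below are generic stand-ins for Hamiltonian factors, and the shapes are illustrative.

      import numpy as np

      def kron_matvec(A, B, v):
          # (A kron B) @ v == vec(A @ V @ B.T) with v = vec(V) in row-major
          # order, so the large Kronecker matrix is never formed or stored.
          V = v.reshape(A.shape[1], B.shape[1])
          return (A @ V @ B.T).reshape(-1)

      rng = np.random.default_rng(0)
      A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
      v = rng.standard_normal(12)
      assert np.allclose(np.kron(A, B) @ v, kron_matvec(A, B, v))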

  9. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
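
    The velocity-alignment ("viscous friction") term the paper highlights can be sketched as follows; the neighbourhood radius and gain are illustrative, and the real model also includes communication delay, sensor noise and inertial limits.

      import numpy as np

      def friction_term(velocities, positions, radius=10.0, c_frict=0.5):
          # Nudge each agent's velocity toward the mean velocity of its
          # neighbours; this damps velocity differences and helps suppress
          # oscillations caused by noise and communication delay.
          accel = np.zeros_like(velocities)
          for i in range(len(positions)):
              dist = np.linalg.norm(positions - positions[i], axis=1)
              mask = (dist > 0) & (dist < radius)  # local communication only
              if mask.any():
                  accel[i] = c_frict * (velocities[mask].mean(axis=0) - velocities[i])
          return accel

      pos = np.array([[0.0, 0.0], [1.0, 0.0]])
      vel = np.array([[1.0, 0.0], [0.0, 1.0]])
      print(friction_term(vel, pos))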

  10. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, with images that can be less than one or two pixels wide, detecting them early enough for the pilot to take evasive action is difficult. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However it is desirable to have a large number of training examples especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning

  11. 77 FR 64961 - Taking and Importing Marine Mammals; Taking Marine Mammals Incidental to Replacement of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... National Oceanic and Atmospheric Administration RIN 0648-BC69 Taking and Importing Marine Mammals; Taking... of 5-year regulations governing the incidental taking of marine mammals and inviting information... Commerce to allow, upon request, the incidental, but not intentional, taking of small numbers of...

  12. Computational and methodological developments towards 3D full waveform inversion

    NASA Astrophysics Data System (ADS)

    Etienne, V.; Virieux, J.; Hu, G.; Jia, Y.; Operto, S.

    2010-12-01

    Full waveform inversion (FWI) is one of the most promising techniques for seismic imaging. It relies on a formalism that takes into account every piece of information contained in the seismic data, as opposed to more classical techniques such as travel-time tomography. As a result, FWI is a high-resolution imaging process able to reach a spatial accuracy equal to half a wavelength. FWI is based on a local optimization scheme, so its main limitation concerns the starting model, which has to be close enough to the real one for the inversion to converge to the global minimum. Another drawback of FWI is the computational resources it requires for models and frequencies of interest. The task becomes even more demanding when one performs the inversion with the elastic equation instead of the acoustic approximation. This is the reason why, until recently, most studies were limited to 2D cases. In the last few years, thanks to the increase in available computational power, FWI has attracted a lot of interest, with continuous efforts towards the inversion of 3D models leading to remarkable applications up to the continental scale. We investigate the computational burden induced by FWI in 3D elastic media and propose some strategic features that reduce the numerical cost while providing great flexibility in the inversion parametrization. First, to relax the memory requirements, we developed our FWI algorithm in the frequency domain and take advantage of the wave-number redundancy in the seismic data to process a much reduced number of frequencies. To do so, we extract frequency solutions from time-marching techniques, which are efficient for 3D structures. Moreover, this frequency approach permits a multi-resolution strategy by proceeding from low to high frequencies: the final model at one frequency is used as the starting model for the next frequency. This procedure partially overcomes the non-linear behavior of the inversion

  13. TakeTwo: an indexing algorithm suited to still images with known crystal parameters.

    PubMed

    Ginn, Helen Mary; Roedig, Philip; Kuo, Anling; Evans, Gwyndaf; Sauter, Nicholas K; Ernst, Oliver; Meents, Alke; Mueller-Werkmeister, Henrike; Miller, R J Dwayne; Stuart, David Ian

    2016-08-01

    The indexing methods currently used for serial femtosecond crystallography were originally developed for experiments in which crystals are rotated in the X-ray beam, providing significant three-dimensional information. On the other hand, shots from both X-ray free-electron lasers and serial synchrotron crystallography experiments are still images, in which the few three-dimensional data available arise only from the curvature of the Ewald sphere. Traditional synchrotron crystallography methods are thus less well suited to still image data processing. Here, a new indexing method is presented with the aim of maximizing information use from a still image given the known unit-cell dimensions and space group. Efficacy for cubic, hexagonal and orthorhombic space groups is shown, and for those showing some evidence of diffraction the indexing rate ranged from 90% (hexagonal space group) to 151% (cubic space group). Here, the indexing rate refers to the number of lattices indexed per image. PMID:27487826

  14. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J. |

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
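
    The algorithm described here underlies the widely used L-BFGS-B code; below is a minimal usage sketch through SciPy's wrapper (assuming SciPy is installed; the test function and bounds are illustrative, not from the report).

      import numpy as np
      from scipy.optimize import minimize

      def rosen(x):  # a standard nonlinear test objective
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      res = minimize(rosen, x0=np.array([2.0, 2.0, 2.0]),
                     method="L-BFGS-B",
                     bounds=[(0.0, 1.5)] * 3,     # simple bounds on each variable
                     options={"maxcor": 10})      # stored limited-memory corrections
      print(res.x, res.fun)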

  15. MICA: A fast short-read aligner that takes full advantage of Many Integrated Core Architecture (MIC)

    PubMed Central

    2015-01-01

    Background Short-read aligners have recently gained a lot of speed by exploiting the massive parallelism of GPU. An up-and-coming alternative to GPU is Intel MIC; supercomputers like Tianhe-2, currently at the top of the TOP500, are built with 48,000 MIC boards to offer ~55 PFLOPS. The CPU-like architecture of MIC allows CPU-based software to be parallelized easily; however, the performance is often inferior to GPU counterparts as an MIC card contains only ~60 cores (while a GPU card typically has over a thousand cores). Results To better utilize MIC-enabled computers for NGS data analysis, we developed a new short-read aligner MICA that is optimized in view of MIC's limitations and the extra parallelism inside each MIC core. By utilizing the 512-bit vector units in the MIC and implementing a new seeding strategy, experiments on aligning 150 bp paired-end reads show that MICA using one MIC card is 4.9 times faster than BWA-MEM (using 6 cores of a top-end CPU), and slightly faster than SOAP3-dp (using a GPU). Furthermore, MICA's simplicity allows very efficient scale-up when multiple MIC cards are used in a node (3 cards give a 14.1-fold speedup over BWA-MEM). Summary MICA can be readily used by MIC-enabled supercomputers for production purposes. We have tested MICA on Tianhe-2 with 90 WGS samples (17.47 Tera-bases), which can be aligned in an hour using 400 nodes. MICA has impressive performance even though MIC is only in its initial stage of development. Availability and implementation MICA's source code is freely available at http://sourceforge.net/projects/mica-aligner under GPL v3. Supplementary information Supplementary information is available as "Additional File 1". Datasets are available at www.bio8.cs.hku.hk/dataset/mica. PMID:25952019

  16. Motorcyclists, full-face helmets and neck injuries: can you take the helmet off safely, and if so, how?

    PubMed Central

    Branfoot, T

    1994-01-01

    Injured motorcyclists may have a damaged and unstable cervical spine (C-spine). This paper looks at whether a helmet can be safely removed, how and when should this be done? The literature is reviewed and the recommendations of the Trauma Working party of the Joint Colleges Ambulance Liaison Committee are presented. PMID:7921566

  17. Teach Kids Test-Taking Tactics

    ERIC Educational Resources Information Center

    Glenn, Robert E.

    2004-01-01

    Teachers can do something to help ensure students will do better on tests. They can actively teach test-taking skills so pupils will be better armed in the battle to acquire knowledge. The author challenges teachers to use the suggestions provided in this article in the classroom, and to share them with their students. Test-taking strategies will…

  18. Academic Risk Taking, Development, and External Constraint.

    ERIC Educational Resources Information Center

    Clifford, Margaret M.; And Others

    1990-01-01

    Academic risk taking--the selection of schoollike tasks ranging in difficulty and probability of success--was examined for 602 students in grades 4, 6, and 8 in Taiwan. Results of a self-report measure of tolerance for failure and a risk-taking task are discussed concerning self-enhancement versus self-assessment goals, metacognitive skills, and…

  19. Giving Ourselves Permission to Take Risks

    ERIC Educational Resources Information Center

    Jones, Elizabeth

    2012-01-01

    What's a risk? It's when one doesn't know what will happen when she/he takes action. Risks can be little or big, calculated or stupid. Every new idea carries risks--and the challenge to face them and see what will happen. Nobody becomes smart, creative, self-confident, and respectful of others without taking risks--remaining open to possibilities…

  1. Does Anticipation Training Affect Drivers' Risk Taking?

    ERIC Educational Resources Information Center

    McKenna, Frank P.; Horswill, Mark S.; Alexander, Jane L.

    2006-01-01

    Skill and risk taking are argued to be independent and to require different remedial programs. However, it is possible to contend that skill-based training could be associated with an increase, a decrease, or no change in risk-taking behavior. In 3 experiments, the authors examined the influence of a skill-based training program (hazard…

  2. Take Steps Toward a Healthier Life | Poster

    Cancer.gov

    The National Institutes of Health (NIH) is promoting wellness by encouraging individuals to take the stairs. In an effort to increase participation in this program, NIH has teamed up with Occupational Health Services (OHS). OHS is placing NIH-sponsored “Take the Stairs” stickers on stair entrances, stair exits, and elevators.

  3. Risk Taking Transfer in Development Training.

    ERIC Educational Resources Information Center

    Goldman, Kathy; Priest, Simon

    1991-01-01

    Twenty-seven corporate managers completed the Priest Attarian Risk Taking Inventory before and after a day of rappelling. Subjects also completed a business version of the inventory a few weeks before and a few weeks after the experience. Subjects appeared to transfer some of their new risk-taking behaviors to their jobs. (KS)

  4. Transonic Wing Shape Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings, including multi-objective solutions that lead to the generation of Pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.

  5. Genetic algorithms, path relinking, and the flowshop sequencing problem.

    PubMed

    Reeves, C R; Yamada, T

    1998-01-01

    In a previous paper, a simple genetic algorithm (GA) was developed for finding (approximately) the minimum makespan of the n-job, m-machine permutation flowshop sequencing problem (PFSP). The performance of the algorithm was comparable to that of a naive neighborhood search technique and a proven simulated annealing algorithm. However, recent results have demonstrated the superiority of a tabu search method in solving the PFSP. In this paper, we reconsider the implementation of a GA for this problem and show that by taking into account the features of the landscape generated by the operators used, we are able to improve its performance significantly. PMID:10021740
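
    For reference, the objective such a GA minimizes, the makespan of a job permutation, follows directly from the standard completion-time recurrence; the sketch below implements only this evaluation, not the paper's landscape-aware operators.

      def makespan(perm, p):
          # p[j][m] = processing time of job j on machine m; machines in fixed order.
          # Recurrence: C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m].
          n_machines = len(p[0])
          completion = [0.0] * n_machines
          for j in perm:
              for m in range(n_machines):
                  prev = completion[m - 1] if m > 0 else 0.0
                  completion[m] = max(completion[m], prev) + p[j][m]
          return completion[-1]

      times = [[5, 3], [2, 6], [4, 4]]          # 3 jobs, 2 machines
      print(makespan([0, 1, 2], times), makespan([1, 2, 0], times))  # 18 vs 15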

  6. Genetic algorithm for bundle adjustment in aerial panoramic stitching

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxiao; Wen, Gaojin; Wu, Chunnan; Wang, Hongmin; Shang, Zhiming; Zhang, Qian

    2015-03-01

    This paper presents a genetic algorithm for bundle adjustment in aerial panoramic stitching. Compared with the conventional LM (Levenberg-Marquardt) algorithm for bundle adjustment, the proposed bundle adjustment combining the genetic algorithm optimization eliminates the possibility of sticking into the local minimum, and not requires the initial estimation of desired parameters, naturally avoiding the associated steps, that includes the normalization of matches, the computation of homography transformation, the calculations of rotation transformation and the focal length. Since the proposed bundle adjustment is composed of the directional vectors of matches, taking the advantages of genetic algorithm (GA), the Jacobian matrix and the normalization of residual error are not involved in the searching process. The experiment verifies that the proposed bundle adjustment based on the genetic algorithm can yield the global solution even in the unstable aerial imaging condition.

  7. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
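
    A sketch of the linear expansion/contraction step, under stated assumptions: the baseline is a per-period cost profile rather than the PACE card format, and the total baseline cost is preserved after stretching.

      import numpy as np

      def rescale_profile(baseline, new_len):
          # Stretch (or shrink) the baseline cost distribution onto the new
          # schedule length, then renormalize so total cost is unchanged.
          old_t = np.linspace(0.0, 1.0, num=len(baseline))
          new_t = np.linspace(0.0, 1.0, num=new_len)
          profile = np.interp(new_t, old_t, baseline)
          return profile * (np.sum(baseline) / np.sum(profile))

      print(rescale_profile([10, 40, 30, 20], 6))  # 4-period baseline onto 6 periods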

  8. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
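
    As an illustration of the first algorithm, the sketch below smooths a finite-difference rate with a first-order recursion; the actual filter design behind the quoted VRF and rise-time figures is not given in the abstract, so alpha is an assumed smoothing constant.

      def recursive_rate(samples, dt, alpha=0.8):
          # First-order recursive differentiator: smoothed finite differences.
          # Larger alpha lowers the noise variance but slows the rise time.
          rate, rates = 0.0, []
          for prev, cur in zip(samples, samples[1:]):
              raw = (cur - prev) / dt
              rate = alpha * rate + (1.0 - alpha) * raw
              rates.append(rate)
          return rates

      print(recursive_rate([0.0, 1.0, 2.2, 2.9], dt=1.0))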

  9. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  10. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  11. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop chimera-based full potential methodology which is compatible with overflow (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work included presenting a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple producing donor cells as fast as 60,000 per second.

  12. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.

  13. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  14. Full-waveform data for building roof step edge localization

    NASA Astrophysics Data System (ADS)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  15. How taking photos increases enjoyment of experiences.

    PubMed

    Diehl, Kristin; Zauberman, Gal; Barasch, Alixandra

    2016-08-01

    Experiences are vital to the lives and well-being of people; hence, understanding the factors that amplify or dampen enjoyment of experiences is important. One such factor is photo-taking, which has gone unexamined by prior research even as it has become ubiquitous. We identify engagement as a relevant process that influences whether photo-taking will increase or decrease enjoyment. Across 3 field and 6 lab experiments, we find that taking photos enhances enjoyment of positive experiences across a range of contexts and methodologies. This occurs when photo-taking increases engagement with the experience, which is less likely when the experience itself is already highly engaging, or when photo-taking interferes with the experience. As further evidence of an engagement-based process, we show that photo-taking directs greater visual attention to aspects of the experience one may want to photograph. Lastly, we also find that this greater engagement due to photo-taking results in worse evaluations of negative experiences. (PsycINFO Database Record PMID:27267324

  16. Oxytocin and vasopressin modulate risk-taking.

    PubMed

    Patel, Nilam; Grillon, Christian; Pavletic, Nevia; Rosen, Dana; Pine, Daniel S; Ernst, Monique

    2015-02-01

    The modulation of risk-taking is critical for adaptive and optimal behavior. This study examined how oxytocin (OT) and arginine vasopressin (AVP) influence risk-taking in function of three parameters: sex, risk-valence, and social context. Twenty-nine healthy adults (14 males) completed a risk-taking task, the Stunt task, both in a social-stress (evaluation by unfamiliar peers) and non-social context, in three separate drug treatment sessions. During each session, one of three drugs, OT, AVP, or placebo (PLC), was administered intra-nasally. OT and AVP relative to PLC reduced betting-rate (risk-averse effect). This risk-averse effect was further qualified: AVP reduced risk-taking in the positive risk-valence (high win-probability), and regardless of social context or sex. In contrast, OT reduced risk-taking in the negative risk-valence (low win-probability), and only in the social-stress context and men. The reduction in risk-taking might serve a role in defensive behavior. These findings extend the role of these neuromodulators to behaviors beyond the social realm. How the behavioral modulation of risk-taking maps onto the function of the neural targets of OT and AVP may be the next step in this line of research. PMID:25446228

  17. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
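
    As a baseline for the guarantee such algorithms target, the classic sequential greedy method below is itself a 1/2-approximation (it scans edges in non-increasing weight order); the paper's contribution is a faster, more scalable relative of this idea, not this exact code.

      def greedy_matching(edges):
          # edges: iterable of (weight, u, v) tuples; returns matched pairs.
          matched, matching = set(), []
          for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
              if u not in matched and v not in matched:
                  matching.append((u, v))
                  matched.update((u, v))
          return matching

      edges = [(5, 'a', 'b'), (4, 'a', 'c'), (4, 'b', 'd')]
      print(greedy_matching(edges))  # picks (a, b): weight 5 vs optimum 8, within 1/2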

  18. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    NASA Astrophysics Data System (ADS)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm in optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
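
    A minimal sketch in the AGA spirit, mutation-only reproduction from the best individuals inside a shrinking search box, applied to a chi-square line fit; the population sizes, rates, and shrink schedule are illustrative assumptions, not the paper's settings.

      import random

      def chi2(params, data):
          m, b = params
          return sum((y - (m * x + b)) ** 2 for x, y in data)

      def aga_fit(data, pop=50, keep=5, gens=100, span=2.0):
          P = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop)]
          for g in range(gens):
              P.sort(key=lambda p: chi2(p, data))
              parents, scale = P[:keep], span * (1 - g / gens)  # shrink search box
              # Asexual reproduction: each survivor spawns mutated copies.
              P = parents + [(m + random.uniform(-scale, scale),
                              b + random.uniform(-scale, scale))
                             for m, b in parents for _ in range(pop // keep - 1)]
          return min(P, key=lambda p: chi2(p, data))

      data = [(x, 2.0 * x + 1.0) for x in range(10)]
      print(aga_fit(data))  # ~ (2.0, 1.0)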

  19. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, while for the iterative wavefront control algorithm it is about O(n) to O(n^(3/2)), where n is the number of actuators of the AO system. The greater the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
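
    The contrast between the two schemes can be sketched as follows: the direct method is one dense reconstructor-matrix multiply, while the iterative method replaces it with a few cheap sweeps of a stationary solver (Jacobi here, as a generic stand-in for the paper's iteration) on the actuator equations. The matrices below are random toys, not a real AO system.

      import numpy as np

      def direct_control(R, slopes):
          # Precomputed reconstructor: one dense matvec, O(n^2) per frame.
          return R @ slopes

      def iterative_control(A, b, v0, sweeps=10):
          # Jacobi sweeps on A v = b; each sweep is cheap when A is sparse.
          D = np.diag(A)
          v = v0.copy()
          for _ in range(sweeps):
              v = (b - (A @ v - D * v)) / D
          return v

      rng = np.random.default_rng(1)
      A = 4.0 * np.eye(8) + 0.1 * rng.standard_normal((8, 8))  # diagonally dominant toy
      b = rng.standard_normal(8)
      print(np.allclose(iterative_control(A, b, np.zeros(8), sweeps=50),
                        np.linalg.solve(A, b)))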

  20. Full-3D waveform tomography of Southern California crustal structure by using earthquake recordings and ambient noise Green's functions based on adjoint and scattering-integral methods

    NASA Astrophysics Data System (ADS)

    Lee, E.; Chen, P.; Jordan, T. H.; Maechling, P. J.; Denolle, M.; Beroza, G. C.

    2013-12-01

    We apply a unified methodology for seismic waveform analysis and inversions to Southern California. To automate the waveform selection processes, we developed a semi-automatic seismic waveform analysis algorithm for full-wave earthquake source parameter and tomographic inversions. The algorithm is based on continuous wavelet transforms, a topological watershed method, and a set of user-adjustable criteria to select usable waveform windows for full-wave inversions. The algorithm takes advantage of time-frequency representations of seismograms and is able to separate seismic phases in both the time and frequency domains. The selected wave packet pairs between observed and synthetic waveforms are then used for extracting frequency-dependent phase and amplitude misfit measurements, which are used in our seismic source and structural inversions. Our full-wave waveform tomography uses the 3D SCEC Community Velocity Model Version 4.0 as the initial model and a staggered-grid finite-difference code to simulate seismic wave propagation. The sensitivity (Fréchet) kernels are calculated based on the scattering-integral and adjoint methods to iteratively improve the model. We use both earthquake recordings and ambient noise Green's functions (stacks of station-to-station correlations of ambient seismic noise) in our full-3D waveform tomographic inversions. To reduce errors of earthquake sources, the epicenters and source parameters of earthquakes used in our tomographic inversion are inverted by our full-wave CMT inversion method. Our current model shows many features that relate to the geological structures at shallow depth and contrasting velocity values across faults. The velocity perturbations can be up to 45% with respect to the initial model in some regions and relate to some structures that do not exist in the initial model, such as the southern Great Valley. The earthquake waveform misfits are reduced by over 70%, and the ambient noise Green's function group velocity delay time variance

  1. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  2. Applying the take-grant protection model

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1990-01-01

    The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The model is extended to include theft of rights and sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.

  3. Stereoscopic full aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Strocovsky, Sergio G.; Otero, Dino

    2011-06-01

    Images from planar scintigraphy and single photon emission computerized tomography (SPECT) used in nuclear medicine are often of low quality. They usually appear blurred and noisy. This problem is due to the low spatial resolution and poor sensitivity of the acquisition technique with the gamma camera (GC). Other techniques, such as coded aperture imaging (CAI), reach higher spatial resolutions than GC. However, CAI is not frequently used for imaging in nuclear medicine, due to the decoding complexity of some images and the difficulty in controlling the noise magnitude. Summing up, the images obtained through GC are of low quality, and the CAI technique remains difficult to implement. A novel technique, full aperture imaging (FAI), also uses gamma-ray encoding to obtain images, but the coding system and the method of image reconstruction are simpler than those used in CAI. In addition, FAI also reaches higher spatial resolution than GC. In this work, the principles of the FAI technique and the method of image reconstruction are explained in detail. The FAI technique is tested by means of Monte Carlo simulations with filiform and spherical sources. Spatial resolution tests of GC versus FAI were performed using two different source-detector distances. First, simulations were made without interposing any material between the sources and the detector. Then, other more realistic simulations were made, in which the sources were placed in the centre of a rectangular prismatic region filled with water. A rigorous comparison was made between GC and FAI images of the linear filiform sources, by means of two methods: mean fluence profile graphs and correlation tests. Finally, the three-dimensional capability of FAI was tested with two spherical sources. The results show that the FAI technique has greater sensitivity (>100 times) and greater spatial resolution (>2.6 times) than that of GC with a LEHR collimator, in both cases, with and without attenuating material and long and short

  4. Taking Care of Your Diabetes Means Taking Care of Your Heart (Tip Sheet)

    MedlinePlus

    ... Your Heart: Manage the ABCs of Diabetes Taking Care of Your Diabetes Means Taking Care of Your Heart (Tip Sheet) Diabetes and Heart ... What you can do now Ask your health care team these questions: What can I do to ...

  5. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast imagery is proposed in this paper, to deal with the problem that conventional enhancement algorithms cannot effectively identify regions of interest when the dynamic range of the image is large. The algorithm starts from the characteristics of human visual perception and combines global adaptive enhancement with local feature boosting, so that the contrast of the image is raised and its texture is made more distinct. Firstly, the global dynamic range of the image is adjusted: a correspondence is formed between the dynamic range of the original image and the display grey scale, the grey level of bright objects is raised while that of dark targets is reduced, and the overall image contrast is thereby improved. Secondly, a filtering operation over each pixel and its neighbourhood extracts image texture information and adjusts the brightness of the current pixel, enhancing the local contrast of the image. The algorithm overcomes the tendency of traditional edge-detection-based enhancement to blur outlines, and preserves the distinctness of texture detail. Lastly, the globally adjusted image and the locally adjusted image are normalized together, ensuring a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional enhancement algorithms on two groups of blurred IR images. The experiments show that histogram equalization boosts contrast but leaves details unclear, while details can be distinguished after processing with the Retinex algorithm; the image processed by the self-adaptive enhancement algorithm proposed in this paper has clear details, and its contrast is markedly improved compared with Retinex
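
    The two-stage structure described above (global range remapping, a neighbourhood-based local boost, then a normalized blend) can be illustrated compactly. This is a hedged sketch, not the authors' code; the blend weight alpha, gain k, and window size are invented placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_ir(img, alpha=0.6, k=1.5, size=7):
    """Global dynamic-range remapping plus a local texture boost against the
    neighbourhood mean, blended and renormalised -- a sketch of the two-step
    scheme described in the abstract."""
    img = img.astype(float)
    # global step: map the full input dynamic range onto [0, 1]
    g = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # local step: raise each pixel against its neighbourhood mean
    local = g + k * (g - uniform_filter(g, size=size))
    # normalised blend for a smooth transition of detail
    out = alpha * g + (1 - alpha) * local
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)
```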

  6. Grid data extraction algorithm for ship routing

    NASA Astrophysics Data System (ADS)

    Li, Yuankui; Zhang, Yingjun; Yue, Xingwang; Gao, Zongjiang

    2015-05-01

    With the aim of extracting environmental data around routes, as the basis of ship routing optimization and other related studies, this paper, taking wind grid data as an example, proposes an algorithm that can effectively extract the grid data around rhumb lines. According to different ship courses, the algorithm calculates the wind grid index values in eight different situations, and a common computational formula is summarised. The wind grids around a ship route can be classified into 'best-fitting' grids and 'additional' grids, which are stored in such a way that, for example, when the data has a high grid resolution, only the 'best-fitting' grids around ship routes are extracted. Finally, the algorithm was implemented and simulated in MATLAB. As the simulation results indicate, the algorithm achieved wind grid data extraction in the different situations and efficiently resolves the extraction of meteorological and hydrogeological field grids around ship routes. It can thus provide strong support for optimal ship routing related to meteorological factors.
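
    As a rough illustration of extracting the 'best-fitting' cells along a rhumb line (which is straight in Mercator coordinates, so it can be sampled there): the grid spacing, step count, and sampling approach below are assumptions for the sketch, not the paper's eight-case index formula.

```python
import numpy as np

def grids_along_rhumb(lat0, lon0, lat1, lon1, res=0.25, steps=200):
    """Indices of the regular lat/lon grid cells (spacing `res` degrees)
    crossed by the rhumb line from (lat0, lon0) to (lat1, lon1)."""
    def merc(lat):
        # Mercator y-coordinate; a rhumb line is linear in (lon, y)
        return np.log(np.tan(np.pi / 4 + np.radians(lat) / 2))
    t = np.linspace(0.0, 1.0, steps)
    y = merc(lat0) + t * (merc(lat1) - merc(lat0))
    lats = np.degrees(2 * np.arctan(np.exp(y)) - np.pi / 2)
    lons = lon0 + t * (lon1 - lon0)
    cells = {(int(np.floor(la / res)), int(np.floor(lo / res)))
             for la, lo in zip(lats, lons)}
    return sorted(cells)

print(grids_along_rhumb(54.0, 13.0, 55.5, 18.0))
```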

  7. Obstacle Detection Algorithms for Rotorcraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)

    2001-01-01

    In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, we studied the problem of detecting thin wires in the presence of image clutter and noise. Wires present a serious hazard to rotorcraft: because they are very thin, their images can be less than one or two pixels wide, making it difficult to detect them early enough for the pilot to take evasive action. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential for the task. The algorithm was tested on a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated at both the pixel and the wire level. The algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.
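
    The heart of Steger's detector is a Hessian analysis of the Gaussian-smoothed image: the eigenvector of the largest-magnitude eigenvalue gives the direction normal to the line, and a quadratic model along that normal yields a sub-pixel extremum. The sketch below shows only that core step; the scale sigma and response threshold thresh are illustrative choices, and the published algorithm adds line linking and bias correction not shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def line_points(img, sigma=1.5, thresh=0.5):
    """Sub-pixel line (ridge) points of a grey image, Steger-style."""
    gx  = gaussian_filter(img, sigma, order=(0, 1))   # d/dx
    gy  = gaussian_filter(img, sigma, order=(1, 0))   # d/dy
    gxx = gaussian_filter(img, sigma, order=(0, 2))
    gyy = gaussian_filter(img, sigma, order=(2, 0))
    gxy = gaussian_filter(img, sigma, order=(1, 1))
    pts = []
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            H = np.array([[gyy[r, c], gxy[r, c]],
                          [gxy[r, c], gxx[r, c]]])
            w, v = np.linalg.eigh(H)
            i = np.argmax(np.abs(w))
            if abs(w[i]) < thresh:
                continue                      # weak second-derivative response
            ny, nx = v[:, i]                  # unit normal to the line
            denom = gxx[r, c]*nx*nx + 2*gxy[r, c]*nx*ny + gyy[r, c]*ny*ny
            if denom == 0:
                continue
            t = -(gx[r, c]*nx + gy[r, c]*ny) / denom
            # accept only if the 1-D extremum falls inside this pixel
            if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:
                pts.append((r + t * ny, c + t * nx))
    return pts
```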

  8. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems, and with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  9. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  10. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to the higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparisons with the binary case are provided. PMID:10021767
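
    A sketch of what the cardinality generalization means operationally: with the alphabet Z_q, the XOR of binary GAs becomes addition mod q (for q = 2 the two coincide). The operators below are a minimal illustration, not the paper's formal model; the mutation rate is an invented placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(x, q, rate=0.02):
    """Mutation over Z_q: add a random nonzero offset mod q at each
    mutated position -- the analogue of a bit flip when q = 2."""
    mask = rng.random(x.size) < rate
    offset = rng.integers(1, q, size=x.size)
    return np.where(mask, (x + offset) % q, x)

def crossover(x, y):
    """One-point crossover is cardinality-agnostic."""
    cut = rng.integers(1, x.size)
    return np.concatenate([x[:cut], y[cut:]])

x = rng.integers(0, 5, size=10)   # a genome over Z_5
print(mutate(x, q=5), crossover(x, mutate(x, q=5)))
```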

  11. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  12. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  13. Note Taking on Trial: A Legal Application of Note-Taking Research

    ERIC Educational Resources Information Center

    Kiewra, Kenneth A.

    2016-01-01

    This article is about note taking, but it is not an exhaustive review of note-taking literature. Instead, it portrays the application of note-taking research to an unusual and important area of practice--the law. I was hired to serve as an expert witness on note taking in a legal case that hinged, in part, on the completeness and accuracy of…

  14. Taking medicine at home - create a routine

    MedlinePlus

    ... page: //medlineplus.gov/ency/patientinstructions/000613.htm Taking medicine at home - create a routine To use the ... teeth. Find Ways to Help You Remember Your Medicines You can: Set the alarm on your clock, ...

  15. Alternative Medicine Taking Hold Among Americans: Report

    MedlinePlus

    ... fullstory_159511.html Alternative Medicine Taking Hold Among Americans: Report More than $30 billion paid out-of- ... 22, 2016 WEDNESDAY, June 22, 2016 (HealthDay News) -- Americans spend a good chunk of their health care ...

  16. The Solar Constant: A Take Home Lab

    ERIC Educational Resources Information Center

    Eaton, B. G.; And Others

    1977-01-01

    Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)

  17. Men: Take Charge of Your Health

    MedlinePlus

    ... of Your Health Print This Topic En español Men: Take Charge of Your Health Browse Sections The ... and Insurance The Basics The Basics: Overview Most men need to pay more attention to their health. ...

  18. Taking medicines - what to ask your doctor

    MedlinePlus

    ... medicine change how any of my herbal or dietary supplements work? Ask if your new medicine interferes with eating or drinking. Are there any foods that I should not drink or eat? Can I drink alcohol when taking ...

  19. Kids with Mild Asthma Can Take Acetaminophen

    MedlinePlus

    ... gov/news/fullstory_160475.html Kids With Mild Asthma Can Take Acetaminophen: Study Finding counters past research ... 17, 2016 (HealthDay News) -- Acetaminophen does not worsen asthma symptoms in young children, a new study finds. ...

  20. Take Care of Your Child's Teeth

    MedlinePlus

    ... Baby teeth hold space for adult teeth. Take care of your child’s teeth to protect your child from tooth decay (cavities). Tooth decay can: Cause your child pain Make it hard for your child to chew ...

  1. Alternative Medicine Taking Hold Among Americans: Report

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_159511.html Alternative Medicine Taking Hold Among Americans: Report More than $30 ... chunk of their health care dollars on alternative medicine, such as acupuncture, yoga, chiropractic care and natural ...

  2. Taking Medicines Safely: At Your Doctor's Office

    MedlinePlus

    ... Javascript on. Feature: Taking Medicines Safely At Your Doctor's Office Past Issues / Summer 2013 Table of Contents ... Chart PDF If you've gone to your doctor because you don't feel well, the doctor ...

  3. Depression: How to Safely Take Antidepressants

    MedlinePlus

    ... take, including over-the-counter medicines and herbal health products (such as St. John's wort). Ask your doctor and pharmacist if any of your regular medicines can cause problems when combined with an antidepressant. What is antidepressant ...

  4. Taking Care of You: Support for Caregivers

    MedlinePlus

    ... Are Reading Upsetting News Reports? What to Say Vaccines: Which Ones & When? Smart School Lunches Emmy-Nominated Video "Cerebral Palsy: Shannon's Story" 5 Things to Know About Zika & Pregnancy Taking Care of ...

  5. Taking Care of Your Teeth and Mouth

    MedlinePlus

    ... can protect your teeth from decay by using fluoride toothpaste. If you are at a higher risk ... of medicines you take), you might need more fluoride. Your dentist or dental hygienist may give you ...

  6. 37 CFR 41.157 - Taking testimony.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OF COMMERCE PRACTICE BEFORE THE PATENT TRIAL AND APPEAL BOARD Contested Cases § 41.157 Taking... the scope and nature of the testimony to be elicited. (5) Motion to quash. Objection to a defect...

  7. LRO's Diviner Takes the Eclipse's Temperature

    NASA Video Gallery

    During the June 15, 2011, total lunar eclipse, LRO's Diviner instrument will take temperature measurements of eclipsed areas of the moon, giving scientists a new look at rock distribution on the su...

  8. Tips for Taking Care of Your Limb

    MedlinePlus

    ... Technorati Yahoo MyWeb by Paddy Rossbach, RN, Former Amputee Coalition President & CEO, and Terrence P. Sheehan, MD ... crisis. Limb Care If you are a new amputee, it's better to take a bath or shower ...

  9. Fear, excitement, and financial risk-taking.

    PubMed

    Lee, Chan Jean; Andrade, Eduardo B

    2015-01-01

    Can fear trigger risk-taking? In this paper, we assess whether fear can be reinterpreted as a state of excitement as a result of contextual cues and promote, rather than discourage, risk-taking. In a laboratory experiment, the participants' emotional states were induced (fear vs. control), followed by a purportedly unrelated financial task. The task was framed as either a stock market investment or an exciting casino game. Our results showed that incidental fear (vs. control) induced risk-averse behaviour when the task was framed as a stock investment decision. However, fear encouraged risk-taking when the very same task was framed as an exciting casino game. The impact of fear on risk-taking was partially mediated by the excitement felt during the financial task. PMID:24661027

  10. Gateway to New Atlantis Attraction Takes Shape

    NASA Video Gallery

    The home of space shuttle Atlantis continues taking shape at the Kennedy Space Center Visitor Complex. Crews placed the nose cone atop the second of a replica pair of solid rocket boosters. A life-...

  11. India takes steps to curb air pollution.

    PubMed

    2016-07-01

    India's air pollution problem needs to be tackled systematically, taking an all-of-government approach, to reduce the huge burden of associated ill-health. Patralekha Chatterjee reports. PMID:27429486

  12. Take Steps to Prevent Type 2 Diabetes

    MedlinePlus

    ... En español Take Steps to Prevent Type 2 Diabetes Browse Sections The Basics Overview Types of Diabetes ... 1 of 9 sections The Basics: Types of Diabetes What is diabetes? Diabetes is a disease. People ...

  13. Equilibrium stellar systems with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gularte, E.; Carpintero, D. D.

    In 1979, M. Schwarzschild showed that it is possible to build an equilibrium triaxial stellar system. However, the linear programming used to that end was not able to determine the uniqueness of the solution, nor even whether that solution was the optimal one. Genetic algorithms are ideal tools to find a solution to this problem. In this work, we use a genetic algorithm to reproduce an equilibrium spherical stellar system from a suitable set of predefined orbits, obtaining the best solution attainable with the provided set. FULL TEXT IN SPANISH

  14. Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging

    NASA Astrophysics Data System (ADS)

    Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ

    2015-01-01

    Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies both as a function of time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).

  15. Cost Discrepancy, Signaling, and Risk Taking

    ERIC Educational Resources Information Center

    Lemon, Jim

    2005-01-01

    If risk taking is in some measure a signal to others by the person taking risks, the model of "costly signaling" predicts that the more the apparent cost of the risk to others exceeds the perceived cost of the risk to the risk taker, the more attractive that risk will be as a signal. One hundred and twelve visitors to youth "drop-in" centers…

  16. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-04-01

    In the field of Additive Manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm based on a pipeline mode is presented to speed up data processing, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that both are significant factors in the speedup ratio: speedup versus thread count follows a positive relationship that agrees closely with Amdahl's law, and speedup versus layer count follows a positive relationship that agrees with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. A comparison against another parallel algorithm based on data parallelism shows that the pipeline parallel mode is more efficient. A concluding case study demonstrates the performance of the new algorithm: compared with the serial slicing algorithm, the pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm it achieves a much higher speedup ratio and efficiency.
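
    A toy illustration of the pipeline mode, with threads and queues standing in for the paper's pipeline stages: one stage streams triangles, one intersects them with the layer planes, one collects results per layer. The geometric segment computation is elided; only the pipeline structure is shown.

```python
import threading, queue

def pipeline_slice(triangles, z_layers):
    """Three concurrent stages connected by queues (a pipeline sketch)."""
    q1, q2 = queue.Queue(), queue.Queue()
    layers = {z: [] for z in z_layers}

    def feed():                              # stage 1: stream facets
        for tri in triangles:
            q1.put(tri)
        q1.put(None)                         # end-of-stream sentinel

    def intersect():                         # stage 2: facet/plane tests
        while (tri := q1.get()) is not None:
            zs = [p[2] for p in tri]
            for z in z_layers:
                if min(zs) <= z <= max(zs):
                    q2.put((z, tri))         # real code would emit the segment
        q2.put(None)

    def collect():                           # stage 3: assemble per layer
        while (item := q2.get()) is not None:
            z, tri = item
            layers[z].append(tri)

    threads = [threading.Thread(target=f) for f in (feed, intersect, collect)]
    for t in threads: t.start()
    for t in threads: t.join()
    return layers
```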

  18. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

    through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.

  19. Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance

    NASA Astrophysics Data System (ADS)

    Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.

    2015-04-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover

  20. 77 FR 49921 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... from ION Geophysical (ION) for an Incidental Harassment Authorization (IHA) to take marine mammals, by... comments on its proposal to issue an IHA to ION to take, by harassment, nine species of marine mammals... March 1, 2012, from ION for the taking, by harassment, of marine mammals incidental to a marine...

  1. 76 FR 69758 - Draft Environmental Assessment, Incidental Take Plan, and Application for an Incidental Take...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ... the applicant that would authorize take of the federally threatened Canada lynx incidental to... Attn: Lynx HCP, Laury Zicari, Field Supervisor, U.S. Fish and Wildlife Service, Maine Field Office, 17... an incidental take permit to take the federally threatened Canada lynx (Lynx canadensis)...

  2. 75 FR 27708 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... take small numbers of marine mammals by harassment. Section 101(a)(5)(D) establishes a 45-day time... survey time is expected to take 30 days. The vessel that will be conducting this activity has not been.... The actual survey time is expected to take 45 days. Ice Gouge Survey As part of the feasibility...

  3. SPA: A Probabilistic Algorithm for Spliced Alignment

    PubMed Central

    van Nimwegen, Erik; Paul, Nicodeme; Sheridan, Robert; Zavolan, Mihaela

    2006-01-01

    Recent large-scale cDNA sequencing efforts show that elaborate patterns of splice variation are responsible for much of the proteome diversity in higher eukaryotes. To obtain an accurate account of the repertoire of splice variants, and to gain insight into the mechanisms of alternative splicing, it is essential that cDNAs are very accurately mapped to their respective genomes. Currently available algorithms for cDNA-to-genome alignment do not reach the necessary level of accuracy because they use ad hoc scoring models that cannot correctly trade off the likelihoods of various sequencing errors against the probabilities of different gene structures. Here we develop a Bayesian probabilistic approach to cDNA-to-genome alignment. Gene structures are assigned prior probabilities based on the lengths of their introns and exons, and based on the sequences at their splice boundaries. A likelihood model for sequencing errors takes into account the rates at which misincorporation, as well as insertions and deletions of different lengths, occurs during sequencing. The parameters of both the prior and likelihood model can be automatically estimated from a set of cDNAs, thus enabling our method to adapt itself to different organisms and experimental procedures. We implemented our method in a fast cDNA-to-genome alignment program, SPA, and applied it to the FANTOM3 dataset of over 100,000 full-length mouse cDNAs and a dataset of over 20,000 full-length human cDNAs. Comparison with the results of four other mapping programs shows that SPA produces alignments of significantly higher quality. In particular, the quality of the SPA alignments near splice boundaries and SPA's mapping of the 5′ and 3′ ends of the cDNAs are highly improved, allowing for more accurate identification of transcript starts and ends, and accurate identification of subtle splice variations. Finally, our splice boundary analysis on the human dataset suggests the existence of a novel non-canonical splice

  4. An Assessment of Current Satellite Precipitation Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2007-01-01

    The H-SAF Program requires an experimental operational European-centric Satellite Precipitation Algorithm System (E-SPAS) that produces medium spatial resolution and high temporal resolution surface rainfall and snowfall estimates over the Greater European Region including the Greater Mediterranean Basin. Currently, there are various types of experimental operational algorithm methods of differing spatiotemporal resolutions that generate global precipitation estimates. This address will first assess the current status of these methods and then recommend a methodology for the H-SAF Program that deviates somewhat from the current approach under development but one that takes advantage of existing techniques and existing software developed for the TRMM Project and available through the public domain.

  5. Allocating Railway Platforms Using A Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Clarke, M.; Hinde, C. J.; Withall, M. S.; Jackson, T. W.; Phillips, I. W.; Brown, S.; Watson, R.

    This paper describes an approach to automating railway station platform allocation. The system uses a Genetic Algorithm (GA) to find how a station's resources should be allocated. Real data is used, which needs to be transformed to be suitable for the automated system. Successful or 'fit' allocations provide a solution that meets the needs of the station schedule, including platform re-occupation and various other constraints. The system associates the train data to derive the station requirements, and the Genetic Algorithm is used to derive platform allocations. Finally, the system may be extended to take into account how parameters external to the station affect how an allocation should be applied. The system successfully allocates around 1000 trains to platforms in around 30 seconds, requiring a genome of around 1000 genes to achieve this.
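
    One plausible way to score a 'fit' allocation is sketched below: the genome assigns a platform to each train, and overlapping occupations of the same platform (including a re-occupation headway) are penalised. The headway value and conflict rule are invented for illustration, not taken from the paper.

```python
def fitness(genome, trains, headway=3):
    """genome[i] is the platform assigned to train i;
    trains[i] = (arrival, departure) in minutes after midnight.
    Fitness is minus the number of platform conflicts."""
    penalty = 0
    for i in range(len(trains)):
        for j in range(i + 1, len(trains)):
            if genome[i] == genome[j]:
                a1, d1 = trains[i]
                a2, d2 = trains[j]
                # overlap test, padded by the re-occupation headway
                if a1 < d2 + headway and a2 < d1 + headway:
                    penalty += 1
    return -penalty

trains = [(0, 10), (5, 15), (12, 20)]
print(fitness([0, 1, 0], trains))   # -1: trains 0 and 2 violate the headway
```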

  6. CAST: Contraction Algorithm for Symmetric Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-09-22

    Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.

  7. Evolutionary algorithm for metabolic pathways synthesis.

    PubMed

    Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H

    2016-06-01

    Metabolic pathway building is an active field of research, necessary to understand and manipulate the metabolism of organisms. There are different approaches, mainly based on classical search methods, to find linear sequences of reactions linking two compounds. However, an important limitation of these methods is the exponential increase of search trees when a large number of compounds and reactions is considered. Besides, such models do not take into account all substrates for each reaction during the search, leading to solutions that lack biological feasibility in many cases. This work proposes a new evolutionary algorithm that allows searching not only linear, but also branched metabolic pathways, formed by feasible reactions that relate multiple compounds simultaneously. Tests performed using several sets of reactions show that this algorithm is able to find feasible linear and branched metabolic pathways. PMID:27080162

  8. Landau-Zener type surface hopping algorithms.

    PubMed

    Belyaev, Andrey K; Lasser, Caroline; Trigila, Giulio

    2014-06-14

    A class of surface hopping algorithms is studied comparing two recent Landau-Zener (LZ) formulas for the probability of nonadiabatic transitions. One of the formulas requires a diabatic representation of the potential matrix while the other one depends only on the adiabatic potential energy surfaces. For each classical trajectory, the nonadiabatic transitions take place only when the surface gap attains a local minimum. Numerical experiments are performed with deterministically branching trajectories and with probabilistic surface hopping. The deterministic and the probabilistic approach confirm the affinity of both the LZ probabilities, as well as the good approximation of the reference solution computed by solving the Schrödinger equation via a grid based pseudo-spectral method. Visualizations of position expectations and superimposed surface hopping trajectories with reference position densities illustrate the effective dynamics of the investigated algorithms. PMID:24929375
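
    For reference, the textbook diabatic Landau-Zener probability is simple to state in code; per the abstract, the adiabatic-only variant instead uses the adiabatic gap and its second time derivative at the gap's local minimum. This is a generic sketch in atomic units, not the paper's implementation; at each gap minimum a hop would be accepted when a uniform random draw falls below this probability.

```python
import numpy as np

HBAR = 1.0  # atomic units

def lz_probability_diabatic(h12, dE_dt):
    """Classical diabatic Landau-Zener transition probability:
    P = exp(-2*pi*H12**2 / (hbar * |d(E1 - E2)/dt|)),
    with H12 the diabatic coupling and dE_dt the rate of change of the
    diabatic energy difference along the classical trajectory."""
    return np.exp(-2.0 * np.pi * h12**2 / (HBAR * abs(dE_dt)))

print(lz_probability_diabatic(h12=0.01, dE_dt=0.005))
```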

  9. Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao

    2015-07-01

    Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with a goal to minimize the global comprehensive performance index. The dimension parameters giving the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for the prototype building of such kind of parallel robots.

  10. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image, the computational cost of this method is low, and image correlation matching is therefore a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance; based on this consideration, we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. In order to research the influence of target rotation on matching precision, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm described above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relationship. Finally, the influence of noise on matching precision was researched: Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
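
    The SAD search plus parabolic sub-pixel refinement is easy to sketch. This is an illustrative implementation of the general technique, not the paper's code: the best integer offset is found exhaustively, then a parabola through the SAD values of the two horizontal neighbours gives a sub-pixel x estimate (the same 1-D fit applies in y, and a paraboloid fit combines both axes).

```python
import numpy as np

def sad_match_subpixel(image, template):
    """Best match location of `template` in `image`, sub-pixel in x."""
    H, W = image.shape
    h, w = template.shape
    sad = np.empty((H - h + 1, W - w + 1))
    for r in range(sad.shape[0]):
        for c in range(sad.shape[1]):
            sad[r, c] = np.abs(image[r:r+h, c:c+w] - template).sum()
    r0, c0 = np.unravel_index(np.argmin(sad), sad.shape)
    dx = 0.0
    if 0 < c0 < sad.shape[1] - 1:
        # vertex of the parabola through (-1, l), (0, m), (+1, rt)
        l, m, rt = sad[r0, c0-1], sad[r0, c0], sad[r0, c0+1]
        denom = l - 2*m + rt
        if denom != 0:
            dx = 0.5 * (l - rt) / denom
    return r0, c0 + dx

img = np.random.rand(64, 64)
print(sad_match_subpixel(img, img[20:28, 30:38]))   # ~ (20, 30.0)
```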

  11. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group, in the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case, this algorithm would be very helpful to more efficiently arm the airbag system, or even take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power into the most active zones of the input frame, thus saving memory and processing time resources.

  12. Fast Density Inversion Solution for Full Tensor Gravity Gradiometry Data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Wei, Xiaohui; Huang, Danian

    2016-02-01

    We modify the classical preconditioned conjugate gradient method for full tensor gravity gradiometry data. The resulting parallelized algorithm is implemented on a cluster to achieve rapid density inversions for various scenarios, overcoming the problems of computation time and memory requirements caused by too many iterations. The proposed approach is mainly based on parallel programming using the Message Passing Interface, supplemented by Open Multi-Processing. Our implementation is efficient and scalable, enabling its use with large-scale data. We consider two synthetic models and real survey data from Vinton Dome, US, and demonstrate that our solutions are reliable and feasible.
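
    For orientation, the classical (unmodified, serial) preconditioned conjugate gradient iteration that the paper starts from is shown below; M_inv is a generic preconditioner. Every operation is a matvec, dot product, or vector update, which is what makes the method natural to distribute (MPI reductions for the dots, local work for the rest). This is the textbook method, not the authors' parallel variant.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients for A x = b (A SPD)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r                 # apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```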

  13. A Hybrid Parallel Preconditioning Algorithm For CFD

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Tang, Wei-Pai; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A new hybrid preconditioning algorithm will be presented which combines the favorable attributes of incomplete lower-upper (ILU) factorization with the favorable attributes of the approximate inverse method recently advocated by numerous researchers. The quality of the preconditioner is adjustable and can be increased at the cost of additional computation while at the same time the storage required is roughly constant and approximately equal to the storage required for the original matrix. In addition, the preconditioning algorithm suggests an efficient and natural parallel implementation with reduced communication. Sample calculations will be presented for the numerical solution of multi-dimensional advection-diffusion equations. The matrix solver has also been embedded into a Newton algorithm for solving the nonlinear Euler and Navier-Stokes equations governing compressible flow. The full paper will show numerous examples in CFD to demonstrate the efficiency and robustness of the method.

  14. Assessing allowable take of migratory birds

    USGS Publications Warehouse

    Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.

    2009-01-01

    Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520–212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7–14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 × rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was <3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be
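
    The allowable-take arithmetic reported above is easy to reproduce. Note that the population bound below is back-computed from the reported result (3,533 ≈ 0.5 × 0.106 × N), since the abstract does not state the lower 60% credible bound explicitly; treat it as an assumption of the sketch.

```python
r_max = 0.106                  # point estimate of the intrinsic growth rate
harvest_rate = 0.5 * r_max     # stay on the conservative side of the yield curve
n_lower = 66_660               # lower 60% credible bound on population (assumed)
allowable_take = harvest_rate * n_lower
print(round(allowable_take))   # ~3,533 birds, matching the abstract
```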

  15. A Dynamic Construction Algorithm for the Compact Patricia Trie Using the Hierarchical Structure.

    ERIC Educational Resources Information Center

    Jung, Minsoo; Shishibori, Masami; Tanaka, Yasuhiro; Aoe, Jun-ichi

    2002-01-01

    Discussion of information retrieval focuses on the use of binary trees and how to compact them to use less memory and take less time. Explains retrieval algorithms and describes data structure and hierarchical structure. (LRW)

  16. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
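
    A minimal sketch of the hybrid idea: a standard GA whose offspring are each refined by a short local search before entering the population. Everything here (bit-string encoding, one-point crossover, hill-climbing refinement, parameter values) is a generic illustration of the hybridization pattern, not the presentation's specific model.

```python
import random

def hill_climb(x, f, steps=20):
    """Bit-flip local search used to refine each offspring (the hybrid step)."""
    for _ in range(steps):
        i = random.randrange(len(x))
        y = x[:]
        y[i] ^= 1
        if f(y) > f(x):
            x = y
    return x

def hybrid_ga(f, n_bits=32, pop_size=30, gens=50):
    """GA with selection, crossover, mutation, plus local refinement."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:
                child[random.randrange(n_bits)] ^= 1   # mutation
            children.append(hill_climb(child, f))      # local search
        pop = parents + children
    return max(pop, key=f)

best = hybrid_ga(sum)   # OneMax: the all-ones string maximizes the bit sum
print(sum(best))
```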

  17. Advanced algorithms for information science

    SciTech Connect

    Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.

  18. Algorithms for physical segregation of coal

    NASA Astrophysics Data System (ADS)

    Ganguli, Rajive

    The capability for on-line measurement of the quality characteristics of conveyed coal now enables mine operators to take advantage of the inherent heterogeneity of those streams and split them into wash and no-wash stocks. Relative to processing the entire stream, this reduces the amount of coal that must be washed at the mine and thereby reduces processing costs, recovery losses, and refuse generation levels. In this dissertation, two classes of segregation algorithms, using time series models and moving windows, are developed and demonstrated using field and simulated data. In all of the developed segregation algorithms, a "cut-off" ash value was computed for coal scanned on the running conveyor belt by the ash analyzer; it determined whether the coal was sent to the wash pile or the no-wash pile. Forecasts from time series models, at various lead times ahead, were used in one class of the developed algorithms to determine the cut-off ash levels. The time series models were updated from time to time to reflect changes in the process. Statistical Process Control (SPC) techniques were used to determine if an update was necessary at a given time. When an update was deemed necessary, optimization techniques were used to determine the next best set of model parameters. In the other class of segregation algorithms, a "few" of the immediate past observations were used to determine the cut-off ash value. These "few" observations were called the window width. The window width was kept constant in some variants of this class of algorithms. The other variants of this class were an improvement over the fixed-window-width algorithms: here, the window widths were varied rather than kept constant, with SPC used to determine the window width at any instant. Statistics of the empirical distribution and the normal distribution were used in computation of the cut-off ash value in all the variants of this class of algorithms. The good performance of the developed algorithms
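
    A hedged sketch of the fixed-window variant: the cut-off ash value is taken as a quantile of the last few analyzer readings, so roughly a fixed fraction of the dirtiest recent coal is diverted to the wash pile. The window size, quantile rule, and wash fraction are invented placeholders, not the dissertation's calibrated values.

```python
from collections import deque

def make_segregator(window=50, wash_fraction=0.4):
    """Route each ash reading to 'wash' or 'no-wash' using a moving-window
    cut-off: the (1 - wash_fraction) quantile of recent readings."""
    history = deque(maxlen=window)
    def route(ash):
        history.append(ash)
        cutoff = sorted(history)[int((1 - wash_fraction) * (len(history) - 1))]
        return "wash" if ash > cutoff else "no-wash"
    return route

route = make_segregator()
for ash in [8.2, 12.5, 9.1, 15.3, 7.8]:
    print(ash, route(ash))
```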

  19. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that fusion rules cannot be self-adaptively adjusted by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The main points of the text are summarized as follows.
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    • This text puts forward the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA. PMID:27408827

  20. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process occurring in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained, for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence is introduced that works for any type of kernel and initial condition. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To increase the speed of the algorithm, it was parallelized with the OpenMP standard, and an implementation was written to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithm. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  1. A hybrid, self-adjusting search algorithm for optimal space trajectory design

    NASA Astrophysics Data System (ADS)

    Bolle, Andrea; Circi, Christian

    2012-08-01

    The aim of the present paper is to propose a hybrid, self-adjusting search algorithm for space trajectory optimization. By taking advantage of both direct and indirect methods, the present algorithm finds the optimal solution through the introduction of some new control parameters, whose number is smaller than the number of Lagrange multipliers and whose range is bounded. The optimal solution is then determined by means of an iterative self-adjustment of the search domain occurring at runtime, without any correction by an external user. This new set of parameters can be found through a reduction of the degrees of freedom, obtained through the transversality conditions before entering the search loop. Furthermore, this process shows that the Lagrange multipliers are subject to a deep symmetry mirroring the features of the state vector. The reliability and efficiency of the algorithm are assessed through some test cases, reproducing several optimal transfer trajectories: a full three-dimensional, minimum-time Mars mission, an optimal transfer to Jupiter, and finally an injection into a circular Moon orbit.

  2. A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles

    2009-01-01

    In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions for a candidate algorithm, which satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
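
    To make the notion of a prevention band concrete, here is a hedged sketch of one simple way to compute a track-angle band for a single traffic aircraft: sweep candidate ownship headings and flag those whose predicted closest point of approach (CPA) violates minimum separation within the lookahead time. The separation D, lookahead T, and the 2-D single-intruder setup are simplifying assumptions; the paper's framework covers vertical-speed and ground-speed bands and rigorous proofs not reproduced here.

```python
import numpy as np

def track_angle_band(rel_pos, own_speed, traffic_vel, D=5.0, T=300.0):
    """Degrees of ownship track angle that lead to a predicted loss of
    separation (< D) within lookahead T. rel_pos is the traffic position
    relative to ownship; consistent units are assumed throughout."""
    banned = []
    for deg in range(360):
        th = np.radians(deg)
        v_own = own_speed * np.array([np.sin(th), np.cos(th)])
        v_rel = traffic_vel - v_own          # relative velocity of traffic
        vv = v_rel @ v_rel
        # time of closest approach, clipped to [0, T]
        t_cpa = 0.0 if vv == 0 else min(max(-(rel_pos @ v_rel) / vv, 0.0), T)
        if np.linalg.norm(rel_pos + t_cpa * v_rel) < D:
            banned.append(deg)
    return banned

print(track_angle_band(np.array([30.0, 0.0]), 0.12, np.array([-0.10, 0.0])))
```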

  3. Perspective taking in children's narratives about jealousy.

    PubMed

    Aldrich, Naomi J; Tenenbaum, Harriet R; Brooks, Patricia J; Harrison, Karine; Sines, Jennie

    2011-03-01

    This study explored relationships between perspective-taking, emotion understanding, and children's narrative abilities. Younger (23 5-/6-year-olds) and older (24 7-/8-year-olds) children generated fictional narratives, using a wordless picture book, about a frog experiencing jealousy. Children's emotion understanding was assessed through a standardized test of emotion comprehension and their ability to convey the jealousy theme of the story. Perspective-taking ability was assessed with respect to children's use of narrative evaluation (i.e., narrative coherence, mental state language, supplementary evaluative speech, use of subjective language, and placement of emotion expression). Older children scored higher than younger children on emotion comprehension and on understanding the story's complex emotional theme, including the ability to identify a rival. They were more advanced in perspective-taking abilities, and selectively used emotion expressions to highlight story episodes. Subjective perspective taking and narrative coherence were predictive of children's elaboration of the jealousy theme. Use of supplementary evaluative speech, in turn, was predictive of both subjective perspective taking and narrative coherence. PMID:21288255

  4. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  5. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section; the linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
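
    The simplest member of the linear family discussed above is the centralised sense-reversing barrier: the last arriving process flips a shared sense flag, releasing all the others. The sketch below illustrates the algorithm generically (Python threads stand in for processes; real implementations spin on shared memory with backoff, and tree-structured variants replace the single counter with a log-depth combining tree).

```python
import threading

class SenseBarrier:
    """A linear, centralised sense-reversing barrier."""
    def __init__(self, n):
        self.n, self.count, self.sense = n, n, False
        self.lock = threading.Lock()

    def wait(self):
        with self.lock:
            target = not self.sense    # the sense this episode will end with
            self.count -= 1
            if self.count == 0:        # last arrival resets and releases all
                self.count = self.n
                self.sense = target
                return
        while self.sense != target:    # spin until the last arrival flips
            pass                       # the flag (illustrative; Python's GIL
                                       # makes real spinning inadvisable)
```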

  6. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
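
    The MWUA update the paper refers to is compact enough to state directly. This generic sketch (learning rate eta and random payoffs are placeholders) shows the update: scale each option's weight by (1 + eta × payoff), then renormalise; under weak selection, the paper identifies allele-frequency dynamics with exactly this rule.

```python
import numpy as np

def mwua(payoffs, eta=0.1):
    """Multiplicative weight updates over the rows of `payoffs`
    (shape: rounds x options); returns the final weight vector."""
    T, n = payoffs.shape
    w = np.full(n, 1.0 / n)
    for t in range(T):
        w *= 1.0 + eta * payoffs[t]   # multiplicative update
        w /= w.sum()                  # renormalise to a distribution
    return w

# three options, random payoffs in [0, 1] over 50 rounds
print(mwua(np.random.rand(50, 3)))
```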

  7. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  8. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ0 + εℋ1 + ... of a small parameter ε, normalization constructs a map which converts the principal part ℋ0 into an integral of the transformed system — relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  9. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithms (GAs), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - use an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity) of the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA, without having or using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore advocate the use of a preprocessor, before applying the statistically most appropriate GA, to solve real-world optimization problems, including NP-complete ones. We also include such a GA for unconstrained function optimization problems.
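
    To make the role of these parameters concrete, the following toy real-coded GA exposes exactly the knobs such a preprocessor would set (population size, crossover and mutation probabilities, generation count). It is an illustrative sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

def ga_minimize(f, bounds, pop_size=50, p_cross=0.8, p_mut=0.1,
                generations=200, seed=0):
    """Toy real-coded GA; choosing these parameters is the preprocessor's job."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, 1))
    for _ in range(generations):
        fit = np.array([f(x) for x in pop[:, 0]])
        parents = pop[np.argsort(fit)[: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2 if rng.random() < p_cross else a.copy()
            if rng.random() < p_mut:
                child += rng.normal(0, 0.1 * (hi - lo))   # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    best = min(pop[:, 0], key=f)
    return best, f(best)

print(ga_minimize(lambda x: (x - 1.7) ** 2, (-5.0, 5.0)))
```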

  10. A Heterogeneous Nonlinear Attenuating Full-Wave Model of Ultrasound

    PubMed Central

    Pinton, Gianmarco F.; Dahl, Jeremy; Rosenzweig, Stephen; Trahey, Gregg E.

    2015-01-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). Three-dimensional solutions of the equation are verified with water tank measurements of a commercial diagnostic ultrasound transducer and are shown to be in excellent agreement in terms of the fundamental and harmonic acoustic fields and the power spectrum at the focus. The linear and nonlinear components of the algorithm are also verified independently. In the linear nonattenuating regime, solutions match results from Field II, a well established software package used in transducer modeling, to within 0.3 dB. Nonlinear plane wave propagation is shown to closely match results from the Galerkin method up to 4 times the fundamental frequency. In addition to thermoviscous attenuation we present a numerical solution of the relaxation attenuation laws that allows modeling of arbitrary frequency dependent attenuation, such as that observed in tissue. A perfectly matched layer (PML) is implemented at the boundaries with a numerical implementation that allows the PML to be used with high-order discretizations. A −78 dB reduction in the reflected amplitude is demonstrated. The numerical algorithm is used to simulate a diagnostic ultrasound pulse propagating through a histologically measured representation of human abdominal wall with spatial variation in the speed of sound, attenuation, nonlinearity, and density. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beam-forming algorithm to generate an image. The resulting harmonic image exhibits characteristic improvement in lesion boundary definition and contrast when compared with the fundamental image. We demonstrate a mechanism of harmonic image quality…
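
    The delay-and-sum beamforming step mentioned above can be sketched in a few lines. This toy single-line version assumes a hypothetical array geometry and sampling rate; it is not the simulator's code:

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, depths):
    """Beamform one axial image line at lateral position x = 0.

    rf:      (n_elements, n_samples) array of received echoes
    elem_x:  element lateral positions in meters
    fs:      sampling rate in Hz; c: speed of sound in m/s
    depths:  axial depths in meters at which to form the line
    """
    out = np.zeros(len(depths))
    for k, z in enumerate(depths):
        # two-way travel time: down to depth z, back to each element
        t = (z + np.sqrt(z**2 + elem_x**2)) / c
        idx = np.round(t * fs).astype(int)
        valid = idx < rf.shape[1]
        out[k] = rf[np.arange(rf.shape[0])[valid], idx[valid]].sum()
    return out
```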

  11. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  12. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  13. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
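
    SPA achieves its quoted accuracy with full astronomical series; the sketch below uses a far cruder textbook approximation (solar declination plus hour angle, good to roughly a degree) simply to show the quantities involved:

```python
import numpy as np

def approx_solar_zenith(lat_deg, day_of_year, solar_hour):
    """Rough solar zenith angle in degrees; NOT the +/-0.0003-degree SPA."""
    # approximate declination from day of year
    decl = -23.44 * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)      # degrees from solar noon
    lat, d, h = map(np.radians, (lat_deg, decl, hour_angle))
    cos_zen = np.sin(lat) * np.sin(d) + np.cos(lat) * np.cos(d) * np.cos(h)
    return np.degrees(np.arccos(np.clip(cos_zen, -1.0, 1.0)))

print(approx_solar_zenith(39.74, 172, 12.0))  # near-solstice noon at ~Denver
```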

  14. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  15. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  16. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  17. CORDIC algorithms for SVM FPGA implementation

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamel Rivera, Horacio; Jiménez, Matías

    2010-04-01

    Support Vector Machines are currently one of the best classification algorithms, used in a wide range of applications. The ability to extract a classification function from a limited number of learning examples while keeping the structural risk low has proved to be a clear alternative to other neural networks. However, the calculations involved in computing the kernel, and the repetition of the process for all support vectors in the classification problem, are certainly intensive, requiring time or power consumption in order to function correctly. This can be a drawback in applications with limited resources or time, so simple algorithms circumventing this problem are needed. In this paper we analyze an FPGA implementation of an SVM which uses a CORDIC algorithm to simplify the calculation of a specific kernel, greatly reducing the time and hardware requirements needed for the classification and allowing for powerful in-field portable applications. The algorithm and its calculation capabilities are described. The full SVM classifier using this algorithm is implemented in an FPGA and its in-field use assessed for high-speed, low-power classification.
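
    As a reminder of why CORDIC suits FPGAs, here is a minimal rotation-mode CORDIC in floating point (the structure is pure shift-and-add; the iteration count is illustrative). In hardware the same loop runs in fixed point:

```python
import math

def cordic_sin_cos(angle, n_iter=16):
    """Rotation-mode CORDIC: rotate (1, 0) by `angle` (radians, |angle| < pi/2)."""
    atan_table = [math.atan(2.0 ** -i) for i in range(n_iter)]
    k = 1.0
    for i in range(n_iter):                  # accumulated gain compensation
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shifts and adds
        z -= d * atan_table[i]
    return y * k, x * k                      # (sin, cos)

print(cordic_sin_cos(0.5), (math.sin(0.5), math.cos(0.5)))
```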

  18. AKITA: Application Knowledge Interface to Algorithms

    NASA Astrophysics Data System (ADS)

    Barros, Paul; Mathis, Allison; Newman, Kevin; Wilder, Steven

    2013-05-01

    We propose a methodology that uses sensor metadata and targeted preprocessing to select which algorithms, from a large suite, are most appropriate for a given data set. Rather than applying several general-purpose algorithms or requiring a human operator to oversee the analysis of the data, our method allows the most effective algorithm to be chosen automatically, conserving computational, network, and human resources. For example, the amount of video data being produced daily is far greater than can ever be analyzed. Computer vision algorithms can help sift for the relevant data, but not every algorithm is suited to every data type, nor is it efficient to run them all. A full-body detector won't work well when the camera is zoomed in, or when it is raining and all the people are occluded by foul-weather gear. However, leveraging metadata knowledge of the camera settings and the conditions under which the data was collected (generated by automatic preprocessing), face or umbrella detectors could be applied instead, increasing the likelihood of a correct reading. The Lockheed Martin AKITA™ system is a modular knowledge layer which uses knowledge of the system and environment to determine how to most efficiently and usefully process whatever data it is given.
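
    The metadata-driven dispatch idea can be caricatured in a few lines; the detector names and metadata fields below are invented for illustration, and AKITA's actual knowledge layer is far richer:

```python
def select_detectors(metadata):
    """Pick detectors appropriate to camera settings and conditions."""
    chosen = []
    if metadata.get("zoom") == "wide" and metadata.get("weather") == "clear":
        chosen.append("full_body_detector")
    else:
        chosen.append("face_detector")      # torsos likely occluded or cropped
    if metadata.get("weather") == "rain":
        chosen.append("umbrella_detector")
    return chosen

print(select_detectors({"zoom": "tight", "weather": "rain"}))
```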

  19. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.

  20. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  1. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  2. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities for performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for the use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  3. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, and to compare the subjective results with predictions from several objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. POCS and non-uniform interpolation outperformed the others in the ideal situation, while restoration-based methods were more faithful to the HR image in the real-world case, where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.
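
    Iterative back-projection, the first algorithm listed, is compact enough to sketch. This toy grayscale version uses block averaging as the downsampling model rather than an estimated blur kernel:

```python
import numpy as np

def ibp_super_resolve(lr, scale=2, n_iter=20, step=0.5):
    """Toy iterative back-projection: upsample once, then repeatedly
    back-project the low-resolution reconstruction error."""
    hr = np.kron(lr, np.ones((scale, scale)))        # initial upsampled guess
    for _ in range(n_iter):
        simulated_lr = hr.reshape(lr.shape[0], scale,
                                  lr.shape[1], scale).mean(axis=(1, 3))
        err = lr - simulated_lr                      # residual in LR domain
        hr += step * np.kron(err, np.ones((scale, scale)))
    return hr
```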

  4. [Risk-taking behaviors among young people].

    PubMed

    Le Breton, David

    2004-01-01

    Risk-taking behaviors are often an ambivalent way of calling for help from close friends or family - those who count. It is an ultimate means of finding meaning and a system of values; it is a sign of an adolescent's active resistance and attempts to re-establish his or her place in the world. It contrasts with the far more incisive risk of depression and the radical collapse of meaning. In spite of the suffering it engenders, risk-taking nevertheless has a positive side, fostering independence in adolescents and a search for reference points. It leads to a better self-image and is a means of developing one's identity. It is nonetheless painful in terms of its repercussions in terms of injuries, death or addiction. The turbulence caused by risk-taking behaviors illustrates a determination to be rid of one's suffering and to fight on so that life can, at last, be lived. PMID:15918660

  5. Caregiver Leave-Taking in Spain: Rate, Motivations, and Barriers.

    PubMed

    Rogero-García, Jesús; García-Sainz, Cristina

    2016-01-01

    This paper aims to (1) determine the rate of (full- and part-time) caregiver leave-taking in Spain, (2) identify the reasons conducive to a more intense use of this resource, and (3) ascertain the main obstacles to its use, as perceived by caregivers. All 896 people covered by the sample were engaging in paid work and had cared for dependent adults in the last 12 years. This resource, in particular the full-time alternative, was found to be a minority option. The data showed that legal, work-related, and family and gender norm issues are the four types of factors that determine the decision to take such leaves. The most significant obstacles to their use are the forfeiture of income and the risk of losing one's job. Our results suggest that income replacement during a leave would increase the take-up of these resources. Moreover, enlargement of public care services would promote the use of leave as a free choice of caregivers. PMID:26808617

  6. Double 'take' threatens the future of nursing.

    PubMed

    Scott, Graham

    2015-12-01

    Most chancellors of the exchequer give with one hand and take with the other, but George Osborne hit nursing with a double whammy last week. First he announced that nursing students starting courses from 2017 will have to pay tuition fees, then added to the woe by revealing that they would not receive a bursary either. So the next generation of newly qualified nurses will start their careers on not much more than £21,000 and have debts that will take years to clear. PMID:26639249

  7. A real-time GPU implementation of the SIFT algorithm for large-scale video analysis tasks

    NASA Astrophysics Data System (ADS)

    Fassold, Hannes; Rosner, Jakub

    2015-02-01

    The SIFT algorithm is one of the most popular feature extraction methods and is therefore widely used in all sorts of video analysis tasks like instance search and duplicate/near-duplicate detection. We present an efficient GPU implementation of the SIFT descriptor extraction algorithm using CUDA. The major steps of the algorithm are presented, and for each step we describe how to parallelize it massively and efficiently, how to take advantage of the unique capabilities of the GPU like shared memory / texture memory, and how to avoid or minimize common GPU performance pitfalls. We compare the GPU implementation with the reference CPU implementation in terms of runtime and quality, and achieve a speedup factor of approximately 3 - 5 for SD and 5 - 6 for Full HD video with respect to a multi-threaded CPU implementation, allowing us to run the SIFT descriptor extraction algorithm in real-time on SD video. Furthermore, quality tests show that the GPU implementation gives the same quality as the reference CPU implementation from the HessSIFT library. We further describe the benefits of GPU-accelerated SIFT descriptor calculation for video analysis applications such as near-duplicate video detection.
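
    For reference-style CPU behavior of the kind the GPU version is compared against, OpenCV's stock SIFT (not the authors' CUDA code, nor the HessSIFT library) can be exercised in a few lines:

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # one video frame
sift = cv2.SIFT_create()                              # OpenCV >= 4.4
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)              # (n, 128) descriptors
```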

  8. Parallel training and testing methods for complex image processing algorithms on distributed, heterogeneous, unreliable, and non-dedicated resources

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.

    2011-01-01

    Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training method is prohibitive, making it infeasible even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous and unreliable, and the proposed methods have been designed to deal with all these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational power resources, giving more work to more powerful machines, while taking their unreliable nature into account. Both methods have been tested using real applications.

  9. Talk with Your Doctor about Taking Aspirin Every Day

    MedlinePlus

    Consumer health guidance on the benefits and risks of daily aspirin use; advises talking with your doctor and using aspirin safely.

  10. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
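
    A naive baseline for this workload solves each non-negativity-constrained column independently with a standard NNLS solver; the patented combinatorial algorithm accelerates exactly this kind of problem by reorganizing calculations across observation vectors that share active sets. A sketch of the baseline:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_many(A, B):
    """Solve min ||A x - b||, x >= 0, independently for each column b of B."""
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

rng = np.random.default_rng(1)
A = rng.random((20, 5))
B = rng.random((20, 1000))          # many observation vectors
X = nnls_many(A, B)
print(X.shape, bool((X >= 0).all()))
```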

  11. A vectorized track finding and fitting algorithm in experimental high energy physics using a cyber-205

    NASA Astrophysics Data System (ADS)

    Georgiopoulos, C. H.; Goldman, J. H.; Hodous, M. F.

    1987-11-01

    We report on a fully vectorized track finding and fitting algorithm that has been used to reconstruct charged particle trajectories in a multiwire chamber system. This algorithm is currently used for data analysis of the E-711 experiment at Fermilab. The program is written for a CYBER 205, on which the average event takes 13.5 ms to process, compared to 6.7 s for an optimized scalar algorithm on a VAX-11/780.

  12. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  13. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  14. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  15. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at a set of points irregularly distributed over a plane. Algorithm is based on an interpolation scheme in which points in the plane are connected by straight-line segments to form a set of triangles. Program written in FORTRAN IV.
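
    The same triangulate-then-interpolate idea is available off the shelf today; Matplotlib's tricontour Delaunay-triangulates the scattered points and contours by linear interpolation within each triangle:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)                  # irregularly distributed points
y = rng.uniform(-2, 2, 200)
z = np.exp(-(x**2 + y**2))                   # data values at those points

plt.tricontour(x, y, z, levels=10)           # triangulate, then contour
plt.savefig("contours.png")
```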

  16. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  17. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
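
    The half-interval search at the heart of that program, rendered here in Python rather than the article's original listing:

```python
def half_interval_root(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) <= 0, "root not bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # sign change in the left half
            hi = mid
        else:                     # sign change in the right half
            lo = mid
    return (lo + hi) / 2.0

print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))  # ~2.0946
```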

  18. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
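
    Stating and solving a single linear matrix inequality is straightforward in modern convex-optimization toolboxes; as a hedged illustration, the sketch below solves a generic Lyapunov-type LMI with CVXPY (not the paper's synthesis LMI):

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # a stable test system
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -np.eye(n)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, P.value)
```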

  19. A fast full constraints unmixing method

    NASA Astrophysics Data System (ADS)

    Ye, Zhang; Wei, Ran; Wang, Qing Yan

    2012-10-01

    Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to its individual components. Solving the LSMM, namely unmixing, is essentially a constrained linear optimization problem, usually consisting of iterations along a descent direction together with a stopping criterion that terminates the algorithm. Such a criterion must be set properly in order to balance the accuracy and speed of the solution. However, the criterion in existing algorithms is too strict, which may slow convergence. In this paper, by broadening the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experiments show, in both runtime and iteration counts, that our method speeds up the convergence process at the cost of only a small decrease in result quality.
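
    One standard way to impose the full constraints (non-negative abundances that sum to one) is to fold the sum-to-one condition into an augmented NNLS problem. The sketch below shows that classic construction (the weight delta is a tuning constant), not this paper's accelerated stopping rule:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(M, y, delta=1e3):
    """Fully constrained unmixing: abundances a >= 0 with sum(a) ~= 1.

    M: (bands, endmembers) spectral library; y: (bands,) mixed-pixel spectrum."""
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])  # sum-to-one row
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

M = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]])
a = fcls_unmix(M, M @ np.array([0.3, 0.7]))
print(a, a.sum())   # recovers ~[0.3, 0.7], summing to ~1
```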

  20. Minimal full-access networks: Enumeration and characterization

    NASA Astrophysics Data System (ADS)

    Sridhar, M. A.; Raghavendra, C. S.

    1990-08-01

    Minimal full-access (MFA) networks are the class of all interconnection networks for 2N = 2^(n+1) inputs and outputs that require a minimum number of switching elements and provide full access capability. In this paper, MFA networks with 2 × 2 switching elements are studied. Graph-theoretic ideas used in developing previous results concerning uniform MFA networks [M. A. Sridhar and C. S. Raghavendra, J. Parallel Distrib. Comput. 5 (1988), 383-403] are generalized to show how to enumerate a large class of MFA networks. An exponential lower bound on the number of such networks is derived, and routing algorithms are outlined. In the process, a characterization of the set of automorphisms of the Omega network is derived. A characterization of the permutations realizable by a certain subclass of these networks is also derived. Finally, it is shown that a simple self-routing algorithm exists for the class of networks introduced here. This author's research is supported in part by NSF Grant MIP 8452003 with matching funds from AT&T and TRW, DARPA/ARO Contract DAAG 29-84-K-0066, and ONR Contract N00014-86-K-0602.

  1. New packet scheduling algorithm in wireless CDMA data networks

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Gao, Zhuo; Li, Shaoqian; Li, Lemin

    2002-08-01

    The future 3G/4G wireless communication systems will provide internet access for mobile users. Packet scheduling algorithms are essential for the QoS of diversified data traffic and the efficient utilization of radio spectrum. This paper first presents a new packet scheduling algorithm, DSTTF, for CDMA data networks, under the assumption of continuous transmission rates and scheduling intervals. Then, considering the constraints of discrete transmission rates and fixed scheduling intervals imposed by practical systems, P-DSTTF, a modified version of DSTTF, is put forward. Both scheduling algorithms take into consideration channel condition, packet size, and traffic delay bounds. Extensive simulation results demonstrate that the proposed scheduling algorithms are superior to some typical ones in current research. In addition, both static and dynamic wireless channel models of multi-level link capacity are established. These channel models capture the characteristics of the wireless channel better than the two-state Markov model widely adopted in the current literature.

  2. Genetic-algorithm-based tri-state neural networks

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Chen, Wen-Gong; Horng, Ji-Bin

    2002-09-01

    A new method, using genetic algorithms, for constructing a tri-state neural network is presented. The global searching features of genetic algorithms are adopted to help us easily find the interconnection weight matrix of a bipolar neural network. The construction method is based on biological nervous systems, which evolve the parameters encoded in genes. Taking advantage of conventional (binary) genetic algorithms, a two-level chromosome structure is proposed for training the tri-state neural network. A Matlab program was developed to simulate the network's performance. The results show that the proposed genetic algorithm method not only constructs the interconnection weight matrix accurately, but also yields better network performance.

  3. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    SciTech Connect

    Ha, Taeyoung. E-mail: tyha@math.snu.ac.kr; Shin, Changsoo. E-mail: css@model.snu.ac.kr

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  4. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in their truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and the delay of VLSI implementations). Conclusions and further directions of research end the paper.

  5. Taking Math Anxiety out of Math Instruction

    ERIC Educational Resources Information Center

    Shields, Darla J.

    2007-01-01

    To take math anxiety out of math instruction, teachers need to first know how to easily diagnose it in their students and second, how to analyze causes. Results of a recent study revealed that while students believed that their math anxiety was largely related to a lack of mathematical understanding, they often blamed their teachers for causing…

  6. Taking Perspective: Context, Culture, and History

    ERIC Educational Resources Information Center

    Suárez-Orozco, Marcelo M.; Suárez-Orozco, Carola

    2013-01-01

    There are important lessons to be learned from taking a comparative perspective in considering migration. Comparative examination of immigration experiences provides a way to glean common denominators of adaptation while considering the specificity of sending and receiving contexts and cultures. Equally important is a historical perspective that…

  7. Sexual risk taking among Taiwanese youth.

    PubMed

    Yeh, Chao-Hsing

    2002-01-01

    The purpose of this grounded theory study was to understand sexual risk-taking behavior among Taiwanese youth. Thirty-six participants were purposively selected for two to three semistructured, in-depth individual interviews. The constant comparative method and coding process were used for data analysis. The core category of preserving the fantasy of romantic innocence emerged from the initial data analysis to explain how and why young people engage in sexual risk taking. Accordingly, the subcategories of suppressing carnal knowledge and being swept away by love were developed. Suppressing carnal knowledge consisted of keeping silent, having an inadequate sexual education, and having stereotypical thinking and was identified as an explanation as to why young people cannot relate knowledge to actual practice. Being swept away by love included a false knowledge of one's sexual partner, shifting levels of intimacy, and nonacceptance of one's own sexuality. This conceptualization emphasizes the reasons why young people engage in sexual risk taking; that is, cultural reluctance to discuss sexuality openly. The implication of this theorizing is that interventions to reduce sexual risk taking should be done on an individual basis and should consider one's developmental context in order to increase one's skills in effectively discussing sex and sexuality. PMID:11841684

  8. Renew! Take a Break in Kindergarten

    ERIC Educational Resources Information Center

    Charlesworth, Rosalind

    2005-01-01

    A university child development/early childhood education professor renews her relationship with young children and with current public school teaching by spending 5 weeks in kindergarten. This article describes some highlights of her experience: the children's daily journal writing, an in-class and take-home math activity, and teaching the…

  9. Taking the Steam off Pressure Groups.

    ERIC Educational Resources Information Center

    Ledell, Marjorie A.

    1993-01-01

    School administrators must speak out when single-issue or "stealth" groups threaten to take over a school board. Administrators can help ensure that election campaigns stimulate community debate, discussion, and consensus about educational directions. They must know how to remove the cover from stealth candidates, respond to the public, and keep…

  10. Picture THIS: Taking Human Impact Seriously

    ERIC Educational Resources Information Center

    Patrick, Patricia; Patrick, Tammy

    2010-01-01

    Unfortunately, middle school students often view human impact as an abstract idea over which they have no control and do not see themselves as contributing to the Earth's environmental decline. How better to uncover students' ideas concerning human impact in their local community than to have them take photographs. With this objective in mind, the…

  11. Take Pride in America Educational Leader's Guide.

    ERIC Educational Resources Information Center

    Sledge, Janet H., Comp.

    The Take Pride in America (TPIA) school program encourages volunteer stewardship programs to help protect, enhance, and manage public lands such as school sites, forests, parks, water reservoirs, historical sites, fish and wildlife areas, public nature preserves, and wilderness areas in the United States. From this program an educational guide and…

  12. Surfing the Net: Test-Taking Skills.

    ERIC Educational Resources Information Center

    Berger, Sandra

    2003-01-01

    This article discusses the four elements to learning successful test taking: time strategies, error avoidance strategies, guessing strategies, and deductive reasoning strategies. Test tricks and gimmicks are described and a list of Web sites is provided that includes resources for identifying learning strategies and for accessing study guides. (CR)

  13. Role Taking in Childhood: Some Methodological Considerations

    ERIC Educational Resources Information Center

    Rubin, Kenneth H.

    1978-01-01

    Examines the convergent and discriminant validity of six widely used measures of role-taking skill. The Borke; Rothenberg; Miller, Kessel, and Flavell; Chandler; DeVries; and Glucksberg and Krauss tasks were administered to children in preschool and grades 1, 3, and 5. (Author/JMB)

  14. Taking Inventory. Student's Manual and Instructor's Manual.

    ERIC Educational Resources Information Center

    Hamer, Jean

    Supporting performance objective 56 of the V-TECS (Vocational-Technical Education Consortium of States) Secretarial Catalog, both a set of student materials and an instructor's manual on taking inventory are included in this packet. (The packet is the first in a set of nine on performing computational clerical activities--CE 016 951-959.) The…

  15. Disentangling Adolescent Pathways of Sexual Risk Taking

    ERIC Educational Resources Information Center

    Brookmeyer, Kathryn A.; Henrich, Christopher C.

    2009-01-01

    Using data from the National Longitudinal Survey of Youth, the authors aimed to describe the pathways of risk within sexual risk taking, alcohol use, and delinquency, and then identify how the trajectory of sexual risk is linked to alcohol use and delinquency. Risk trajectories were measured with adolescents aged 15-24 years (N = 1,778). Using…

  16. Teachable Moment: Google Earth Takes Us There

    ERIC Educational Resources Information Center

    Williams, Ann; Davinroy, Thomas C.

    2015-01-01

    In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…

  17. Low Performers Found Unready to Take Algebra

    ERIC Educational Resources Information Center

    Cavanagh, Sean

    2008-01-01

    As state and school leaders across the country push to have more students take algebra in 8th grade, a new study argues that middle schoolers struggling the most in math are being enrolled in that course despite being woefully unprepared. "The Misplaced Math Student: Lost in Eighth Grade Algebra," scheduled for release by the Brookings Institution…

  18. 37 CFR 41.157 - Taking testimony.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... expected to be used. A party requesting cross-examination testimony of more than one witness may choose the... testimony and must list: (i) The time and place of the deposition, (ii) The name and address of the witness... taking testimony. (1) Each witness before giving a deposition shall be duly sworn according to law by...

  19. Taking Your Show on the Road.

    ERIC Educational Resources Information Center

    Buchanan, Suzanne

    1998-01-01

    Describes a local interpretation program in New England that uses a motorcoach to take visitors on a day-long tour of several sites in the region. Explains how to create similar programs elsewhere and gives advice for preparing for the tour, orienting tour members, interpreting on the road, pacing tours over a day, and stopping at tour sites. (PVD)

  20. Take Our Children to Work Day

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Hundreds of children participated in the annual Take Our Children to Work Day at Stennis Space Center on July 29. During the day, children of Stennis employees received a tour of facilities and took part in various activities, including demonstrations in cryogenics and robotics.

  1. The difficulties of old people taking drugs.

    PubMed

    Atkinson, L; Gibson, I I; Andrews, J

    1977-08-01

    There is considerable interest in the problems of the elderly taking drugs correctly and appropriately. A recent survey (Parkin et al. 1976) showed that these problems that have long been known in geriatric practice have now been noted by general physicians. This review was undertaken when an occupational therapist in a geriatric unit team noted that, although patients and their relatives were taught methods of dressing, toileting, shaving, bathing, eating, walking, transferring to a chair, wheelchair mobility and communication by the occupational therapist, physiotherapist and speech therapist, no advice or teaching was given concerning the accurate taking of the drugs prescribed. The results of a detailed investigation are reported elsewhere (Atkinson, Gibson & Andrews 1978). Repeatedly, patients ready for discharge were handed a batch of drugs by a nurse at the last possible moment, even while sitting by their luggage awaiting the ambulance. Following this, special attention was paid to problems such as intellectual impairment, loss of memory and confusion, poor sight, inability to handle containers, failure to take drugs and lack of patient-education. During ward rounds, particularly when a geriatric health visitor was present, attention was drawn to special topics such as the number of patients who inadvertently kill themselves and the numbers needing readmission due to failure to take drugs, overdosage or underdosage or mixing of drugs (Wade 1972). Ferguson Anderson's comment (1974) that 7.15% of hospital admissions are due to drug reactions was also noted. PMID:899964

  2. Measuring Effectiveness: What Will It Take?

    ERIC Educational Resources Information Center

    Stumbo, Circe; McWalters, Peter

    2011-01-01

    Federal policy now focuses on teacher "effectiveness" rather than teacher "quality" as its central policy concern. Rather than measuring inputs, the new focus looks to measure the outcomes of a teacher's work--that is, the extent to which the educator has met crucial student needs, such as improved student achievement. As states move to take a…

  3. Distance Education: Taking Classes to the Students.

    ERIC Educational Resources Information Center

    Collins, Timothy; Dewees, Sarah

    2001-01-01

    Technological advances have equipped educational institutions with the capability to take classes to the student. Higher education institutions throughout the South are upgrading existing wide-area networks connecting buildings and campuses to create statewide "backbones" that will serve primary and secondary schools, libraries, offices, and…

  4. Take-off and propeller thrust

    NASA Technical Reports Server (NTRS)

    Schrenk, Martin

    1933-01-01

    As a result of previous reports, it was endeavored to obtain, along with the truest possible comprehension of the course of thrust, a complete, simple and clear formula for the whole take-off distance up to a certain altitude, which shall give the correct relative weight to all the factors.

  5. Taking It Online, and Making It Pay.

    ERIC Educational Resources Information Center

    Online & CD-ROM Review, 1996

    1996-01-01

    Discusses taking content online and payment models online based on sessions at the 1996 Internet World International conference in London (England). Highlights include publishers' decisions to reproduce materials on the World Wide Web; designing Web sites; guidelines for online content; online pricing; and the pros and cons of charging online…

  6. Kenojuak Ashevak: "Young Owl Takes a Ride."

    ERIC Educational Resources Information Center

    Schwartz, Bernard

    1988-01-01

    Describes a lesson plan used to introduce K-3 students to a Canadian Inuit artist, to the personal and cultural context of the artwork, and to a simple printmaking technique. Includes background information on the artist, instructional strategies, and a print of the artist's "Young Owl Takes a Ride." (GEA)

  7. Promoting Knowledge Transfer with Electronic Note Taking

    ERIC Educational Resources Information Center

    Katayama, Andrew D.; Shambaugh, R. Neal; Doctor, Tasneem

    2005-01-01

    We investigated the differences between (a) copying and pasting text versus typed note-taking methods of constructing study notes simultaneously with (b) vertically scaffolded versus horizontally scaffold notes on knowledge transfer. Forty-seven undergraduate educational psychology students participated. Materials included 2 electronic…

  8. Sleepy Teens Are Risk-Taking Teens

    MedlinePlus

    Sleep deprivation leads to unsafe behaviors, CDC researchers report. In a survey of more than 50,000 students, researchers found that teens who got seven hours of sleep or less …

  9. Teen Risk-Taking: A Statistical Portrait.

    ERIC Educational Resources Information Center

    Lindberg, Laura Duberstein; Boggess, Scott; Porter, Laura; Williams, Sean

    This report provides a statistical portrait of teen participation in 10 of the most prevalent risk behaviors. It focuses on the overall participation in each behavior and in multiple risk taking. The booklet presents the overall incidence and patterns of teen involvement in the following risk behaviors: (1) regular alcohol use; (2) regular tobacco…

  10. String theorist takes over as Lucasian Professor

    NASA Astrophysics Data System (ADS)

    Banks, Michael

    2009-11-01

    String theorist Michael Green will be the next Lucasian Professor of Mathematics at Cambridge University. Green, 63, will succeed Stephen Hawking, who held the chair from 1980 before retiring last month at the age of 67 and taking up a distinguished research chair at the Perimeter Institute for Theoretical Physics in Canada (see above).

  11. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  12. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  13. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t subject to f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that sorts the functions into three classes: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
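
    The epigraph representation above maps directly onto a general constrained solver; as an illustrative sketch (SciPy's SLSQP on a tiny two-function minimax, not the article's active-set/trust-region implementation):

```python
import numpy as np
from scipy.optimize import minimize

fs = [lambda x: (x[0] - 1) ** 2, lambda x: (x[0] + 1) ** 2]  # the f_i(x)

# Variables z = (x, t); minimize t subject to f_i(x) - t <= 0,
# written as the "ineq" form t - f_i(x) >= 0 expected by SciPy.
cons = [{"type": "ineq", "fun": (lambda z, f=f: z[1] - f(z[:1]))} for f in fs]
res = minimize(lambda z: z[1], x0=[0.5, 5.0], constraints=cons, method="SLSQP")
print(res.x)   # x ~ 0, t ~ 1: the minimax point of the two parabolas
```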

  14. Academic Journal Embargoes and Full Text Databases.

    ERIC Educational Resources Information Center

    Brooks, Sam

    2003-01-01

    Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…

  15. Unquenched Studies Using the Truncated Determinant Algorithm

    SciTech Connect

    A. Duncan, E. Eichten and H. Thacker

    2001-11-29

    A truncated determinant algorithm is used to study the physical effects of the quark eigenmodes associated with eigenvalues below 420 MeV. This initial high statistics study focuses on coarse (6^4) lattices (with O(a^2) improved gauge action), light internal quark masses and large physical volumes. Three features of full QCD are examined: topological charge distributions, string breaking as observed in the static energy and the eta prime mass.

  16. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that has allowed connections with approximate algorithms from statistical physics; IJGP is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  17. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
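
    The pocket algorithm mentioned above runs ordinary perceptron updates while keeping the best weight vector seen so far. A minimal sketch on synthetic two-class data (data, run length, and update schedule assumed):

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic nearly separable two-class data with a bias column appended
      X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
      X = np.hstack([X, np.ones((100, 1))])
      y = np.hstack([-np.ones(50), np.ones(50)])

      def accuracy(w):
          return np.mean(np.sign(X @ w) == y)

      w = np.zeros(3)                        # running perceptron weights
      pocket, pocket_acc = w.copy(), accuracy(w)
      for it in range(1000):
          i = rng.integers(len(y))
          if np.sign(X[i] @ w) != y[i]:      # perceptron update on a mistake
              w = w + y[i] * X[i]
              acc = accuracy(w)
              if acc > pocket_acc:           # "pocket" the best weights so far
                  pocket, pocket_acc = w.copy(), acc
      print("pocket accuracy: %.2f" % pocket_acc)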

  18. Robust estimation by expectation maximization algorithm

    NASA Astrophysics Data System (ADS)

    Koch, Karl Rudolf

    2013-02-01

    A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining components gives the distribution for an outlier. Missing data are introduced to deliver the information as to which observation belongs to which component. The unknown location parameters and the unknown scale parameter of the linear model are estimated by the EM algorithm, which is iteratively applied. The E (expectation) step of the algorithm determines the expected value of the likelihood function given the observations and the current estimate of the unknown parameters, while the M (maximization) step computes new estimates by maximizing the expectation of the likelihood function. In comparison to Huber's M-estimation, the EM algorithm does not only identify outliers by introducing small weights for large residuals but also estimates the outliers. They can be corrected by the parameters of the linear model freed from the distortions by gross errors. Monte Carlo methods with random variates from the normal distribution then give expectations, variances, covariances and confidence regions for functions of the parameters estimated by taking care of the outliers. The method is demonstrated by the analysis of measurements with gross errors of a laser scanner.
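
    A simplified two-component version of the scheme described above (one inlier component plus a single broad outlier component, rather than the paper's full mixture) can be sketched as follows; the simulated data and the outlier scale are assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      # Simulated linear model y = A beta + noise, with a few gross errors
      n = 100
      A = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
      beta_true = np.array([2.0, -1.0])
      y = A @ beta_true + rng.normal(0, 0.1, n)
      y[::17] += 3.0                          # inject outliers

      pi_in, sigma = 0.9, 0.1                 # initial mixing weight and scale
      sigma_out = 10 * sigma                  # broad outlier component (assumed)
      beta = np.linalg.lstsq(A, y, rcond=None)[0]

      def normal_pdf(r, s):
          return np.exp(-0.5 * (r / s)**2) / (np.sqrt(2 * np.pi) * s)

      for _ in range(50):
          r = y - A @ beta
          # E step: responsibility of the inlier component per observation
          p_in = pi_in * normal_pdf(r, sigma)
          p_out = (1 - pi_in) * normal_pdf(r, sigma_out)
          w = p_in / (p_in + p_out)
          # M step: weighted least squares, then update scale and mixing weight
          W = np.diag(w)
          beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
          sigma = np.sqrt(np.sum(w * r**2) / np.sum(w))
          pi_in = np.mean(w)

      print("estimated beta:", beta)          # outliers are strongly downweighted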

  19. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high quality image from a single low resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. The larger weights are applied to the higher frequency details while the low frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements. The proposed edge boosting algorithm demonstrates high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.

  20. Two fast algorithms of image inpainting

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Hou, Zhengxin; Wang, Chengyou

    2008-03-01

    Digital image inpainting has been an interesting new research topic in multimedia computing and image processing since 2000. This talk covers the most recent contributions in digital image inpainting and image completion, as well as concepts in video inpainting. Image inpainting refers to reconstructing corrupted regions where the data have been destroyed. A primary class of techniques builds up a partial differential equation (PDE), treats it as a boundary-value problem, and solves it by an iterative method. The most representative and creative of the inpainting algorithms is the Bertalmio-Sapiro-Caselles-Ballester (BSCB) model. After summarizing the development of image inpainting techniques, this paper focuses on improving the BSCB model and proposes two algorithms to address two drawbacks of that model. The first is selective adaptive interpolation, which develops the traditional adaptive interpolation algorithm by introducing a priority value. Besides being much faster than the BSCB model, it improves the inpainting results. The second takes selective adaptive interpolation as a preprocessing step, further reducing the operation time and improving the inpainting quality.
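
    Not the BSCB transport equation itself, but the same PDE spirit can be conveyed with a minimal harmonic (diffusion-based) inpainting loop that iteratively replaces damaged pixels with the average of their neighbors; the toy image and mask are invented:

      import numpy as np

      def diffusion_inpaint(img, mask, n_iter=500):
          """Fill pixels where mask is True by iterated neighbor averaging
          (harmonic inpainting); known pixels stay fixed every sweep."""
          out = img.copy()
          out[mask] = out[~mask].mean()        # crude initialization
          for _ in range(n_iter):
              avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                            np.roll(out, 1, 1) + np.roll(out, -1, 1))
              out[mask] = avg[mask]            # update only the damaged region
          return out

      # Toy example: a smooth ramp with a square hole knocked out
      img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
      mask = np.zeros_like(img, dtype=bool)
      mask[24:40, 24:40] = True
      restored = diffusion_inpaint(np.where(mask, 0.0, img), mask)
      print("max error in hole: %.3f" % np.abs(restored - img)[mask].max())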

  1. Traffic sharing algorithms for hybrid mobile networks

    NASA Technical Reports Server (NTRS)

    Arcand, S.; Murthy, K. M. S.; Hafez, R.

    1995-01-01

    In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large size satellite footprint (supercell) overlays a large number of smaller size, contiguous terrestrial cells. We assume that the users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to the total number of terminals is defined as gamma. It is assumed that the call assignments to, and handovers between, terrestrial cells and satellite supercells take place in a dynamic fashion when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms to manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma) given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well for a wide range of gamma.
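
    The blocking probabilities traded off against carried load (erl/cell) above are classically computed with the Erlang B recursion; a small sketch with an assumed 20-channel cell and a 2% blocking target:

      def erlang_b(traffic_erl, channels):
          """Blocking probability via the numerically stable Erlang B recursion."""
          b = 1.0
          for m in range(1, channels + 1):
              b = traffic_erl * b / (m + traffic_erl * b)
          return b

      # Example: find the load a 20-channel cell carries at 2% blocking (assumed target)
      load = 0.0
      while erlang_b(load + 0.1, 20) < 0.02:
          load += 0.1
      print("approx. capacity: %.1f erl/cell at 2%% blocking" % load)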

  2. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography

    NASA Astrophysics Data System (ADS)

    Allner, S.; Koehler, T.; Fehringer, A.; Birnbacher, L.; Willner, M.; Pfeiffer, F.; Noël, P. B.

    2016-05-01

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
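
    The core idea of the generalization - replacing the scalar intensity difference in the range kernel with a Mahalanobis distance under the joint noise covariance of the aligned images - can be sketched as follows. This is a naive two-image illustration, not the authors' three-dimensional implementation; the window size, kernel widths, and test covariance are assumed:

      import numpy as np

      def cov_bilateral(img_a, img_b, cov, radius=3, sigma_s=2.0):
          """Sketch of a bilateral filter whose range kernel uses the Mahalanobis
          distance of paired pixel differences under the joint noise covariance
          'cov' (2x2 here); filters img_a using information from both images."""
          inv_cov = np.linalg.inv(cov)
          H, W = img_a.shape
          out = np.zeros_like(img_a)
          norm = np.zeros_like(img_a)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  shifted_a = np.roll(np.roll(img_a, dy, 0), dx, 1)
                  shifted_b = np.roll(np.roll(img_b, dy, 0), dx, 1)
                  d = np.stack([img_a - shifted_a, img_b - shifted_b])  # (2,H,W)
                  maha = (d * (inv_cov @ d.reshape(2, -1)).reshape(2, H, W)).sum(0)
                  w = np.exp(-0.5 * maha) * np.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2))
                  out += w * shifted_a
                  norm += w
          return out / norm

      # Toy usage: two copies of an edge image with correlated noise
      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[:, 32:] = 1.0
      cov = np.array([[0.02, 0.01], [0.01, 0.02]])
      noise = rng.multivariate_normal([0, 0], cov, size=(64, 64)).transpose(2, 0, 1)
      a, b = truth + noise[0], truth + noise[1]
      print("residual std: %.3f" % (cov_bilateral(a, b, cov) - truth).std())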

  3. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields. PMID:27100408

  4. The challenges of implementing and testing two signal processing algorithms for high rep-rate Coherent Doppler Lidar for wind sensing

    NASA Astrophysics Data System (ADS)

    Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.

    2015-05-01

    In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) calculation on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra for 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation for 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires tradeoffs between retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these tradeoffs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained through these two embedded programming techniques are also presented.
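
    Ignoring the FPGA fixed-point tradeoffs the paper is actually about, the first algorithm (gate, FFT, square modulus, accumulate) is straightforward to state offline in NumPy; the sizes and test signal are assumed:

      import numpy as np

      def accumulate_gated_spectra(pulses, gate_len):
          """Split each return signal into range gates, FFT each gate, square the
          modulus, and accumulate the power spectra over all pulses.
          pulses: (n_pulses, n_samples) array; n_samples divisible by gate_len."""
          n_pulses, n_samples = pulses.shape
          n_gates = n_samples // gate_len
          gated = pulses.reshape(n_pulses, n_gates, gate_len)
          spectra = np.abs(np.fft.rfft(gated, axis=2))**2  # per-gate power spectrum
          return spectra.sum(axis=0)                       # accumulate over pulses

      # Toy run: 10k pulses, 64-sample gates, a weak Doppler tone buried in noise
      rng = np.random.default_rng(0)
      t = np.arange(1024)
      sig = 0.1 * np.sin(2 * np.pi * 0.19 * t)             # weak return
      pulses = sig + rng.normal(0, 1.0, (10_000, 1024))
      acc = accumulate_gated_spectra(pulses, 64)
      print("peak bin per gate:", acc.argmax(axis=1))      # ~bin 12 of 33 (0.19*64)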

  5. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  6. Sensitive algorithm for multiple-excitation-wavelength resonance Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Yellampalle, Balakishore; Wu, Hai-Shan; McCormick, William; Sluch, Mikhail; Martin, Robert; Ice, Robert; Lemoff, Brian E.

    2014-05-01

    Raman spectroscopy is a widely used spectroscopic technique with a number of applications. During the past few years, we explored the use of simultaneous multiple-excitation-wavelengths (MEW) in resonance Raman spectroscopy. This approach takes advantage of Raman band intensity variations across the Resonance Raman spectra obtained from two or more excitation wavelengths. Amplitude variations occur between corresponding Raman bands in Resonance Raman spectra due to complex interplay of resonant enhancement, self-absorption and laser penetration depth. We have developed a very sensitive algorithm to estimate concentration of an analyte from spectra obtained using the MEW technique. The algorithm uses correlations and least-square minimization approach to calculate an estimate for the concentration. For two or more excitation wavelengths, measured spectra were stacked in a two dimensional matrix. In a simple realization of the algorithm, we approximated peaks in the ideal library spectra as triangles. In this work, we present the performance of the algorithm with measurements obtained from a dual-excitation-wavelength Resonance Raman sensor. The novel sensor, developed at WVHTCF, detects explosives from a standoff distance. The algorithm was able to detect explosives with very high sensitivity even at signal-to-noise ratios as low as ~1.6. Receiver operating characteristics calculated using the algorithm showed a clear benefit in using the dual-excitation-wavelength technique over single-excitation-wavelength techniques. Variants of the algorithm that add more weight to amplitude variation information showed improved specificity to closely resembling spectra.

  7. Towards Possible Non-Extensive Thermodynamics of Algorithmic Processing — Statistical Mechanics of Insertion Sort Algorithm

    NASA Astrophysics Data System (ADS)

    Strzałka, Dominik; Grabowski, Franciszek

    Tsallis entropy, introduced in 1988, is considered to open new possibilities for constructing a generalized thermodynamical basis for statistical physics, expanding classical Boltzmann-Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors would like to present a new view on the problem of analyzing the computational complexity of algorithms, using the example of a possible thermodynamical basis of the sorting process and its dynamical behavior. A classical approach to the analysis of the amount of resources needed for algorithmic computation is based on the assumption that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered, to minimize the dependency on specific instances. Meanwhile, the article shows that this process can be governed by long-range dependencies with a thermodynamical basis expressed by the specific shapes of probability distributions. The classical approach does not allow one to describe all properties of processes (especially the dynamical behavior of algorithms) that can appear during computer algorithmic processing, even if one takes into account the average case analysis in computational complexity. The importance of this problem is still neglected, especially if one realizes two important things. The first: nowadays computer systems also work in an interactive mode, and for a better understanding of their possible behavior one needs a proper thermodynamical basis. The second: computers from a mathematical point of view are Turing machines, but in reality they have physical implementations that need energy for processing, and the problem of entropy production appears. That is why a thermodynamical analysis of the possible behavior of the simple insertion sort algorithm is given here.
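
    The dynamical behavior discussed above can be probed empirically by instrumenting insertion sort and recording the distribution of work over many random inputs; a minimal sketch:

      import random

      def insertion_sort_ops(a):
          """Sort a copy of 'a' and count element comparisons and shifts."""
          a = list(a)
          comps = shifts = 0
          for i in range(1, len(a)):
              key, j = a[i], i - 1
              while j >= 0:
                  comps += 1
                  if a[j] > key:
                      a[j + 1] = a[j]; shifts += 1; j -= 1
                  else:
                      break
              a[j + 1] = key
          return comps, shifts

      # Empirical distribution of work over many random inputs of size 100
      random.seed(0)
      samples = [insertion_sort_ops(random.sample(range(1000), 100))[0]
                 for _ in range(2000)]
      mean = sum(samples) / len(samples)
      print("mean comparisons: %.0f (vs n^2/4 = 2500)" % mean)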

  8. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
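
    As a heavily simplified caricature (not the parallel sparse implementation the paper details), the walker dynamics of FCIQMC - spawning, death/cloning against a shift, implicit annihilation, and population control - can be mimicked on a small dense matrix; every parameter here is an assumption chosen only to keep the toy stable:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 20
      H = rng.normal(0, 0.1, (n, n)); H = 0.5 * (H + H.T)
      np.fill_diagonal(H, np.sort(rng.normal(0, 1, n)))   # toy "Hamiltonian"

      walkers = np.zeros(n, dtype=int); walkers[0] = 50   # signed walker counts
      dt, target = 0.05, 300
      shift = H[0, 0]                                     # initial energy shift

      for step in range(2000):
          new = walkers.copy()
          for i in np.flatnonzero(walkers):
              n_w, s_w = abs(walkers[i]), int(np.sign(walkers[i]))
              for _ in range(n_w):
                  # spawning: pick a random connected site j != i
                  j = rng.integers(n - 1); j += (j >= i)
                  if rng.random() < dt * abs(H[i, j]) * (n - 1):
                      new[j] -= s_w * int(np.sign(H[i, j]))
                  # death (or cloning) against the diagonal minus the shift
                  p_d = dt * (H[i, i] - shift)
                  if rng.random() < abs(p_d):
                      new[i] += -s_w if p_d > 0 else s_w
          walkers = new                                   # annihilation is implicit
          if (step + 1) % 10 == 0:                        # population-control shift
              pop = max(np.abs(walkers).sum(), 1)
              shift -= (0.1 / (10 * dt)) * np.log(pop / target)

      print("final shift %.3f vs exact ground state %.3f"
            % (shift, np.linalg.eigvalsh(H).min()))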

  9. 76 FR 68973 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-07

    ...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Beaufort Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by......

  10. 78 FR 12541 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ...NMFS received an application from ConocoPhillips Company (COP) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to COP to take, by......

  11. 75 FR 25729 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by......

  12. 75 FR 20481 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    ...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Beaufort Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by......

  13. 76 FR 69957 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-09

    ...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by......

  14. 78 FR 37209 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration RIN 0648-XC564 Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to Marine Seismic Survey in the Beaufort Sea,...

  15. 78 FR 64918 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... to PISCO to take marine mammals incidental to these same proposed activities (77 FR 72327, December 5... National Oceanic and Atmospheric Administration RIN 0648-XC893 Takes of Marine Mammals Incidental to... Atmospheric Administration (NOAA), Commerce. ACTION: Notice; proposed incidental harassment...

  16. 78 FR 77433 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... to PISCO to take marine mammals incidental to these same proposed activities (77 FR 72327, December 5... National Oceanic and Atmospheric Administration RIN 0648-XC893 Takes of Marine Mammals Incidental to... Atmospheric Administration (NOAA), Commerce. ACTION: Notice; proposed incidental harassment...

  17. Evaluating Risk Taking Propensity as a Predictor of the Outcome Dimensions of Medication History Taking.

    ERIC Educational Resources Information Center

    Lively, Buford T.

    1983-01-01

    Senior pharmacy students' level of risk-taking as a personality trait was compared with their performance in medication history interviews in an ambulatory medicine clinic. Effective and efficient interviewers were significantly higher in risk-taking propensity than others. (MSE)

  18. 76 FR 58473 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-21

    ...NMFS received an application from Apache Alaska Corporation (Apache) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to a proposed 3D seismic survey in Cook Inlet, Alaska, between November 2011 and November 2012. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS requests comments on its proposal to issue an IHA to Apache to take, by......

  19. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    NASA Astrophysics Data System (ADS)

    Lehe, Rémi; Kirchen, Manuel; Andriyash, Igor A.; Godfrey, Brendan B.; Vay, Jean-Luc

    2016-06-01

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm - including the zero-order numerical Cherenkov effect.

  20. Determinant based configuration interaction algorithms for complete and restricted configuration interaction spaces

    NASA Astrophysics Data System (ADS)

    Olsen, Jeppe; Roos, Björn O.; Jørgensen, Poul; Jensen, Hans Jørgen Aa.

    1988-08-01

    A restricted active space (RAS) wave function is introduced, which encompasses many commonly used restricted CI expansions. A highly vectorized algorithm is developed for full CI and other RAS calculations. The algorithm is based on Slater determinants expressed as products of alpha strings and beta strings and lends itself to a matrix indexing C(Iα, Iβ) of the CI vector. The major features are: (1) The intermediate summation over determinants is replaced by two intermediate summations over strings, the number of which is only the square root of the number of determinants. (2) Intermediate summations over strings outside the RAS CI space are avoided, and RAS calculations are therefore almost as efficient as full CI calculations with the same number of determinants. (3) An additional simplification is devised for MS = 0 states, halving the number of operations. For a case with all single and double replacements out of 415 206 Slater determinants, yielding 1 136 838 Slater determinants, each CI iteration takes 161 s on an IBM 3090/150(VF).

  1. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
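
    Strict-priority selection over oversubscribed resources, as described above, can be sketched as a single greedy pass in priority order that admits a goal only if its resource demands still fit; the data model and example goals are invented:

      from dataclasses import dataclass

      @dataclass
      class Goal:
          name: str
          priority: int          # lower number = higher priority
          demand: dict           # resource name -> amount required

      def select_goals(goals, capacity):
          """Greedy strict-priority selection: a goal is admitted iff all of its
          resource demands still fit; a lower-priority goal can never displace
          an admitted higher-priority one."""
          remaining = dict(capacity)
          admitted = []
          for g in sorted(goals, key=lambda g: g.priority):
              if all(remaining.get(r, 0) >= amt for r, amt in g.demand.items()):
                  for r, amt in g.demand.items():
                      remaining[r] -= amt
                  admitted.append(g.name)
          return admitted

      goals = [Goal("downlink", 1, {"power": 3, "time": 2}),
               Goal("image_A", 2, {"power": 4, "time": 3}),
               Goal("image_B", 3, {"power": 2, "time": 2})]
      print(select_goals(goals, {"power": 6, "time": 5}))  # ['downlink', 'image_B']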

  2. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  3. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  4. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2004-01-01

    Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  5. Taking an oil and gas company public

    SciTech Connect

    Wesneski, L.E.

    1981-01-01

    The increasing capital requirements in the energy industry have led a number of private, independent oil and gas companies to turn to the public market as a source for funding future growth. The principal asset of many of these companies, and often the primary determinant of value in the public market, is the firm's oil and gas reserves. Estimates of hydrocarbon reserves are usually based on studies and reports prepared by independent petroleum engineers. Although they play a key role in taking a company public, petroleum engineers usually do not get exposed to the entire process. The purpose of this paper is to provide an overview of the process of taking a company public and to clarify the petroleum engineer's role in the process.

  6. Fast thought speed induces risk taking.

    PubMed

    Chandler, Jesse J; Pronin, Emily

    2012-04-01

    In two experiments, we tested for a causal link between thought speed and risk taking. In Experiment 1, we manipulated thought speed by presenting neutral-content text at either a fast or a slow pace and having participants read the text aloud. In Experiment 2, we manipulated thought speed by presenting fast-, medium-, or slow-paced movie clips that contained similar content. Participants who were induced to think more quickly took more risks with actual money in Experiment 1 and reported greater intentions to engage in real-world risky behaviors, such as unprotected sex and illegal drug use, in Experiment 2. These experiments provide evidence that faster thinking induces greater risk taking. PMID:22395129

  7. Risk-taking and the media.

    PubMed

    Fischer, Peter; Vingilis, Evelyn; Greitemeyer, Tobias; Vogrincic, Claudia

    2011-05-01

    In recent years, media formats with risk-glorifying content, such as video games that simulate illegal street racing ("bang and crash" games), films about extreme sports, and risky stunts have emerged as top sellers of the media industry. A variety of recent studies conducted by several researchers revealed that exposure to risk-glorifying media content (e.g., video games that simulate reckless driving, smoking and drinking in movies, or depictions that glorify extreme sports) increases the likelihood that recipients will show increased levels of risk-taking inclinations and behaviors. The present article (1) reviews the latest research on the detrimental impact of risk-glorifying media on risk-taking inclinations (cognitions, emotions, behaviors), (2) puts these findings in the theoretical context of recent sociocognitive models on media effects, and (3) makes suggestions to science and policymakers on how to deal with these effects in the future. PMID:21155859

  8. Individual welfare maximization in electricity markets including consumer and full transmission system modeling

    NASA Astrophysics Data System (ADS)

    Weber, James Daniel

    1999-11-01

    This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is

  9. Hybrid optimization methods for Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Datta, D.; Sen, M. K.

    2014-12-01

    FWI is slowly becoming the mainstream method to estimate velocity models of the subsurface from seismic data. Typically it makes use of a gradient descent approach in which a model update is computed by back propagating the residual seismograms and cross correlating them with the forward propagating wavefields at each grid point in the subsurface model. FWI is a local optimization technique, which requires the starting model to be very close to the true model. Because the objective function is multimodal with many local minima, the requirement of a good starting model becomes essential. A starting model is generated using travel time tomography. We propose two hybrid FWI algorithms: one generates a very good starting model for a conventional FWI, and the other works with a population of models and uses gradient information from multiple starting locations to guide the search. The first approach uses a sparse parameterization of model space with non-oscillatory splines, whose coefficients are estimated using an optimization algorithm like very fast simulated annealing (VFSA) by minimizing the misfit between the observed and synthetic data. The estimated velocity model is then used as a starting model for gradient-based FWI. This is done in the shot domain by converting the end-on marine geometry to a split spread geometry using the principle of reciprocity. The second approach uses an alternative global optimization algorithm called particle swarm optimization (PSO), in which the PSO update rules are applied. However, we employ a new gradient-guided PSO that exploits the gradient information as well. This approach avoids local minima and converges faster than a conventional PSO. We demonstrate our methods with application to 2D marine data sets from offshore India. Each line comprises over 1000 shots; our hybrid methods produce geologically meaningful velocity models fairly rapidly on a GPU cluster. We show that starting with the hybrid model gives a much
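
    A minimal PSO with the standard update rules on a toy objective is sketched below; the paper's gradient-guided variant would add a gradient term to the velocity update, which is not reproduced here:

      import numpy as np

      rng = np.random.default_rng(0)

      def rosenbrock(x):
          return (1 - x[..., 0])**2 + 100 * (x[..., 1] - x[..., 0]**2)**2

      n_part, n_iter = 30, 300
      w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
      x = rng.uniform(-2, 2, (n_part, 2))
      v = np.zeros_like(x)
      pbest = x.copy(); pbest_f = rosenbrock(x)
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_part, 1))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # PSO update rule
          x = x + v
          f = rosenbrock(x)
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[pbest_f.argmin()].copy()

      print("best point:", gbest, "f = %.2e" % pbest_f.min())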

  10. Bradycardia in a patient taking black cohosh.

    PubMed

    McKenzie, Scott C; Rahman, Atifur

    2010-10-18

    Cimicifuga racemosa, better known as black cohosh, has been widely used in Western cultures as a herbal treatment for relieving symptoms of menopause. It has previously been linked to cases of liver toxicity. We report a case of reversible complete heart block in a woman who had recently begun taking a herbal supplement containing black cohosh. We review the known side effect profile of black cohosh and its relationship to our case. PMID:20955128

  11. Taking a Pulse on Your Practice.

    PubMed

    Hoagland-Smith, Leanne

    2015-01-01

    Each medical practice, like a living organism, occasionally requires reading of its vital signs. As with human beings, one of those vital signs is the pulse. For your medical practice, just like your patients, there are numerous places from which to take that reading. This article reviews seven key pulses that provide insight into what is happening within the workplace culture of your practice. PMID:26856032

  12. Astronaut Jack Lousma taking hot bath

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A closeup view of Astronaut Jack R. Lousma, Skylab 3 pilot, taking a hot bath in the crew quarters of the Orbital Workshop (OWS) of the Skylab space station cluster in Earth orbit. In deploying the shower facility, the shower curtain is pulled up from the floor and attached to the ceiling. The water comes through a push-button shower head attached to a flexible hose. Water is drawn off by a vacuum system.

  13. Equilibrium points in the full three-body problem

    NASA Astrophysics Data System (ADS)

    Woo, Pamela; Misra, Arun K.

    2014-06-01

    The orbital motion of a spacecraft in the vicinity of a binary asteroid system can be modelled as the full three-body problem. The circular restricted case is considered here. Taking into account the shape, size, and mass distribution of arbitrarily shaped primary bodies, the locations of the equilibrium points are computed and are found to be offset from those of the classical CR3BP with point-masses. Through numerical computations, it was found that in cases with highly aspherical primaries, additional collinear and noncollinear equilibrium points exist. Examples include systems with pear-shaped and peanut-shaped bodies.

  14. Medicine taking in Southampton: a second look.

    PubMed

    Sullivan, M J; George, C F

    1996-11-01

    1. A 1 in 200 sample of the Southampton electorate were sent a postal questionnaire in January 1993. Of the 756 adults surveyed, 400 (52.9%) returned completed questionnaires. One hundred and eighty-eight (47.0%) of the respondents had been prescribed a medicine within the previous month. 2. Compared with a survey 9 years earlier, medicine taking had increased amongst men (44.1% vs 33.7% NS) and drugs acting on the respiratory system were in more widespread use (19 vs 7 patients P < 0.05). 3. Patterns of storage of medicines were almost identical to those found in 1984. However, methods of disposal were significantly different, with 34% of the respondents stating that they would return left-over medicines to the Doctor or Pharmacist compared with 17% in the previous study (P < 0.01). 4. Of those taking medicines, 120 (63.8%) had received a manufacturer's information leaflet. Medicines used to treat disorders of the respiratory and cardiovascular systems were most likely to be accompanied by such a leaflet (74% and 70% respectively). 5. Those who received a leaflet were almost all satisfied by it. However, patient awareness of potential side effects remained poor, with only 30% being aware of any that their medicine might cause. 6. Despite improvements in attitudes towards medicine taking over time, patients' awareness of potential adverse effects remains limited. Further research is necessary in order to determine how best to educate patients on this topic. PMID:8951187

  15. Bioarchaeological evidence for trophy-taking in prehistoric central California.

    PubMed

    Andrushko, Valerie A; Latham, Kate A S; Grady, Diane L; Pastron, Allen G; Walker, Phillip L

    2005-08-01

    Fourteen adult burials in a large (N = 224) prehistoric central California cemetery (CA-SCL-674) lack forearm bones. Twelve of these otherwise well-articulated primary interments have distal humeri bearing cutmarks with a distribution like that seen in fur seals butchered by Native Californians. Most of the burials with missing forearms are young adult males, a demographic profile that differs significantly from the full sample. Three of these males show evidence of perimortem trauma in addition to forearm amputation. Drilled and polished human radii and ulnae were recovered from the CA-SCL-674 cemetery in archaeological contexts separate from burials with missing forearms. A warfare-related trophy-taking practice is strongly suggested by these bioarchaeological data. Based on these data, it seems likely that 20% (N = 10) or more of the adult males (N = 59) in this population were victims of violence. Evidence of perimortem violence was much less common among women, with only about 2% (N = 2) of adult females (N = 86) subjected to trophy-taking. Examination of museum collections produced further evidence for perimortem forearm amputation among the Native American inhabitants of this area during the transition between the Early and Middle periods. The emergence of more hierarchical social systems during this period may have fostered warfare-related trophy-taking as a symbolic tool for enhancing the power and prestige of individuals within competing social groups. PMID:15693027

  16. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Mahmoud, Imbaby I.; El Tokhy, Mohamed S.

    2015-02-01

    X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector, cooled with nitrogen, is used for signal extraction. The resolution of the ADC is 12 bits, and its sampling rate is 5 MHz. Hence, different algorithms are implemented. These algorithms were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. These algorithms comprise signal preprocessing, signal separation and recovery algorithms, and a spectrum drawing algorithm. Moreover, statistical measurements are used for the evaluation of these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT), and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas CWT takes much longer to execute. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented. These algorithms are a 1D non-derivative peak search algorithm, a second derivative peak search algorithm, and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second derivative peak search algorithm as well as the extrema algorithm have very small error in comparison with the 1D non-derivative peak search algorithm. However, the second derivative peak search algorithm takes much longer to execute. Therefore, the extrema algorithm gives better results than the other algorithms. It has the advantage of recovering and
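
    The second derivative peak search can be illustrated on an invented spectrum: after smoothing, overlapped peaks appear as distinct strong minima of the discrete second derivative. The smoothing widths and threshold are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.arange(1024)
      # Two overlapping Gaussian peaks on a flat baseline, plus noise (invented)
      spec = (500 * np.exp(-0.5 * ((x - 400) / 12.0)**2) +
              300 * np.exp(-0.5 * ((x - 440) / 12.0)**2) +
              20 + rng.normal(0, 1, x.size))

      smooth = np.convolve(spec, np.ones(15) / 15.0, mode="same")
      d2 = np.convolve(np.diff(smooth, 2), np.ones(9) / 9.0, mode="same")

      # Overlapped peaks show up as distinct strong minima of the 2nd derivative
      thresh = 0.4 * d2.min()
      peaks = [i + 1 for i in range(1, len(d2) - 1)
               if d2[i] < thresh and d2[i] <= d2[i - 1] and d2[i] <= d2[i + 1]]
      print("peak channels:", peaks)   # expect channels near 400 and 440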

  17. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  18. 2D Multicomponent Time Domain Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Silva, R. U.; De Basabe, J. D.; Gallardo, L. A.

    2015-12-01

    The search for hydrocarbon reservoirs among the finest stratigraphic and structural traps relies on the detailed surveying and interpretation of multicomponent seismic waves. This need makes Full Waveform Inversion (FWI) one of the most active topics in seismic exploration research, and there are a limited number of FWI algorithms that undertake the elastic approach required to model these multicomponent data. We developed an iterative Gauss-Newton 2D time-domain elastic FWI scheme that reproduces the vertical and horizontal particle velocities as measured by common seismic surveys and simultaneously obtains the distribution of three elastic parameters of our subsurface model (density ρ and the Lame parameters λ and μ). The elastic wave is propagated in a heterogeneous elastic medium using a time domain 2D velocity-stress staggered grid finite difference method. Our code observes the necessary stability conditions and includes absorbing boundary conditions and basic multi-thread parallelization. The same forward modeling code is also used to calculate the Frechet derivatives with respect to the three parameters of our model, following the sensitivity equation approach and perturbation theory. We regularized our FWI algorithm applying two different criteria: (1) first-order Tikhonov regularization (maximum smoothness) and (2) Minimum Gradient Support (MGS), which adopts an approximate zero-norm of the several property gradients. We applied our algorithm to various test models and demonstrated that the structural information of the recovered models closely resembles that of the original three synthetic model parameters (λ, µ and ρ). Finally, we compared the role of both regularization criteria in terms of data fit, model stability and structural resemblance.
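
    The velocity-stress staggered-grid idea reduces, in 1D, to velocity and stress living on interleaved grids and being updated in alternation; a sketch with invented parameters (absorbing boundaries and the 2D shear terms omitted):

      import numpy as np

      # 1D velocity-stress staggered-grid scheme (sketch; 2D adds shear terms)
      nx, nt = 400, 800
      dx, dt = 5.0, 5e-4                    # grid step [m], time step [s]
      rho = np.full(nx, 2000.0)             # density [kg/m^3]
      mu = np.full(nx, 9e9)                 # modulus [Pa] -> c ~ 2121 m/s
      mu[nx // 2:] = 16e9                   # an interface (c jumps to ~2828 m/s)

      v = np.zeros(nx)                      # particle velocity at integer nodes
      s = np.zeros(nx - 1)                  # stress at half-integer nodes

      # CFL check: c*dt/dx must stay below 1 for this scheme
      assert np.sqrt(mu.max() / rho.min()) * dt / dx < 1.0

      for it in range(nt):
          # leapfrog updates: velocity from the stress gradient, then stress
          # from the velocity gradient, each evaluated on the staggered grid
          v[1:-1] += dt / rho[1:-1] * (s[1:] - s[:-1]) / dx
          v[nx // 4] += np.exp(-((it * dt - 0.05) / 0.01)**2)   # source wavelet
          s += dt * mu[:-1] * (v[1:] - v[:-1]) / dx

      print("max |v| after %d steps: %.3e" % (nt, np.abs(v).max()))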

  19. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm in which the bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.
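
    A minimal HS loop in which the bandwidth decays exponentially is sketched below as a stand-in for the paper's low-pass-filter-inspired schedule (the exact SIHS rule is not reproduced); the benchmark function and rates are assumed:

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):
          return np.sum(x**2)

      dim, hms, iters = 5, 20, 5000
      hmcr, par = 0.9, 0.3                      # memory-consideration / pitch rates
      lo, hi = -10.0, 10.0
      memory = rng.uniform(lo, hi, (hms, dim))
      fitness = np.array([sphere(x) for x in memory])

      for t in range(iters):
          bw = (hi - lo) * np.exp(-5.0 * t / iters)   # dynamically shrinking bandwidth
          new = np.empty(dim)
          for d in range(dim):
              if rng.random() < hmcr:                 # draw from harmony memory
                  new[d] = memory[rng.integers(hms), d]
                  if rng.random() < par:              # pitch-adjust within bw
                      new[d] += bw * (2 * rng.random() - 1)
              else:                                   # random exploration
                  new[d] = rng.uniform(lo, hi)
          new = np.clip(new, lo, hi)
          f = sphere(new)
          worst = fitness.argmax()
          if f < fitness[worst]:                      # replace the worst harmony
              memory[worst], fitness[worst] = new, f

      print("best value: %.3e" % fitness.min())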

  20. Effect of qubit losses on Grover's quantum search algorithm

    NASA Astrophysics Data System (ADS)

    Rao, D. D. Bhaktavatsala; Mølmer, Klaus

    2012-10-01

    We investigate the performance of Grover's quantum search algorithm on a register that is subject to a loss of particles that carry qubit information. Under the assumption that the basic steps of the algorithm are applied correctly on the correspondingly shrinking register, we show that the algorithm converges to mixed states with 50% overlap with the target state in the bit positions still present. As an alternative to error correction, we present a procedure that combines the outcome of different trials of the algorithm to determine the solution to the full search problem. The procedure may be relevant for experiments where the algorithm is adapted as the loss of particles is registered and for experiments with Rydberg blockade interactions among neutral atoms, where monitoring of atom losses is not even necessary.
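
    For reference, an ideal (lossless) state-vector Grover simulation showing the oracle and diffusion steps that the paper's loss model perturbs; the register size and target are arbitrary:

      import numpy as np

      n_qubits = 6
      N = 2**n_qubits
      target = 42

      state = np.full(N, 1 / np.sqrt(N))          # uniform superposition |s>
      n_iter = int(np.round(np.pi / 4 * np.sqrt(N)))

      for _ in range(n_iter):
          state[target] *= -1                      # oracle: phase flip on target
          state = 2 * state.mean() - state         # diffusion: 2|s><s| - I

      print("success probability: %.3f after %d iterations"
            % (state[target]**2, n_iter))          # close to 1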