Full design of fuzzy controllers using genetic algorithms
NASA Technical Reports Server (NTRS)
Homaifar, Abdollah; Mccormick, ED
1992-01-01
This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high-performance membership functions, the interdependence between these two components dictates that they should be designed simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.
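The abstract's central claim is that the rule base and the membership functions must be evolved jointly. A minimal sketch of such a joint encoding follows; the chromosome layout, sizes, and mutation scheme are illustrative assumptions, and the fitness evaluation against the cart's equations of motion is omitted.

```python
import random

# One chromosome carries both membership-function centers (floats) and
# rule-consequent indices (ints), so crossover/mutation act on the whole
# controller at once. Sizes and ranges are assumptions for illustration.
N_MF = 3       # membership functions per input (two inputs assumed)
N_RULES = 9    # one consequent per rule in a 3x3 rule table

def random_chromosome():
    """Leading floats: sorted membership-function centers for both inputs;
    trailing ints: the output fuzzy set chosen by each rule."""
    centers = sorted(random.uniform(-1.0, 1.0) for _ in range(2 * N_MF))
    consequents = [random.randrange(N_MF) for _ in range(N_RULES)]
    return centers + consequents

def mutate(chrom, rate=0.1):
    out = list(chrom)
    for i in range(len(out)):
        if random.random() < rate:
            if i < 2 * N_MF:
                out[i] += random.gauss(0.0, 0.1)   # perturb a center
            else:
                out[i] = random.randrange(N_MF)    # reassign a consequent
    return out
```

Because both components live on one chromosome, a single genetic operator can trade off membership-function shape against rule choice, which is exactly the interdependence the paper exploits.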
Calibration and imaging algorithms for full-Stokes optical interferometry
NASA Astrophysics Data System (ADS)
Elias, Nicholas M.; Mozurkewich, David; Schmidt, Luke M.; Jurgenson, Colby A.; Edel, Stanislav S.; Jones, Carol E.; Halonen, Robert J.; Schmitt, Henrique R.; Jorgensen, Anders M.; Hutter, Donald J.
2012-07-01
Optical interferometry and polarimetry have separately provided new insights into stellar astronomy, especially in the fields of fundamental parameters and atmospheric models. Optical interferometers will eventually add full-Stokes polarization measuring capabilities, thus combining both techniques. In this paper, we: 1) list the observables, calibration quantities, and data acquisition strategies for both limited and full optical interferometric polarimetry (OIP); 2) describe the masking interferometer AMASING and its polarization measuring enhancement called AMASING-POL; 3) show how a radio interferometry imaging package, CASA, can be used for optical interferometry data reduction; and 4) present imaging simulations for Be stars.
The wavenumber algorithm for full-matrix imaging using an ultrasonic array.
Hunter, Alan J; Drinkwater, Bruce W; Wilcox, Paul D
2008-11-01
Ultrasonic imaging using full-matrix capture, e.g., via the total focusing method (TFM), has been shown to increase angular inspection coverage and improve sensitivity to small defects in nondestructive evaluation. In this paper, we develop a Fourier-domain approach to full-matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension of the wavenumber algorithm to full-matrix data is described, and the performance of the new algorithm is compared with that of the TFM, which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data. PMID:19049924
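The TFM benchmark described above is plain delay-and-sum over every transmitter/receiver pair of the full matrix. A toy sketch on synthetic data: one point scatterer, idealized one-sample pulses, and an assumed array geometry, sampling rate, and wave speed.

```python
import numpy as np

# Delay-and-sum TFM on a synthetic full-matrix capture (FMC).
c = 1500.0                               # wave speed, m/s (assumed)
fs = 5e6                                 # sampling rate, Hz (assumed)
elems = np.linspace(-0.01, 0.01, 8)      # 8-element linear array, m
target = (0.0, 0.02)                     # scatterer at x = 0, z = 20 mm
nt = 600

# FMC s[tx, rx, t]: a unit spike at each pair's two-way travel time.
fmc = np.zeros((len(elems), len(elems), nt))
for i, xt in enumerate(elems):
    for j, xr in enumerate(elems):
        d = np.hypot(xt - target[0], target[1]) + np.hypot(xr - target[0], target[1])
        fmc[i, j, int(round(d / c * fs))] = 1.0

def tfm(fmc, xs, zs):
    """Sum every tx/rx trace at the two-way travel time to each pixel."""
    img = np.zeros((len(zs), len(xs)))
    for i, xt in enumerate(elems):
        for j, xr in enumerate(elems):
            for iz, z in enumerate(zs):
                d = np.hypot(xt - xs, z) + np.hypot(xr - xs, z)
                idx = np.round(d / c * fs).astype(int)
                img[iz] += fmc[i, j, np.clip(idx, 0, nt - 1)]
    return img

xs = np.linspace(-0.005, 0.005, 11)
zs = np.linspace(0.015, 0.025, 11)
img = tfm(fmc, xs, zs)
peak = np.unravel_index(img.argmax(), img.shape)   # focuses at the scatterer
```

The triple loop over pairs and pixels is what makes time-domain TFM expensive for large arrays, and is the cost the Fourier-domain wavenumber algorithm avoids.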
NASA Technical Reports Server (NTRS)
Steger, J. L.; Caradonna, F. X.
1980-01-01
An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.
ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking
NASA Astrophysics Data System (ADS)
Schovancová, Jaroslava
2012-12-01
This paper details a variety of Monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the CERN Analysis Facility after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centres distributed worldwide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centres, to the reliability and usability of the ATLAS computing centres. The described tools provide monitoring for issues of varying levels of criticality: from identifying issues with the instant online monitoring to long-term accounting information.
Application of a Chimera Full Potential Algorithm for Solving Aerodynamic Problems
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1997-01-01
A numerical scheme utilizing a chimera zonal grid approach for solving the three dimensional full potential equation is described. Special emphasis is placed on describing the spatial differencing algorithm around the chimera interface. Results from two spatial discretization variations are presented; one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The presentation is highlighted with a number of transonic wing flow field computations.
Bramble, J.H.; Pasciak, J.E.
1992-03-01
In this paper, we provide uniform estimates for V-cycle algorithms with one smoothing on each level. This theory is based on some elliptic regularity but does not require a smoother interaction hypothesis (sometimes referred to as a strengthened Cauchy-Schwarz inequality) assumed in other theories. Thus, it is a natural extension of the full regularity V-cycle estimates provided by Braess and Hackbusch.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMi SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index. PMID:27093720
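The two patch statistics named in the abstract, the CV of flow magnitudes and the minimum eigenvalue of the flow covariance, are easy to sketch. The patch contents and noise level below are illustrative; a real pipeline would compute these over dense optical flow fields from consecutive frames.

```python
import numpy as np

# Local flow statistics: coefficient of variation (CV) of flow magnitudes
# and the minimum eigenvalue of the (u, v) covariance within a patch.
def patch_stats(flow_patch):
    """flow_patch: (N, 2) array of (u, v) flow vectors in one patch."""
    mags = np.linalg.norm(flow_patch, axis=1)
    cv = mags.std() / (mags.mean() + 1e-12)         # coefficient of variation
    lam_min = np.linalg.eigvalsh(np.cov(flow_patch.T))[0]
    return cv, lam_min

rng = np.random.default_rng(0)
ref = np.tile([1.0, 0.0], (64, 1))                  # pristine, smooth flow
dist = ref + rng.normal(0.0, 0.3, ref.shape)        # distortion perturbs flow
cv_ref, _ = patch_stats(ref)
cv_dist, _ = patch_stats(dist)
temporal_distortion = abs(cv_dist - cv_ref)         # grows with distortion
```

For the undistorted patch the CV is exactly zero, so the CV deviation behaves as the paper's premise requires: it is zero for pristine flow and grows with the perturbation.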
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations
NASA Astrophysics Data System (ADS)
Jayaram, V.; Crain, K.; Keller, G. R.
2011-12-01
We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core CPU, multi-core CPU, and graphical processing unit (GPU) architectures. Our technique is based on the line-element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. For the large high-resolution data grids in our studies, we employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line-element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computation algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is that the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line-element approach.
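The line-element approximation the abstract relies on has a standard closed form for the vertical component: a vertical line mass seen from a horizontally offset station pulls with g_z = G·λ·(1/r1 − 1/r2), where r1 and r2 are distances to the element's endpoints. The geometry and density below are illustrative; as the abstract notes, the formula is not valid when the station sits directly above the element.

```python
import numpy as np

# Vertical gravity of a vertical line element (a grid cell collapsed onto
# its axis). Station at the surface origin, z positive down.
G = 6.674e-11                          # gravitational constant, SI

def gz_line_element(s, z1, z2, lam):
    """s: horizontal offset (m); z1 < z2: endpoint depths (m);
    lam: linear mass density (kg/m)."""
    r1 = np.hypot(s, z1)
    r2 = np.hypot(s, z2)
    return G * lam * (1.0 / r1 - 1.0 / r2)

# Far-field sanity check: the element should act like a point mass.
s, z1, z2, lam = 1000.0, 10.0, 20.0, 2700.0        # 1 m^2 cell, rho = 2700 kg/m^3
m, zc = lam * (z2 - z1), 0.5 * (z1 + z2)
point = G * m * zc / (s**2 + zc**2) ** 1.5          # point-mass vertical pull
```

Summing this expression over every cell of the density grid gives the vertical gravity at a station; the FTG components have analogous closed forms per element.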
Characterization of the effects of the FineView algorithm for full field digital mammography
NASA Astrophysics Data System (ADS)
Urbanczyk, H.; McDonagh, E.; Marshall, N. W.; Castellano, I.
2012-04-01
The aim of this study was to characterize the effect of an image processing algorithm (FineView) on both quantitative image quality parameters and the threshold contrast detail response of the GE Senographe DS full-field digital mammography system. The system was characterized using signal transfer property, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE) of the system. An algorithmic modulation transfer function (MTFa) was calculated from images acquired at a reduced detector air kerma (DAK) and with the FineView algorithm enabled. Two sets of beam conditions were used: Mo/Mo/28 kV and Rh/Rh/29 kV, both with 2 mm added Al filtration at the x-ray tube. Images were acquired with and without FineView at four DAK levels from 14 to 378 µGy. The threshold contrast detail response was assessed using the CDMAM contrast-detail test object which was imaged under standard clinical conditions with and without FineView at three DAK levels from 24 to 243 µGy. The images were scored by both human observers and by automated scoring software. Results indicated an improvement of up to 125% at 5 mm⁻¹ in MTFa when FineView was activated, particularly at high DAK levels. A corresponding increase of up to 425% at 5 mm⁻¹ was also seen in the NNPS, again with the same DAK dependence. FineView did not influence DQE, an indication that the signal to noise ratio transfer of the system remained unchanged. FineView did not affect the threshold contrast detectability of the system, a result that is consistent with the DQE results.
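The reported numbers hang together: since DQE scales as MTF squared over NNPS (at fixed fluence), the MTF and NNPS gains should roughly cancel. A quick check of that arithmetic:

```python
# A 125% gain in MTFa at 5 mm^-1 squares to ~5.06x, close to the 425%
# (5.25x) rise in NNPS, so DQE ~ MTF^2 / NNPS is left essentially unchanged,
# consistent with the unchanged threshold contrast detectability.
mtf_gain = 1.0 + 1.25                    # +125% -> x2.25 in MTFa
nnps_gain = 1.0 + 4.25                   # +425% -> x5.25 in NNPS
dqe_ratio = mtf_gain ** 2 / nnps_gain    # ~0.96: DQE essentially unchanged
```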
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Beller, Stephen; Nolet, Guust; Operto, Stephane; Brossier, Romain; Métivier, Ludovic; Paul, Anne; Virieux, Jean
2014-05-01
The current development of dense seismic arrays and high-performance computing makes application of full-waveform inversion (FWI) to teleseismic data feasible today for high-resolution lithospheric imaging. In the teleseismic configuration, the source is to first order a plane wave that impinges on the base of the lithospheric target located below the receiver array. In this setting, FWI aims to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and from reflectors before being recorded at the surface. FWI requires full-wave modeling methods such as finite-difference or finite-element methods. In this framework, careful design of FWI algorithms is essential to mitigate as much as possible the computational burden of multi-source full-waveform modeling. In this presentation, we review some key specifications that might be considered for a versatile FWI implementation. An abstraction level between the forward and inverse problems allows different modeling engines to be interfaced with the inversion; this requires the subsurface meshes used to perform seismic modeling and to update the subsurface models during inversion to be fully independent, through back-and-forth projection processes. The subsurface parameterization should be carefully chosen during multi-parameter FWI, as it controls the trade-off between parameters of different natures. A versatile FWI algorithm should be designed such that different subsurface parameterizations for the model update can be easily implemented. The gradient of the misfit function should be computed as easily as possible with the adjoint-state method in a parallel environment. This first requires the gradient to be independent of the discretization method used to perform seismic modeling. Second, the incident and adjoint wavefields should be computed with the same numerical scheme, even if the forward problem
Sahmel, Jennifer; Barlow, Christy A; Gaffney, Shannon; Avens, Heather J; Madl, Amy K; Henshaw, John; Unice, Ken; Galbraith, David; DeRose, Gretchen; Lee, Richard J; Van Orden, Drew; Sanchez, Matthew; Zock, Matthew; Paustenbach, Dennis J
2016-01-01
The potential for para-occupational, domestic, or take-home exposures from asbestos-contaminated work clothing has been acknowledged for decades, but historically has not been quantitatively well characterized. A simulation study was performed to measure airborne chrysotile concentrations associated with laundering of contaminated clothing worn during a full shift work day. Work clothing fitted onto mannequins was exposed for 6.5 h to an airborne concentration of 11.4 f/cc (PCME) of chrysotile asbestos, and was subsequently handled and shaken. Mean 5-min and 15-min concentrations during active clothes handling and shake-out were 3.2 f/cc and 2.9 f/cc, respectively (PCME). Mean airborne PCME concentrations decreased by 55% 15 min after clothes handling ceased, and by 85% after 30 min. PCM concentrations during clothes handling were 11-47% greater than PCME concentrations. Consistent with previously published data, daily mean 8-h TWA airborne concentrations for clothes-handling activity were approximately 1.0% of workplace concentrations. Similarly, weekly 40-h TWAs for clothes handling were approximately 0.20% of workplace concentrations. Estimated take-home cumulative exposure estimates for weekly clothes handling over 25-year working durations were below 1 f/cc-year for handling work clothes contaminated in an occupational environment with full shift airborne chrysotile concentrations of up to 9 f/cc (8-h TWA).
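The abstract's closing estimate can be reproduced directly from its stated ratios: weekly 40-h TWAs for clothes handling were about 0.20% of workplace concentrations, and cumulative exposure is that TWA multiplied by the working duration in years.

```python
# Take-home exposure arithmetic from the reported ratios.
def cumulative_take_home(workplace_fcc, fraction=0.0020, years=25):
    weekly_twa = workplace_fcc * fraction    # f/cc, 40-h TWA equivalent
    return weekly_twa * years                # f/cc-years

# A workplace at 9 f/cc (8-h TWA) over 25 years of weekly laundering:
estimate = cumulative_take_home(9.0)         # ~0.45 f/cc-years, below 1
```

This matches the abstract's conclusion that cumulative take-home exposures stay below 1 f/cc-year for workplace concentrations of up to 9 f/cc.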
NASA Astrophysics Data System (ADS)
Xiang, Shiming; Zhang, Haijiang
2016-11-01
It is known that full-waveform inversion (FWI) is generally ill-conditioned, and various strategies, including pre-conditioning and regularizing the inversion system, have been proposed to obtain a reliable estimate of the velocity model. Here, we propose a new edge-guided strategy for FWI in the frequency domain to efficiently and reliably estimate velocity models with structures of size similar to the seismic wavelength. The edges of the velocity model at the current iteration are first detected by the Canny edge detection algorithm, which is widely used in image processing. The detected edges are then used to guide the calculation of the FWI gradient as well as to enforce edge-preserving total variation (TV) regularization for the next FWI iteration. Bilateral filtering is further applied to remove noise from the FWI gradient while keeping its edges. The proposed edge-guided frequency-domain FWI with edge-guided TV regularization and bilateral filtering is designed to preserve model edges recovered from previous iterations as well as from lower-frequency waveforms as FWI proceeds from lower to higher frequencies. The new FWI method is validated using the complex Marmousi model, which contains several steeply dipping fault zones and hundreds of horizons. Compared to FWI without edge guidance, our proposed edge-guided FWI recovers velocity anomalies and model edges much better. Unlike previous image-guided FWI or edge-guided TV regularization strategies, our method does not require migrating seismic data and is thus more efficient for real applications.
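The core idea, smooth the gradient except across detected model edges, can be shown in one dimension. The velocity profile, the stand-in gradient, and the edge threshold below are all illustrative; the paper itself works in 2-D with Canny detection and bilateral filtering.

```python
import numpy as np

# Edge-guided smoothing of a toy FWI gradient along a 1-D velocity profile.
model = np.concatenate([np.full(20, 1500.0), np.full(20, 2500.0)])  # one interface
grad = np.gradient(np.sin(np.linspace(0.0, 6.0, 40)))               # stand-in gradient

edges = np.abs(np.diff(model)) > 100.0    # True between cells i and i+1

smoothed = grad.copy()
for i in range(1, len(grad) - 1):
    # average neighbors only where the 3-point stencil does not straddle
    # a detected edge, so sharp interfaces survive the smoothing
    if not edges[i - 1] and not edges[i]:
        smoothed[i] = (grad[i - 1] + grad[i] + grad[i + 1]) / 3.0
```

Gradient values adjacent to the detected interface are left untouched while the rest of the gradient is denoised, which is the edge-preserving behavior the paper's TV regularization and bilateral filtering aim for.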
NASA Technical Reports Server (NTRS)
Biermann, David; Hartman, Edwin P
1938-01-01
Tests were made of eight full-scale propellers of different shape at various tip speeds up to about 1,000 feet per second. The range of blade-angle settings investigated was from 10 degrees to 30 degrees at the 0.75 radius. The results indicate that a loss in propulsive efficiency occurred at tip speeds from 0.5 to 0.7 the velocity of sound for the take-off and climbing conditions. As the tip speed increased beyond these critical values, the loss rapidly increased and amounted, in some instances, to more than 20 percent of the thrust power for tip-speed values of 0.8 the speed of sound. In general, as the blade-angle setting was increased, the loss started to occur at lower tip speeds. The maximum loss for a given tip speed occurred at a blade-angle setting of about 20 degrees for the take-off and 25 degrees for the climbing condition. A simplified method for correcting propellers for the effect of compressibility is given in an appendix.
MOD* Lite: An Incremental Path Planning Algorithm Taking Care of Multiple Objectives.
Oral, Tugcem; Polat, Faruk
2016-01-01
The need to determine a path from an initial location to a target one is a crucial task in many applications, such as virtual simulations, robotics, and computer games. Almost all existing algorithms are designed to find optimal or suboptimal solutions considering only a single objective, namely path length. However, in many real-life applications path length is not the sole criterion for optimization; there are multiple criteria to be optimized that cannot be transformed into one another. In this paper, we introduce a novel multiobjective incremental algorithm, multiobjective D* lite (MOD* lite), built upon the well-known path planning algorithm D* lite. A number of experiments are designed to compare the solution quality and execution time requirements of MOD* lite with the multiobjective A* algorithm, an alternative genetic algorithm we developed (multiobjective genetic path planning), and the strength Pareto evolutionary algorithm.
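When objectives cannot be reduced to one another, a multiobjective planner must retain the paths whose cost vectors are Pareto-nondominated. A minimal dominance test and front filter; the cost tuples (say, length and risk) are illustrative.

```python
# Pareto dominance and front filtering for multiobjective path costs.
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    return [c for c in costs if not any(dominates(o, c) for o in costs)]

paths = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 8)]
front = pareto_front(paths)    # (9, 9) and (8, 8) are dominated by (8, 7)
```

Algorithms like MOD* lite maintain such nondominated cost sets per state instead of a single scalar g-value, which is what distinguishes them from single-objective D* lite.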
Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A
2014-01-01
The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms--pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB)--implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as the reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, to which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm, then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement in the light of 3D gamma analysis and DVH comparison against the full MC simulation, especially with large PTVs, but, with smaller PTVs, larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all plans with the threshold criteria 3%/3 mm, but 2%/2 mm
Full-Featured Search Algorithm for Negative Electron-Transfer Dissociation.
Riley, Nicholas M; Bern, Marshall; Westphall, Michael S; Coon, Joshua J
2016-08-01
Negative electron-transfer dissociation (NETD) has emerged as a premier tool for peptide anion analysis, offering access to acidic post-translational modifications and regions of the proteome that are intractable with traditional positive-mode approaches. Whole-proteome scale characterization is now possible with NETD, but proper informatic tools are needed to capitalize on advances in instrumentation. Currently only one database search algorithm (OMSSA) can process NETD data. Here we implement NETD search capabilities into the Byonic platform to improve the sensitivity of negative-mode data analyses, and we benchmark these improvements using 90 min LC-MS/MS analyses of tryptic peptides from human embryonic stem cells. With this new algorithm for searching NETD data, we improved the number of successfully identified spectra by as much as 80% and identified 8665 unique peptides, 24 639 peptide spectral matches, and 1338 proteins in activated-ion NETD analyses, more than doubling identifications from previous negative-mode characterizations of the human proteome. Furthermore, we reanalyzed our recently published large-scale, multienzyme negative-mode yeast proteome data, improving peptide and peptide spectral match identifications and considerably increasing protein sequence coverage. In all, we show that new informatics tools, in combination with recent advances in data acquisition, can significantly improve proteome characterization in negative-mode approaches. PMID:27402189
Analysis of Full Charge Reconstruction Algorithms for X-Ray Pixelated Detectors
Baumbaugh, A.; Carini, G.; Deptuch, G.; Grybos, P.; Hoff, J.; Maj, P.; Siddons, D. P.; Szczygiel, R.; Trimpl, M.; Yarema, R.
2012-05-21
The natural diffusive spread of charge carriers in the course of their drift towards the collecting electrodes of planar, segmented detectors results in a division of the original cloud of carriers between neighboring channels. This paper presents the analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation for the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from the uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.
Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images
NASA Astrophysics Data System (ADS)
Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas
2015-03-01
The diaphragm is a sheet of muscle which separates the thorax from the abdomen, and it acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then connecting the relevant parts of their outlines properly. More specifically, the bottom surfaces of the lungs and heart, the spine borders, and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
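The final step, fitting a smooth surface through the scattered boundary points, can be sketched in one dimension. The paper uses a B-spline filter; as a numpy-only stand-in, this sketch smooths a noisy profile of boundary points with a least-squares polynomial. The dome profile and noise level are illustrative.

```python
import numpy as np

# Smooth-surface fit through noisy "scattered boundary points".
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
true_dome = 0.1 * np.sin(np.pi * x)                    # idealized dome profile
scattered = true_dome + rng.normal(0.0, 0.005, x.size) # noisy boundary points

coeffs = np.polyfit(x, scattered, deg=4)               # smooth global fit
smooth = np.polyval(coeffs, x)

rms_noisy = np.sqrt(np.mean((scattered - true_dome) ** 2))
rms_smooth = np.sqrt(np.mean((smooth - true_dome) ** 2))
```

The low-order fit averages out the point-wise scatter, pulling the recovered profile closer to the underlying surface, which is the role the B-spline filter plays for the 2-D diaphragm surface.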
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique available at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS, enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) is fitted with a weighted sum of the standard spectra of the individual chemical components, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm that we propose. Because 2D spectral fitting can be conducted pixel by pixel, both methods can in principle be parallelized. To demonstrate the feasibility of parallel computing for the chemical mapping problem and to quantify the achievable efficiency improvement, we implemented a parallel version of the second approach for a multi-core computer cluster. Finally, we used a novel visualization of the calculated chemical compositions that lets domain scientists grasp percentage differences easily without inspecting the raw data.
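The per-pixel fit can be sketched as a non-negative least-squares problem. The Gaussian "standard spectra" below are hypothetical stand-ins; the paper's actual constraint formulation may differ.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical standard spectra of two chemical components (columns of A)
energies = np.linspace(0.0, 1.0, 50)
s1 = np.exp(-((energies - 0.3) ** 2) / 0.01)
s2 = np.exp(-((energies - 0.7) ** 2) / 0.01)
A = np.column_stack([s1, s2])

# Simulated pixel spectrum: 70% / 30% mixture plus measurement noise
true_w = np.array([0.7, 0.3])
b = A @ true_w + np.random.default_rng(1).normal(0.0, 0.001, energies.size)

w, _ = nnls(A, b)      # least squares with non-negativity constraint
w = w / w.sum()        # normalise weights to fractional composition
```

The non-negativity constraint is what makes the recovered weights interpretable as percentages of physical components.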
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by knowledge of the material properties. In the case of piezoelectric ceramics, full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful route to obtaining piezoceramic properties is to compare the experimentally measured impedance curve with the results of a numerical model based on the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full set of piezoelectric complex parameters in the FEM model. Once implemented, the method requires only the experimental data (impedance modulus and phase data acquired with an impedometer), the material density, the geometry, and initial values for the properties. The method combines an FEM routine, implemented with an 8-noded axisymmetric element, with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is to minimize the quadratic difference between the experimental and numerical electrical conductance and resistance curves (so as to account for both resonance and antiresonance frequencies). To ensure convergence, the optimization loop is restarted whenever the procedure ends in an undesired or unfeasible solution. Two experimental examples, using PZ27 and APC850 samples, are presented to test the precision of the method and to check its dependence on the frequency range used, respectively.
Hayer, Katharina E.; Pizarro, Angel; Lahens, Nicholas F.; Hogenesch, John B.; Grant, Gregory R.
2015-01-01
Motivation: Because of its advantages over microarrays, RNA sequencing (RNA-Seq) is gaining widespread popularity for highly parallel gene expression analysis. For example, RNA-Seq is expected to provide accurate identification and quantification of full-length splice forms. A number of informatics packages have been developed for this purpose, but short reads make the problem difficult in principle, and sequencing errors and polymorphisms add further complications. It has become necessary to determine which algorithms perform best and which, if any, perform adequately; however, there is a dearth of independent and unbiased benchmarking studies. Here we use both simulated and experimental benchmark data to evaluate the accuracy of these methods. Results: We conclude that most methods are inaccurate even on idealized data, and that no method is highly accurate once multiple splice forms, polymorphisms, intron signal, sequencing errors, alignment errors, annotation errors and other complicating factors are present. These results point to a pressing need for further algorithm development. Availability and implementation: Simulated datasets and other supporting information can be found at http://bioinf.itmat.upenn.edu/BEERS/bp2 Supplementary information: Supplementary data are available at Bioinformatics online. Contact: hayer@upenn.edu PMID:26338770
Optimized MPPT algorithm for boost converters taking into account the environmental variables
NASA Astrophysics Data System (ADS)
Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel
2016-07-01
This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion from PV panels connected to an HVDC (High Voltage Direct Current) bus. It follows earlier work showing that the converter's MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical interdependence of the input voltage, the output voltage, and the duty cycle of the MPPT's PWM switching control. As a direct consequence, converters connected to the same load perturb one another through the output voltage variations induced by fluctuations on the HVDC bus, which are caused essentially by a non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable that compensates for these local and external variations, thereby taking the environmental variables into account.
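The MPPT loop under discussion is typically a perturb-and-observe tracker. A minimal sketch of that baseline on a hypothetical PV power curve (this is the generic algorithm being perturbed, not the paper's compensated variant):

```python
def pv_power(v):
    # Toy PV model: crude current roll-off with open-circuit voltage 21 V
    i = 5.0 * (1.0 - (v / 21.0) ** 8)
    return v * max(i, 0.0)

def mppt_perturb_observe(v=12.0, step=0.1, iters=500):
    # Perturb the operating voltage; reverse direction whenever power drops
    direction = 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v
```

For this toy curve the true maximum power point sits near 16 V; the tracker settles into a small limit cycle around it. Bus-voltage fluctuations corrupt exactly the `p < p_prev` comparison, which is the effect the paper's internal compensation variable targets.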
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh; Teixeira, Fernando L.
2016-05-01
We propose a full-wave, pseudo-analytical numerical electromagnetic (EM) algorithm to model subsurface induction sensors, used for example in the exploration of hydrocarbon reserves, as they traverse planar-layered geological formations of arbitrary EM material anisotropy and loss. Unlike past pseudo-analytical planar-layered modeling algorithms, which impose parallelism between the formation's bed junctions, our method judiciously employs Transformation Optics techniques to model relative slope (i.e., tilting) between said junctions, including arbitrary azimuth orientation of each junction. The algorithm achieves this flexibility, with respect both to loss and anisotropy in the formation layers and to junction tilting, by employing special planar slabs that coat each "flattened" (i.e., originally tilted) planar interface; these slabs locally redirect the incident wave so that wave fronts interact with the flattened interfaces as if the interfaces were still tilted with a specific, user-defined orientation. Moreover, since the coating layers are homogeneous rather than exhibiting continuous material variation, only a minimal number of layers must be inserted, which limits the added simulation time and computational expense. Because the coating layers are not reflectionless, however, they induce artificial field scattering that corrupts the legitimate field signatures due to the (effective) interface tilting. Numerical results for two half-spaces separated by a tilted interface quantify error trends versus effective interface tilting, material properties, transmitter/receiver spacing, sensor position, coating slab thickness, and transmitter and receiver orientation, helping to establish the range of (effective) tilting this algorithm can reliably model in the presence of the spurious scattering. Under the effective tilting constraints suggested by the results of this error study, we finally exhibit responses of sensors
NASA Astrophysics Data System (ADS)
van Houtte, Paul; Gawad, Jerzy; Eyckens, Philip; van Bael, Bert; Samaey, Giovanni; Roose, Dirk
2011-11-01
During metal forming, the mechanical properties at all locations of the part evolve, usually in a heterogeneous way. In principle, this should be taken into account when performing finite element (FE) simulations of the forming process, by modeling the evolution of the mechanical properties in every integration point of the FE mesh and coupling the result back to the FE model. This is the meaning of the term `full-field modeling.' The issue is developed further with a focus on the evolution of texture and plastic anisotropy. It is explained that, in principle, such full-field modeling would require a gigantic computational effort which (at least at present) is out of reach of most research organizations. A methodology is then presented to overcome this difficulty by using efficient models for texture updating and for texture-based plastic anisotropy, and by optimizing the overall calculation scheme without sacrificing the accuracy of the texture prediction. Some first results, obtained for cup drawing of an anisotropic deep-drawing steel, are shown, including a comparison with experimental results. Possible future applications of the method are proposed.
ERIC Educational Resources Information Center
Philadelphia Youth Network, 2006
2006-01-01
The title of this year's annual report has particular meaning for all of the staff at the Philadelphia Youth Network. The phrase derives from Philadelphia Youth Network's (PYN's) new vision statement, developed as part of its recent strategic planning process, which reads: All of our city's young people take their rightful places as full and…
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series describing a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Designing efficient algorithms that take advantage of parallel computing facilities is therefore critical for appraising these approaches on representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. The efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. This domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
ERIC Educational Resources Information Center
Ro, Jung Soon
1988-01-01
A comparison of the effectiveness of information retrieval based on full-text documents with retrieval based on paragraphs, abstracts, or controlled vocabularies was accomplished using a subset of journal articles with nine search questions. It was found that full-text retrieval achieved significantly higher recall and lower precision than did the…
Klymenko, M. V.; Remacle, F.
2014-10-28
A methodology is proposed for designing a low-energy-consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single-electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The search space is determined by the elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error of the actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device, a single dual-gate quantum dot with a charge sensor, whose physical parameters are optimized to compute either the sum or the carry-out output and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied as voltage levels on the two gate electrodes attached to the QD. Complex ternary logic operations are thus implemented directly on an extremely simple device, characterized by small size and low energy consumption compared with devices based on switching single-electron transistors. The design methodology is general and provides a rational approach to realizing non-switching logic operations on QD devices.
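The ideal base-3 truth tables that the genetic algorithm scores the device outputs against can be written down directly:

```python
def ternary_full_adder(a, b, c_in):
    # Each input is a trit (0, 1 or 2); inputs are assumed already validated.
    # The ideal targets for the optimization are the sum trit and the
    # carry-out trit of the base-3 addition.
    total = a + b + c_in
    return total % 3, total // 3   # (sum trit, carry-out trit)
```

For example, adding trits 2, 2 and carry-in 1 gives total 5, i.e. sum 2 with carry-out 1. The optimizer's objective is the maximal absolute deviation of the measured device outputs from these tables over all 27 input combinations.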
Hardware-Assisted Algorithm for Full-Text Large-Dictionary String Matching Using N-Gram Hashing.
ERIC Educational Resources Information Center
Cohen, Jonathan D.
1998-01-01
Describes a method of full-text scanning for matches in a large dictionary. The method is suitable for selective dissemination of information systems, accommodating large dictionaries and typical digital data rates. It can be implemented on a single commercially-available board hosted by a personal computer or entirely in software. (Author/AEF)
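A simplified software analogue of the n-gram idea: hash every n-gram of the dictionary into a set, then slide over the text flagging candidate positions. The hardware pipeline and hash function of the paper are not reproduced here, and a real system would verify candidates against the full dictionary terms.

```python
def ngram_candidates(text, dictionary, n=3):
    # Build the set of all n-grams occurring in any dictionary term
    # (Python's set is itself hash-based, standing in for the hash table)
    grams = {term[i:i + n] for term in dictionary
             for i in range(len(term) - n + 1)}
    # Scan the text: flag every position whose n-gram appears in the set
    return [i for i in range(len(text) - n + 1) if text[i:i + n] in grams]
```

The scan is a single linear pass, which is what makes the approach suitable for the high, sustained data rates of selective dissemination systems.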
Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo
2016-06-01
The new version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system includes a model for the 4 MeV electron beam and some general improvements to dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of the clinical treatment plans were also compared with those of the older eMC version 11.0.31. In the water phantom, the dose differences against full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans, the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface at extended SSD, and also in regions of large dose gradients; still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23. PMID:27189311
Spencer, W.A.; Goode, S.R.
1997-10-01
ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix-matched standards. The information needed to track and correct matrix errors is contained in the emission spectrum, but most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and nebulization rate are reflected in the hydrogen line widths, the oxygen emission, and neutral/ion line ratios. Argon and off-line emissions provide a measure with which to correct for the power level and for the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses, and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP with an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improvement over the pixel-averaging approach commonly used to measure peak intensity in commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses was evaluated to correct for matrix effects.
NASA Astrophysics Data System (ADS)
Jeung, Jaemin; Jeong, Seungmyeong; Lim, Jaesung
We propose an outband-sensing-based IEEE 802.11h protocol that operates without a full dynamic frequency selection (DFS) test. The scheme has two features. First, every station performs cooperative outband sensing instead of inband sensing during a quiet period. Second, as soon as the current channel becomes bad, every station immediately hops to a good channel using the result of the outband sensing. Simulation shows that the proposed scheme increases network throughput compared with legacy IEEE 802.11h.
NASA Astrophysics Data System (ADS)
Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.
2014-12-01
A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written so that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated, taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, providing maximum flexibility and allowing the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies, and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.
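One element of such a budget can be sketched as combining independent 1-sigma uncertainty components in quadrature at each vertical level. This is a simplified illustration: the NDACC/GRUAN algorithm additionally tracks vertical and inter-channel correlations, which are not shown here.

```python
import numpy as np

def combine_independent(components):
    # components: (n_sources, n_levels) array of 1-sigma uncertainties,
    # assumed mutually independent at each level
    c = np.asarray(components, dtype=float)
    return np.sqrt((c ** 2).sum(axis=0))
```

For correlated sources the quadrature sum is replaced by full covariance propagation, which is why the centralized treatment of all 13 sources is non-trivial.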
NASA Astrophysics Data System (ADS)
Pande, Paritosh; Shelton, Ryan L.; Monroy, Guillermo L.; Nolan, Ryan M.; Boppart, Stephen A.
2016-02-01
Tympanic membrane (TM) thickness can provide crucial information for diagnosing several middle ear pathologies. An imaging system integrating low-coherence interferometry (LCI) with the standard video otoscope has been shown to be a promising tool for quantitative assessment of in-vivo TM thickness. The small field of view (FOV) of the TM surface images acquired by the combined LCI-otoscope system, however, makes it difficult to spatially register the LCI imaging sites to their locations on the TM. It is therefore desirable to have a tool that can map the imaged points onto an anatomically accurate full-field surface image of the TM. To this end, we propose a novel automated mosaicking algorithm for generating a full-field surface image of the TM, with co-registered LCI imaging sites, from a sequence of multiple small-FOV images and the corresponding LCI data. Traditional image mosaicking techniques reported in the biomedical literature, mostly for retinal imaging, are not directly applicable to TM image mosaicking because, unlike retinal images, which have several distinctive features, TM images contain large homogeneous areas lacking sharp features. The proposed algorithm overcomes these challenges with a two-step approach. In the first step, a coarse registration based on the correlation of gross image features is performed. In the second step, the coarsely registered images are used to perform a finer intensity-based co-registration. The proposed algorithm is used to generate, for the first time, full-field thickness distribution maps of in-vivo human TMs.
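The coarse, correlation-based step can be illustrated with phase correlation, a standard way to estimate the gross translation between two overlapping images. This is a generic sketch; the paper's exact coarse features and the fine intensity-based refinement are not reproduced.

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Cross-power spectrum of the two images, normalised to keep phase only
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12
    # Its inverse FFT peaks at the relative (row, col) shift
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image size wrap around to negative offsets
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

Because the estimate comes from global spectral phase rather than local features, it tolerates the large homogeneous regions that defeat feature-based mosaicking on TM images.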
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Sekar, Anusha; Hoversten, G. Michael; Albertin, Uwe
2016-05-01
We present an algorithm to recover the Bayesian posterior probability density function of subsurface elastic parameters, as constrained by the full pressure field recorded at an ocean-bottom cable due to an impulsive seismic source. Both the data noise and the source wavelet are estimated by our algorithm, resulting in robust estimates of subsurface velocity and density. In contrast to purely gradient-based approaches, our method avoids model regularization entirely and produces an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface and the level of resolution of the model parameters. Our algorithm is trans-dimensional and performs model selection, sampling over a wide range of model parametrizations. We follow a frequency-domain approach and derive the corresponding likelihood in the frequency domain. We first present a synthetic example of a reservoir at 2 km depth with minimal acoustic impedance contrast, which is difficult to study with conventional seismic amplitude-versus-offset changes. Finally, we apply our methodology to survey data collected over the Alba field in the North Sea, an area which is known to show very little lateral heterogeneity but nevertheless presents challenges for conventional post-migration seismic amplitude-versus-offset analysis.
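The "ensemble of models" idea can be illustrated with a minimal fixed-dimension Metropolis sampler. The paper's sampler is trans-dimensional (it also samples the parametrization itself); that machinery is not reproduced in this sketch.

```python
import numpy as np

def metropolis(log_post, x0, n_samples, step, seed=0):
    # Random-walk Metropolis: propose a Gaussian step, accept with
    # probability min(1, posterior ratio), and record the chain
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        xp = x + rng.normal(0.0, step)
        lpp = log_post(xp)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain[i] = x
    return chain
```

Histograms of such a chain are the posterior marginals that, in the full algorithm, let one query parameter resolution without any explicit regularization.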
Kress, R.L.; Jansen, J.F.; Noakes, M.W.
1994-05-01
When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc-motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac-induction-motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware for use in damped-oscillation-controlled systems. Flux-vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purposes of this paper are to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.
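A standard damped-oscillation technique for reference is the Zero-Vibration (ZV) input shaper, which splits each motion command into two impulses that cancel the residual pendulum swing. This is a generic textbook method given for context, not necessarily the controller used in the paper.

```python
import math

def zv_shaper(omega_n, zeta):
    # Pendulum mode: natural frequency omega_n (rad/s), damping ratio zeta
    k = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    omega_d = omega_n * math.sqrt(1.0 - zeta ** 2)   # damped frequency
    t2 = math.pi / omega_d                           # half the damped period
    # Two (time, amplitude) impulses whose oscillatory responses cancel
    return [(0.0, 1.0 / (1.0 + k)), (t2, k / (1.0 + k))]
```

For a crane payload, omega_n is approximately sqrt(g / L), so the shaper must be re-tuned (or made robust) when the cable length L changes, which is exactly the robustness question the experimental evaluation addresses.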
Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A
2010-01-01
The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p.
Cashin, Cheryl E.; Brown, Timothy T.
2010-01-01
The need to move mental health systems toward more recovery-oriented treatment modes is well established. Progress has been made to define needed changes but evidence is lacking about the resources required to implement them. The Mental Health Services Act (MHSA) in California was designed to implement more recovery-oriented treatment modes. We use data from county funding requests and annual updates to examine how counties budgeted for recovery-oriented programs targeted to different age groups under MHSA. Findings indicate that initial per-client budgeting for Full Services Partnerships under MHSA was maintained in future cycles and counties budgeted less per client for children. With this analysis, we begin to benchmark resource allocation for programs that are intended to be recovery-oriented, which should be evaluated against appropriate outcome measures in the future to determine the degree of recovery-orientation. PMID:20440560
NASA Technical Reports Server (NTRS)
Vo, Q. D.
1984-01-01
A program written to simulate Real-Time Minimal-Byte-Error-Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. The program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for the corresponding concatenated systems. A (6,6/30) FUM code combined with a 6-bit Reed-Solomon code was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes, and a simulation program was also written to estimate the symbol-error probability of these codes.
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from low to high. The inverse problem is solved with a classic gradient method. Full-waveform modelling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization, and its algorithm is subdivided into three main steps. First, a symbolic analysis step re-orders the matrix coefficients to minimize fill-in during the subsequent factorization and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, providing the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, two simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
Taking Full Advantage of Children's Literature
ERIC Educational Resources Information Center
Serafini, Frank
2012-01-01
Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers make sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…
Pirotta, Martin; Aquilina, Dorothy; Bhikha, Tilluck; Georg, Dietmar
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (Zm), as well as on the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth ZR of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at Zm and at ZR), and the traditional "full-scatter" methodology. All methodologies except the "full-scatter" methodology separated head-scatter from phantom-scatter effects, and none of the methodologies except the ESTRO formalism utilised wedge depth-dose information for calculations. The accuracy of the MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed-SSD and isocentric geometries, for 6 and 10 MV. Overall, the ESTRO formalism was the most accurate, with a root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, and was worse for the beam data normalised at Zm (4% at 6 MV and 1.6% at 10 MV) than at ZR (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology lost accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were quantified and the
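The structure of a factorized MU calculation can be sketched as the prescribed dose divided by the dose per MU rebuilt from a reference output, a head-scatter factor (Sc), a phantom-scatter factor (Sp), a depth-dose ratio such as a TPR, and an inverse-square term. The function and all numeric values below are hypothetical; the exact ESTRO factor chain is not reproduced.

```python
def monitor_units(dose_cgy, ref_output_cgy_per_mu, sc, sp, tpr, inv_sq=1.0):
    # dose_cgy: prescribed dose at the calculation point
    # ref_output_cgy_per_mu: dose per MU under reference conditions
    # sc, sp: head-scatter and phantom-scatter factors (separated, as in
    # formalisms that distinguish collimator from phantom scatter)
    # tpr: depth-dose ratio relating the reference depth to the actual depth
    return dose_cgy / (ref_output_cgy_per_mu * sc * sp * tpr * inv_sq)
```

Separating Sc from Sp is what lets such a formalism handle blocked and wedged fields more gracefully than a single full-scatter output factor, consistent with the accuracy ranking reported above.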
NASA Astrophysics Data System (ADS)
Maj, P.; Baumbaugh, A.; Deptuch, G.; Grybos, P.; Szczygiel, R.
2012-12-01
Charge sharing is the main limitation of pixel detectors used in spectroscopic applications, and this applies to both timing and amplitude/energy spectroscopy. Although charge sharing has been the subject of many studies, there is still no definitive solution that can be implemented in hardware to suppress its negative effects, mainly because of the strict demands on low power dissipation and small silicon area per pixel. A first solution to this problem was proposed at CERN and implemented in the Medipix III chip; however, due to pixel-to-pixel threshold dispersion and some imperfections of the simplified algorithm, the hit allocation did not function properly. We present novel algorithms that allow proper hit allocation even in the presence of charge sharing and that can be implemented in an integrated circuit using a deep submicron technology. The simulations assumed not only the diffusive charge spread that occurs while the charge drifts toward the electrodes, but also limitations in the readout electronics, i.e. signal fluctuations due to noise and mismatch (gain and offsets). The simulations show that, using for example a silicon pixel detector in the low X-ray energy range, we are able to perform proper hit-position identification and use the information from the summing inter-pixel nodes for spectroscopic measurements.
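The summing-node idea can be sketched in a simplified, noise-free form: every 2x2 inter-pixel node reconstructs the total deposited charge, and the hit is allocated to the pixel holding the largest fraction within the winning node. The chip's actual circuits, and their handling of noise and mismatch, are not reproduced here.

```python
import numpy as np

def allocate_hit(pixels):
    # pixels: 2D array of per-pixel collected charge for one event
    # Sum every 2x2 neighbourhood (one sum per inter-pixel node)
    s = (pixels[:-1, :-1] + pixels[1:, :-1]
         + pixels[:-1, 1:] + pixels[1:, 1:])
    node = np.unravel_index(np.argmax(s), s.shape)
    # Within the winning 2x2 block, allocate to the pixel with most charge
    block = pixels[node[0]:node[0] + 2, node[1]:node[1] + 2]
    local = np.unravel_index(np.argmax(block), block.shape)
    pos = (node[0] + local[0], node[1] + local[1])
    return pos, float(s[node])   # (hit pixel, reconstructed total charge)
```

The returned node sum restores the full event energy for spectroscopy even when the charge was split over several pixels, which is precisely the information lost by naive per-pixel thresholding.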
A full field, 3-D velocimeter for microgravity crystallization experiments
NASA Technical Reports Server (NTRS)
Brodkey, Robert S.; Russ, Keith M.
1991-01-01
The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems and the appropriate hardware to fully implement this ultimate system are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
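As an illustration of how simple frame-to-frame particle tracking can be kept, here is a generic greedy nearest-neighbour matcher; this is a hedged stand-in, not the authors' code, and `max_disp` is an assumed gating threshold:

```python
import math

def track(frame_a, frame_b, max_disp=5.0):
    """Greedy nearest-neighbour matching of particle centroids between
    two frames: each particle in frame_a claims the closest unclaimed
    particle in frame_b within max_disp."""
    matches, used = [], set()
    for i, p in enumerate(frame_a):
        best, best_d = None, max_disp
        for j, q in enumerate(frame_b):
            if j in used:
                continue
            dist = math.dist(p, q)
            if dist < best_d:
                best, best_d = j, dist
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```

For well-separated particles and small displacements this resolves tracks unambiguously, which is consistent with the sub-second tracking times quoted above.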
ERIC Educational Resources Information Center
Merson, Martha, Ed.; Reuys, Steve, Ed.
1999-01-01
Following an introduction on "Taking Risks" (Martha Merson), this journal contains 11 articles on taking risks in teaching adult literacy, mostly by educators in the Boston area. The following are included: "My Dreams Are Bigger than My Fears Now" (Sharon Carey); "Making a Pitch for Poetry in ABE [Adult Basic Education]" (Marie Hassett); "Putting…
... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...
ERIC Educational Resources Information Center
Hopkins, Brian
2010-01-01
Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation, then tools from algebraic combinatorics can be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…
ERIC Educational Resources Information Center
Educational Leadership, 2011
2011-01-01
This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
ERIC Educational Resources Information Center
Educational Leadership, 2011
2011-01-01
More than 1.5 million K-12 students in the United States engage in online or blended learning, according to a recent report. As of the end of 2010, 38 states had state virtual schools or state-led online initiatives; 27 states plus Washington, D.C., had full-time online schools; and 20 states offered both supplemental and full-time online learning…
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, we adopt several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and finally the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark sequences currently in use, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy value, proving it to be an effective way to predict protein structure. PMID:25069136
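As a loose sketch of how such a PSO/GA/TS hybrid can be wired together (illustrative only: a toy sphere objective stands in for the off-lattice protein energy, and all parameter values are assumptions, not the paper's settings):

```python
import random

def sphere(x):
    # toy objective standing in for the off-lattice protein energy
    return sum(v * v for v in x)

def hybrid_optimize(dim=3, pop=20, iters=200, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=sphere)[:]
    tabu = []  # short memory of recent best energies (tabu-style)
    for _ in range(iters):
        for k in range(pop):
            for d in range(dim):
                # PSO velocity update with an extra stochastic disturbance term
                V[k][d] = (0.7 * V[k][d]
                           + 1.5 * rng.random() * (pbest[k][d] - X[k][d])
                           + 1.5 * rng.random() * (gbest[d] - X[k][d])
                           + 0.01 * rng.gauss(0.0, 1.0))
                X[k][d] += V[k][d]
            # GA step: "random linear" crossover with the global best
            a = rng.random()
            child = [a * xi + (1.0 - a) * gi for xi, gi in zip(X[k], gbest)]
            if sphere(child) < sphere(X[k]):
                X[k] = child
            if sphere(X[k]) < sphere(pbest[k]):
                pbest[k] = X[k][:]
        cand = min(pbest, key=sphere)[:]
        # TS step: if this energy was seen recently, kick it with a mutation
        if round(sphere(cand), 6) in tabu:
            cand[rng.randrange(dim)] += rng.gauss(0.0, 0.1)
        if sphere(cand) < sphere(gbest):
            gbest = cand[:]
        tabu = (tabu + [round(sphere(gbest), 6)])[-10:]
    return gbest, sphere(gbest)
```

The division of labour mirrors the abstract: PSO provides the global search, the linear crossover intensifies around the incumbent, and the tabu memory discourages revisiting the same energy basin.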
Evaluation of TCP congestion control algorithms.
Long, Robert Michael
2003-12-01
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, Wide Area Network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report involves evaluating alternative TCP congestion algorithms and comparing them with the currently used congestion algorithm. The goal was to find whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
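The key behavioural difference can be sketched from the published update rules: standard TCP congestion avoidance adds one segment per round trip and halves the window on loss, while Scalable TCP adds 0.01 per ACK (i.e. grows multiplicatively per RTT) and backs off only to 0.875. A toy per-RTT model of recovery after a loss at a congestion window of 10,000 segments:

```python
def reno_rtt(cwnd):
    # standard congestion avoidance: +1 segment per round-trip time
    return cwnd + 1.0

def scalable_rtt(cwnd):
    # Scalable TCP: +0.01 per ACK, i.e. ~1% multiplicative growth per RTT
    return cwnd * 1.01

def rtts_to_reach(step, start, target):
    # count round trips until the congestion window regrows to `target`
    cwnd, n = float(start), 0
    while cwnd < target:
        cwnd = step(cwnd)
        n += 1
    return n
```

After a loss at cwnd = 10,000, standard TCP resumes from 5,000 and needs 5,000 RTTs to recover, whereas Scalable TCP resumes from 8,750 and recovers in 14 RTTs; on a 30 ms path that is the difference between minutes and under a second, which is the motivation for the evaluation above.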
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
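FMG3D itself is a multi-block FAS multigrid solver; as a much-simplified one-dimensional illustration of the same building blocks (weighted-Jacobi smoothing, full-weighting restriction of the residual, interpolation-based prolongation, coarse-grid correction), a two-grid cycle for -u'' = f with zero Dirichlet boundaries might look like this. It is a sketch of the method, not the program described above, and assumes an even number of grid intervals:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    # weighted-Jacobi smoothing for -u'' = f on a uniform grid
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # full weighting: coarse value = (1/4, 1/2, 1/4) of the fine neighbours
    return np.concatenate(([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def twogrid(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth
    rc = restrict(residual(u, f, h))          # restrict the residual
    m, H = rc.size - 1, 2 * h
    # direct solve of the coarse error equation A_c e_c = r_c
    A = (np.diag(2.0 * np.ones(m - 1)) - np.diag(np.ones(m - 2), 1)
         - np.diag(np.ones(m - 2), -1)) / (H * H)
    ec = np.zeros(m + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)                      # prolong by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h)                # correct and post-smooth
```

A few such cycles reduce the residual by orders of magnitude; the full multigrid scheme in the abstract additionally nests this over a hierarchy of grids, converging the solution on each level before proceeding to the next finer one.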
ERIC Educational Resources Information Center
Neihart, Maureen
1999-01-01
Describes systematic risk-taking, a strategy designed to develop skills and increase self-esteem, confidence, and courage in gifted youth. The steps of systematic risk-taking include understanding the benefits, initial self-assessment for risk-taking categories, identifying personal needs, determining a risk to take, taking the risk, and…
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
ERIC Educational Resources Information Center
Boudreau, Sue
2010-01-01
The Take Action Project (TAP) was created to help middle school students take informed and effective action on science-related issues. The seven steps of TAP ask students to (1) choose a science-related problem of interest to them, (2) research their problem, (3) select an action to take on the problem, (4) plan that action, (5) take action, (6)…
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver, compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of the classical washout filters is that they are tuned by the worst-case-scenario method, which is based on trial and error and is affected by the driver's and programmer's experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take the minimisation of human perception error and the physical constraints into account; for this reason, the production of motion cues and the impact of the different classical washout filter parameters on those cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
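A heavily simplified sketch of the idea of GA-based washout tuning follows: a single first-order high-pass washout filter with one time constant, driven by a step-acceleration cue, tuned under a hard platform-travel constraint. Everything here, including the cost weights, bounds, and GA settings, is an illustrative assumption rather than the paper's MCA:

```python
import random

def highpass(x, tau, dt=0.01):
    # first-order high-pass washout: passes onset cues, washes out
    # sustained acceleration so the platform can re-centre
    a = tau / (tau + dt)
    y, prev, out = 0.0, 0.0, []
    for v in x:
        y = a * (y + v - prev)
        prev = v
        out.append(y)
    return out

def cost(tau, x, dt=0.01, limit=1.0):
    y = highpass(x, tau, dt)
    # perception error: deviation of the washed-out cue from the true one
    err = sum((yi - xi) ** 2 for yi, xi in zip(y, x)) * dt
    # platform displacement: double integration of the filtered acceleration
    v = d = peak = 0.0
    for yi in y:
        v += yi * dt
        d += v * dt
        peak = max(peak, abs(d))
    return err if peak <= limit else err + 1000.0  # hard travel constraint

def ga_tune(x, pop=20, gens=40, seed=0):
    rng = random.Random(seed)
    P = [rng.uniform(0.05, 5.0) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda t: cost(t, x))
        nxt = P[:pop // 2]                          # elitism: keep better half
        while len(nxt) < pop:
            if rng.random() < 0.2:
                nxt.append(rng.uniform(0.05, 5.0))  # random immigrant
            else:
                a, b = rng.sample(P[:pop // 2], 2)
                c = 0.5 * (a + b) + rng.gauss(0.0, 0.05)  # blend + mutation
                nxt.append(min(max(c, 0.01), 5.0))
        P = nxt
    return min(P, key=lambda t: cost(t, x))
```

The GA drives the time constant to the largest value whose simulated platform travel stays within the limit, trading perception error against workspace exactly as the abstract describes, but in one dimension instead of over the full filter parameter set.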
ERIC Educational Resources Information Center
Brown, Marshall A.
2013-01-01
Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of their own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…
Implicit, nonswitching, vector-oriented algorithm for steady transonic flow
NASA Technical Reports Server (NTRS)
Lottati, I.
1983-01-01
A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The use of low-cost vector array processors makes such calculations economically feasible. However, to utilize the new hardware fully, the algorithms developed must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.
ERIC Educational Resources Information Center
Bennett, Robert B., Jr.
2010-01-01
Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a cliched piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…
... Live a Full Life with Fibro Page Content Fibromyalgia is a chronic pain condition that affects 10 ... family, you can live an active life with fibromyalgia. Talking with Your Physician Take the first step ...
JWST Full Scale Model Being Built
The full-scale model of the James Webb Space Telescope is constructed for the 2010 World Science Festival in Battery Park, NY. The model takes about five days to construct. This video contains a ...
Simultaneous registration of structural and diffusion weighed images using the full DTI information
NASA Astrophysics Data System (ADS)
Nadeau, Hélène; Chai, Yaqiong; Thompson, Paul; Leporé, Natasha
2015-01-01
Banks of high-quality, multimodal neurological images offer new possibilities for analyses based on brain registration. To take full advantage of these, current algorithms should be significantly enhanced. We present here a new brain registration method driven simultaneously by the structural intensity and the total diffusion information of MRI scans. Using the two modalities together allows for a better alignment of general and specific aspects of the anatomy. Furthermore, keeping the full diffusion tensor in the cost function, rather than only some of its scalar measures, will allow for a thorough statistical analysis once the Jacobian of the transformation is obtained.
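A registration cost of the kind described, combining structural intensity with the full diffusion tensor, can be sketched as follows (the symbols are illustrative, not the authors' notation; in practice the tensor term also requires a reorientation operator R that depends on the local transformation):

```latex
E(\varphi) \;=\; \int \bigl( I_f(\mathbf{x}) - I_m(\varphi(\mathbf{x})) \bigr)^2 \, d\mathbf{x}
\;+\; \lambda \int \bigl\| D_f(\mathbf{x}) - R \, D_m(\varphi(\mathbf{x})) \, R^{\mathsf{T}} \bigr\|_F^2 \, d\mathbf{x}
\;+\; \mu \, \mathcal{R}(\varphi)
```

where \(I_f, I_m\) are the fixed and moving structural images, \(D_f, D_m\) the corresponding diffusion tensor fields, \(\|\cdot\|_F\) the Frobenius norm, \(\mathcal{R}\) a regulariser on the deformation \(\varphi\), and \(\lambda, \mu\) weights balancing the two modalities; keeping the full tensor in the second term, rather than scalar measures such as fractional anisotropy, is the point emphasised in the abstract.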
2007-09-12
Give and Take are a set of companion utilities that allow a secure transfer of files from one user to another without exposing the files to third parties. The named files are copied to a spool area. The receiver can retrieve the files by running the "take" program. Ownership of the files remains with the giver until they are taken. Certain users may be limited to taking files only from specific givers; for these users, files may only be taken from givers who are members of the gt-uid-group, where uid is the UNIX id of the limited user.
MedlinePlus Videos and Cool Tools
... better, the antibiotic is working in killing the bacteria, but it might not completely give what they call a "bactericidal effect." That means taking the bacteria completely out of the system. It might be ...
NASA Astrophysics Data System (ADS)
Heuer, Rolf-Dieter
2008-03-01
When the Economist recently reported the news of Rolf-Dieter Heuer's appointment as the next director-general of CERN, it depicted him sitting cross-legged in the middle of a circular track steering a model train around him - smiling. It was an apt cartoon for someone who is about to take charge of the world's most powerful particle accelerator: the 27 km-circumference Large Hadron Collider (LHC), which is nearing completion at the European laboratory just outside Geneva. What the cartoonist did not know is that model railways are one of Heuer's passions.
... O Milkshakes Pudding Popsicles You can NOT eat solid foods when you are on a full liquid ... bouillon, consommé, and strained cream soups, but NO solids) Sodas, such as ginger ale and Sprite Gelatin ( ...
Khardon, R.
1996-12-31
We formalize a model for supervised learning of action strategies in dynamic stochastic domains, and show that pac-learning results on Occam algorithms hold in this model as well. We then identify a particularly useful bias for action strategies based on production rule systems. We show that a subset of production rule systems, including rules in predicate calculus style, small hidden state, and unobserved support predicates, is properly learnable. The bias we introduce enables the learning algorithm to invent the recursive support predicates which are used in the action strategy, and to reconstruct the internal state of the strategy. It is also shown that hierarchical strategies are learnable if a helpful teacher is available, but that otherwise the problem is computationally hard.
Why Online Education Will Attain Full Scale
ERIC Educational Resources Information Center
Sener, John
2010-01-01
Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…
Categorizing Variations of Student-Implemented Sorting Algorithms
ERIC Educational Resources Information Center
Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri
2012-01-01
In this study, we examined freshmen students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after a lecture on sorting algorithms. The analysis revealed that many students have insufficient…
ERIC Educational Resources Information Center
Engelhardt, Lucas M.
2015-01-01
In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…
ERIC Educational Resources Information Center
Indiana State Dept. of Education, Indianapolis. Center for School Improvement and Performance.
During the 1987-88 school year the Indiana Department of Education assisted the United States Department of the Interior and the Indiana Department of Natural Resources with a program which asked students to become involved in activities to maintain and manage public lands. The 1987 Take Pride in America (TPIA) school program encouraged volunteer…
ERIC Educational Resources Information Center
Spitzer, Greg; Ogurek, Douglas J.
2009-01-01
Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
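The relative-neighborhood graph itself has a compact definition: an edge (u, v) exists iff no third node lies closer to both u and v than they are to each other. A brute-force construction is shown below as an illustration of that definition (O(n^3); the paper's routing algorithm is not reproduced here):

```python
import math
from itertools import combinations

def relative_neighborhood_graph(nodes):
    """Brute-force RNG: keep edge (u, v) iff no third node w satisfies
    max(d(u, w), d(v, w)) < d(u, v), i.e. the 'lune' between u and v
    contains no other node."""
    def d(a, b):
        return math.dist(a, b)
    n = len(nodes)
    edges = []
    for u, v in combinations(range(n), 2):
        duv = d(nodes[u], nodes[v])
        if all(max(d(nodes[u], nodes[w]), d(nodes[v], nodes[w])) >= duv
               for w in range(n) if w != u and w != v):
            edges.append((u, v))
    return edges
```

For three collinear nodes, for example, only the two short edges survive; the long edge is pruned because the middle node dominates it, which is exactly the sparsification a broadcast protocol exploits.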
Cubit Adaptive Meshing Algorithm Library
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Dubois, Arnaud; Boccara, Claude
2006-10-01
Optical coherence tomography (OCT) is an emerging technique for imaging of biological media with micrometer-scale resolution, whose most significant impact so far has been in ophthalmology. Since its introduction in the early 1990s, OCT has seen many improvements and refinements. Full-field OCT is our original approach to OCT, based on white-light interference microscopy. Tomographic images are obtained by combining interferometric images recorded in parallel by a detector array such as a CCD camera. Whereas conventional OCT produces B-mode (axially oriented) images like ultrasound imaging, full-field OCT acquires tomographic images in the en face (transverse) orientation. Full-field OCT is an alternative method to conventional OCT for providing ultrahigh-resolution images (approximately 1 microm), using a simple halogen lamp instead of a complex laser-based source. Various studies have been carried out, demonstrating the performance of this technology for three-dimensional imaging of ex vivo specimens. Full-field OCT can be used for non-invasive histological studies without sample preparation. In vivo imaging is still difficult because of object motion, and a lot of effort is currently devoted to overcoming this limitation. Ultra-fast full-field OCT was recently demonstrated with unprecedented image acquisition speed, but the detection sensitivity still has to be improved. Other research directions include increasing the imaging penetration depth in highly scattering biological tissues such as skin, and exploiting new contrasts such as optical birefringence to provide additional information on tissue morphology and composition. PMID:17026940
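In a common phase-shifting variant of full-field OCT (shown here as an illustration; the authors' acquisition scheme may differ), the en face tomographic amplitude is extracted from four interferometric camera frames whose reference phase is stepped by pi/2:

```python
import numpy as np

def enface_tomogram(frames):
    """Four-phase demodulation: given camera frames
    I_k = B + A*cos(phi + k*pi/2) for k = 0..3, recover the
    interference amplitude A (the en face tomographic image),
    rejecting the incoherent background B."""
    i0, i1, i2, i3 = frames
    return 0.5 * np.sqrt((i0 - i2) ** 2 + (i1 - i3) ** 2)
```

Because the background term B cancels in the differences, only light that interferes within the coherence volume contributes, which is what isolates a single en face slice.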
NASA Technical Reports Server (NTRS)
1929-01-01
Interior view of Full-Scale Tunnel (FST) model. (Small human figures have been added for scale.) On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow.'
ERIC Educational Resources Information Center
Lawton, Rebecca
2008-01-01
In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself to the water as a sea-anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the "C" train."" "Meaning?" The author asked her…
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Whishaw, I Q
1998-01-01
The experiment tested the prediction that spatial mapping takes time and asked whether time use is reflected in the overt behavior of a performing animal. The study examines this question by exploiting the expected behavioral differences of control rats and rats with hippocampal formation damage induced with fimbria-fornix (FF) lesions on a spatial navigation task. Previous studies have shown that control rats use a mapping strategy, in which they use the relative positions of environmental cues to reach places in space, whereas FF rats use a cue-based strategy, in which they are guided by a single cue or their own body orientation. Therefore, control and FF rats were overtrained on a complex foraging task in which they left a burrow to retrieve eight food pellets hidden around the perimeter of a circular table. The control rats retrieved the food pellets in order of their distance from the burrow, took direct routes to the food, and made few errors, all of which suggested they used a spatial strategy. The FF rats were less likely to retrieve food as a function of its distance, took a circular path around the perimeter of the table, and made many errors, suggesting they used a cue-based strategy. Despite taking shorter routes than the FF rats, the control rats had proportionally slower response speeds. Their slow response speeds support the hypothesis that spatial mapping takes time and that mapping time is reflected in behavior. The results are discussed in relation to their relevance to spatial mapping theory, hippocampal function, and the evolution of foraging strategies.
Full Tolerant Archiving System
NASA Astrophysics Data System (ADS)
Knapic, C.; Molinaro, M.; Smareglia, R.
2013-10-01
The archiving system at the Italian center for Astronomical Archives (IA2) manages data from external sources like telescopes, observatories, or surveys and handles them in order to guarantee preservation, dissemination, and reliability, in most cases in a Virtual Observatory (VO) compliant manner. A dynamic metadata-model constructor and a data archive manager are new concepts aimed at automating the management of different astronomical data sources in a fault-tolerant environment. The goal is a fully tolerant archiving system, a task complicated by the presence of varied and time-changing data models, file formats (FITS, HDF5, ROOT, PDS, etc.), and metadata content, even inside the same project. To avoid this unpleasant scenario, a novel approach is proposed in order to guarantee data ingestion, backward compatibility, and information preservation.
NASA Astrophysics Data System (ADS)
Riendeau, Diane; Hawkins, Stephanie; Beutlich, Scott
2016-03-01
Most teachers want students to think about their course content not only during class but also throughout their day. So, how do you get your students to see how what they learn in class applies to their lives outside of class? As physics teachers, we are fortunate that our students are continually surrounded by our content. How can we get them to notice the physics around them? How can we get them to make connections between the classroom content and their everyday lives? We would like to offer a few suggestions, Physics Take-Outs, to solve this problem.
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)
NASA Technical Reports Server (NTRS)
2007-01-01
This image of Jupiter is produced from a 2x2 mosaic of photos taken by the New Horizons Long Range Reconnaissance Imager (LORRI), and assembled by the LORRI team at the Johns Hopkins University Applied Physics Laboratory. The telescopic camera snapped the images during a 3-minute, 35-second span on February 10, when the spacecraft was 29 million kilometers (18 million miles) from Jupiter. At this distance, Jupiter's diameter was 1,015 LORRI pixels -- nearly filling the imager's entire (1,024-by-1,024 pixel) field of view. Features as small as 290 kilometers (180 miles) are visible.
Both the Great Red Spot and Little Red Spot are visible in the image, on the left and lower right, respectively. The apparent 'storm' on the planet's right limb is a section of the south tropical zone that has been detached from the region to its west (or left) by a 'disturbance' that scientists and amateur astronomers are watching closely.
At the time LORRI took these images, New Horizons was 820 million kilometers (510 million miles) from home -- nearly 5½ times the distance between the Sun and Earth. This is the last full-disk image of Jupiter LORRI will produce, since Jupiter is appearing larger as New Horizons draws closer, and the imager will start to focus on specific areas of the planet for higher-resolution studies.
Full Color Holographic Endoscopy
NASA Astrophysics Data System (ADS)
Osanlou, A.; Bjelkhagen, H.; Mirlis, E.; Crosby, P.; Shore, A.; Henderson, P.; Napier, P.
2013-02-01
The ability to produce color holograms from human tissue represents a major medical advance, specifically in the areas of diagnosis and teaching. This has been achieved at Glyndwr University. In cooperation with partners at Gooch & Housego, Moor Instruments, Vivid Components and the Peninsula Medical School, Exeter, UK, we have for the first time produced full color holograms of human cell samples in which the cell boundary and the nuclei inside the cells can be clearly focused at different depths - something impossible with a two-dimensional photographic image. This was the main objective set by the Peninsula Medical School at Exeter, UK. Achieving this objective means that clinically useful images essentially indistinguishable from the original human cells could be routinely recorded. This could potentially be done at the tip of a holo-endoscopic probe inside the body. Optimised recording exposure and development processes for the holograms were defined for bulk exposures. This included the optimisation of in-house recording emulsions for coating onto polymer substrates (rather than glass plates), a key step for large-volume commercial exploitation. At Glyndwr University, we also developed a new version of our in-house holographic emulsion, with world-leading resolution.
Chen Guanghong; Tokalkanahalli, Ranjini; Zhuang Tingliang; Nett, Brian E.; Hsieh Jiang
2006-02-15
A novel exact fan-beam image reconstruction formula is presented and validated using both phantom data and clinical data. This algorithm takes the form of the standard ramp filtered backprojection (FBP) algorithm plus local compensation terms, and will be referred to as a locally compensated filtered backprojection (LCFBP). An equal weighting scheme is utilized in this algorithm in order to properly account for redundantly measured projection data. The algorithm has the desirable property of maintaining a mathematically exact result for the full scan mode (2π), the short scan mode (π + full fan angle), and the supershort scan mode [less than (π + full fan angle)]. Another desirable feature of this algorithm is that it is derivative-free, which is beneficial in preserving the spatial resolution of the reconstructed images. The third feature is the equal weighting scheme itself, which gives the new algorithm better noise properties than standard filtered backprojection image reconstruction with a smooth weighting function. Both phantom data and clinical data were utilized to validate the algorithm and demonstrate its superior noise properties.
Brock, Dan W
1985-07-01
Alan Donagan's position regarding the morality of taking innocent human life, that it is impermissible regardless of the wishes of the victim, is criticized by Brock who argues for a rights-based alternative. His argument appeals to the nature of persons' actual interest in life and gives them an additional element of control which they lack if a nonwaivable moral duty not to kill prevails. The author rejects Donagan's view that stopping a life-sustaining treatment, even when a competent patient has consented, is morally wrong and that there is no moral difference between killing and allowing to die. A rights-based position permits stopping treatment of incompetent patients based on what the patient would have wanted or what is in his or her best interest, and allows the withholding of treatment from a terminally ill person, with the patient's consent and for a benevolent motive, to be evaluated as morally different from killing that patient.
NASA Astrophysics Data System (ADS)
2003-07-01
SOHO orbit (Credits: SOHO, ESA & NASA). Because of its static position, every three months the high-gain antenna loses sight of Earth. During this time, engineers will rotate the spacecraft by 180 degrees to regain full contact a few days later. Since 19 June 2003, SOHO's high-gain antenna (HGA), which transmits high-speed data to Earth, has been fixed in position following the discovery of a malfunction in its pointing mechanism. This resulted in a loss of signal through SOHO's usual 26-metre ground stations on 27 June 2003. However, 34-metre radio dishes continued to receive high-speed transmissions from the HGA until 1 July 2003. Since then, astronomers have been relying primarily on a slower transmission rate signal, sent through SOHO's backup antenna. It can be picked up whenever a 34-metre dish is available. However, this signal could not transmit all of SOHO's data. Some data was recorded on board, however, and downloaded using high-speed transmissions through the backup antenna when time on the largest, 70-metre dishes could be spared. SOHO itself orbits a point in space, 1.5 million kilometres closer to the Sun than the Earth, once every 6 months. To reorient the HGA for the next half of this orbit, engineers rolled the spacecraft through a half-circle on 8 July 2003. On 10 July, the 34-metre radio dish in Madrid re-established contact with SOHO's HGA. Then on the morning of 14 July 2003, normal operations with the spacecraft resumed through its usual 26-metre ground stations, as predicted. With the HGA now static, the blackouts, lasting between 9 and 16 days, will continue to occur every 3 months. Engineers will rotate SOHO by 180 degrees every time this occurs. This manoeuvre will minimise data losses. Stein Haugan, acting SOHO project scientist, says "It is good to welcome SOHO back to normal operations, as it proves that we have a good understanding of the situation and can confidently work around it."
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
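The flavor of such a sampling-based stopping rule can be sketched in a few lines (a plain normal-approximation confidence bound with hypothetical numbers, not Morton's actual rules): gap estimates are sampled in batches until a one-sided upper confidence bound on the mean optimality gap falls below a tolerance.

```python
import random
import statistics

def upper_confidence_bound(sample_gaps, z=1.645):
    """One-sided ~95% normal-approximation bound on the mean optimality gap."""
    n = len(sample_gaps)
    mean = statistics.mean(sample_gaps)
    sd = statistics.stdev(sample_gaps)
    return mean + z * sd / n ** 0.5

def stop_when_good(draw_gap, tol=0.05, batch=30, max_batches=50):
    """Sample gap estimates in batches; stop once the bound drops below tol."""
    gaps = []
    for _ in range(max_batches):
        gaps.extend(draw_gap() for _ in range(batch))
        if upper_confidence_bound(gaps) <= tol:
            return True, len(gaps)
    return False, len(gaps)

random.seed(0)
# stand-in for "solve a sampled problem and estimate its optimality gap"
stopped, n_samples = stop_when_good(lambda: abs(random.gauss(0.0, 0.04)))
```

The batch size and tolerance here are illustrative; the paper's theory concerns how to choose them so the resulting confidence statements are asymptotically valid.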
ERIC Educational Resources Information Center
Schuster, Dwight
2008-01-01
Physical models in the classroom "cannot be expected to represent the full-scale phenomenon with complete accuracy, not even in the limited set of characteristics being studied" (AAAS 1990). Therefore, by modifying a popular classroom activity called a "planet walk," teachers can explore upper elementary students' current understandings; create an…
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
Improvements of HITS Algorithms for Spam Links
NASA Astrophysics Data System (ADS)
Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao
The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
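For reference, the basic HITS iteration these variants build on can be sketched as follows (toy three-page graph; the trust-score and linkfarm-detection machinery of the paper is not shown):

```python
import numpy as np

def hits(adj, iters=50):
    """Kleinberg's HITS: alternate hub/authority updates with normalization."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs           # authority = sum of hub scores of in-links
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths             # hub = sum of authority scores of out-links
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# toy graph: pages 0 and 1 both link to page 2
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
hubs, auths = hits(A)   # page 2 gets the top authority score
```

A linkfarm inflates these scores precisely because a densely connected subgraph feeds the mutual hub/authority reinforcement, which is what the proposed trust-score weighting is meant to damp.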
Vector processor algorithms for transonic flow calculations
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Keller, J. D.; Hafez, M. M.
1979-01-01
This paper discusses a number of algorithms for solving the transonic full-potential equation in conservative form on a vector computer, such as the CDC STAR-100 or the CRAY-1. Recent research with the 'artificial density' method for transonics has led to development of some new iteration schemes which take advantage of vector-computer architecture without suffering significant loss of convergence rate. Several of these more promising schemes are described and 2-D and 3-D results are shown comparing the computational rates on the STAR and CRAY vector computers, and the CYBER-175 serial computer. Schemes included are: (1) Checkerboard SOR, (2) Checkerboard Leapfrog, (3) odd-even vertical line SOR, and (4) odd-even horizontal line SOR.
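A minimal red-black (checkerboard) SOR sketch for a 2-D Laplace problem illustrates why the scheme suits vector architectures: all points of one color update together from neighbors of the other color, so each half-sweep is one vector operation. (This is generic model-problem code, not the paper's full-potential solver.)

```python
import numpy as np

def checkerboard_sor(u, omega=1.5, iters=200):
    """Red-black SOR for the 2-D Laplace equation: all 'red' points update
    together, then all 'black' points. Boundary values are held fixed."""
    for parity_sweep in range(iters):
        for parity in (0, 1):
            # 4-neighbor average (np.roll wraps, but boundary points are masked out)
            nb = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            mask = np.indices(u.shape).sum(axis=0) % 2 == parity
            mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
            u[mask] += omega * (nb[mask] - u[mask])
    return u

# unit square, top edge held at 1, the other edges at 0
u = np.zeros((17, 17))
u[0, :] = 1.0
u = checkerboard_sor(u)   # center converges to 0.25 by symmetry
```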
A distributed Canny edge detector: algorithm and FPGA implementation.
Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J
2014-07-01
The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
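The block-adaptive thresholding idea can be sketched as follows (an illustrative percentile rule and a plain 4-connected hysteresis pass; the paper's block-type classification and nonuniform histogram are not reproduced):

```python
import numpy as np

def block_thresholds(grad_mag, p_high=0.8, ratio=0.4):
    """Pick high/low hysteresis thresholds from one block's gradient-magnitude
    distribution instead of frame-level statistics (hypothetical percentile rule)."""
    high = np.quantile(grad_mag, p_high)
    return high, ratio * high

def hysteresis(mag, low, high):
    """Keep strong pixels (>= high) plus weak pixels (>= low) that are
    4-connected to a strong pixel, by repeated one-pixel dilation."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    keep = strong.copy()
    while True:
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]
        grown[:-1, :] |= keep[1:, :]
        grown[:, 1:] |= keep[:, :-1]
        grown[:, :-1] |= keep[:, 1:]
        new = keep | (grown & weak)
        if np.array_equal(new, keep):
            return keep
        keep = new

mag = np.array([[0.0, 0.0, 0.0],
                [5.0, 3.0, 0.0],
                [0.0, 0.0, 0.0]])
edges = hysteresis(mag, low=2.0, high=4.0)   # the weak center pixel survives
```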
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
To offer mobile customers better service, we must first classify mobile users. Addressing the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, in an experiment on mobile user data, the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes and also yields rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
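The combination can be illustrated at its smallest scale: a toy GA tuning a single split threshold of a one-node decision stump (the paper's method operates on full C4.5-style trees; the data, operators, and parameters below are hypothetical).

```python
import random

def fitness(threshold, xs, ys):
    """Accuracy of a one-split stump: predict class 1 when x >= threshold."""
    return sum((x >= threshold) == y for x, y in zip(xs, ys)) / len(xs)

def ga_optimize(xs, ys, pop_size=20, gens=40, mut=0.1):
    """Toy GA: truncation selection, blend crossover, Gaussian mutation."""
    pop = [random.uniform(min(xs), max(xs)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, xs, ys), reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append((a + b) / 2 + random.gauss(0, mut))
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, xs, ys))

random.seed(1)
xs = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
ys = [0, 0, 0, 1, 1, 1]
best = ga_optimize(xs, ys)   # a threshold separating the two classes
```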
A DRAM compiler algorithm for high performance VLSI embedded memories
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1992-01-01
In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high-performance memory compiler are presented.
NASA Astrophysics Data System (ADS)
1998-11-01
HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the `Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title `Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the `Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the
[Decision on the rational algorithm in treatment of kidney cysts].
Antonov, A V; Ishutin, E Iu; Guliev, R N
2012-01-01
The article presents an algorithm for the diagnosis and treatment of renal cysts and other liquid neoplasms of the retroperitoneal space, based on an analysis of 270 case histories. The algorithm takes into account the achievements of modern medical technologies developed in recent years. Application of the proposed algorithm should improve the efficiency of diagnosis and the quality of treatment of patients with renal cysts.
Take-off mechanics in hummingbirds (Trochilidae).
Tobalske, Bret W; Altshuler, Douglas L; Powers, Donald R
2004-03-01
Initiating flight is challenging, and considerable effort has focused on understanding the energetics and aerodynamics of take-off for both machines and animals. For animal flight, the available evidence suggests that birds maximize their initial flight velocity using leg thrust rather than wing flapping. The smallest birds, hummingbirds (Order Apodiformes), are unique in their ability to perform sustained hovering but have proportionally small hindlimbs that could hinder generation of high leg thrust. Understanding the take-off flight of hummingbirds can provide novel insight into the take-off mechanics that will be required for micro-air vehicles. During take-off by hummingbirds, we measured hindlimb forces on a perch mounted with strain gauges and filmed wingbeat kinematics with high-speed video. Whereas other birds obtain 80-90% of their initial flight velocity using leg thrust, the leg contribution in hummingbirds was 59% during autonomous take-off. Unlike other species, hummingbirds beat their wings several times as they thrust using their hindlimbs. In a phylogenetic context, our results show that reduced body and hindlimb size in hummingbirds limits their peak acceleration during leg thrust and, ultimately, their take-off velocity. Previously, the influence of motivational state on take-off flight performance has not been investigated for any one organism. We studied the full range of motivational states by testing performance as the birds took off: (1) to initiate flight autonomously, (2) to escape a startling stimulus or (3) to aggressively chase a conspecific away from a feeder. Motivation affected performance. Escape and aggressive take-off featured decreased hindlimb contribution (46% and 47%, respectively) and increased flight velocity. When escaping, hummingbirds foreshortened their body movement prior to onset of leg thrust and began beating their wings earlier and at higher frequency. Thus, hummingbirds are capable of modulating their leg and
Optimal Consumption When Consumption Takes Time
ERIC Educational Resources Information Center
Miller, Norman C.
2009-01-01
A classic article by Gary Becker (1965) showed that when it takes time to consume, the first order conditions for optimal consumption require the marginal rate of substitution between any two goods to equal their relative full costs. These include the direct money price and the money value of the time needed to consume each good. This important…
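Becker's first-order condition can be stated compactly (standard notation, supplied here for reference: $p_i$ is the money price of good $i$, $t_i$ the time needed to consume one unit, and $w$ the wage that values that time):

```latex
% MRS between goods 1 and 2 equals the ratio of their full prices,
% where the full price of good i is p_i + w t_i
\frac{\partial U / \partial x_1}{\partial U / \partial x_2}
  = \frac{p_1 + w\,t_1}{p_2 + w\,t_2}
```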
Intelligent decision support algorithm for distribution system restoration.
Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod
2016-01-01
The distribution system is the means of revenue for an electric utility. It needs to be restored at the earliest if any feeder or the complete system is tripped out due to a fault or any other cause. Further, uncertainty of the loads results in variations in the distribution network's parameters. Thus, an intelligent algorithm incorporating hybrid fuzzy-grey relation, which can take the uncertainties into account and compare the sequences, is discussed to analyse and restore the distribution system. Simulation studies are carried out to show the utility of the method by ranking the restoration plans for a typical distribution system. This algorithm also meets smart grid requirements in terms of an automated restoration plan for partial/full blackout of the network.
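The grey-relational ranking step can be sketched in its common textbook form (the paper's hybrid fuzzy-grey formulation may differ; the criteria and plan values below are hypothetical): each plan is compared against an ideal reference sequence, and plans are ranked by their grey relational grade.

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate plan against a reference plan.
    Each plan is a list of normalized criteria values in [0, 1];
    rho is the conventional distinguishing coefficient."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)] for cand in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

ideal = [1.0, 1.0, 1.0]    # hypothetical criteria: load restored, speed, reliability
plans = [[0.9, 0.8, 1.0],  # plan A
         [0.6, 0.7, 0.5]]  # plan B
grades = grey_relational_grades(ideal, plans)   # higher grade = better plan
```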
Horváth, Gábor; Barta, Andras; Gál, József; Suhai, Bence; Haiman, Ottó
2002-01-20
To eliminate the shortcomings of imaging polarimeters that take the necessary three pictures sequentially through linear-polarization filters, a three-lens, three-camera, full-sky imaging polarimeter was designed that takes the required pictures simultaneously. With this polarimeter, celestial polarization patterns can be measured even when rapid temporal changes occur in the sky: under cloudy conditions, or immediately after sunrise or prior to sunset. One possible application of our polarimeter is the ground-based detection of clouds. Using the additional information of the degree and angle of polarization patterns of cloudy skies measured in the red (650 nm), green (550 nm), and blue (450 nm) spectral ranges, improved algorithms for radiometric cloud detection can be offered. We present a combined radiometric and polarimetric algorithm that detects clouds more efficiently and reliably than an exclusively radiometric cloud-detection algorithm. The advantages and limits of three-lens, three-camera, full-sky imaging polarimeters, as well as possibilities for improving our polarimetric cloud-detection method, are discussed briefly.
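The degree and angle of polarization follow from the three simultaneous images by the standard three-angle Stokes recovery (shown here for polarizer orientations 0°, 60° and 120°, a common choice; the instrument's actual filter angles and calibration chain may differ):

```python
import math

def stokes_from_three(i0, i60, i120):
    """Recover linear Stokes parameters S0, S1, S2 from intensities behind a
    linear polarizer at 0°, 60° and 120°, using
    I_theta = (S0 + S1*cos(2θ) + S2*sin(2θ)) / 2."""
    s0 = 2.0 * (i0 + i60 + i120) / 3.0
    s1 = 2.0 * (2.0 * i0 - i60 - i120) / 3.0
    s2 = 2.0 * (i60 - i120) / math.sqrt(3.0)
    return s0, s1, s2

def degree_and_angle(s0, s1, s2):
    """Degree of linear polarization and angle of polarization (radians)."""
    dolp = math.hypot(s1, s2) / s0
    aop = 0.5 * math.atan2(s2, s1)
    return dolp, aop

# fully horizontally polarized light, S = (1, 1, 0):
i0, i60, i120 = 1.0, 0.25, 0.25
s0, s1, s2 = stokes_from_three(i0, i60, i120)
dolp, aop = degree_and_angle(s0, s1, s2)
```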
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
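What "computing all stack distances" means is easiest to see in a naive serial sketch (the efficient serial algorithm being parallelized uses cleverer data structures; this quadratic version only illustrates the quantity being computed):

```python
def stack_distances(trace):
    """Serial LRU stack-distance computation: the distance of each reference
    is the number of distinct addresses touched since its previous use
    (None for a cold first use)."""
    stack = []            # LRU stack, most recently used at the front
    dists = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)   # depth in the stack = stack distance
            stack.remove(addr)
        else:
            d = None                # cold miss
        stack.insert(0, addr)
        dists.append(d)
    return dists

# a reference hits in an LRU cache of size C iff its stack distance is < C
trace = ['a', 'b', 'a', 'c', 'b']
dists = stack_distances(trace)   # [None, None, 1, None, 2]
```

One pass therefore yields hit ratios for every cache size at once, which is why stack-distance simulation is the standard approach.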
NASA Technical Reports Server (NTRS)
1990-01-01
One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms or overpressures that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
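The bang-off-bang profile and look-up-table idea can be sketched in a few lines. This is an illustrative one-dimensional reduction, not the flight algorithm: the function names, the fixed maneuver time `T`, and the acceleration limit `a` are assumptions for the example.

```python
import math

def bang_off_bang(d, T, a):
    """Burn time t1 for a bang-off-bang profile: full acceleration a for t1,
    coast, then full deceleration for t1, covering lateral offset d in time T.
    From d = a*t1*T - a*t1**2 (hypothetical 1-D simplification)."""
    disc = T * T - 4.0 * d / a
    if disc < 0:
        raise ValueError("offset not reachable in time T")
    return (T - math.sqrt(disc)) / 2.0

def simulate(d, T, a, n=10000):
    """Integrate the acceleration profile to check final offset and velocity."""
    t1 = bang_off_bang(d, T, a)
    dt = T / n
    x = v = t = 0.0
    for _ in range(n):
        if t < t1:
            acc = a          # full acceleration phase
        elif t < T - t1:
            acc = 0.0        # coast phase
        else:
            acc = -a         # full deceleration phase
        v += acc * dt
        x += v * dt
        t += dt
    return x, v

# Offline look-up table: collision geometry (required offset) -> burn time
table = {d: bang_off_bang(d, T=100.0, a=0.1) for d in (10.0, 50.0, 100.0)}
```

The onboard step then reduces to a table lookup indexed by the collision geometry, which is what makes real-time operation feasible.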
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
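As a toy illustration of the function-gas dynamics, not of the paper's lambda-calculus semantics: a fixed-size ensemble of unary integer maps in which a randomly chosen pair interacts by composition and the product displaces a random member. The basis functions, ensemble size, and number of collisions are arbitrary choices for the sketch.

```python
import random

random.seed(1)

BASIS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: 2 * x,
    "id":  lambda x: x,
}

def compose(f, g):
    """Interaction: composition of two (name, callable) pairs yields a new one."""
    name = f"({f[0]}.{g[0]})"
    return (name, lambda x: f[1](g[1](x)))

def step(gas):
    """One collision: two functions interact; the product replaces a random
    member, keeping the ensemble size fixed."""
    f, g = random.sample(gas, 2)
    gas[random.randrange(len(gas))] = compose(f, g)

gas = [(n, fn) for n, fn in BASIS.items()] * 4   # ensemble of 12 functions
for _ in range(100):
    step(gas)
```

Repeated collisions grow composite "genealogies" of functions; with the identity in the basis, self-reproducing compositions can persist, loosely echoing the role of self-replicators in the abstract.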
Routes Countries Can Take To Achieve Full Ownership Of Immunization Programs.
McQuestion, Michael; Carlson, Andrew; Dari, Khongorzul; Gnawali, Devendra; Kamara, Clifford; Mambu-Ma-Disu, Helene; Mbwanque, Jonas; Kizza, Diana; Silver, Dana; Paatashvili, Eka
2016-02-01
A goal of the Global Vaccine Action Plan, led by the World Health Organization, is country ownership by 2020, defined here as the point when a country fully finances its routine immunization program with domestic resources. This article reports the progress made toward country ownership in twenty-two lower- and lower-middle-income countries engaged in the Sabin Vaccine Institute's Sustainable Immunization Financing Program. We focus on new practices developed in the key public institutions concerned with immunization financing, budget and resource tracking, and legislation, using case studies as examples. Our analysis found that many countries are undertaking new funding mechanisms to reach financing goals. However, budget transparency remains a problem, as only eleven of the twenty-two countries have performed sequential analyses of their immunization program budgets. Promisingly, six countries (Cameroon, the Republic of the Congo, Nepal, Nigeria, Senegal, and Uganda) are creating new national immunization funding sources that are backed by legislation. Seven countries already have laws regarding immunization, and new immunization legislative projects are under way in thirteen others. PMID:26858379
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1995-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly, without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1994-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the developers of each algorithm have evaluated it, they each used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty, with the aim of finding the algorithm best suited to subdividing each block type. The proposed hypothesis is that, given the different approaches block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability that it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Personal pronouns and perspective taking in toddlers.
Ricard, M; Girouard, P C; Décarie, T G
1999-10-01
This study examined the evolution of visual perspective-taking skills in relation to the comprehension and production of first, second and third person pronouns. Twelve French-speaking and 12 English-speaking children were observed longitudinally from age 1;6 (years;months) until they had acquired all pronouns and succeeded on all tasks. Free-play sessions and three tasks were used to test pronominal competence. Four other tasks assessed Level-1 perspective-taking skills: two of these tasks required the capacity to consider two visual perspectives, and two others tested the capacity to coordinate three such perspectives. The results indicated that children's performance on perspective-taking tasks was correlated with full pronoun acquisition. Moreover, competence at coordinating two visual perspectives preceded the full mastery of first and second person pronouns, and competence at coordinating three perspectives preceded the full mastery of third person pronouns when a strict criterion was adopted. However, with less stringent criteria, the sequence from perspective taking to pronoun acquisition varied either slightly or considerably. These findings are discussed in the light of the 'specificity hypothesis' concerning the links between cognition and language, and also in the context of the recent body of research on the child's developing theory of mind.
Source Estimation by Full Wave Form Inversion
Sjögreen, Björn; Petersson, N. Anders
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem for estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, the start time, and frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth order accurate finite difference method, developed in [12], to evolve the waves forwards in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugated minimization algorithm. A new source point moment source discretization is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugated gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
GPU Accelerated Event Detection Algorithm
2011-05-25
Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event detection algorithms that can scale with the size of data; (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
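Step (a) of the approach, reducing a multi-dimensional sequence to a univariate change signal via windowed SVD, can be sketched as follows. The synthetic signal, window size, and hop are illustrative choices of ours, and plain batch SVD stands in for the incremental and tensor variants the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(400)
# Two-channel synthetic "grid" signal with an injected anomaly
data = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)], axis=1)
data += 0.01 * rng.standard_normal(data.shape)
data[250:260] += 2.0                         # injected anomaly

def change_series(series, win=20, hop=10):
    """Reduce a multi-dimensional series to a univariate change signal:
    the jump in the leading singular value between successive windows."""
    lead = []
    for start in range(0, len(series) - win + 1, hop):
        s = np.linalg.svd(series[start:start + win], compute_uv=False)
        lead.append(s[0])                    # leading singular value
    return np.abs(np.diff(np.array(lead)))

scores = change_series(data)
peak = int(np.argmax(scores))                # window pair with the largest change
```

A simple univariate detector (here, just the largest jump) then localizes the anomaly; windows overlapping the injected disturbance produce change scores far above the baseline.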
Take Your Leadership Role Seriously.
ERIC Educational Resources Information Center
School Administrator, 1986
1986-01-01
The principal authors of a new book, "Profiling Excellence in America's Schools," state that leadership is the single most important element for effective schools. The generic skills of leaders are flexibility, autonomy, risk taking, innovation, and commitment. Exceptional principals and teachers take their leadership and management roles…
ERIC Educational Resources Information Center
Grabowski, Carl
2008-01-01
Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…
Taking Chances in Romantic Relationships
ERIC Educational Resources Information Center
Elliott, Lindsey; Knox, David
2016-01-01
A 64 item Internet questionnaire was completed by 381 undergraduates at a large southeastern university to assess taking chances in romantic relationships. Almost three fourths (72%) self-identified as being a "person willing to take chances in my love relationship." Engaging in unprotected sex, involvement in a "friends with…
Full Duplex, Spread Spectrum Radio System
NASA Technical Reports Server (NTRS)
Harvey, Bruce A.
2000-01-01
The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
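The packet and capacity figures quoted above follow from simple arithmetic, sketched here (the constant names are ours):

```python
SAMPLE_RATE = 8000            # voice samples per second
SAMPLES_PER_PACKET = 16

# Each packet carries 16 samples at 8,000 samples/s = 2 ms of voice.
packet_ms = 1000.0 * SAMPLES_PER_PACKET / SAMPLE_RATE

# 8 TDMA slots on each of 3 FDMA frequencies = 24 simultaneous users.
users = 8 * 3

# Full-loop voice feedback spans roughly two to three packet times,
# giving the 4-6 ms delay figure quoted above.
feedback_ms = (2 * packet_ms, 3 * packet_ms)
```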
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei
2016-09-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.
Algorithmic Perspectives on Problem Formulations in MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
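The conventional SA loop described above (random start, temperature-controlled acceptance of worse moves, shrinking neighborhood) can be sketched as follows. This is the conventional algorithm, not RBSA, and the toy objective, cooling schedule, and constants are illustrative.

```python
import math
import random

random.seed(0)

def objective(x, y):
    """Toy objective with its global minimum at (1, -2)."""
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def anneal(steps=20000, temp0=10.0):
    # Starting configuration is randomly selected within the parameter space
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    cur = best = objective(x, y)
    bx, by = x, y
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9      # linear cooling schedule
        scale = temp / temp0 + 0.01                  # shrinking neighborhood
        nx, ny = x + random.gauss(0, scale), y + random.gauss(0, scale)
        cand = objective(nx, ny)
        # Accept better moves always; worse moves with Boltzmann probability
        if cand < cur or random.random() < math.exp(-(cand - cur) / temp):
            x, y, cur = nx, ny, cand
            if cur < best:
                best, bx, by = cur, x, y
    return bx, by, best

bx, by, best = anneal()
```

RBSA replaces the single chain above with a recursive tree of such searches running in parallel over subdivided regions of the parameter space.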
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
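A minimal sketch of what parameter continuation means in this spirit (not LOCA's API; LOCA is C++ and far more capable): step a parameter and warm-start Newton's method from the previous solution so each solve stays in the same solution branch. The scalar test problem is illustrative.

```python
def f(x, lam):
    """Toy nonlinear system f(x, lam) = 0, one unknown, one parameter."""
    return x ** 3 - x - lam

def df(x):
    return 3 * x ** 2 - 1

def newton(x, lam, tol=1e-12):
    """Newton's method for the nonlinear solve at fixed parameter lam."""
    for _ in range(50):
        step = f(x, lam) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(lam_max=2.0, n=40, x0=1.0):
    """Natural-parameter continuation: march lam, warm-starting each solve."""
    branch = []
    x = x0
    for i in range(n + 1):
        lam = lam_max * i / n
        x = newton(x, lam)          # previous solution seeds the next solve
        branch.append((lam, x))
    return branch

branch = continuation()
```

The warm start is the essential trick: each Newton solve begins close to a root, so it converges quickly and the algorithm traces a continuous solution branch as the parameter varies.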
Taking medicines to treat tuberculosis
... drugs. This is called directly observed therapy. Side Effects and Other Problems Women who may be pregnant, who are pregnant, or who are breastfeeding should talk to their provider before taking these ...
Taking Action for Healthy Kids.
ERIC Educational Resources Information Center
Kidd, Jill E.
2003-01-01
Summarizes research on relationship between physical activity, good nutrition, and academic performance. Offers several recommendations for how schools can take action to improve the nutrition and fitness of students. (PKP)
Brazilian physicists take centre stage
NASA Astrophysics Data System (ADS)
Curtis, Susan
2014-06-01
With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.
LRO Takes the Moon's Temperature
During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...
LRO Takes the Moon's Temperature
During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...
NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...
Full-Scale Tests of NACA Cowlings
NASA Technical Reports Server (NTRS)
Theodorsen, Theodore; Brevoort, M J; Stickle, George W
1937-01-01
A comprehensive investigation has been carried on with full-scale models in the NACA 20-foot wind tunnel, the general purpose of which is to furnish information in regard to the physical functioning of the composite propeller-nacelle unit under all conditions of take-off, taxiing, and normal flight. This report deals exclusively with the cowling characteristics under condition of normal flight and includes the results of tests of numerous combinations of more than a dozen nose cowlings, about a dozen skirts, two propellers, two sizes of nacelle, as well as various types of spinners and other devices.
On Approximate Factorization Schemes for Solving the Full Potential Equation
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1997-01-01
An approximate factorization scheme based on the AF2 algorithm is presented for solving the three-dimensional full potential equation for the transonic flow about isolated wings. Two spatial discretization variations are presented, one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The present algorithm utilizes a C-H grid topology to map the flow field about the wing. One version of the AF2 iteration scheme is used on the upper wing surface and another slightly modified version is used on the lower surface. These two algorithm variations are then connected at the wing leading edge using a local iteration technique. The resulting scheme has improved linear stability characteristics and improved time-like damping characteristics relative to previous implementations of the AF2 algorithm. The presentation is highlighted with a grid refinement study and a number of numerical results.
Fever and Taking Your Child's Temperature
... About Zika & Pregnancy Fever and Taking Your Child's Temperature KidsHealth > For Parents > Fever and Taking Your Child's ... a mercury thermometer.) previous continue Tips for Taking Temperatures As any parent knows, taking a squirming child's ...
Predictive Algorithm For Aiming An Antenna
NASA Technical Reports Server (NTRS)
Gawronski, Wodek K.
1993-01-01
Method of computing control signals to aim antenna based on predictive control-and-estimation algorithm that takes advantage of control inputs. Conceived for controlling antenna in tracking spacecraft and celestial objects, near-future trajectories of which are known. Also useful in enhancing aiming performances of other antennas and instruments that track objects that move along fairly well known paths.
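The core idea, exploiting a known near-future trajectory as feedforward on top of ordinary feedback, can be sketched as follows. The first-order antenna model, the gain, the sample time, and the target trajectory are all illustrative assumptions, not the reported controller.

```python
import math

DT = 0.1                         # control interval, seconds (illustrative)

def target_az(t):
    """Known target azimuth trajectory (hypothetical ephemeris)."""
    return 0.5 * t + 0.2 * math.sin(t)

def track(steps=300, kp=2.0, az0=1.0):
    """Predictive aiming: feedforward rate from the known next target
    position, plus proportional feedback on the current pointing error."""
    az = az0                     # antenna starts 1 rad off target
    errs = []
    for k in range(steps):
        t = k * DT
        rate_ff = (target_az(t + DT) - target_az(t)) / DT   # predictive term
        cmd = rate_ff + kp * (target_az(t) - az)            # rate command
        az += cmd * DT           # first-order antenna response model
        errs.append(abs(target_az(t + DT) - az))
    return errs

errs = track()
```

Because the feedforward term already supplies the motion of the known trajectory, the feedback loop only has to remove the initial offset, and the pointing error decays geometrically instead of lagging the moving target.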
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective, implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
ALFA: Automated Line Fitting Algorithm
NASA Astrophysics Data System (ADS)
Wesson, R.
2015-12-01
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
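The optimisation loop ALFA describes can be sketched with a toy genetic algorithm. This is not ALFA's code: the single Gaussian line, its fixed centre and width, the population size, and the operators below are all invented for illustration (real spectra involve hundreds of line parameters).

```python
import math
import random

def gaussian(x, amp, mu=0.0, sigma=1.0):
    # One synthetic emission line; centre and width are assumed known here.
    return amp * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def fitness(amp, xs, observed):
    # Negative sum of squared residuals: larger is better.
    return -sum((gaussian(x, amp) - y) ** 2 for x, y in zip(xs, observed))

def ga_fit(xs, observed, pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, xs, observed), reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = [0.5 * (rng.choice(parents) + rng.choice(parents))  # blend crossover
                    + rng.gauss(0.0, 0.1)                              # mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda a: fitness(a, xs, observed))

xs = [0.2 * i - 3.0 for i in range(31)]
observed = [gaussian(x, 4.0) for x in xs]   # noiseless target line, amplitude 4
best_amp = ga_fit(xs, observed)
```

The GA recovers the amplitude without any derivative information, which is the property that makes the approach robust for arbitrary wavelength coverage and resolution.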
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable, which limits their speed and prevents them from taking advantage of the current trend towards multi-core platforms. That speed limit in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains from optimized C code and from assembly instructions that exploit particular platform capabilities.
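SANDstorm itself is not reproduced here, but the structural idea that makes a hash parallelizable, processing independent chunks concurrently and then combining their digests, can be sketched as a toy tree-mode hash. SHA-256 stands in for the compression function; the chunk size and combining rule below are invented.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1024  # invented chunk size

def tree_hash(data: bytes, workers: int = 4) -> str:
    # Leaf digests are independent, so they can be computed in parallel;
    # a final hash over the concatenated leaves produces the root digest.
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaves = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
    return hashlib.sha256(b"".join(leaves)).hexdigest()
```

Note the result is independent of the worker count, which is the property the report's bench testing had to confirm: parallelization must change the speed, not the digest.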
NASA Astrophysics Data System (ADS)
Choi, Joseph; Howell, John
2015-05-01
Broadband, omnidirectional invisibility cloaking has been a goal of scientists since coordinate transformations were first suggested for cloaking. The requirements for realizing such a cloak can be simplified by considering only the paraxial ('small-angle') regime. We recap the experimental demonstration of paraxial ray-optics cloaking and theoretically complete its formalism by extending it to the full field of light. We then show how to build a full-field paraxial cloaking system.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
Taking Stands for Social Justice
ERIC Educational Resources Information Center
Lindley, Lorinda; Rios, Francisco
2004-01-01
In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of our work. The focus is on specific pedagogical strategies used…
ERIC Educational Resources Information Center
Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.
2012-01-01
Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…
Experiencing discrimination increases risk taking.
Jamieson, Jeremy P; Koslov, Katrina; Nock, Matthew K; Mendes, Wendy Berry
2013-02-01
Prior research has revealed racial disparities in health outcomes and health-compromising behaviors, such as smoking and drug abuse. It has been suggested that discrimination contributes to such disparities, but the mechanisms through which this might occur are not well understood. In the research reported here, we examined whether the experience of discrimination affects acute physiological stress responses and increases risk-taking behavior. Black and White participants each received rejecting feedback from partners who were either of their own race (in-group rejection) or of a different race (out-group rejection, which could be interpreted as discrimination). Physiological (cardiovascular and neuroendocrine) changes, cognition (memory and attentional bias), affect, and risk-taking behavior were assessed. Significant participant race × partner race interactions were observed. Cross-race rejection, compared with same-race rejection, was associated with lower levels of cortisol, increased cardiac output, decreased vascular resistance, greater anger, increased attentional bias, and more risk-taking behavior. These data suggest that perceived discrimination is associated with distinct profiles of physiological reactivity, affect, cognitive processing, and risk taking, implicating direct and indirect pathways to health disparities.
Taking Stock and Standing down
ERIC Educational Resources Information Center
Peeler, Tom
2009-01-01
Standing down is an action the military takes to review, regroup, and reorganize. Unfortunately, it often comes after an accident or other tragic event. To stop losses, the military will "stand down" until they are confident they can resume safe operations. Standing down is good for everyone, not just the military. In today's fast-paced world,…
ERIC Educational Resources Information Center
Fain, Paul
2008-01-01
College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…
NASA Astrophysics Data System (ADS)
Pockley, Peter
2008-11-01
Australia's science minister Kim Carr has appointed physical scientists to key posts. Penny Sackett, an astronomer, takes over as the government's chief scientist this month, while in January geologist Megan Clark will become chief executive of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the country's largest research agency. Both five-year appointments have been welcomed by researchers.
A time-accurate multiple-grid algorithm
NASA Technical Reports Server (NTRS)
Jespersen, D. C.
1985-01-01
A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.
When perspective taking increases taking: reactive egoism in social interaction.
Epley, Nicholas; Caruso, Eugene; Bazerman, Max H
2006-11-01
Group members often reason egocentrically, believing that they deserve more than their fair share of group resources. Leading people to consider other members' thoughts and perspectives can reduce these egocentric (self-centered) judgments such that people claim that it is fair for them to take less; however, the consideration of others' thoughts and perspectives actually increases egoistic (selfish) behavior such that people actually take more of available resources. A series of experiments demonstrates this pattern in competitive contexts in which considering others' perspectives activates egoistic theories of their likely behavior, leading people to counter by behaving more egoistically themselves. This reactive egoism is attenuated in cooperative contexts. Discussion focuses on the implications of reactive egoism in social interaction and on strategies for alleviating its potentially deleterious effects. PMID:17059307
Wolfe, A.
1986-03-10
Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is developing tools that make it easier for programmers to write applications that exploit vectorization in classical supercomputers and the parallelism emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge every programmer must face, because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations; the methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
Algorithm-development activities at USF continue. Our current priority is the algorithm for determining chlorophyll-a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. This work advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that can automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it naturally decomposes across parallel architectures.
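The rewrite the abstract describes, a compound decision decomposed into a disjunctive set of Boolean relations evaluated independently, can be sketched as follows. This is a toy illustration, not NASA's tool: the example condition and the thread-based evaluation are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def decide(a, b, c):
    # Original nested decision: (a and b) or ((not a) and c).
    # Rewritten as independent disjunctive clauses, each of which could run
    # on its own processor; the results are then OR-reduced.
    clauses = [
        lambda: a and b,
        lambda: (not a) and c,
    ]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda clause: clause(), clauses))
    return any(results)
```

Because the clauses share no state, evaluating them concurrently cannot change the answer; that independence is exactly what the disjunctive normal form guarantees.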
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive than its predecessor to nuisance alarms caused by environmental events. Furthermore, the algorithm is simple enough to be implemented in the 8-bit microprocessor used in the INSENS system.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
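As a concrete member of the general class the abstract describes (single-step explicit, equal order in space and time), here is the classical second-order Lax-Wendroff scheme for the periodic advection equation u_t + c u_x = 0. It is offered only as an illustration of the scheme family; it is not one of the paper's own algorithms, and the CFL number below is an invented choice.

```python
def lax_wendroff_step(u, cfl):
    # One explicit step on a periodic grid; cfl = c * dt / dx must satisfy
    # |cfl| <= 1 for stability. Second order in both space and time.
    n = len(u)
    return [
        u[i]
        - 0.5 * cfl * (u[(i + 1) % n] - u[i - 1])
        + 0.5 * cfl ** 2 * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
        for i in range(n)
    ]
```

On a periodic grid both difference terms telescope to zero, so the scheme conserves the discrete integral of u exactly, a useful sanity check for any member of this family.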
Shirzad, A.
2007-08-15
Gauge fixing may be done in different ways. We show that using the chain structure to describe a constrained system enables us to use either a full gauge, in which all gauged degrees of freedom are determined, or a partial gauge, in which some first class constraints remain as subsidiary conditions to be imposed on the solutions of the equations of motion. We also show that the number of constants of motion depends on the level in a constraint chain in which the gauge fixing condition is imposed. The relativistic point particle, electromagnetism, and the Polyakov string are discussed as examples and full or partial gauges are distinguished.
Optimal configuration algorithm of a satellite transponder
NASA Astrophysics Data System (ADS)
Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.
2016-04-01
This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is represented as a weighted oriented graph, stored as an interconnection structure in the program. The paper works through an example of the algorithm applied to a typical transparent repeater scheme, and the complexity of the algorithm is calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports, ranked in accordance with their priority. These constraints significantly reduce the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.
ERIC Educational Resources Information Center
Jensen, Jill; Kindem, Cathy
2011-01-01
Elementary students make great scientists. They are natural questioners and observers. Capitalizing on this natural curiosity and wonderment, the authors have developed a method of doing inquiry investigations with students that many teachers have found practical and user friendly. Their belief is that full inquiry lessons serve as a vital method…
Full breastfeeding and paediatric cancer
Ortega-García, Juan A.; Ferrís-Tortajada, Josep; Torres-Cantero, Alberto M.; Soldin, Offie P.; Torres, Encarna Pastor; Fuster-Soler, Jose L.; Lopez-Ibor, Blanca; Madero-López, Luis
2013-01-01
Aim: It has been suggested that there is an inverse association between breastfeeding and the risk of childhood cancer. We investigated the association between full breastfeeding and paediatric cancer (PC) in a case-control study in Spain. Methods: Maternal reports of full breastfeeding, collected through personal interviews using the Paediatric Environmental History, were compared among 187 children 6 months of age or older who had PC and 187 age-matched control siblings. Results: The mean duration of full breastfeeding was 8.43 weeks for cases and 11.25 weeks for controls. Cases had been bottle-fed significantly more often than controls (odds ratio (OR) 1.8; 95% confidence interval (CI) 1.1–2.8). Cases were significantly less often breastfed for at least 2 months (OR 0.5; 95% CI 0.3–0.8), for at least 4 months (OR 0.5; 95% CI 0.3–0.8), and for 24 weeks or more (OR 0.5; 95% CI 0.2–0.9). Conclusions: Breastfeeding was inversely associated with PC, with protection increasing with the duration of full breastfeeding. Additional research on possible mechanisms of this association may be warranted. Meanwhile, breastfeeding should be encouraged among mothers. PMID:17999666
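The odds-ratio arithmetic behind figures like those quoted above can be checked with a small helper. The 2x2 counts in the example are invented, not the study's data; the confidence interval uses Woolf's log-odds method, which is a standard choice, though the paper does not state which method it used.

```python
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    # OR = (a * d) / (b * c) for the 2x2 table [[a, b], [c, d]].
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

def woolf_ci(a, b, c, d, z=1.96):
    # Approximate 95% confidence interval on the log-odds scale.
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)
```

An OR above 1 with a CI excluding 1 (as in the bottle-feeding result) indicates a statistically significant positive association between the exposure and case status.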
Full potential multiple scattering theory
MacLaren, J.M.
1994-10-20
A practical method for performing self-consistent electronic structure calculations based upon full-potential multiple-scattering theory is presented. Solutions to the single-site Schroedinger equation are obtained by solving coupled-channel integral equations for a potential which is analytically continued out to the circumscribing sphere. This potential coincides with the full cell potential inside each atomic cell. Scattering matrices and wavefunctions for the full cell potential are obtained from surface Wronskian relations. The charge density is obtained from the single-particle Green's function, which is computed from the cell scattering matrices and wavefunctions using layer multiple-scattering theory. Self-consistent solutions require a solution of the Poisson equation at each iteration. The Poisson equation is solved using a variational cellular method: a local solution in each cell is augmented by a series of regular harmonics (solutions to Laplace's equation), and minimizing the Coulomb energy, subject to continuity of the potential across all cell boundaries, provides an expression for the coefficients of the regular harmonics. This method is applied to BCC Nb. Calculated properties converge well in angular momentum and show accuracy comparable to full-potential linearized muffin-tin orbital calculations.
Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks
Zeng, Yali; Xu, Li; Chen, Zhide
2015-01-01
As wireless sensor networks (WSNs) are often deployed in hostile environments, nodes in the networks are prone to large-scale failures that stop the network from working normally. In this case, an effective restoration scheme is needed to restore the faulty network in a timely manner. Most existing restoration schemes consider only the number of deployed nodes or fault tolerance alone, but fail to take into account the fact that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different respects. F2CRA constructs a fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs a dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when restoration cost takes priority, and P3CRA is suitable when network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault tolerance, a larger coverage area, and a better-balanced load after the restoration. PMID:26703616
Sleep Deprivation and Advice Taking
Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie
2016-01-01
Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by more or less qualified advisors. We asked whether sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants after one night of total sleep deprivation to participants after a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep-deprived participants show increased advice taking. An interaction of condition and competency of advisor, together with further post-hoc analyses, revealed that this effect was more pronounced for the medium-competency advisor than for the high-competency advisor. Furthermore, sleep-deprived participants benefited more from a highly competent advisor, in terms of stronger improvement in judgemental accuracy, than well-rested participants did. PMID:27109507
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
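The report's formulation uses semidefinite programming, which is not reproduced here. As a hedged sketch of the underlying allocation problem, the following minimizes ||Ax - b||^2 (A maps individual actuator forces x to the total commanded force/torque b) under per-actuator box limits, via projected gradient descent. The matrix, command, and limits are invented numbers, and the solver is a deliberately simple stand-in for the report's optimizer.

```python
def allocate(A, b, lo, hi, steps=2000, lr=0.05):
    # Projected gradient descent on f(x) = ||A x - b||^2 with lo <= x <= hi.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b and gradient g = 2 A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project each component back into its box
        x = [min(hi[j], max(lo[j], x[j] - lr * g[j])) for j in range(n)]
    return x
```

When the command is feasible, the solution matches it exactly; when an actuator limit binds, the projection saturates that actuator and the remaining error is the minimum achievable, which is the "best performance" criterion the report defines.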
NASA Technical Reports Server (NTRS)
Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.
2010-01-01
NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100-square-degree field over a 3.5-yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10^5 preselected stellar targets. The majority of the Kepler field, comprising 4 x 10^6 m_v < 20 sources, is sampled at a much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler team employ these images for astrometric and photometric reference but make them available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely available long-cadence legacy of photometric variation across a swathe of the Galactic disk.
Efficient 2d full waveform inversion using Fortran coarray
NASA Astrophysics Data System (ADS)
Ryu, Donghyun; Kim, Ahreum; Ha, Wansoo
2016-04-01
We developed a time-domain seismic inversion program that uses the coarray feature of the Fortran 2008 standard to parallelize the algorithm. We converted a 2D acoustic parallel full waveform inversion program based on the Message Passing Interface (MPI) to a coarray program and examined the performance of the two inversion programs. The results show that the coarray version of the waveform inversion program is slightly faster than the MPI version. The standard coarray feature lacks support for collective communication; however, since it was introduced only recently, it can be improved in subsequent standards. The parallel algorithm can also be applied to 3D seismic data processing.
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for the reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing, with the addition of a post-reconstruction filter to remove out-of-focus particles. The new algorithm is tested for reconstruction quality on synthetic particle fields as well as on a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume: it takes 1/9th the time MART needs to reconstruct the same volume, while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
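Computational refocusing followed by a post-reconstruction filter can be illustrated with a 1D shift-and-add sketch. This is not the paper's implementation: the geometry (three "apertures" with unit offsets), the periodic shifts, and the threshold are all invented. Particles at the chosen depth stack coherently across views, while out-of-focus contributions are spread thin and then thresholded away.

```python
def refocus(views, offsets, depth):
    # Shift each 1D view by offset * depth and average; in-focus particles
    # add coherently, out-of-focus ones smear across neighbouring pixels.
    n = len(views[0])
    out = [0.0] * n
    for row, off in zip(views, offsets):
        s = int(round(off * depth))
        for i in range(n):
            out[i] += row[(i + s) % n]
    return [v / len(views) for v in out]

def filter_refocused(img, threshold):
    # The post-reconstruction filter: keep only strongly focused intensity.
    return [v if v >= threshold else 0.0 for v in img]
```

The cost advantage over tomographic methods like MART comes from the fact that each refocused plane is just a shift-and-add pass over the views, with no iterative update.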
Photoacoustic imaging taking into account thermodynamic attenuation
NASA Astrophysics Data System (ADS)
Acosta, Sebastián; Montalto, Carlos
2016-11-01
In this paper we consider a mathematical model for photoacoustic imaging which takes into account attenuation due to thermodynamic dissipation. The propagation of acoustic (compressional) waves is governed by a scalar wave equation coupled to the heat equation for the excess temperature. We seek to recover the initial acoustic profile from knowledge of acoustic measurements at the boundary. We recognize that this inverse problem is a special case of boundary observability for a thermoelastic system. This leads to the use of control/observability tools to prove the unique and stable recovery of the initial acoustic profile in the weak thermoelastic coupling regime. This approach is constructive, yielding a solvable equation for the unknown acoustic profile. Moreover, the solution to this reconstruction equation can be approximated numerically using the conjugate gradient method. If certain geometrical conditions for the wave speed are satisfied, this approach is well-suited for variable media and for measurements on a subset of the boundary. We also present a numerical implementation of the proposed reconstruction algorithm.
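The conjugate gradient method mentioned above applies to a symmetric positive-definite system K x = y; in the paper, K would be the operator of the reconstruction equation acting on the unknown acoustic profile. The sketch below is generic and self-contained, with a small invented matrix standing in for that operator.

```python
def conjugate_gradient(K, y, iters=50, tol=1e-10):
    # Standard CG for a symmetric positive-definite K, starting from x = 0.
    n = len(y)
    x = [0.0] * n
    r = y[:]          # residual y - K x, since x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Kp = [sum(K[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Kp[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Kp[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps; in practice it is attractive for reconstruction problems precisely because K need only be applied, never formed or inverted explicitly.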
Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON
NASA Astrophysics Data System (ADS)
Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min
2016-06-01
Orthogonal frequency division multiple access passive optical networks (OFDMA-PONs) have been considered a promising solution for next-generation PONs due to their high spectral efficiency and flexible bandwidth allocation scheme. To take full advantage of these merits, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms that act on two different stages of the resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the physical-layer mapping stage. Targeting the global quality of service (QoS) of the OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the bandwidth allocation arbitration stage. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 subcarriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest-priority CoS under high load.
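A CoS/SLA arbitration step of the general kind described can be sketched as follows. This is an invented rule, not the paper's algorithm: requests are served in class-of-service order, each ONU's grant is capped by its SLA, and granting stops when the frame's bandwidth budget runs out.

```python
def allocate_bandwidth(requests, sla_cap, budget):
    # requests: list of (onu_id, cos_priority, demand), lower priority value
    # meaning a higher class of service. sla_cap caps each ONU's grant.
    grants = {}
    for onu, _prio, demand in sorted(requests, key=lambda r: r[1]):
        grant = min(demand, sla_cap.get(onu, demand), budget)
        grants[onu] = grant
        budget -= grant
    return grants
```

A real DBA would also have to enforce inter-ONU fairness within a class and map grants onto subcarrier/time slots (the frame-structure stage), which this sketch deliberately omits.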
NASA Astrophysics Data System (ADS)
Addiss, John W.; Collins, Adam; Proud, William G.
2009-12-01
Digital Speckle Radiography (DSR) is a technique that allows full-field displacement maps in a plane within an opaque material to be determined. The displacements are determined by tracking the motions of small sub-sections of a deforming speckle pattern, produced by seeding an internal layer with lead and taking flash x-ray images. An improved DSR algorithm is discussed which corrects the often poor contrast in DSR images so that the mean and variance of the speckle pattern are uniform. This considerably improves the correlation success relative to other similar algorithms for DSR experiments. A series of experiments involving the penetration of granular media by long-rod projectiles, and the improved correlation achieved using this new algorithm, are discussed.
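The contrast-equalization idea above amounts to normalizing each sub-window to a common mean and variance before correlating. A minimal sketch, assuming 1-D windows and standard normalized cross-correlation (the paper's actual algorithm is not reproduced here):

```python
# Sketch: normalize each sub-window to zero mean and unit variance, so
# correlation becomes insensitive to local brightness and contrast.

def normalize(window):
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    std = var ** 0.5
    if std == 0.0:
        std = 1.0   # constant window: avoid division by zero
    return [(v - mean) / std for v in window]

def ncc(a, b):
    """Normalized cross-correlation of two equal-size sub-windows, in [-1, 1]."""
    na, nb = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(na, nb)) / len(a)
```

After normalization, a window and any brightness/contrast-shifted copy of it correlate at exactly 1, which is why the tracking succeeds even in low-contrast radiographs.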
NASA Astrophysics Data System (ADS)
2007-08-01
New Wide Field Near-Infrared Imager for ESO's Very Large Telescope Europe's flagship ground-based astronomical facility, the ESO VLT, has been equipped with a new 'eye' to study the Universe. Working in the near-infrared, the new instrument - dubbed HAWK-I - covers about 1/10th the area of the Full Moon in a single exposure. It is uniquely suited to the discovery and study of faint objects, such as distant galaxies or small stars and planets. ESO PR Photo 36a/07 HAWK-I on the VLT After three years of hard work, HAWK-I (High Acuity, Wide field K-band Imaging) saw First Light on Yepun, Unit Telescope number 4 of ESO's VLT, on the night of 31 July to 1 August 2007. The first images obtained impressively demonstrate its potential. "HAWK-I is a credit to the instrument team at ESO who designed, built and commissioned it," said Catherine Cesarsky, ESO's Director General. "No doubt, HAWK-I will allow rapid progress in very diverse areas of modern astronomy by filling a niche of wide-field, well-sampled near-infrared imagers on 8-m class telescopes." "It's wonderful; the instrument's performance has been terrific," declared Jeff Pirard, the HAWK-I Project Manager. "We could not have hoped for a better start, and look forward to scientifically exciting and beautiful images in the years to come." During this first commissioning period all instrument functions were checked, confirming that the instrument performance is at the level expected. Different astronomical objects were observed to test different characteristics of the instrument. For example, during one period of good atmospheric stability, images were taken towards the central bulge of our Galaxy. Many thousands of stars were visible over the field and allowed the astronomers to obtain stellar images only 3.4 pixels (0.34 arcsecond) wide, uniformly over the whole field of view, confirming the excellent optical quality of HAWK-I. ESO PR Photo 36b/07 ESO PR Photo 36c/07 Nebula in Serpens (HAWK
GBT Dynamic Scheduling System: Algorithms, Metrics, and Simulations
NASA Astrophysics Data System (ADS)
Balser, D. S.; Bignell, C.; Braatz, J.; Clark, M.; Condon, J.; Harnett, J.; O'Neil, K.; Maddalena, R.; Marganian, P.; McCarty, M.; Sessoms, E.; Shelton, A.
2009-09-01
We discuss the scoring algorithm of the Robert C. Byrd Green Bank Telescope (GBT) Dynamic Scheduling System (DSS). Since the GBT is located in a continental, mid-latitude region where weather is dominated by water vapor and small-scale effects, the weather plays an important role in optimizing the observing efficiency of the GBT. We score observing sessions as a product of many factors. Some are continuous functions while others are binary limits taking values of 0 or 1, any one of which can eliminate a candidate session by forcing the score to zero. Others reflect management decisions to expedite observations by visiting observers, ensure the timely completion of projects, etc. Simulations indicate that dynamic scheduling can increase the effective observing time at frequencies higher than 10 GHz by about 50% over one full year. Beta tests of the DSS during Summer 2008 revealed the significance of various scheduling constraints and telescope overhead time to the overall observing efficiency.
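The multiplicative scoring structure described above can be sketched directly; the factor functions below are placeholders, not the real DSS weather or policy factors.

```python
# Sketch of the product-of-factors score: continuous factors in [0, 1]
# grade a session, while binary limits (0 or 1) act as hard vetoes --
# any single zero forces the whole score to zero.

def score(session, factors, limits):
    s = 1.0
    for f in factors:        # continuous factors, e.g. weather efficiency
        s *= f(session)
    for g in limits:         # binary limits, e.g. observer availability
        s *= g(session)
    return s
```

The appeal of this form is that management policies and hard constraints compose without special cases: adding a new veto is just one more factor in the product.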
Risk taking among diabetic clients.
Joseph, D H; Schwartz-Barcott, D; Patterson, B
1992-01-01
Diabetic clients must make daily decisions about their health care needs. Observational and anecdotal evidence suggests that vast differences exist between the kinds of choices diabetic clients make and the kinds of chances they are willing to take. The purpose of this investigation was to develop a diabetic risk-assessment tool. This instrument, which is based on subjective expected utility theory, measures risk-prone and risk-averse behavior. Initial findings from a pilot study of 18 women clients who are on insulin indicate that patterns of risk behavior exist in the areas of exercise, skin care, and diet. PMID:1729123
NASA Technical Reports Server (NTRS)
2004-01-01
This animation, made with images from the Mars Exploration Rover Spirit hazard-identification camera, shows the rover's perspective of its first post-egress drive on Mars Sunday. Engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack. The drive took approximately 30 minutes to complete, including time stopped to take images. Spirit first made a series of arcing turns totaling approximately 1 meter (3 feet). It then turned in place and made a series of short, straightforward movements totaling approximately 2 meters (6.5 feet).
Semioptimal practicable algorithmic cooling
NASA Astrophysics Data System (ADS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
NASA Technical Reports Server (NTRS)
1931-01-01
Wing and nacelle set-up in Full-Scale Tunnel (FST). The NACA conducted drag tests in 1931 on a P3M-1 nacelle which were presented in a special report to the Navy. Smith DeFrance described this work in the report's introduction: 'Tests were conducted in the full-scale wind tunnel on a five to four geared Pratt and Whitney Wasp engine mounted in a P3M-1 nacelle. In order to simulate the flight conditions the nacelle was assembled on a 15-foot span of wing from the same airplane. The purpose of the tests was to improve the cooling of the engine and to reduce the drag of the nacelle combination. Thermocouples were installed at various points on the cylinders and temperature readings were obtained from these by the power plants division. These results will be reported in a memorandum by that division. The drag results, which are covered by this memorandum, were obtained with the original nacelle condition as received from the Navy with the tail of the nacelle modified, with the nose section of the nacelle modified, with a Curtiss anti-drag ring attached to the engine, with a Type G ring developed by the N.A.C.A., and with a Type D cowling which was also developed by the N.A.C.A.' (p. 1)
Achieving and sustaining full employment.
Rosen, S M
1995-01-01
Human rights and public health considerations provide strong support for policies that maximize employment. Ample historical and conceptual evidence supports the feasibility of full employment policies. New factors affecting the labor force, the rate of technological change, and the globalization of economic activity require appropriate policies--international as well as national--but do not invalidate the ability of modern states to apply the measures needed. Among these the most important include: (1) systematic reduction in working time with no loss of income, (2) active labor market policies, (3) use of fiscal and monetary measures to sustain the needed level of aggregate demand, (4) restoration of equal bargaining power between labor and capital, (5) social investment in neglected and outmoded infrastructure, (6) accountability of corporations for decisions to shift or reduce capital investment, (7) major reductions in military spending, to be replaced by socially needed and economically productive expenditures, (8) direct public sector job creation, (9) reform of monetary policy to restore emphasis on minimizing unemployment and promoting full employment. None are without precedent in modern economies. The obstacles are ideological and political. To overcome them will require intellectual clarity and effective advocacy. PMID:7499512
Analysis of a parallel multigrid algorithm
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tuminaro, Ray S.
1989-01-01
The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.
NASA Technical Reports Server (NTRS)
1929-01-01
Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that contains continuous improvement while retaining its manageability.
Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE
Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.
2014-02-18
Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography in general is that, in many practical situations, full 360° angular access is not possible, and with limited access the quality of the reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in image Fourier-space data obtained from diffracted field measurements and couples them to an error minimization technique using constrained total variation (CTV) minimization. Initial results from simulated data are presented here to validate the approach.
Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements
Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel
2010-11-10
In this work, we have developed an algorithm that differs from the classical ones of phase-shifting interferometry. Those classical algorithms typically use constant or homogeneous phase displacements, and they can be quite accurate and insensitive to detuning when appropriate weight factors are taken in the formula used to recover the wrapped phase. However, such algorithms have not been considered with variable or inhomogeneous displacements. We have generalized these formulas, obtaining expressions for an implementation with variable displacements, along with ways to obtain algorithms partially insensitive to these arbitrary error shifts.
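As context for nonconstant displacements, the wrapped phase can be recovered from interferograms with arbitrary known shifts delta_k by least squares. This is the standard generalized phase-shifting formulation, sketched here as background; it is not necessarily the authors' weighted formulas.

```python
import math

# Sketch: fit the fringe model I_k = a0 + a1*cos(d_k) + a2*sin(d_k),
# where a1 = b*cos(phi) and a2 = -b*sin(phi), so phi = atan2(-a2, a1).
# Works for any set of at least three distinct known shifts d_k.

def wrapped_phase(intensities, deltas):
    rows = [[1.0, math.cos(d), math.sin(d)] for d in deltas]
    # normal equations A^T A x = A^T I  (3x3 system)
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * I for r, I in zip(rows, intensities)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    M = [row + [v] for row, v in zip(ata, atb)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    a0, a1, a2 = (M[i][3] / M[i][i] for i in range(3))
    return math.atan2(-a2, a1)
```

Because the shifts enter only through cos(d_k) and sin(d_k), the same code handles homogeneous and inhomogeneous displacement sequences alike.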
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
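The dynamic-programming idea can be sketched as a direct evaluator: walk the trace backwards, keeping only the current and next columns of subformula values, so memory is constant in the trace length. The code-generation aspect of the paper is omitted, and the tuple encoding of formulas is my own choice, not the paper's.

```python
# Sketch: finite-trace LTL evaluation by backward dynamic programming.
# Supported operators: 'ap', 'not', 'and', 'next' (strong), 'until'.

def evaluate(formula, trace):
    """formula: nested tuples such as ('until', ('ap', 'p'), ('ap', 'q'));
    trace: list of sets of atomic propositions, one set per event."""
    def subformulas(f):
        yield f
        for arg in f[1:]:
            if isinstance(arg, tuple):
                yield from subformulas(arg)

    subs = list(subformulas(formula))     # pre-order: parents first
    n = len(trace)
    nxt = {f: False for f in subs}        # values at position i + 1
    for i in range(n - 1, -1, -1):
        val = {}
        for f in reversed(subs):          # reversed pre-order: children first
            op = f[0]
            if op == 'ap':                # atomic proposition
                v = f[1] in trace[i]
            elif op == 'not':
                v = not val[f[1]]
            elif op == 'and':
                v = val[f[1]] and val[f[2]]
            elif op == 'next':            # strong next: false at the last event
                v = i < n - 1 and nxt[f[1]]
            elif op == 'until':           # f1 U f2, finite-trace semantics
                v = val[f[2]] or (val[f[1]] and i < n - 1 and nxt[f])
            val[f] = v
        nxt = val
    return nxt[formula]
```

Each event is visited once and each subformula once per event, giving the linear running time (with the constant depending on formula size) claimed in the abstract.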
The simplification of fuzzy control algorithm and hardware implementation
NASA Technical Reports Server (NTRS)
Wu, Z. Q.; Wang, P. Z.; Teh, H. H.
1991-01-01
The conventional inference composition algorithm of a fuzzy controller is very time and memory consuming. As a result, it is difficult to do real-time fuzzy inference, and most fuzzy controllers are realized by look-up tables. Here, researchers derive a simplified algorithm using mean-of-maximum defuzzification. This algorithm takes shorter computation time and needs less memory, thus making it possible to compute the fuzzy inference in real time and easy to tune the control rules on line. A hardware implementation based on the simplified fuzzy inference algorithm is described.
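The mean-of-maximum defuzzifier at the heart of the simplification is simple to state. A minimal sketch over a discretized output universe (the paper's full simplified inference algorithm is not reproduced here):

```python
# Sketch: mean-of-maximum defuzzification -- take the average of the
# output values whose membership degree equals the maximum degree.

def mean_of_maximum(universe, membership):
    peak = max(membership)
    maxima = [u for u, m in zip(universe, membership) if m == peak]
    return sum(maxima) / len(maxima)
```

Compared with centroid defuzzification, this needs only one pass to find the peak and one to average its locations, which is part of why the simplified algorithm is cheap enough for real-time and hardware use.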
50 CFR 216.161 - Specified activity and incidental take levels by species.
Code of Federal Regulations, 2010 CFR
2010-10-01
... GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS Taking of Marine Mammals Incidental to Shock Testing... one full ship-shock trial (FSST) of the USS MESA VERDE (LPD 19) during the time period between July...
Full-bridge capacitive extensometer
NASA Astrophysics Data System (ADS)
Peters, Randall D.
1993-08-01
Capacitive transducers have proven to be very effective sensors of small displacements, because of inherent stability and noninvasive high resolution. The most versatile ones have been those of a differential type, in which two elements are altered in opposite directions in response to change of the system parameter being monitored. Oftentimes, this differential pair has been incorporated into a bridge circuit, which is a useful means for employing synchronous detection to improve signal to noise ratios. Unlike previous differential capacitive dilatometers which used only two active capacitors, the present sensor is a full-bridge type, which is well suited to measuring low-level thermal expansions. This analog sensor is capable of 0.1 μm resolution anywhere within a range of several centimeters, with a linearity of 0.1%. Its user friendly output can be put on a strip chart recorder or directed to a computer for sophisticated data analysis.
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full-Scale Tunnel (FST): 120-Foot Truss hoisting, one and two point suspension. In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)
Full-field vibrometry with digital Fresnel holography
Leval, Julien; Picart, Pascal; Boileau, Jean Pierre; Pascal, Jean Claude
2005-09-20
A setup that permits full-field vibration amplitude and phase retrieval with digital Fresnel holography is presented. Full reconstruction of the vibration is achieved with a three-step stroboscopic holographic recording, and an extraction algorithm is proposed. The finite temporal width of the illuminating light is considered in an investigation of the distortion of the measured amplitude and phase. In particular, a theoretical analysis is proposed and compared with numerical simulations that show good agreement. Experimental results are presented for a loudspeaker under sinusoidal excitation; the mean quadratic velocity extracted from amplitude evaluation under two different measuring conditions is presented. Comparison with time averaging validates the full-field vibrometer.
Integrated Resilient Aircraft Control Project Full Scale Flight Validation
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2009-01-01
Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.
A full-scale STOVL ejector experiment
NASA Technical Reports Server (NTRS)
Barankiewicz, Wendy S.
1993-01-01
The design and development of thrust augmenting short take-off and vertical landing (STOVL) ejectors has typically been an iterative process. In this investigation, static performance tests of a full-scale vertical lift ejector were performed at primary flow temperatures up to 1560 R (1100 F). Flow visualization (smoke generators, yarn tufts and paint dots) was used to assess inlet flowfield characteristics, especially around the primary nozzle and end plates. Performance calculations are presented for ambient temperatures close to 480 R (20 F) and 535 R (75 F) which simulate 'seasonal' aircraft operating conditions. Resulting thrust augmentation ratios are presented as functions of nozzle pressure ratio and temperature. Full-scale experimental tests such as this are expensive, and difficult to implement at engine exhaust temperatures. For this reason the utility of using similarity principles -- in particular, the Munk and Prim similarity principle for isentropic flow -- was explored. At different primary temperatures, exit pressure contours are compared for similarity. A nondimensional flow parameter is then shown to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, then properly chosen performance parameters should be similar for both hot flow and cold flow model tests.
Full-color holographic 3D printer
NASA Astrophysics Data System (ADS)
Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio
2003-05-01
A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelength colors of red (λ=633 nm), green (λ=533 nm), and blue (λ=442 nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing in a 2D array the multiple exposures with these 3 wavelengths made on each 250 mm elementary hologram, and moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take a digital processing approach based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
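The basic single-channel veto algorithm that the competition variants build on can be sketched as follows; the constant overestimate and the toy target density are illustrative choices, not the paper's semi-realistic setting.

```python
import math
import random

# Sketch of the Sudakov veto algorithm, single channel: sample a trial
# scale from an easily invertible constant overestimate g(t) = g_const
# >= f(t), accept it with probability f(t)/g_const, and on rejection
# continue downward from the rejected scale. The accepted scale is then
# distributed as f(t) * exp(-integral_t^{t_max} f(s) ds).

def sudakov_veto(t_max, f, g_const, rng):
    """Returns the accepted scale, or 0.0 if evolution runs below 0."""
    t = t_max
    while True:
        # invert exp(-g_const * (t_old - t)) = r for the trial scale t
        t = t + math.log(rng.random()) / g_const
        if t <= 0.0:
            return 0.0                    # no emission above the cutoff
        if rng.random() < f(t) / g_const:
            return t                      # veto passed: accept
```

The key property, which the paper's formalism makes precise, is that the rejected trials drop out of the final distribution entirely, so any overestimate g works as long as it bounds f from above.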
Wahbeh, V.N.; Clark, J.H.; Naydo, W.R.; Horii, R.S.
1993-09-01
The high-purity-oxygen activated sludge process will be used to expand secondary treatment capacity and improve water quality in Santa Monica Bay. The facility is operated by the city of Los Angeles Department of Public Works' Bureau of Sanitation. The overall Hyperion Full Secondary Project is 30% complete, including a new headworks, a new primary clarifier battery, an electrical switch yard, and additional support facilities. The upgrading of secondary facilities is 50% complete, and construction of the digester facilities, the waste-activated sludge thickening facility, and the second phase of the three-phase modification to existing primary clarifier batteries has just begun. The expansion program will provide a maximum monthly design capacity of 19,723 L/s (450 mgd). Hyperion's expansion program uses industrial treatment techniques rarely attempted in a municipal facility, particularly on such a large scale, including: a user-friendly intermediate pumping station featuring 3.8-m Archimedes screw pumps with a capacity of 5479 L/s each; space-efficient, high-purity-oxygen reactors; a one-of-a-kind, 777-Mg/d oxygen-generating facility incorporating several innovative features that not only save money and energy, but reduce noise; design improvements in 36 new final clarifiers to enhance settling and provide high effluent quality; and egg-shaped digesters to respond to technical and aesthetic design parameters.
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account effects of the user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both the accuracy and diversity could be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why the HC algorithm's performance could be improved. This work highlights the effect of edge weight on a personalized recommendation study, which may be an important factor affecting personalized recommendation performance.
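The unweighted baseline that the paper modifies can be sketched compactly. In plain heat conduction each node takes the average of its neighbors' resource values; the weighted variant would additionally multiply each edge by a correlation-based weight, which is omitted here.

```python
# Sketch: one heat-conduction step on a user-object bipartite network.
# Resource starts at 1 on the target user's collected objects, diffuses
# object -> user -> object, with averaging at each receiving node.

def heat_conduction(adj, target_user):
    """adj: dict user -> set of collected objects.
    Returns HC scores for objects the target user has not collected."""
    objects = set().union(*adj.values())
    f = {o: 1.0 if o in adj[target_user] else 0.0 for o in objects}
    # step 1: each user averages over the objects they collected
    u_res = {u: sum(f[o] for o in items) / len(items)
             for u, items in adj.items()}
    # step 2: each object averages over the users who collected it
    scores = {}
    for o in objects:
        holders = [u for u, items in adj.items() if o in items]
        scores[o] = sum(u_res[u] for u in holders) / len(holders)
    return {o: s for o, s in scores.items() if o not in adj[target_user]}
```

Averaging at the receiver is what distinguishes heat conduction from mass diffusion (where the sender divides its resource), and it is why HC favors low-degree, diverse objects.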
Taking charge: a personal responsibility.
Newman, D M
1987-01-01
Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224
Modified Cholesky factorizations in interior-point algorithms for linear programming.
Wright, S.; Mathematics and Computer Science
1999-01-01
We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
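The pivot-modification idea can be sketched on a small dense factorization. Production interior-point codes work with sparse matrices and more careful strategies; the threshold and replacement value below are illustrative stand-ins.

```python
# Sketch: Cholesky factorization that, in the spirit of the modified
# algorithms used in interior-point codes, replaces a dangerously small
# or negative pivot with a huge value so the factorization always runs
# to completion (subsequent entries in that column become ~0).

def modified_cholesky(A, delta=1e-12, big=1e64):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d < delta:
            d = big          # patch the pivot instead of failing
        L[j][j] = d ** 0.5
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L
```

On a well-conditioned SPD matrix the patch never triggers and the result is the ordinary Cholesky factor; on the ill-conditioned systems that arise near an interior-point solution, the patch is what makes the step computation robust.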
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
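The first family of subalgorithms, shift-then-mask, can be sketched as a brute-force parameter search; the parameter ranges below are my own guesses, not values from the work.

```python
# Sketch: search for a right-shift and a contiguous bit mask such that
# (key >> shift) & mask is unique for every key in the set. A hit gives
# a collision-free, constant-time membership hash with no secondary
# lookup, as the abstract describes.

def find_shift_mask(keys, max_shift=32, max_bits=16):
    for bits in range(1, max_bits + 1):      # smallest mask first
        mask = (1 << bits) - 1
        for shift in range(max_shift + 1):
            hashed = {(k >> shift) & mask for k in keys}
            if len(hashed) == len(keys):     # all keys map to distinct values
                return shift, mask
    return None
```

Trying the smallest mask first matches the stated goal of keeping the generated value sequence as short (gap-free) as practicable.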
Primary Care Sports Medicine: A Full-Timer's Perspective.
ERIC Educational Resources Information Center
Moats, William E.
1988-01-01
This article describes the history and structure of a sports medicine facility, the patient care services it offers, and the types of injuries treated at the center. Opportunities and potentials for physicians who wish to enter the field of sports medicine on a full-time basis are described, as are steps to take to prepare to do so. (Author/JL)
Algorithms to Automate LCLS Undulator Tuning
Wolf, Zachary
2010-12-03
Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ T·m. The second field integral must be below 50 × 10⁻⁶ T·m². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.
Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel
2012-09-01
In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions often are easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of the complete microscope slides. We implemented a system that enables processing of full-resolution images, and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before it determines the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
Conjugate gradient algorithms using multiple recursions
Barth, T.; Manteuffel, T.
1996-12-31
Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
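For reference, the single short recursion discussed above is the classical conjugate gradient iteration; a minimal pure-Python sketch for a small symmetric positive-definite system (the 2×2 system is purely illustrative) might look like:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Classical CG with a single short recursion for the direction vectors,
    for a symmetric positive-definite matrix A (given as a list of rows)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A@x (x = 0)
    p = r[:]                                  # direction vector
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # the single recursion: new direction from residual and old direction
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)                  # solves A x = b
```

The multiple-recursion algorithms discussed in the abstract generalize the single `p` update above to several coupled direction-vector recursions.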
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
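The diamond-square procedure mentioned above (seed the corners, then alternate diamond and square passes with halving noise amplitude) can be sketched in pure Python; the grid size and noise parameters here are illustrative:

```python
import random

def diamond_square(k, roughness=1.0, seed=42):
    """Generate a (2**k + 1) x (2**k + 1) heightmap; seeding the corner
    values controls the overall shape of the landscape."""
    rng = random.Random(seed)
    size = 2 ** k + 1
    h = [[0.0] * size for _ in range(size)]
    for r, c in [(0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1)]:
        h[r][c] = rng.uniform(-1, 1)             # corner seed values
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond pass: centre of each square = mean of 4 corners + noise
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (h[r - half][c - half] + h[r - half][c + half] +
                       h[r + half][c - half] + h[r + half][c + half]) / 4
                h[r][c] = avg + rng.uniform(-scale, scale)
        # square pass: each edge midpoint = mean of its neighbours + noise
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                nbrs = [h[nr][nc]
                        for nr, nc in [(r - half, c), (r + half, c),
                                       (r, c - half), (r, c + half)]
                        if 0 <= nr < size and 0 <= nc < size]
                h[r][c] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
        step, scale = half, scale / 2            # finer steps, smaller noise
    return h

terrain = diamond_square(3)                      # a 9 x 9 heightmap
```

Halving the noise amplitude at each level is what produces the fractal, self-similar roughness of the resulting terrain.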
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
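Of the four, cyclic reduction is the easiest to sketch compactly. A serial pure-Python version for a tridiagonal system of n = 2^k − 1 unknowns follows; the point is that all eliminations within one level are mutually independent, which is what makes the method totally parallel (the sample system is illustrative):

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal) by
    cyclic reduction. Requires n = 2**k - 1 unknowns; every update within a
    level is independent of the others, so each level could run in parallel."""
    n = len(b)
    a, b, c, d = a[:], b[:], c[:], d[:]
    h = 1
    while 2 * h <= n:                      # forward elimination levels
        for i in range(2 * h - 1, n, 2 * h):
            al = a[i] / b[i - h]
            ga = c[i] / b[i + h] if i + h < n else 0.0
            b[i] -= al * c[i - h] + (ga * a[i + h] if i + h < n else 0.0)
            d[i] -= al * d[i - h] + (ga * d[i + h] if i + h < n else 0.0)
            a[i] = -al * a[i - h]
            c[i] = -ga * c[i + h] if i + h < n else 0.0
        h *= 2
    x = [0.0] * n
    while h >= 1:                          # back substitution, coarsest first
        for i in range(h - 1, n, 2 * h):
            xl = x[i - h] if i - h >= 0 else 0.0
            xr = x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        h //= 2
    return x

a, b, c = [0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0]
x = cyclic_reduction(a, b, c, [3.0, 4.0, 3.0])   # this system has solution (1, 1, 1)
```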
Vestibular symptoms and history taking.
Bisdorff, A
2016-01-01
History taking is an essential part of the diagnostic process in vestibular disorders. The approach of focusing strongly on the quality of symptoms, like vertigo, dizziness, or unsteadiness, is not that useful, as these symptoms often coexist and are all nonspecific: each of them may arise from vestibular and nonvestibular diseases (like cardiovascular disease), and they do not permit one to distinguish potentially dangerous from benign causes. Instead, patients should be categorized according to whether they have an acute, episodic, or chronic vestibular syndrome (AVS, EVS, or CVS) to narrow down the spectrum of differential diagnosis. Typical examples of disorders provoking an AVS would be vestibular neuritis or stroke of peripheral or central vestibular structures; of an EVS, Menière's disease, benign paroxysmal positional vertigo, or vestibular migraine; and of a CVS, long-standing uni- or bilateral vestibular failure or cerebellar degeneration. The presence of triggers should be established, with a main distinction between positional (change of head orientation with respect to gravity), head motion-induced (time-locked to head motion regardless of direction), and orthostatic position change, as the underlying disorders are quite different. Accompanying symptoms, like aural or neurologic symptoms but also chest pain or dyspnea, also help to orient toward the underlying cause. PMID:27638064
Microgravity Smoldering Combustion Takes Flight
NASA Technical Reports Server (NTRS)
1996-01-01
The Microgravity Smoldering Combustion (MSC) experiment lifted off aboard the Space Shuttle Endeavour in September 1995 on the STS-69 mission. This experiment is part of series of studies focused on the smolder characteristics of porous, combustible materials in a microgravity environment. Smoldering is a nonflaming form of combustion that takes place in the interior of combustible materials. Common examples of smoldering are nonflaming embers, charcoal briquettes, and cigarettes. The objective of the study is to provide a better understanding of the controlling mechanisms of smoldering, both in microgravity and Earth gravity. As with other forms of combustion, gravity affects the availability of air and the transport of heat, and therefore, the rate of combustion. Results of the microgravity experiments will be compared with identical experiments carried out in Earth's gravity. They also will be used to verify present theories of smoldering combustion and will provide new insights into the process of smoldering combustion, enhancing our fundamental understanding of this frequently encountered combustion process and guiding improvement in fire safety practices.
Hernández-Restrepo, M; Groenewald, J Z; Elliott, M L; Canning, G; McMillan, V E; Crous, P W
2016-01-01
Take-all disease of Poaceae is caused by Gaeumannomyces graminis (Magnaporthaceae). Four varieties are recognised in G. graminis based on ascospore size, hyphopodial morphology and host preference. The aim of the present study was to clarify boundaries among species and varieties in Gaeumannomyces by combining morphology and multi-locus phylogenetic analyses based on partial gene sequences of ITS, LSU, tef1 and rpb1. Two new genera, Falciphoriella and Gaeumannomycella were subsequently introduced in Magnaporthaceae. The resulting phylogeny revealed several cryptic species previously overlooked within Gaeumannomyces. Isolates of Gaeumannomyces were distributed in four main clades, from which 19 species could be delimited, 12 of which were new to science. Our results show that the former varieties Gaeumannomyces graminis var. avenae and Gaeumannomyces graminis var. tritici represent species phylogenetically distinct from G. graminis, for which the new combinations G. avenae and G. tritici are introduced. Based on molecular data, morphology and host preferences, Gaeumannomyces graminis var. maydis is proposed as a synonym of G. radicicola. Furthermore, an epitype for Gaeumannomyces graminis var. avenae was designated to help stabilise the application of that name. PMID:27504028
Improvements of satellite SST retrievals at full swath
NASA Astrophysics Data System (ADS)
McBride, Walton; Arnone, Robert; Cayula, Jean-François
2013-06-01
The ultimate goal of the prediction of Sea Surface Temperature (SST) from satellite data is to attain an accuracy of 0.3 K or better when compared to floating or drifting buoys located around the globe. Current daytime SST algorithms are able to routinely achieve an accuracy of 0.5 K for satellite zenith angles up to 53°. The full scan swath of VIIRS (Visible Infrared Imaging Radiometer Suite) results in satellite zenith angles up to 70°, so that successful retrieval of SST from VIIRS at these higher angles would greatly increase global coverage. However, the accuracy of present SST algorithms steadily degrades to nearly 0.7 K as the satellite zenith angle reaches 70°, due mostly to the effects of increased atmospheric path length. We investigated the use of Tfield, a gap-free first-guess temperature field used in NLSST, as a separate predictor in the MCSST algorithm in order to clearly evaluate its effects. Results of this new algorithm, TfieldSST, showed how its rms error is heavily dependent on the aggressiveness of the pre-filtering of buoy matchup data with respect to Tfield. It also illustrated the importance of fully exploiting the a priori satellite-only information contained in Tfield, presently constrained in the NLSST algorithm by the fact that it appears as a multiplier to another predictor. Preliminary results show that SST retrievals using TfieldSST could be obtained using the full satellite swath with a 30% improvement in accuracy at large satellite zenith angles, and that a fairly aggressive pre-filtering scheme could help attain the desired accuracy of 0.3 K or better using over 75% of the buoy matchup data.
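The regression machinery behind an MCSST-style algorithm is ordinary linear least squares over buoy matchups. A self-contained sketch with a tiny normal-equation solver and synthetic matchups follows; the coefficients and the exact functional form here are hypothetical, for illustration only, not operational values:

```python
import math
import random

def solve_lls(X, y):
    """Tiny normal-equation least squares via Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] +
         [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))   # partial pivoting
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

# synthetic matchups generated from hypothetical coefficients
true_coef = [1.0, 0.95, 2.1, -0.8]
rng = random.Random(2)
X, sst = [], []
for _ in range(50):
    t11 = rng.uniform(270.0, 300.0)        # 11-micron brightness temperature, K
    t12 = t11 - rng.uniform(0.0, 2.0)      # 12-micron channel
    theta = math.radians(rng.uniform(0.0, 70.0))
    row = [1.0, t11, t11 - t12, 1.0 / math.cos(theta) - 1.0]
    X.append(row)
    sst.append(sum(a * v for a, v in zip(true_coef, row)))
coef = solve_lls(X, sst)                   # recovers true_coef on noise-free data
```

Adding Tfield as a separate predictor, as investigated above, would simply be one more column in each matchup row.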
Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices
NASA Astrophysics Data System (ADS)
Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly
2010-01-01
In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking into account traffic lights. The algorithm is proposed for a programmable logic controller (PLC), which is proposed to be installed in an electric vehicle to control its motion speed in response to traffic light signals. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. Results of the experiments show high precision of public transport schedule fulfilment using the proposed algorithm.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The Satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the Satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, compensating for their respective disadvantages, improving the efficiency of the algorithm, and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
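The hybrid idea can be illustrated with a toy sketch: differential evolution over thresholded real vectors for global search, plus a greedy bit-flip hill climb for local refinement, minimizing the number of unsatisfied clauses. The clause set and all parameters below are made up, and the DE crossover is a simplified binomial variant:

```python
import random

def unsat_count(assign, clauses):
    """Objective: number of unsatisfied clauses (0 means satisfied)."""
    return sum(not any((assign[abs(l) - 1] == 1) == (l > 0) for l in cl)
               for cl in clauses)

def hybrid_de_sat(clauses, n_vars, pop_size=20, gens=200, F=0.8, CR=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_vars)] for _ in range(pop_size)]
    bits = lambda v: [1 if x > 0.5 else 0 for x in v]    # threshold decoding
    for _ in range(gens):
        for i in range(pop_size):                        # DE global search
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k])
                     if rng.random() < CR else pop[i][k] for k in range(n_vars)]
            if unsat_count(bits(trial), clauses) <= unsat_count(bits(pop[i]), clauses):
                pop[i] = trial
        # hill climbing: greedily flip single variables of the current best
        a = bits(min(pop, key=lambda v: unsat_count(bits(v), clauses)))
        for k in range(n_vars):
            flipped = a[:]
            flipped[k] ^= 1
            if unsat_count(flipped, clauses) < unsat_count(a, clauses):
                a = flipped
        if unsat_count(a, clauses) == 0:
            return a
    return bits(min(pop, key=lambda v: unsat_count(bits(v), clauses)))

# literals in DIMACS style: positive int = variable, negative = negated
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
solution = hybrid_de_sat(clauses, 3)
```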
The study on the synthesis approximate algorithm of GPS height conversion
NASA Astrophysics Data System (ADS)
Wu, Xiang-Yang; Wang, Qing; Gao, Bing; Liang, Hongbao
2009-12-01
The study of GPS height transformation algorithms to improve the accuracy of GPS height conversion has always been a hotspot in the field of geodesy. At present, there are many methods of converting GPS height into normal height, the most common of which is numerical approximation, and its algorithms are mostly confined to choosing a suitable function model or statistical model for the approximation. To address the limitations of single-model GPS height conversion methods, this article presents a comprehensive approximation algorithm, which combines functional approximation models with statistical approximation models in order to take full advantage of the regularity of the functional approximation model and the flexibility of the statistical approximation model. Based on the analysis of actual engineering data, the results show that the accuracy of GPS height transformation based on the integrated approximation algorithm is superior to that of a single function transformation model or a single statistical model. At the same time, it also relaxes the selection requirements on the function model and the statistical model.
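The combined functional-plus-statistical idea can be illustrated on a toy 1D profile: a closed-form linear trend plays the role of the functional model, and inverse-distance weighting of its residuals plays the role of the statistical model. The station data below are made up for illustration:

```python
def combined_height(stations, q):
    """Combined approximation along a 1D levelling profile: a linear trend
    (functional model, closed-form regression of the height anomaly) plus
    inverse-distance weighting of the trend residuals (statistical model).
    Assumes q does not coincide exactly with a control station."""
    n = len(stations)
    mx = sum(x for x, _ in stations) / n
    my = sum(h for _, h in stations) / n
    b = (sum((x - mx) * (h - my) for x, h in stations) /
         sum((x - mx) ** 2 for x, _ in stations))
    a = my - b * mx                                   # trend: h ~ a + b*x
    resid = [(x, h - (a + b * x)) for x, h in stations]
    w = [(1.0 / (q - x) ** 2, r) for x, r in resid]   # inverse-distance weights
    correction = sum(wi * r for wi, r in w) / sum(wi for wi, _ in w)
    return a + b * q + correction

# control points: (distance along the line in m, height anomaly in m)
stations = [(0.0, 0.10), (100.0, 0.30), (200.0, 0.50)]
anomaly = combined_height(stations, 150.0)
```

On this noise-free profile the residuals vanish, so the combined estimate reduces to the trend value; with real data the statistical term absorbs the local irregularities the functional model cannot represent.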
Full waveform inversion of solar interior flows
Hanasoge, Shravan M.
2014-12-10
The inference of flows of material in the interior of the Sun is a subject of major interest in helioseismology. Here, we apply techniques of full waveform inversion (FWI) to synthetic data to test flow inversions. In this idealized setup, we do not model seismic realization noise, focusing entirely on the problem of whether a chosen supergranulation flow model can be seismically recovered. We define the misfit functional as a sum of L₂-norm deviations in travel times between prediction and observation, as measured using short-distance filtered f and p₁ modes and large-distance unfiltered p modes. FWI allows for the introduction of measurements of choice and iteratively improving the background model, while monitoring the evolution of the misfit in all desired categories. Although the misfit is seen to uniformly reduce in all categories, convergence to the true model is very slow, possibly because it is trapped in a local minimum. The primary source of error is inaccurate depth localization, which, due to density stratification, leads to wrong ratios of horizontal and vertical flow velocities ("cross talk"). In the present formulation, the lack of sufficient temporal frequency and spatial resolution makes it difficult to accurately localize flow profiles at depth. We therefore suggest that the most efficient way to discover the global minimum is to perform a probabilistic forward search, involving calculating the misfit associated with a broad range of models (generated, for instance, by a Monte Carlo algorithm) and locating the deepest minimum. Such techniques possess the added advantage of being able to quantify model uncertainty as well as realization noise (data uncertainty).
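The suggested probabilistic forward search amounts to evaluating the misfit over a broad random sample of candidate models and keeping the deepest minimum. A generic sketch follows, with a made-up linear forward operator standing in for the seismic travel-time prediction (bounds and sample counts are illustrative):

```python
import random

def misfit(model, observed, predict):
    """Sum of squared deviations between predicted and observed travel times."""
    return sum((p - o) ** 2 for p, o in zip(predict(model), observed))

def monte_carlo_search(observed, predict, bounds, n_models=5000, seed=0):
    """Probabilistic forward search: evaluate the misfit for a broad random
    sample of candidate models and keep the deepest minimum found."""
    rng = random.Random(seed)
    best_model, best_misfit = None, float("inf")
    for _ in range(n_models):
        model = [rng.uniform(lo, hi) for lo, hi in bounds]
        m = misfit(model, observed, predict)
        if m < best_misfit:
            best_model, best_misfit = model, m
    return best_model, best_misfit

# made-up linear forward operator standing in for the seismic prediction
predict = lambda m: [m[0] + 0.5 * m[1], m[0] - 0.5 * m[1], 2.0 * m[1]]
observed = predict([1.0, 2.0])            # synthetic "true" flow model
model, res = monte_carlo_search(observed, predict, bounds=[(0, 3), (0, 3)])
```

Because every sampled model carries its own misfit value, the same sweep also yields the spread of acceptable models, which is the uncertainty-quantification advantage noted above.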
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out from the ROI that contains the cornea and pupil (but not from the rest of the image). One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
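The row-slice idea can be sketched on a synthetic frame: each horizontal slice through the pupil is a chord of the pupil circle, and by symmetry the chord midpoints stack up on the vertical line through the centre. The image, threshold, and geometry below are illustrative, not the system's actual data:

```python
def pupil_center_from_slices(image, threshold=50):
    """Estimate the pupil centre from horizontal row slices: on each row the
    dark pixels form a chord of the pupil circle, and the chord midpoints
    all lie on the vertical line through the centre."""
    mids, rows = [], []
    for y, row in enumerate(image):
        dark = [x for x, v in enumerate(row) if v < threshold]
        if dark and dark[-1] - dark[0] + 1 == len(dark):   # one contiguous chord
            mids.append((dark[0] + dark[-1]) / 2)
            rows.append(y)
    if not mids:
        return None
    return sum(mids) / len(mids), sum(rows) / len(rows)

# synthetic 9x9 frame: bright background (200) with a dark pupil disc (10)
img = [[10 if (x - 4) ** 2 + (y - 4) ** 2 <= 4 else 200 for x in range(9)]
       for y in range(9)]
center = pupil_center_from_slices(img)    # → (4.0, 4.0)
```

Because only the rows intersecting the ROI need to be scanned, this kind of slice processing pairs naturally with the subwindow readout described above.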
Real-time anomaly detection in full motion video
NASA Astrophysics Data System (ADS)
Konowicz, Glenn; Li, Jiang
2012-06-01
Improvements in sensor technology such as charge-coupled devices (CCD), as well as constant incremental improvements in storage space, have made the recording and storage of video more prevalent and lower-cost than ever before. However, the improvements in the ability to capture and store a wide array of video have required additional manpower to translate these raw data sources into useful information. We propose an algorithm for automatically detecting anomalous movement patterns within full motion video, thus reducing the amount of human intervention required to make use of these new data sources. The proposed algorithm tracks all of the objects within a video sequence and attempts to cluster each object's trajectory into a database of existing trajectories. Objects are tracked by first differentiating them from a Gaussian background model and then tracking them over subsequent frames based on a combination of size and color. Once an object is tracked over several frames, its trajectory is calculated and compared with other trajectories earlier in the video sequence. Anomalous trajectories are identified by their failure to cluster with other well-known movement patterns. Adding the proposed algorithm to an existing surveillance system could increase the likelihood of identifying an anomaly and allow for more efficient collection of intelligence data. Additionally, by operating in real-time, our algorithm allows for the reallocation of sensing equipment to those areas most likely to contain movement that is valuable for situational awareness.
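The clustering test at the core of the method, flagging a trajectory when it lies far from every known pattern, can be sketched minimally; the distance measure, threshold, and trajectories below are illustrative assumptions:

```python
def traj_distance(t1, t2):
    """Mean point-to-point distance between two equal-length trajectories."""
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(t1, t2)) / len(t1)

def is_anomalous(traj, database, threshold=2.0):
    """A trajectory is anomalous when it fails to cluster with (lies far
    from) every well-known movement pattern in the database."""
    return all(traj_distance(traj, known) > threshold for known in database)

# known patterns: objects moving left-to-right along the corridors y=0 and y=1
database = [[(x, 0) for x in range(5)], [(x, 1) for x in range(5)]]
normal = [(x, 0.5) for x in range(5)]     # near an existing cluster
odd = [(0, x) for x in range(5)]          # cuts across the corridor: anomalous
```

A real implementation would resample trajectories to a common length before comparison and add each non-anomalous trajectory back into the database so the set of known patterns grows over time.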
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
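A minimal sketch of one such variant follows, with the logistic map driving the attraction coefficient each iteration; the objective, bounds, and parameters are illustrative, and the logistic map is just one of the twelve maps studied:

```python
import math
import random

def chaotic_firefly(obj, dim, n=15, iters=100, gamma=0.01, seed=3):
    """Firefly algorithm sketch in which the attraction coefficient beta is
    driven by a chaotic logistic map instead of a fixed constant."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    light = [obj(p) for p in pop]                 # lower = brighter (minimizing)
    chaos = 0.7                                   # logistic-map state in (0, 1)
    for t in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)       # chaotic update
        alpha = 0.5 * (0.97 ** t)                 # decaying random-step size
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:           # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = chaos * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * rng.uniform(-1, 1)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = obj(pop[i])
    k = min(range(n), key=lambda i: light[i])
    return pop[k], light[k]

sphere = lambda v: sum(x * x for x in v)
best, value = chaotic_firefly(sphere, dim=2)
```

Because the chaotic state visits the whole interval (0, 1) rather than settling at one value, the attraction strength keeps alternating between strong and weak pulls, which is the extra search mobility the study exploits.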
NASA Astrophysics Data System (ADS)
Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo
1999-05-01
This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.
Aerodynamics of a beetle in take-off flights
NASA Astrophysics Data System (ADS)
Lee, Boogeon; Park, Hyungmin; Kim, Sun-Tae
2015-11-01
In the present study, we investigate the aerodynamics of a beetle in its take-off flights based on the three-dimensional kinematics of the inner (hindwing) and outer (elytron) wings and the body postures, which are measured with three high-speed cameras at 2000 fps. To track the highly deformable wing motions, we distribute 21 morphological markers and use the modified direct linear transform algorithm for the reconstruction of the measured wing motions. To realize different take-off conditions, we consider two types of take-off flights: one is the take-off from flat ground and the other is from a vertical rod mimicking a branch of a tree. It is first found that the elytron, which is flapped passively due to the motion of the hindwing, also has non-negligible wing-kinematic parameters. With the ground, the flapping amplitude of the elytron is reduced and the hindwing changes its flapping angular velocity during up- and downstrokes. On the other hand, the angle of attack on the elytron and hindwing increases and decreases, respectively, due to the ground. These changes in the wing motion are critically related to the aerodynamic force generation, which will be discussed in detail. Supported by the grant to Bio-Mimetic Robot Research Center funded by Defense Acquisition Program Administration (UD130070ID).
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application, and extremely reliable.
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application, and extremely reliable.
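The evolutionary loop itself is simple; a generic elitist sketch with a toy quadratic objective standing in for the flow-solver evaluation follows (all names and parameters are illustrative, not the authors' implementation):

```python
import random

def evolve(obj, dim, pop_size=20, gens=60, sigma=0.3, seed=7):
    """Minimal elitist evolutionary optimizer: Gaussian mutation of the
    design variables, keeping the best pop_size individuals each generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        children = [[x + rng.gauss(0.0, sigma) for x in rng.choice(pop)]
                    for _ in range(pop_size)]
        pop = sorted(pop + children, key=obj)[:pop_size]    # elitist truncation
    return pop[0], obj(pop[0])

# toy quadratic "drag" objective standing in for a flow-solver evaluation
drag = lambda v: sum((x - 0.5) ** 2 for x in v)
best, d = evolve(drag, dim=4)
```

In the aerodynamic setting, each `obj` call would be a full flow-solver run, so the evaluations inside one generation are typically distributed across processors; multi-objective variants replace the single sort key with Pareto ranking.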
A new algorithm for agile satellite-based acquisition operations
NASA Astrophysics Data System (ADS)
Bunkheila, Federico; Ortore, Emiliano; Circi, Christian
2016-06-01
Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first one the algorithm operates a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second one it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of the Periodic Sun-Synchronous Orbits.
Men: Take Charge of Your Health
... charge of your health. Make small changes every day. Small changes can add up to big results – ... screening. Ask your doctor about taking aspirin every day. If you are age 50 to 59, taking ...
Guide for Patients Taking Nonsteroidal Immunosuppressive Drugs
... taking adalimumab, etanercept, or infliximab: Check your temperature frequently, and report a fever to your physician ... Receptor Antagonists For patients taking basiliximab: Check your temperature frequently, and report a fever to your physician ...
Taking medicines - what to ask your doctor
... medicine you take. Know what medicines, vitamins, and herbal supplements you take. Make a list of your medicines ... Will this medicine change how any of my herbal or dietary supplements work? Ask if your new medicine interferes with ...
Take-off of heavily loaded airplanes
NASA Technical Reports Server (NTRS)
Proll, A
1928-01-01
In the present article, several suggestions will be made for shortening the otherwise long take-off distance. For the numerical verification of the process, I will use a graphic method for determining the take-off distance of seaplanes.
Taking your blood pressure at home (image)
... sure you are taking your blood pressure correctly. Compare your home machine with the one at your ...
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
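For orientation, a compact sketch of a BA-style loop follows, with a heavy-tailed jump (a Cauchy draw standing in for a Lévy flight) added to the usual frequency-tuned velocity update. This is a generic illustration, not the paper's CBA, and all parameters are made up:

```python
import random

def bat_algorithm(obj, dim, n=20, iters=200, seed=5):
    """Minimal BA-style sketch: frequency-tuned velocities, decaying
    loudness, and occasional heavy-tailed (Cauchy) jumps standing in for
    Levy flights."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    fit = [obj(p) for p in pos]
    g = min(range(n), key=lambda i: fit[i])
    best, best_f = pos[g][:], fit[g]
    loud = 0.9                                      # acceptance loudness
    for _ in range(iters):
        for i in range(n):
            f = rng.uniform(0.0, 2.0)               # echolocation frequency
            vel[i] = [v + (x - b) * f for v, x, b in zip(vel[i], pos[i], best)]
            cand = [x + v for x, v in zip(pos[i], vel[i])]
            if rng.random() > 0.5:                  # local walk around the best
                cand = [b + 0.01 * rng.gauss(0.0, 1.0) for b in best]
            if rng.random() < 0.1:                  # heavy-tailed long jump
                step = rng.gauss(0.0, 1.0) / max(abs(rng.gauss(0.0, 1.0)), 1e-9)
                cand = [b + 0.1 * step for b in best]
            cf = obj(cand)
            if cf < fit[i] and rng.random() < loud:
                pos[i], fit[i] = cand, cf
            if cf < best_f:
                best, best_f = cand[:], cf
        loud *= 0.99                                # quieter as it homes in
    return best, best_f

sphere = lambda v: sum(x * x for x in v)
best, val = bat_algorithm(sphere, dim=2)
```

The CBA described above replaces the fixed update constants with draws from a cloud model, so the "bats approach their prey" concept is realized with controlled randomness rather than the hard-coded schedule used here.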
Taking stock of water resources
NASA Astrophysics Data System (ADS)
Nuttle, William
You can only manage what you measure. If this maxim is correct, then a recent report by the U.S. Geological Survey [2002] promises a vast improvement in water management in the United States. The report proposes a consolidated, national accounting of availability and use of fresh water. The proposed accounting clearly will be superior to the present absence of a nationwide assessment of fresh water resources. But is it enough? Traditionally, water managers have measured the availability of fresh water by comparing the volume of water available from various sources against estimated demand. The proposed national assessment adheres to this approach. Gauging water by volume is fine if we are only interested in whether our glasses will be full or empty. But throw an endangered species or wetland preservation into the mix, and the picture becomes less clear.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
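The viscous friction-like alignment term described above can be sketched in a few lines. This is a generic illustration, not the paper's exact control law; the names are mine:

```python
def alignment_term(v_i, neighbors_v, c_frict=0.5):
    """Viscous friction-like velocity alignment: an acceleration pulling
    agent i's velocity toward the mean velocity of its neighbors, which
    damps relative motion between nearby agents."""
    if not neighbors_v:
        return [0.0] * len(v_i)  # no neighbors, no alignment force
    mean = [sum(vs) / len(neighbors_v) for vs in zip(*neighbors_v)]
    return [c_frict * (m - v) for m, v in zip(mean, v_i)]
```

Because the term vanishes when velocities agree, it acts purely as a damper on velocity differences, which is why it helps suppress the oscillations introduced by communication delay and sensor noise.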
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
Der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
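The abstract names Laguerre's iteration as the root-solving workhorse. A minimal, non-robustified version for a real polynomial with real roots is sketched below; this is a generic illustration, not the lambert2 implementation:

```python
import math

def laguerre(coeffs, x0, n_iter=50, tol=1e-12):
    """Laguerre's root-finding iteration for a polynomial with real
    roots. coeffs are highest-degree first."""
    n = len(coeffs) - 1

    def horner(c, x):  # evaluate p, p', p'' at x in one pass
        p = pd = pdd = 0.0
        for a in c:
            pdd = pdd * x + 2 * pd
            pd = pd * x + p
            p = p * x + a
        return p, pd, pdd

    x = x0
    for _ in range(n_iter):
        p, pd, pdd = horner(coeffs, x)
        if abs(p) < tol:
            break
        g = pd / p
        h = g * g - pdd / p
        rad = math.sqrt(max((n - 1) * (n * h - g * g), 0.0))
        # pick the sign that maximizes the denominator magnitude
        denom = g + rad if abs(g + rad) >= abs(g - rad) else g - rad
        x -= n / denom
    return x
```

Laguerre's method is attractive here for the reason the abstract hints at: it converges cubically near simple roots and is unusually insensitive to the starting guess, which is what "modified for robustness" builds on.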
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
A limited-memory algorithm for bound-constrained optimization
Byrd, R.H.; Peihuang, L.; Nocedal, J.
1996-03-01
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
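The gradient projection step at the heart of the method is easy to sketch. The fragment below omits the limited-memory BFGS Hessian model and line search, using a fixed step purely for illustration (names are hypothetical):

```python
def projected_gradient(f_grad, x0, bounds, step=0.1, iters=500):
    """Gradient projection for box constraints: take a gradient step,
    then project each coordinate back onto its interval [lo, hi]."""
    x = list(x0)
    for _ in range(iters):
        g = f_grad(x)
        x = [min(max(xi - step * gi, lo), hi)
             for xi, gi, (lo, hi) in zip(x, g, bounds)]
    return x

# Minimize (x-3)^2 + y^2 subject to 0 <= x <= 2, -1 <= y <= 1:
# the bound-constrained minimum sits on the boundary at x = 2, y = 0.
sol = projected_gradient(lambda v: [2 * (v[0] - 3), 2 * v[1]],
                         [0.0, 0.5], [(0.0, 2.0), (-1.0, 1.0)])
```

The projection identifies which bounds are active; the paper's contribution is to exploit the compact form of the limited-memory approximation so this identification and the subspace minimization stay cheap.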
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
A fully scalable online pre-processing algorithm for short oligonucleotide microarray atlases
Lahti, Leo; Torrente, Aurora; Elo, Laura L.; Brazma, Alvis; Rung, Johan
2013-01-01
Rapid accumulation of large and standardized microarray data collections is opening up novel opportunities for holistic characterization of genome function. The limited scalability of current preprocessing techniques has, however, formed a bottleneck for full utilization of these data resources. Although short oligonucleotide arrays constitute a major source of genome-wide profiling data, scalable probe-level techniques have been available only for a few platforms, based on pre-calculated probe effects from restricted reference training sets. To overcome these key limitations, we introduce a fully scalable online-learning algorithm for probe-level analysis and pre-processing of large microarray atlases involving tens of thousands of arrays. In contrast to the alternatives, our algorithm scales up linearly with respect to sample size and is applicable to all short oligonucleotide platforms. The model can use the most comprehensive data collections available to date to pinpoint individual probes affected by noise and biases, providing tools to guide array design and quality control. This is the only available algorithm that can learn probe-level parameters based on sequential hyperparameter updates at small consecutive batches of data, thus circumventing the extensive memory requirements of the standard approaches and opening up novel opportunities to take full advantage of contemporary microarray collections. PMID:23563154
Full-waveform data for building roof step edge localization
NASA Astrophysics Data System (ADS)
Słota, Małgorzata
2015-08-01
Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings, including multi-objective solutions that lead to the generation of Pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
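The Pareto fronts mentioned above are simply the non-dominated subset of the multi-objective results. A generic extraction sketch (not the paper's solver; both objectives are minimized here):

```python
def pareto_front(points):
    """Return the Pareto front of (objective1, objective2) pairs, both to
    be minimized: a point survives unless some other point is at least as
    good in both objectives."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

In a drag-vs-weight wing study, for example, each surviving point is a design where improving one objective necessarily worsens the other.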
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
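Success-driven step-size adaptation of the kind EPSAs use can be illustrated with a generic (1+1)-style evolutionary loop using a 1/5th-success-rule flavor of adaptation. This is a sketch of the general idea, not Hart's exact scheme:

```python
import random

def one_plus_one_es(f, x0, iters=2000, sigma=1.0, seed=1):
    """Minimize f with a (1+1) evolutionary loop whose mutation step
    self-adapts: grow the step after a success, shrink it after a
    failure."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5   # success: expand the search step
        else:
            sigma *= 0.9   # failure: contract toward the incumbent
    return x, fx
```

The contraction on failure is exactly what makes a stationary-point argument possible: near a local minimum the success rate drops, the step size shrinks, and the search provably stalls only where the gradient vanishes.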
Branfoot, T
1994-01-01
Injured motorcyclists may have a damaged and unstable cervical spine (C-spine). This paper examines whether a helmet can be safely removed, and how and when this should be done. The literature is reviewed and the recommendations of the Trauma Working Party of the Joint Colleges Ambulance Liaison Committee are presented. PMID:7921566
A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)
NASA Astrophysics Data System (ADS)
Cantó, J.; Curiel, S.; Martínez-Gómez, E.
2009-07-01
Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm in optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
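The two distinguishing aspects, no encoding and asexual reproduction, can be sketched directly: each generation is built as perturbed copies of the current best, with the perturbation range contracted over time. A minimal illustration in the spirit of AGA (names and the contraction schedule are my own assumptions):

```python
import random

def aga_maximize(f, bounds, n_off=10, gens=100, shrink=0.9, seed=0):
    """Maximize f over box bounds: no genome encoding, and each new
    generation consists of asexually mutated copies of the current best,
    drawn uniformly from a range that shrinks every generation."""
    rng = random.Random(seed)
    spans = [hi - lo for lo, hi in bounds]
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    for _ in range(gens):
        offspring = [
            [min(max(b + rng.uniform(-s, s), lo), hi)
             for b, s, (lo, hi) in zip(best, spans, bounds)]
            for _ in range(n_off)
        ]
        best = max(offspring + [best], key=f)  # elitist selection
        spans = [s * shrink for s in spans]    # tighten the search range
    return best

# Maximize -(x - 1)^2 over [-10, 10]; the optimum sits at x = 1.
best = aga_maximize(lambda v: -(v[0] - 1.0) ** 2, [(-10.0, 10.0)])
```

For the chi-square fits the abstract describes, f would simply be the negative chi-square of the model against the radial-velocity or SED data.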
A New Approximate Chimera Donor Cell Search Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Nixon, David (Technical Monitor)
1998-01-01
The objectives of this study were to develop a chimera-based full potential methodology compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm compatible with the chimera full potential approach. This work presents a new donor cell search algorithm suitable for use with a chimera-based full potential solver. The algorithm was found to be extremely fast and simple, producing donor cells at rates as high as 60,000 per second.
Predictive search algorithm for vector quantization of images
NASA Astrophysics Data System (ADS)
Kuo, Chung-Ming; Hsieh, Chaur-Heh; Weng, Shiuh-Ku
2002-05-01
We present a fast predictive search algorithm for vector quantization (VQ) based on a wavelet transform and a weighted average Kalman filter (WAKF). With the proposed algorithm, the minimum distortion code word can be found by searching only a portion of the wavelet transformed code book. If the minimum distortion code word found falls within a predicted search area obtained by the WAKF algorithm, the relative address, which is shorter than the absolute address for a full search range, is sent to the decoder. Simulation results indicate that the proposed algorithm achieves a significant reduction in computations and about a 30% bit-rate reduction compared to conventional full search VQs. In addition, the reconstructed quality is equivalent to that of the full search algorithm.
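The full-search baseline that such predictive schemes accelerate is straightforward: compare the input vector against every codeword and emit the index of the nearest one. A minimal sketch (not the WAKF method itself):

```python
def vq_full_search(vector, codebook):
    """Plain full-search VQ encoding: return the index of the codeword
    with minimum squared distortion against the input vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(vector, codebook[i]))

# Codebook of 3 codewords; [0.9, 1.1] is closest to codeword 1.
idx = vq_full_search([0.9, 1.1], [[0, 0], [1, 1], [5, 5]])  # → 1
```

The predictive scheme restricts this scan to a neighborhood of the predicted codeword and, when the prediction holds, transmits the shorter relative index instead of the absolute one.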
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.
Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging
NASA Astrophysics Data System (ADS)
Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ
2015-01-01
Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies both as a function of time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).
Full-search-equivalent pattern matching with incremental dissimilarity approximations.
Tombari, Federico; Mattoccia, Stefano; Di Stefano, Luigi
2009-01-01
This paper proposes a novel method for fast pattern matching based on dissimilarity functions derived from the Lp norm, such as the Sum of Squared Differences (SSD) and the Sum of Absolute Differences (SAD). The proposed method is full-search equivalent, i.e., it yields the same results as the Full Search (FS) algorithm. In order to pursue computational savings, the method deploys a succession of increasingly tighter lower bounds of the adopted Lp norm-based dissimilarity function. Such bounding functions allow for establishing a hierarchy of pruning conditions aimed at rapidly skipping those candidates that cannot satisfy the matching criterion. The paper includes an experimental comparison between the proposed method and other full-search equivalent approaches known in the literature, which proves the remarkable computational efficiency of our proposal. PMID:19029551
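The idea of pruning without changing the result can be shown with the simplest full-search-equivalent condition: abandon a candidate as soon as its partial SAD already exceeds the best score found so far. This is a simpler relative of the paper's hierarchy of Lp lower bounds, offered only as an illustration:

```python
def best_match(template, candidates):
    """Full-search-equivalent SAD matching with early termination: a
    candidate is abandoned the moment its running SAD can no longer beat
    the current best, so the returned match equals the full-search one."""
    best_i, best_sad = -1, float("inf")
    for i, cand in enumerate(candidates):
        sad = 0.0
        for t, c in zip(template, cand):
            sad += abs(t - c)
            if sad >= best_sad:  # this candidate can no longer win
                break
        else:
            best_i, best_sad = i, sad
    return best_i, best_sad
```

Because SAD only grows as terms accumulate, every pruned candidate is provably worse than the incumbent, which is exactly the sense in which such methods are "full-search equivalent."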
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
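The two building blocks of the procedure, a concentration-weighted emissivity mix and the brightness-temperature-to-emissivity conversion, can be sketched as follows. This is an assumption-laden simplification (linear mixing, T_B = e * T_s surface approximation), not the operational per-channel algorithm:

```python
def effective_emissivity(c_ice, e_ice, e_water):
    """Linear mixing of ice and open-water emissivities weighted by ice
    concentration c_ice in [0, 1] (the kind of mixing formulation the
    abstract describes)."""
    return c_ice * e_ice + (1.0 - c_ice) * e_water

def brightness_to_emissivity(t_b, t_surface):
    """Convert brightness temperature to emissivity assuming the simple
    surface relation T_B = e * T_s."""
    return t_b / t_surface
```

With the effective emissivity fixed by the 6 GHz step, the same relation run in reverse yields the surface temperature that then normalizes the 18 GHz and 37 GHz channels.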
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Optical rate sensor algorithms
NASA Astrophysics Data System (ADS)
Uhde-Lacovara, Jo A.
1989-12-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
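The classic sequential baseline in this family is the greedy 1/2-approximation: scan edges by decreasing weight and keep each edge whose endpoints are both still free. A minimal sketch (the baseline, not the paper's new algorithm):

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weighted matching.
    edges is a list of (u, v, weight) tuples; returns the matched pairs
    and their total weight."""
    matched, total, used = [], 0.0, set()
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in used and v not in used:  # both endpoints still free
            matched.append((u, v))
            total += w
            used.update((u, v))
    return matched, total
```

The global sort is what makes this algorithm inherently sequential; the multithreaded variants in the paper replace it with local decisions (e.g., matching locally dominant edges) that different threads can make concurrently.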
Fast algorithms for transport models. Final report
Manteuffel, T.A.
1994-10-01
This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed. A parallel version of the multilevel-in-angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao
2015-07-01
Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with a goal to minimize the global comprehensive performance index. The dimension parameters that give the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for prototype building of such parallel robots.
TakeTwo: an indexing algorithm suited to still images with known crystal parameters.
Ginn, Helen Mary; Roedig, Philip; Kuo, Anling; Evans, Gwyndaf; Sauter, Nicholas K; Ernst, Oliver; Meents, Alke; Mueller-Werkmeister, Henrike; Miller, R J Dwayne; Stuart, David Ian
2016-08-01
The indexing methods currently used for serial femtosecond crystallography were originally developed for experiments in which crystals are rotated in the X-ray beam, providing significant three-dimensional information. On the other hand, shots from both X-ray free-electron lasers and serial synchrotron crystallography experiments are still images, in which the few three-dimensional data available arise only from the curvature of the Ewald sphere. Traditional synchrotron crystallography methods are thus less well suited to still image data processing. Here, a new indexing method is presented with the aim of maximizing information use from a still image given the known unit-cell dimensions and space group. Efficacy for cubic, hexagonal and orthorhombic space groups is shown, and for those showing some evidence of diffraction the indexing rate ranged from 90% (hexagonal space group) to 151% (cubic space group). Here, the indexing rate refers to the number of lattices indexed per image. PMID:27487826
An adaptive algorithm for low contrast infrared image enhancement
NASA Astrophysics Data System (ADS)
Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi
2013-08-01
An adaptive infrared image enhancement algorithm for low-contrast imagery is proposed in this paper, to address the problem that conventional enhancement algorithms cannot effectively identify the region of interest when the dynamic range of the image is large. Starting from the characteristics of human visual perception, the algorithm combines global adaptive enhancement with local feature boosting, so that the contrast of the image is raised and its texture is rendered more distinctly. First, the global dynamic range is adjusted: a correspondence is established between the dynamic range of the original image and the display grayscale, raising the gray level of bright objects while reducing that of dark targets, thereby improving overall image contrast. Second, a filtering operation on each pixel and its neighborhood extracts image texture information, which is used to adjust the brightness of the current pixel and enhance the local contrast of the image. This overcomes the tendency of traditional edge detection algorithms to blur outlines, and ensures that texture detail remains distinct after enhancement. Finally, the globally adjusted and locally adjusted images are normalized and blended to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional image enhancement algorithms, using two groups of blurred IR images. The experiments show that histogram equalization boosts image contrast but leaves details unclear, while the Retinex algorithm makes details distinguishable; the image processed by the proposed adaptive enhancement algorithm shows clear detail, and its contrast is markedly improved compared with the Retinex
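The two stages described, a global dynamic-range remap followed by a local neighborhood-based contrast boost, can be sketched in numpy. The gain value and the 3x3 neighborhood are illustrative choices, not the paper's parameters:

```python
import numpy as np

def enhance(img, local_gain=0.5):
    img = img.astype(float)
    # global step: stretch the full dynamic range onto the display range [0, 255]
    lo, hi = img.min(), img.max()
    g = (img - lo) / (hi - lo + 1e-9) * 255.0
    # local step: boost each pixel by its difference from the 3x3 neighborhood mean
    pad = np.pad(g, 1, mode='edge')
    mean = sum(pad[dy:dy + g.shape[0], dx:dx + g.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(g + local_gain * (g - mean), 0.0, 255.0)

out = enhance(np.array([[10.0, 10.0], [10.0, 20.0]]))
```

The global step handles the wide dynamic range; the local step is what keeps fine texture visible, which is the paper's point about preserving detail.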
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Development of sensor-based nitrogen recommendation algorithms for cereal crops
NASA Astrophysics Data System (ADS)
Asebedo, Antonio Ray
through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.
Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance
NASA Astrophysics Data System (ADS)
Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.
2015-04-01
Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover
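The Bayesian framework described above can be illustrated with a toy retrieval: candidate profiles from an a priori database are weighted by a Gaussian likelihood of matching the observed brightness temperatures, and the retrieved quantity is the weighted mean. All numbers below are invented for illustration; the real algorithm uses many channels and surface-classified databases:

```python
import numpy as np

def bayesian_retrieval(obs_tb, db_tb, db_snow_rate, sigma=2.0):
    # squared distance between observed and database brightness temperatures
    d2 = np.sum((db_tb - obs_tb) ** 2, axis=1)
    # Gaussian likelihood weights, sigma playing the role of channel noise
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return np.sum(w * db_snow_rate) / np.sum(w)

# tiny two-channel database: (Tb_166, Tb_183) -> snowfall rate (mm/h), invented
db_tb = np.array([[250.0, 240.0], [255.0, 245.0], [270.0, 260.0]])
db_rate = np.array([1.0, 0.8, 0.0])
rate = bayesian_retrieval(np.array([250.0, 240.0]), db_tb, db_rate)
```

Splitting the database by surface class, as the text describes, amounts to restricting `db_tb` to profiles whose surface matches the scene before weighting.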
Full reconstruction of a 14-qubit state within four hours
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhong, Han-Sen; Tian, Ye; Dong, Daoyi; Qi, Bo; Li, Li; Wang, Yuanlong; Nori, Franco; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can
2016-08-01
Full quantum state tomography (FQST) plays a unique role in the estimation of the state of a quantum system without a priori knowledge or assumptions. Unfortunately, since FQST requires informationally (over)complete measurements, both the number of measurement bases and the computational complexity of data processing suffer an exponential growth with the size of the quantum system. A 14-qubit entangled state has already been experimentally prepared in an ion trap, and the data processing capability for FQST of a 14-qubit state seems to be far away from practical applications. In this paper, the computational capability of FQST is pushed forward to reconstruct a 14-qubit state with a run time of only 3.35 hours using the linear regression estimation (LRE) algorithm, even when informationally overcomplete Pauli measurements are employed. The computational complexity of the LRE algorithm is first reduced from ~10^19 to ~10^15 for a 14-qubit state, by dropping all the zero elements, and its computational efficiency is further sped up by fully exploiting the parallelism of the LRE algorithm with parallel Graphics Processing Unit (GPU) programming. Our result demonstrates the effectiveness of using parallel computation to speed up the postprocessing for FQST, and can play an important role in quantum information technologies with large quantum systems.
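The linear-regression flavor of the estimation can be seen already in the one-qubit case: measured Pauli expectation values determine the Bloch vector r by least squares, and ρ = (I + r·σ)/2. This is a toy with a trivial design matrix, not the paper's 14-qubit pipeline:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lre_reconstruct(exp_x, exp_y, exp_z):
    # one measurement per Pauli axis -> identity design matrix; with
    # overcomplete measurements A would have more rows than columns
    A = np.eye(3)
    b = np.array([exp_x, exp_y, exp_z])
    r, *_ = np.linalg.lstsq(A, b, rcond=None)  # linear regression estimate
    I = np.eye(2, dtype=complex)
    return 0.5 * (I + r[0] * sx + r[1] * sy + r[2] * sz)

rho = lre_reconstruct(0.0, 0.0, 1.0)  # ideal measurement record for |0>
```

For n qubits the same least-squares structure holds with 4^n basis elements, which is why dropping zero elements and parallelizing the linear algebra pays off so dramatically.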
Sequence comparisons via algorithmic mutual information.
Milosavljević, A
1994-01-01
One of the main problems in DNA and protein sequence comparisons is to decide whether observed similarity of two sequences should be explained by their relatedness or by mere presence of some shared internal structure, e.g., shared internal tandem repeats. The standard methods that are based on statistics or classical information theory can be used to discover either internal structure or mutual sequence similarity, but cannot take into account both. Consequently, currently used methods for sequence comparison employ "masking" techniques that simply eliminate sequences that exhibit internal repetitive structure prior to sequence comparisons. The "masking" approach precludes discovery of homologous sequences of moderate or low complexity, which abound at both DNA and protein levels. As a solution to this problem, we propose a general method that is based on algorithmic information theory and minimal length encoding. We show that algorithmic mutual information factors out the sequence similarity that is due to shared internal structure and thus enables discovery of truly related sequences. We extend the recently developed algorithmic significance method (Milosavljević & Jurka 1993) to show that significance depends exponentially on algorithmic mutual information.
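Algorithmic (Kolmogorov) complexity is uncomputable, but a real compressor gives a computable stand-in for the idea: mutual information is approximated as C(x) + C(y) − C(xy). A repetitive sequence already has small C(x), so its internal structure is factored out rather than mistaken for relatedness. A hedged sketch, not the paper's minimal-length-encoding scheme:

```python
import random
import zlib

def c(s):
    # compressed length as a computable proxy for Kolmogorov complexity
    return len(zlib.compress(s.encode()))

def approx_mutual_info(x, y):
    # algorithmic mutual information, approximated as C(x) + C(y) - C(xy)
    return c(x) + c(y) - c(x + y)

rng = random.Random(0)
a = "".join(rng.choice("ACGT") for _ in range(400))
unrelated = "".join(rng.choice("ACGT") for _ in range(400))
mi_related = approx_mutual_info(a, a)           # identical sequences
mi_unrelated = approx_mutual_info(a, unrelated)
```

Identical sequences share nearly all their description, so `mi_related` is large; unrelated sequences share only compressor overhead, so `mi_unrelated` stays near zero.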
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, with images that can be less than one or two pixels wide, detecting them early enough for the pilot to take evasive action is difficult. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter.
Algorithms for sparse nonnegative Tucker decompositions.
Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M
2008-08-01
There is increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce the ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is more appropriate for the data as well as to select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).
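The multiplicative-update idea that SN-TUCKER generalizes is easiest to see in matrix NMF: the classic Lee-Seung updates for V ≈ WH keep the factors nonnegative because every update multiplies by a ratio of nonnegative terms. Sparse Tucker adds a core tensor and optional sparsity penalties on top of this building block; the sketch below is only the matrix case:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    # Lee-Seung multiplicative updates: factors stay nonnegative by construction
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W
    return W, H

V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H)
```

Imposing sparseness, as SN-TUCKER does, amounts to adding a penalty term to the denominator of the chosen factor's update.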
Full-color hologram using spatial multiplexing of dielectric metasurface.
Zhao, Wenyu; Liu, Bingyi; Jiang, Huan; Song, Jie; Pei, Yanbo; Jiang, Yongyuan
2016-01-01
In this Letter, we theoretically demonstrate a full-color hologram using spatial multiplexing of a dielectric metasurface for the three primary colors, capable of reconstructing arbitrary RGB images. The discrete phase maps for the red, green, and blue components of the target image are extracted through a classical Gerchberg-Saxton algorithm and reside in the corresponding subcells of each pixel. Silicon nanobars supporting a narrow spectral response at the wavelengths of the three primary colors are employed as the basic meta-atoms to imprint the Pancharatnam-Berry phase while maintaining minimal crosstalk between colors. The reconstructed holographic images agree well with the target images, making the approach promising for color displays.
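The Gerchberg-Saxton step can be sketched with FFTs: iterate between the hologram plane and the image plane, keeping only the phase in the hologram plane and imposing the target amplitude in the image plane. Per the Letter, one such phase map would be computed independently per color channel; the target and sizes below are illustrative:

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # start from the target amplitude with a random phase in the image plane
    field = target_amp * np.exp(1j * 2 * np.pi * rng.random(target_amp.shape))
    for _ in range(iters):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))                 # phase-only hologram constraint
        field = np.fft.fft2(holo)
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target amplitude
    return np.angle(holo)

# illustrative target: a bright square on a dark background
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0
phi = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi)))
```

After a few dozen iterations most of the reconstructed energy concentrates in the target region, which is what makes the discrete phase map usable as a hologram.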
Fast Density Inversion Solution for Full Tensor Gravity Gradiometry Data
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Wei, Xiaohui; Huang, Danian
2016-02-01
We modify the classical preconditioned conjugate gradient method for full tensor gravity gradiometry data. The resulting parallelized algorithm is implemented on a cluster to achieve rapid density inversions for various scenarios, overcoming the problems of computation time and memory requirements caused by too many iterations. The proposed approach is mainly based on parallel programming using the Message Passing Interface, supplemented by Open Multi-Processing. Our implementation is efficient and scalable, enabling its use with large-scale data. We consider two synthetic models and real survey data from Vinton Dome, US, and demonstrate that our solutions are reliable and feasible.
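The serial core of a preconditioned conjugate gradient iteration can be sketched compactly; the paper's version is modified for gravity gradiometry and parallelized with MPI/OpenMP, and the Jacobi (diagonal) preconditioner here is only an illustrative choice:

```python
import numpy as np

def pcg(A, b, tol=1e-10, maxiter=500):
    # preconditioned conjugate gradients for SPD A with Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # M^{-1} for M = diag(A)
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # new conjugate search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = pcg(A, np.array([1.0, 2.0]))
```

The per-iteration cost is dominated by the matrix-vector product `A @ p`, which is exactly the piece distributed across cluster nodes in the paper's parallel implementation.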
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the field of Additive Manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated through a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count shows a positive relationship that agrees closely with Amdahl's law, and the trend of speedup versus layer count likewise keeps a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study shows the outstanding performance of the new parallel algorithm: compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
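The pipeline parallel mode can be sketched with threads connected by queues: layer i can be in the second stage while layer i+1 is in the first, so the stages overlap in time. The stage bodies below are placeholders, not a real STL slicer:

```python
import queue
import threading

def pipeline(layers):
    # two pipeline stages connected by FIFO queues; None is the end marker
    q1, q2, out_q = queue.Queue(), queue.Queue(), queue.Queue()
    done = []

    def stage(fin, fout, func):
        while True:
            item = fin.get()
            if item is None:
                fout.put(None)
                break
            fout.put(func(item))

    # stage 1 "computes contours", stage 2 "links" them (placeholder work)
    t1 = threading.Thread(target=stage, args=(q1, q2, lambda z: ('contours', z)))
    t2 = threading.Thread(target=stage, args=(q2, out_q, lambda c: ('linked',) + c[1:]))
    t1.start(); t2.start()
    for z in layers:
        q1.put(z)
    q1.put(None)
    while True:
        item = out_q.get()
        if item is None:
            break
        done.append(item)
    t1.join(); t2.join()
    return done

result = pipeline([0.0, 0.2, 0.4])
```

FIFO queues preserve layer order, which matters for slicing since downstream stages consume layers sequentially.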
An Assessment of Current Satellite Precipitation Algorithms
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2007-01-01
The H-SAF Program requires an experimental operational European-centric Satellite Precipitation Algorithm System (E-SPAS) that produces medium spatial resolution and high temporal resolution surface rainfall and snowfall estimates over the Greater European Region including the Greater Mediterranean Basin. Currently, there are various types of experimental operational algorithm methods of differing spatiotemporal resolutions that generate global precipitation estimates. This address will first assess the current status of these methods and then recommend a methodology for the H-SAF Program that deviates somewhat from the current approach under development but one that takes advantage of existing techniques and existing software developed for the TRMM Project and available through the public domain.
Landau-Zener type surface hopping algorithms
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Lasser, Caroline; Trigila, Giulio
2014-06-01
A class of surface hopping algorithms is studied comparing two recent Landau-Zener (LZ) formulas for the probability of nonadiabatic transitions. One of the formulas requires a diabatic representation of the potential matrix while the other one depends only on the adiabatic potential energy surfaces. For each classical trajectory, the nonadiabatic transitions take place only when the surface gap attains a local minimum. Numerical experiments are performed with deterministically branching trajectories and with probabilistic surface hopping. The deterministic and the probabilistic approach confirm the affinity of both the LZ probabilities, as well as the good approximation of the reference solution computed by solving the Schrödinger equation via a grid based pseudo-spectral method. Visualizations of position expectations and superimposed surface hopping trajectories with reference position densities illustrate the effective dynamics of the investigated algorithms.
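The adiabatic-surface LZ formula referred to above (the Belyaev-Lebedev form, quoted here from memory and therefore to be checked against the paper) evaluates the hopping probability at a local minimum of the adiabatic gap Z(t) from the gap value and its second time derivative alone:

```python
import math

def lz_probability(z_min, z_ddot, hbar=1.0):
    # P = exp( -(pi / (2*hbar)) * sqrt( Z_min**3 / Z''(t_min) ) )
    # z_min: adiabatic gap at its local minimum along the trajectory
    # z_ddot: second time derivative of the gap at that minimum (> 0)
    return math.exp(-(math.pi / (2.0 * hbar)) * math.sqrt(z_min ** 3 / z_ddot))
```

A closed gap gives certain hopping (P = 1), and wider or more slowly varying gaps suppress the transition, matching the qualitative behavior the comparison study relies on.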
CAST: Contraction Algorithm for Symmetric Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-09-22
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image and the computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a grayscale image matching algorithm whose precision reaches the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance; based on this consideration, we put forward a fitting algorithm, the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To investigate the influence of target rotation on matching precision, a camera rotation experiment was carried out. The camera's CMOS detector was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The results show that the matching error grows with the target rotation angle, in an approximately linear relationship. Finally, the influence of noise on matching precision was investigated: Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
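SAD matching plus a parabolic sub-pixel fit is easy to sketch in one dimension (the paper's paraboloidal fit is the 2-D analogue): the integer minimum of the SAD curve and its two neighbors determine a parabola whose vertex gives the sub-pixel offset:

```python
import numpy as np

def sad_match_1d(signal, template):
    # SAD (Sum of Absolute Differences) over every candidate offset
    n = len(signal) - len(template) + 1
    sad = np.array([np.abs(signal[i:i + len(template)] - template).sum()
                    for i in range(n)])
    i = int(np.argmin(sad))
    if 0 < i < n - 1:
        # fit a parabola through (i-1, i, i+1); its vertex is the sub-pixel shift
        a, b, c = sad[i - 1], sad[i], sad[i + 1]
        denom = a - 2 * b + c
        delta = 0.5 * (a - c) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    return i + delta

sig = np.array([0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0])
tmpl = np.array([1.0, 3.0, 1.0])
pos = sad_match_1d(sig, tmpl)
```

The fit costs only a handful of arithmetic operations per match, which is why this family of fits suits real-time tracking.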
Saleh, Marwan D; Eswaran, C
2012-01-01
Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
Saleh, Marwan D; Eswaran, C; Mueen, Ahmed
2011-08-01
This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and the engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
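The optimization step can be illustrated with a toy linear program: a linearized model gives sensitivities of thrust and stall margin to two control trims, and the LP picks trims maximizing predicted thrust subject to a stall-margin budget and trim authority limits. All coefficients are invented, and the brute-force vertex enumeration below stands in for a real onboard LP solver:

```python
import itertools
import numpy as np

c = np.array([2.0, 1.0])     # thrust sensitivity to the two trims (maximize c.x)
A = np.array([[1.0, 3.0]])   # stall-margin usage per unit trim
b = np.array([3.0])          # available stall margin
lo, hi = -1.0, 1.0           # trim authority limits

def solve_lp():
    # for 2 variables, the optimum lies at a vertex: intersect pairs of
    # constraint lines and keep the feasible point with the best objective
    lines = [(A[0], b[0]),
             (np.array([1.0, 0.0]), hi), (np.array([-1.0, 0.0]), -lo),
             (np.array([0.0, 1.0]), hi), (np.array([0.0, -1.0]), -lo)]
    best, best_val = None, -np.inf
    for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
        M = np.array([a1, a2])
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        x = np.linalg.solve(M, [b1, b2])
        feasible = (np.all(A @ x <= b + 1e-9)
                    and np.all(x >= lo - 1e-9) and np.all(x <= hi + 1e-9))
        if feasible and c @ x > best_val:
            best, best_val = x, c @ x
    return best

trims = solve_lp()
```

In PSC the resulting trims define a new operating point, the model is re-linearized there, and the LP is solved again until the trims stop improving the objective.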
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Quantum gate decomposition algorithms.
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequences of coupled operations, termed ''quantum gates'', acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing a general ''quantum gate'' operating on n qubits as a sequence of generic elementary ''gates''.
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
ERIC Educational Resources Information Center
Drake, Michael
2011-01-01
One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
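Metric (i) above, the centered root-mean-square error, can be written down directly. The following is one plausible reading of that metric (our sketch, not HOME's benchmarking code): an RMSE computed on mean-removed series, so a constant offset between a homogenized series and the truth does not register as error, while a spurious break does.

```python
import math

def crmse(homogenized, truth):
    """Centered root-mean-square error: RMSE after removing each
    series' own mean, so a constant base-period shift is not
    penalized. One plausible reading of metric (i) above."""
    n = len(truth)
    mh = sum(homogenized) / n
    mt = sum(truth) / n
    return math.sqrt(sum(((h - mh) - (t - mt)) ** 2
                         for h, t in zip(homogenized, truth)) / n)

# A series differing from the truth only by a constant offset has zero
# centered error; a series with a leftover break does not.
truth = [10.0, 10.0, 10.0, 10.0]
offset_only = [12.0, 12.0, 12.0, 12.0]
with_break = [10.0, 10.0, 12.0, 12.0]
```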
A Dynamic Construction Algorithm for the Compact Patricia Trie Using the Hierarchical Structure.
ERIC Educational Resources Information Center
Jung, Minsoo; Shishibori, Masami; Tanaka, Yasuhiro; Aoe, Jun-ichi
2002-01-01
Discussion of information retrieval focuses on the use of binary trees and how to compact them so that they use less memory and take less time. Explains retrieval algorithms and describes data structure and hierarchical structure. (LRW)
Teach Kids Test-Taking Tactics
ERIC Educational Resources Information Center
Glenn, Robert E.
2004-01-01
Teachers can do something to help ensure students will do better on tests. They can actively teach test-taking skills so pupils will be better armed in the battle to acquire knowledge. The author challenges teachers to use the suggestions provided in this article in the classroom, and to share them with their students. Test-taking strategies will…
On the Duty of Not Taking Offence
ERIC Educational Resources Information Center
Barrow, Robin
2005-01-01
People take offence too easily and are encouraged to do so by, e.g., institutional harassment policies. "Offensive" is sometimes equated with "anything that offends someone", sometimes with a definitive list of specific behaviours. When is it justifiable to take offence? Distinctions need to be drawn: between offensive to the senses and to the…
Take Steps Toward a Healthier Life | Poster
The National Institutes of Health (NIH) is promoting wellness by encouraging individuals to take the stairs. In an effort to increase participation in this program, NIH has teamed up with Occupational Health Services (OHS). OHS is placing NIH-sponsored “Take the Stairs” stickers on stair entrances, stair exits, and elevators.
Note Taking as a Generative Activity.
ERIC Educational Resources Information Center
Peper, Richard J.; Mayer, Richard E.
1978-01-01
Three experiments investigated the effects of note taking on "what is learned" by college undergraduates from videotaped lectures. The results suggest that note taking can result in a broader learning outcome, rather than just more learning overall, because an assimilative encoding process is encouraged. (Author/GDC)
Does Anticipation Training Affect Drivers' Risk Taking?
ERIC Educational Resources Information Center
McKenna, Frank P.; Horswill, Mark S.; Alexander, Jane L.
2006-01-01
Skill and risk taking are argued to be independent and to require different remedial programs. However, it is possible to contend that skill-based training could be associated with an increase, a decrease, or no change in risk-taking behavior. In 3 experiments, the authors examined the influence of a skill-based training program (hazard…
Academic Risk Taking, Development, and External Constraint.
ERIC Educational Resources Information Center
Clifford, Margaret M.; And Others
1990-01-01
Academic risk taking--the selection of schoollike tasks ranging in difficulty and probability of success--was examined for 602 students in grades 4, 6, and 8 in Taiwan. Results of a self-report measure of tolerance for failure and a risk-taking task are discussed concerning self-enhancement versus self-assessment goals, metacognitive skills, and…
Giving Ourselves Permission to Take Risks
ERIC Educational Resources Information Center
Jones, Elizabeth
2012-01-01
What's a risk? It's when one doesn't know what will happen when she/he takes action. Risks can be little or big, calculated or stupid. Every new idea carries risks--and the challenge to face them and see what will happen. Nobody becomes smart, creative, self-confident, and respectful of others without taking risks--remaining open to possibilities…
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the problem that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then builds an objective function from a weighted sum of evaluation indices and optimizes that function with GSDA so as to obtain a higher-resolution RS image. The main points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.•This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA. PMID:27408827
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees.
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2012-09-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis.
Disentangling adolescent pathways of sexual risk taking.
Brookmeyer, Kathryn A; Henrich, Christopher C
2009-11-01
Using data from the National Longitudinal Survey of Youth, the authors aimed to describe the pathways of risk within sexual risk taking, alcohol use, and delinquency, and then identify how the trajectory of sexual risk is linked to alcohol use and delinquency. Risk trajectories were measured with adolescents aged 15-24 years (N = 1,778). Using Latent Class Growth Analyses (LCGA), models indicated that the majority of adolescents engaged in sexual risk and alcohol use. In joint trajectory analyses, LCGA revealed six risk taking classes: sex and alcohol, moderate risk taking, joint risk taking, moderate alcohol, alcohol risk, and alcohol and delinquency experimentation. Editors' Strategic Implications: School administrators and curriculum designers should pay attention to the study's findings with respect to the need for prevention programs to target early adolescents and integrate prevention messages about alcohol use and sexual risk taking.
Algorithms for physical segregation of coal
NASA Astrophysics Data System (ADS)
Ganguli, Rajive
The capability for on-line measurement of the quality characteristics of conveyed coal now enables mine operators to take advantage of the inherent heterogeneity of those streams and split them into wash and no-wash stocks. Relative to processing the entire stream, this reduces the amount of coal that must be washed at the mine and thereby reduces processing costs, recovery losses, and refuse generation levels. In this dissertation, two classes of segregation algorithms, using time series models and moving windows, are developed and demonstrated using field and simulated data. In all of the developed segregation algorithms, a "cut-off" ash value was computed for coal scanned on the running conveyor belt by the ash analyzer. It determined whether the coal was sent to the wash pile or to the no-wash pile. Forecasts from time series models, at various lead times ahead, were used in one class of the developed algorithms to determine the cut-off ash levels. The time series models were updated from time to time to reflect changes in the process. Statistical Process Control (SPC) techniques were used to determine if an update was necessary at a given time. When an update was deemed necessary, optimization techniques were used to determine the next best set of model parameters. In the other class of segregation algorithms, a "few" of the immediate past observations were used to determine the cut-off ash value. These "few" observations were called the window width. The window width was kept constant in some variants of this class of algorithms. The other variants of this class were an improvement over the fixed window width algorithms. Here, the window widths were varied rather than kept constant. In these cases, SPC was used to determine the window width at any instant. Statistics of the empirical distribution and the normal distribution were used in computation of the cut-off ash value in all the variants of this class of algorithms. The good performance of the developed algorithms
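The fixed-window variant can be sketched in a few lines. This is a minimal illustration, not the dissertation's algorithms: the quantile rule, the warm-up handling, and the `wash_fraction` knob are our assumptions.

```python
# Sketch of a fixed-window segregation rule: the cut-off ash value is a
# quantile of the last `width` analyzer readings, and a new reading is
# routed to the wash pile if its ash content exceeds that cut-off.

from collections import deque

def segregate(ash_stream, width=5, wash_fraction=0.4):
    """Yield (ash, destination) pairs; `wash_fraction` is an assumed
    target share of coal diverted to the wash pile."""
    window = deque(maxlen=width)
    for ash in ash_stream:
        if len(window) < width:
            cutoff = float("inf")   # warm-up: send everything to no-wash
        else:
            ranked = sorted(window)
            # quantile of the recent past serves as the cut-off ash value
            cutoff = ranked[int((1.0 - wash_fraction) * (width - 1))]
        yield ash, ("wash" if ash > cutoff else "no-wash")
        window.append(ash)

readings = [8, 9, 7, 10, 8, 15, 9, 8, 22, 7, 9, 16]
decisions = list(segregate(readings))
```

The moving-window variants in the dissertation would additionally vary `width` using SPC signals.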
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
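The hybridization described above, a genetic algorithm whose offspring are refined by local search, can be sketched as follows. The problem (one-max), the operators, and all parameter choices are our illustrative assumptions, not the presentation's geometric-matching application.

```python
# A minimal hybrid genetic algorithm: standard selection, one-point
# crossover, and mutation, followed by a local-search pass (bit-flip
# hill climbing) applied to every offspring -- the "hybrid" step.

import random

def fitness(bits):          # toy problem: maximize the number of 1s
    return sum(bits)

def local_search(bits):     # keep any single bit flip that improves
    bits = bits[:]
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

def hybrid_ga(n_bits=20, pop_size=10, generations=5, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)           # pick two parents
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # mutation
                child[rng.randrange(n_bits)] ^= 1
            nxt.append(local_search(child))     # hybrid refinement
        pop = nxt
    return max(pop, key=fitness)

best = hybrid_ga()
```

Because local search moves every offspring to a local optimum, the GA effectively searches the space of local optima, which is the modeling point made in the presentation.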
A fast full constraints unmixing method
NASA Astrophysics Data System (ADS)
Ye, Zhang; Wei, Ran; Wang, Qing Yan
2012-10-01
Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to the spectra of its individual components. Solving the LSMM, namely unmixing, is essentially a constrained linear optimization problem, usually implemented as iterations along a descent direction together with a stopping criterion that terminates the algorithm. This criterion must be set properly to balance the accuracy and the speed of the solution. However, the criterion in existing algorithms is too strict, which may reduce the convergence rate. In this paper, by relaxing the constraints in unmixing, a new stopping rule is proposed that reduces the number of iterations. Experimental results, in both runtime and iteration counts, prove that our method accelerates convergence at the cost of only a slight decrease in the quality of the result.
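The fully constrained problem underlying unmixing can be illustrated with a generic solver (not this paper's algorithm): projected gradient descent on the least-squares misfit, with the abundance vector projected onto the probability simplex (nonnegativity plus sum-to-one) after every step. The endmember matrix and mixture below are invented test data.

```python
# Generic fully constrained LSMM solver: minimize ||E a - y||^2
# subject to a >= 0 and sum(a) = 1, via projected gradient descent.

import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def unmix(E, y, iters=500):
    a = np.full(E.shape[1], 1.0 / E.shape[1])   # start at uniform
    step = 1.0 / np.linalg.norm(E.T @ E, 2)     # 1 / Lipschitz constant
    for _ in range(iters):
        a = project_simplex(a - step * (E.T @ (E @ a - y)))
    return a

# Two endmember spectra (columns of E) and a pixel mixed 30/70.
E = np.array([[1.0, 0.2], [0.1, 0.9], [0.8, 0.4]])
y = E @ np.array([0.3, 0.7])
a = unmix(E, y)
```

A stopping rule of the kind this paper proposes would replace the fixed iteration count with a test on the iterate, which is exactly the knob that trades accuracy against speed.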
A Hybrid Graph Representation for Recursive Backtracking Algorithms
NASA Astrophysics Data System (ADS)
Abu-Khzam, Faisal N.; Langston, Michael A.; Mouawad, Amer E.; Nolan, Clinton P.
Many exact algorithms for NP-hard graph problems adopt the old Davis-Putnam branch-and-reduce paradigm. The performance of these algorithms often suffers from the increasing number of graph modifications, such as deletions, that reduce the problem instance and have to be "taken back" frequently during the search process. The use of efficient data structures is necessary for fast graph modification modules as well as fast take-back procedures. In this paper, we investigate practical implementation-based aspects of exact algorithms by providing a hybrid graph representation that addresses the take-back challenge and combines the advantage of O(1) adjacency queries in adjacency matrices with the advantage of efficient neighborhood traversal in adjacency lists.
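The core idea admits a compact sketch: keep both representations in sync, and log every deletion on a stack so it can be taken back during backtracking. The class below is our illustration of the principle, not the paper's implementation.

```python
# Hybrid representation: an adjacency matrix for O(1) edge queries plus
# adjacency sets for fast neighborhood traversal, with deletions
# recorded on a trail stack for cheap take-back during search.

class HybridGraph:
    def __init__(self, n):
        self.matrix = [[False] * n for _ in range(n)]
        self.adj = [set() for _ in range(n)]
        self.trail = []                      # take-back stack

    def add_edge(self, u, v):
        self.matrix[u][v] = self.matrix[v][u] = True
        self.adj[u].add(v)
        self.adj[v].add(u)

    def delete_vertex(self, v):
        """Remove v's edges, remembering them for take-back."""
        removed = list(self.adj[v])
        for u in removed:
            self.matrix[u][v] = self.matrix[v][u] = False
            self.adj[u].discard(v)
        self.adj[v].clear()
        self.trail.append((v, removed))

    def take_back(self):
        """Undo the most recent vertex deletion."""
        v, removed = self.trail.pop()
        for u in removed:
            self.add_edge(u, v)

g = HybridGraph(4)
g.add_edge(0, 1); g.add_edge(1, 2); g.add_edge(2, 3)
g.delete_vertex(1)
```

A branch-and-reduce search would call `delete_vertex` while descending and `take_back` when a branch is exhausted.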
A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles
2009-01-01
In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions for a candidate algorithm that satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
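An iterative track-angle band of the kind described above can be illustrated as follows. This is our sketch, not one of the paper's verified algorithms: it sweeps candidate track angles, projects the pair of aircraft forward, and flags angles whose horizontal closest point of approach falls below a separation minimum within the lookahead time.

```python
# Illustrative prevention-band computation over track angles, using the
# closest-point-of-approach of the straight-line relative trajectory.

import math

def prevention_band(own_pos, own_speed, intruder_pos, intruder_vel,
                    sep=5.0, lookahead=300.0):
    banned = []
    for deg in range(360):
        th = math.radians(deg)
        # relative velocity (ownship minus intruder) for this track angle
        vx = own_speed * math.cos(th) - intruder_vel[0]
        vy = own_speed * math.sin(th) - intruder_vel[1]
        rx = intruder_pos[0] - own_pos[0]
        ry = intruder_pos[1] - own_pos[1]
        v2 = vx * vx + vy * vy
        # time of horizontal closest approach, clamped to [0, lookahead]
        t = 0.0 if v2 == 0 else min(max((rx * vx + ry * vy) / v2, 0.0),
                                    lookahead)
        dx, dy = rx - vx * t, ry - vy * t
        if math.hypot(dx, dy) < sep:
            banned.append(deg)
    return banned

# Stationary intruder 10 units due east: flying east is a maneuver that
# "should not be taken"; flying north is fine.
band = prevention_band((0.0, 0.0), 1.0, (10.0, 0.0), (0.0, 0.0))
```

The paper's vector-algebraic and trigonometric approaches compute the same band in closed form rather than by sweeping.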
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation designed to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
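The finite-volume stochastic process whose probability distribution the master equation tracks can also be sampled realization-by-realization with a Gillespie-type Monte Carlo. The sketch below (our illustration, not the paper's algorithm) uses the constant kernel, one of the cases with known analytic solutions mentioned above.

```python
# Gillespie-type sampling of finite-system coalescence with a constant
# kernel K: every pair coalesces at the same rate, so the total rate is
# K * n * (n - 1) / 2 and the colliding pair is chosen uniformly.

import random

def coalesce(masses, K=1.0, t_end=1.0, seed=7):
    rng = random.Random(seed)
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        rate = K * n * (n - 1) / 2.0     # total coalescence rate
        t += rng.expovariate(rate)       # waiting time to next event
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)   # constant kernel: uniform pair
        masses[i] += masses[j]           # merge particle j into i
        masses.pop(j)
    return masses

# Fifty unit-mass particles (monodisperse initial condition).
final = coalesce([1.0] * 50, K=0.05, t_end=2.0)
```

Averaging the mass spectrum over many such realizations is what the master-equation solution predicts directly, which is why the two make a natural cross-check.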
Quantum algorithm for an additive approximation of Ising partition functions
NASA Astrophysics Data System (ADS)
Matsuo, Akira; Fujii, Keisuke; Imoto, Nobuyuki
2014-08-01
We investigate quantum-computational complexity of calculating partition functions of Ising models. We construct a quantum algorithm for an additive approximation of Ising partition functions on square lattices. To this end, we utilize the overlap mapping developed by M. Van den Nest, W. Dür, and H. J. Briegel [Phys. Rev. Lett. 98, 117207 (2007), 10.1103/PhysRevLett.98.117207] and its interpretation through measurement-based quantum computation (MBQC). We specify an algorithmic domain, on which the proposed algorithm works, and an approximation scale, which determines the accuracy of the approximation. We show that the proposed algorithm performs a nontrivial task, which would be intractable on any classical computer, by showing that the problem that is solvable by the proposed quantum algorithm is BQP-complete. In the construction of the BQP-complete problem, coupling strengths and magnetic fields take complex values. However, the Ising models that are of central interest in statistical physics and computer science consist of real coupling strengths and magnetic fields. Thus we extend the algorithmic domain of the proposed algorithm to such a real physical parameter region and calculate the approximation scale explicitly. We found that the overlap mapping and its MBQC interpretation improve the approximation scale exponentially compared to a straightforward constant-depth quantum algorithm. On the other hand, the proposed quantum algorithm also provides partial evidence that there exists no efficient classical algorithm for a multiplicative approximation of the Ising partition functions even on the square lattice. This result supports the observation that the proposed quantum algorithm also performs a nontrivial task in the physical parameter region.
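For context on why approximation matters here: the exact partition function sums over all 2^n spin configurations, so even modest lattices are classically out of reach. A brute-force reference implementation (our illustration, not the paper's algorithm) makes the definition concrete; the edge-list form covers square lattices as a special case.

```python
# Brute-force Ising partition function Z = sum_s exp(-beta * E(s)) with
# E(s) = -J * sum_{(i,j) in edges} s_i s_j - h * sum_i s_i.
# Exponential in the number of spins -- feasible only for tiny systems.

import itertools, math

def partition_function(n_spins, edges, J, beta=1.0, h=0.0):
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n_spins):
        energy = -J * sum(spins[i] * spins[j] for i, j in edges)
        energy -= h * sum(spins)
        Z += math.exp(-beta * energy)
    return Z

# Two coupled spins: analytically Z = 2*exp(beta*J) + 2*exp(-beta*J).
Z = partition_function(2, [(0, 1)], J=1.0, beta=0.5)
```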
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical for standard workstations for large order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
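The flavor of LMI-based machinery, and why its matrices grow quickly, can be shown on the simplest example rather than the paper's synthesis LMI: the Lyapunov inequality AᵀP + PA ≺ 0, P ≻ 0 is feasible exactly when A is stable, and a feasible P can be built by solving the Lyapunov equation via Kronecker vectorization. The system matrix below is invented; note the n² × n² dense system, the same growth that made the paper's problems supercomputer-sized.

```python
# Feasibility certificate for the Lyapunov LMI via the Lyapunov
# *equation* A^T P + P A = -Q, solved by Kronecker vectorization.

import numpy as np

def lyapunov_P(A, Q):
    n = A.shape[0]
    I = np.eye(n)
    # With row-major vec: vec(A^T P) = kron(A^T, I) vec(P)
    #                     vec(P A)   = kron(I, A^T) vec(P)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
    return 0.5 * (P + P.T)                 # symmetrize

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # a stable system matrix
Q = np.eye(2)
P = lyapunov_P(A, Q)
residual = A.T @ P + P @ A + Q             # should vanish
```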
Optimization of data collection taking radiation damage into account
Bourenkov, Gleb P.
2010-04-01
Software implementing a new method for the optimal choice of data-collection parameters, accounting for the effects of radiation damage, is presented. To take into account the effects of radiation damage, new algorithms for the optimization of data-collection strategies have been implemented in the software package BEST. The intensity variation related to radiation damage is approximated by log-linear functions of resolution and cumulative X-ray dose. Based on an accurate prediction of the basic characteristics of data yet to be collected, BEST establishes objective relationships between the accessible data completeness, resolution and signal-to-noise statistics that can be achieved in an experiment and designs an optimal plan for data collection.
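The log-linear damage model above can be made concrete in one assumed parameterization (our illustration, not BEST's exact functional form): the log of the remaining intensity falls linearly with accumulated dose, with a decay slope that grows toward high resolution (small d-spacing), so high-resolution data fade first.

```python
# Assumed log-linear radiation-damage model: ln I(d, D) is linear in the
# cumulative dose D, with a resolution-dependent slope. `beta` and
# `scale` are hypothetical calibration constants, not BEST's values.

import math

def relative_intensity(d_spacing, dose, beta=0.01, scale=1.0):
    """Fraction of the undamaged intensity remaining at resolution
    `d_spacing` (Angstrom) after cumulative `dose` (arbitrary units)."""
    slope = beta * (scale + 1.0 / d_spacing ** 2)
    return math.exp(-slope * dose)

hi_res = relative_intensity(d_spacing=1.5, dose=10.0)  # fades faster
lo_res = relative_intensity(d_spacing=4.0, dose=10.0)  # fades slower
```

A strategy optimizer like BEST inverts such a model to budget dose across the planned images.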
Source-independent full waveform inversion of seismic data
Lee, Ki Ha; Kim, Hee Joon
2002-03-20
A rigorous full waveform inversion of seismic data has been a challenging subject partly because of the lack of precise knowledge of the source. Since currently available approaches involve some form of approximation to the source, inversion results are subject to the quality and the choice of the source information used. We propose a new full waveform inversion methodology that does not involve source spectrum information. Thus potential inversion errors due to source estimation can be eliminated. A gather of seismic traces is first Fourier-transformed into the frequency domain and a normalized wavefield is obtained for each trace in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the gather, so the complex-valued normalized wavefield is dimensionless. The source spectrum is eliminated during the normalization procedure. With its source spectrum eliminated, the normalized wavefield allows us to construct an inversion algorithm without the source information. The inversion algorithm minimizes misfits between the measured normalized wavefield and the numerically computed normalized wavefield. The proposed approach has been successfully demonstrated using a simple two-dimensional scalar problem.
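The key identity behind the normalization is easy to verify numerically: in the frequency domain each trace factors as d_i(ω) = s(ω)·g_i(ω) (source spectrum times Earth response), so the ratio d_i/d_ref = g_i/g_ref no longer contains s(ω). The tiny synthetic check below uses made-up complex spectra, not real seismic data.

```python
# Demonstration that trace-by-reference normalization in the frequency
# domain cancels the (unknown) source spectrum.

def normalized_wavefield(traces, ref=0):
    """Divide every trace's spectrum by the reference trace's."""
    return [[d / r for d, r in zip(trace, traces[ref])]
            for trace in traces]

# Earth responses g_i(w) at three frequencies (arbitrary complex values).
g = [[1.0 + 0.5j, 2.0 - 1.0j, 0.5 + 0.2j],
     [0.3 - 0.1j, 1.5 + 0.4j, 2.0 + 1.0j]]

# Two different (unknown) source spectra.
s1 = [2.0 + 1.0j, 0.7 - 0.3j, 1.1 + 0.0j]
s2 = [0.5 - 2.0j, 3.0 + 0.2j, 0.4 + 0.9j]

d1 = [[s * gi for s, gi in zip(s1, trace)] for trace in g]
d2 = [[s * gi for s, gi in zip(s2, trace)] for trace in g]

# Identical normalized wavefields despite different sources.
n1 = normalized_wavefield(d1)
n2 = normalized_wavefield(d2)
```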
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
AKITA: Application Knowledge Interface to Algorithms
NASA Astrophysics Data System (ADS)
Barros, Paul; Mathis, Allison; Newman, Kevin; Wilder, Steven
2013-05-01
We propose a methodology for using sensor metadata and targeted preprocessing to select which algorithms from a large suite are most appropriate for a given data set. Rather than applying several general-purpose algorithms or requiring a human operator to oversee the analysis of the data, our method allows the most effective algorithm to be automatically chosen, conserving computational, network, and human resources. For example, the amount of video data being produced daily is far greater than can ever be analyzed. Computer vision algorithms can help sift for the relevant data, but not every algorithm is suited to every data type, nor is it efficient to run them all. A full-body detector won't work well when the camera is zoomed in or when it is raining and all the people are occluded by foul-weather gear. However, leveraging metadata knowledge of the camera settings and the conditions under which the data was collected (generated by automatic preprocessing), face or umbrella detectors could be applied instead, increasing the likelihood of a correct reading. The Lockheed Martin AKITA™ system is a modular knowledge layer which uses knowledge of the system and environment to determine how to most efficiently and usefully process whatever data it is given.
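The zoomed-camera and rainy-day example above amounts to a metadata-driven dispatch rule. A toy sketch in that spirit follows; the rule contents, keys, and detector names are invented for illustration and are not AKITA's knowledge layer.

```python
# Toy metadata-driven algorithm selection: inspect camera settings and
# conditions, then pick detectors likely to give a correct reading.

def select_detectors(metadata):
    """Pick detectors appropriate to camera settings and conditions."""
    chosen = []
    zoomed = metadata.get("zoom_level", 1.0) > 3.0
    raining = metadata.get("weather") == "rain"
    if zoomed or raining:
        # full-body detection is unreliable: fall back to face detection
        chosen.append("face_detector")
    else:
        chosen.append("full_body_detector")
    if raining:
        chosen.append("umbrella_detector")
    return chosen

wide_clear = select_detectors({"zoom_level": 1.0, "weather": "clear"})
zoom_rain = select_detectors({"zoom_level": 5.0, "weather": "rain"})
```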
Oxytocin and vasopressin modulate risk-taking.
Patel, Nilam; Grillon, Christian; Pavletic, Nevia; Rosen, Dana; Pine, Daniel S; Ernst, Monique
2015-02-01
The modulation of risk-taking is critical for adaptive and optimal behavior. This study examined how oxytocin (OT) and arginine vasopressin (AVP) influence risk-taking as a function of three parameters: sex, risk-valence, and social context. Twenty-nine healthy adults (14 males) completed a risk-taking task, the Stunt task, both in a social-stress (evaluation by unfamiliar peers) and a non-social context, in three separate drug treatment sessions. During each session, one of three drugs, OT, AVP, or placebo (PLC), was administered intra-nasally. OT and AVP relative to PLC reduced betting-rate (a risk-averse effect). This risk-averse effect was further qualified: AVP reduced risk-taking in the positive risk-valence (high win-probability), regardless of social context or sex. In contrast, OT reduced risk-taking in the negative risk-valence (low win-probability), and only in the social-stress context and in men. The reduction in risk-taking might serve a role in defensive behavior. These findings extend the role of these neuromodulators to behaviors beyond the social realm. How the behavioral modulation of risk-taking maps onto the function of the neural targets of OT and AVP may be the next step in this line of research. PMID:25446228
Cooperative Cross-Hole Ert and 2-D Full-Waveform Gpr Inversion
NASA Astrophysics Data System (ADS)
Bouchedda, A.; Chouteau, M.
2012-12-01
Recent advances in high-performance computing make full-waveform inversion (FWI) of cross-hole ground penetrating radar data feasible. FWI, from which high-resolution imaging at half the propagated wavelength is expected, offers better resolution than ray-based tomography. The inverse problem is generally solved using local optimization algorithms that can converge to a local minimum depending on the choice of starting model, the nonlinearity of the problem, the lack of low frequencies, the presence of noise, and approximate modeling of the wave-physics complexity. In this work, a multiscale FWI strategy is combined cooperatively with electrical resistivity tomography (ERT) to mitigate the nonlinearity and ill-posedness of FWI and to improve the ERT resolution. In FWI, the gradient of the misfit function is generally dominated by the high frequencies. This behaviour can cause convergence to local minima, as the determination of the high frequencies depends in turn on the accuracy of the low frequencies. Rather than taking advantage of low frequencies in the data, the proposed multiscale FWI reduces the number of model parameters and yields low frequencies in the model space using a regularization method that imposes an L1-norm penalty in the wavelet domain. The minimization of the L1-norm penalty is carried out using an accelerated iterative soft-thresholding algorithm. As wavelet transforms provide estimates of the local frequency content of the conductivity or permittivity images, the thresholds are used to control the frequency content in the model space. Generally, a high threshold value is chosen for the first 20 iterations in order to enhance the update of the low frequencies. After that, the soft-thresholding step tries to find the thresholds that best maximize the structural similarities between the conductivity and permittivity images. The initial velocity model for FWI is built from first-arrival traveltime tomography, whereas the
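The iterative soft-thresholding step at the heart of this regularization can be sketched in a few lines. The sketch below is illustrative only: the wavelet transform and the gradient computation are abstracted away, and the function names are ours, not the authors'.

```python
def soft_threshold(coeffs, t):
    """Soft-thresholding operator: shrink each wavelet coefficient toward
    zero by t -- the proximal map of the L1-norm penalty."""
    return [max(abs(c) - t, 0.0) * (1 if c > 0 else -1 if c < 0 else 0)
            for c in coeffs]

def ista_step(m, g, step, threshold):
    """One ISTA-style update on model coefficients m given a misfit
    gradient g: gradient step, then soft-thresholding, which suppresses
    small (typically high-frequency) coefficients in the model."""
    return soft_threshold([mi - step * gi for mi, gi in zip(m, g)], threshold)
```

Raising the threshold suppresses more small coefficients, which is how a high initial threshold favors low-frequency model updates in the early iterations.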
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
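The core combinatorial idea, as we read it, is that observation vectors whose solutions share the same set of unconstrained ("passive") variables can be solved together with a single factorization. A hypothetical sketch of just that grouping step (not the full active-set solver from the paper):

```python
from collections import defaultdict

def group_by_passive_set(passive_sets):
    """Group observation-vector indices that share an identical passive
    set, so one factorization can solve the whole group at once."""
    groups = defaultdict(list)
    for j, pset in enumerate(passive_sets):
        groups[frozenset(pset)].append(j)
    return dict(groups)
```

In large problems many columns end up with identical passive sets, so the number of distinct factorizations is far smaller than the number of observation vectors.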
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio; Sainz, Ignacio; Bulnes, Francisco G.
2011-01-01
Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training is prohibitive, making it infeasible even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories, rather than expensive clusters. These resources are typically non-dedicated, heterogeneous and unreliable. The proposed methods have been designed to deal with all these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines, while taking their unreliable nature into account. Both methods have been tested using real applications.
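A full factorial design simply enumerates every combination of parameter levels; each combination is an independent job that can be dispatched to an idle machine. A minimal sketch (the parameter names are hypothetical):

```python
import itertools

def full_factorial(levels):
    """Enumerate every parameter combination of a full factorial design.
    `levels` maps parameter name -> list of values to try; each yielded
    combination is an independent work unit for one grid node."""
    names = sorted(levels)
    for combo in itertools.product(*(levels[n] for n in names)):
        yield dict(zip(names, combo))
```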
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments performed on an eighteen-processor Flex/32 shared-memory multiprocessor support these conclusions and are detailed.
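A linear (counter-based) barrier of the kind compared above can be sketched with a condition variable. This is a generic illustration, not the paper's Flex/32 implementation; note that Python's standard library also ships a ready-made `threading.Barrier`.

```python
import threading

class LinearBarrier:
    """Centralized counter barrier: each arriving thread increments a
    count and waits; the last arrival releases all. Depth is linear in
    the number of processes, since every arrival serializes on one lock."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.generation = 0  # lets the barrier be reused across phases
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:
                    self.cond.wait()
```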
Algorithms, games, and evolution.
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-07-22
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
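One round of the multiplicative weight updates algorithm (MWUA) referenced here is compact enough to state directly. The sketch below uses the common (1 + εp) payoff form, which is an assumption about the exact variant; it is not taken from the paper.

```python
def mwua_round(weights, payoffs, epsilon):
    """One multiplicative-weights update: scale each action's weight by
    (1 + epsilon * payoff), then renormalize to a probability vector --
    the update the paper identifies with allele-frequency dynamics
    under weak selection."""
    new = [w * (1.0 + epsilon * p) for w, p in zip(weights, payoffs)]
    total = sum(new)
    return [w / total for w in new]
```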
Tomasz Plawski, J. Hovater
2010-09-01
A digital low-level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase, and digital SEL (Self-Exciting Loop), using the Jefferson Lab 12 GeV cavity field control system as an example.
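The I&Q and Amplitude & Phase loops regulate the same cavity field in two coordinate systems; the conversion between the representations is a plain rectangular/polar transform. A sketch of that conversion (ours, not from the paper):

```python
import cmath

def iq_to_amp_phase(i, q):
    """Convert an I&Q (in-phase/quadrature) sample to the amplitude and
    phase used by an Amplitude & Phase control loop."""
    z = complex(i, q)
    return abs(z), cmath.phase(z)

def amp_phase_to_iq(amp, phase):
    """Inverse transform: amplitude/phase back to I&Q components."""
    z = cmath.rect(amp, phase)
    return z.real, z.imag
```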
Adaptive continuous twisting algorithm
NASA Astrophysics Data System (ADS)
Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid
2016-09-01
In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For a double integrator, ACTA produces a continuous control signal ensuring finite-time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite of the perturbation value. ACTA also keeps its convergence properties even when an upper bound on the derivative of the perturbation exists but is unknown.
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
Basic cluster compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.
1980-01-01
Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
NASA Astrophysics Data System (ADS)
Evertz, Hans Gerd
1998-03-01
Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993)) (for a recent review see: H.G. Evertz, cond-mat/9707221) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straight away through a GA without using knowledge of the character of the system, we can do a much better job of producing a solution by consciously using the information generated in this first step of the preprocessor. We therefore advocate the use of such a preprocessor for solving real-world optimization problems, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
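To make the tunable parameters concrete, here is a minimal real-coded GA for one-dimensional unconstrained minimization. Every default below (population size, probabilities, elitist selection, averaging crossover) is an illustrative choice of exactly the kind the proposed preprocessor would make per problem; this is not the authors' algorithm.

```python
import random

def minimal_ga(fitness, lo, hi, pop_size=20, generations=60,
               crossover_p=0.9, mutation_p=0.1, seed=0):
    """Bare-bones real-coded GA minimizing `fitness` over [lo, hi].
    Keeps the best half each generation (elitism), breeds children by
    averaging crossover, and perturbs some with Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)               # best individuals first
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 if rng.random() < crossover_p else a
            if rng.random() < mutation_p:
                child += rng.gauss(0, (hi - lo) * 0.05)
            children.append(min(max(child, lo), hi))  # clamp to bounds
        pop = elite + children
    return min(pop, key=fitness)
```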
Applying the take-grant protection model
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The model is extended to include theft of rights and sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; where appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Evaluating super resolution algorithms
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun
2011-01-01
This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms were selected: iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of these methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of the SR algorithms.
Taking Care of Your Diabetes Means Taking Care of Your Heart (Tip Sheet)
... Your Heart: Manage the ABCs of Diabetes Taking Care of Your Diabetes Means Taking Care of Your Heart (Tip Sheet) Diabetes and Heart ... What you can do now Ask your health care team these questions: What can I do to ...
Academic Journal Embargoes and Full Text Databases.
ERIC Educational Resources Information Center
Brooks, Sam
2003-01-01
Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung . E-mail: tyha@math.snu.ac.kr; Shin, Changsoo . E-mail: css@model.snu.ac.kr
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently, without calculating the Jacobian matrix explicitly, by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
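Stripped of the electromagnetic modeling, the outer loop is plain steepest descent; in the paper the gradient comes from backpropagating residual fields (exploiting the Green's function symmetry) rather than from an explicit Jacobian. A generic sketch with a fixed step length, which is a simplification; practical codes typically use a line search:

```python
def steepest_descent(m0, gradient, step, iterations):
    """Generic steepest-descent loop: repeatedly step the model m
    against the misfit gradient. `gradient` stands in for whatever
    adjoint/backpropagation machinery supplies the descent direction."""
    m = list(m0)
    for _ in range(iterations):
        g = gradient(m)
        m = [mi - step * gi for mi, gi in zip(m, g)]
    return m
```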
Private algorithms for the protected in social network search.
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-26
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
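The "careful injection of noise in the prioritization" can be illustrated with Laplace-perturbed scores, a standard differential-privacy mechanism. The scoring function and noise scale below are placeholders, not the paper's calibrated values:

```python
import random

def laplace_noise(rng, scale):
    """Sample Laplace(0, scale) noise as the difference of two
    exponential variates."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def noisy_priority_order(candidates, score, scale, seed=0):
    """Rank candidate nodes by a statistic (e.g. count of already-targeted
    neighbors) perturbed with Laplace noise, so that no single protected
    individual's presence deterministically changes who is examined next."""
    rng = random.Random(seed)
    return sorted(candidates,
                  key=lambda v: score(v) + laplace_noise(rng, scale),
                  reverse=True)
```

With a small scale the ordering tracks the true scores closely; larger scales buy more privacy at the cost of search efficiency, mirroring the utility/privacy tradeoff the paper quantifies.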
Design of robust systolic algorithms
Varman, P.J.; Fussell, D.S.
1983-01-01
A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.
Multipartite entanglement in quantum algorithms
Bruss, D.; Macchiavello, C.
2011-05-15
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
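On each triangle edge, a contour at a given level either crosses once or not at all, and the crossing point follows from linear interpolation of the endpoint values. A Python sketch of that interpolation step (the original program is FORTRAN IV; this is our illustration):

```python
def contour_crossing(p1, v1, p2, v2, level):
    """Linearly interpolate where a contour `level` crosses the triangle
    edge between points p1 and p2 carrying data values v1 and v2.
    Returns the crossing point, or None if the level does not cross."""
    if (v1 - level) * (v2 - level) > 0 or v1 == v2:
        return None  # both endpoints on the same side, or degenerate edge
    t = (level - v1) / (v2 - v1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
```

Joining the crossing points found on the edges of each triangle yields the contour segments of the plot.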
Taking medicine at home - create a routine
... page: //medlineplus.gov/ency/patientinstructions/000613.htm Taking medicine at home - create a routine To use the ... teeth. Find Ways to Help You Remember Your Medicines You can: Set the alarm on your clock, ...
Don't take this with that!
... bushel of problems. How it does or doesn't work Depending on the active ingredient, grapefruit can ...
The Solar Constant: A Take Home Lab
ERIC Educational Resources Information Center
Eaton, B. G.; And Others
1977-01-01
Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)
Take Care of Your Child's Teeth
... Baby teeth hold space for adult teeth. Take care of your child’s teeth to protect your child from tooth decay (cavities). Tooth decay can: Cause your child pain Make it hard for your child to chew ...
Taking Care of You: Support for Caregivers
LRO's Diviner Takes the Eclipse's Temperature
During the June 15, 2011, total lunar eclipse, LRO's Diviner instrument will take temperature measurements of eclipsed areas of the moon, giving scientists a new look at rock distribution on the su...
Tips for Taking Care of Your Limb
... Technorati Yahoo MyWeb by Paddy Rossbach, RN, Former Amputee Coalition President & CEO, and Terrence P. Sheehan, MD ... crisis. Limb Care If you are a new amputee, it's better to take a bath or shower ...
Gateway to New Atlantis Attraction Takes Shape
The home of space shuttle Atlantis continues taking shape at the Kennedy Space Center Visitor Complex. Crews placed the nose cone atop the second of a replica pair of solid rocket boosters. A life-...
Take Steps to Prevent Type 2 Diabetes
... En español Take Steps to Prevent Type 2 Diabetes ... What is diabetes? Diabetes is a disease. People ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... Sanctuary (MBNMS) to incidentally take, by Level B harassment only, California sea lions (Zalophus... such taking. Regulations governing the taking of California sea lions and harbor seals, by Level B... more than Level B behavioral harassment of small numbers of California sea lions and harbor seals...
NASA Astrophysics Data System (ADS)
Allner, S.; Koehler, T.; Fehringer, A.; Birnbacher, L.; Willner, M.; Pfeiffer, F.; Noël, P. B.
2016-05-01
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
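For reference, the classic single-image bilateral filter that the paper generalizes looks like this in one dimension; the multi-modal generalization additionally weights by the full inter-image noise covariance, which is omitted in this sketch:

```python
import math

def bilateral_filter_1d(signal, sigma_s, sigma_r, radius):
    """Classic (single-image) bilateral filter: replace each sample by a
    weighted mean of its neighbors, with weights falling off with both
    spatial distance (sigma_s) and intensity difference (sigma_r), so
    smooth regions are averaged while edges are preserved."""
    out = []
    for i, v in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((v - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

On a clean step edge the range weight suppresses contributions from across the edge, so the edge survives filtering, which is the behavior the generalized filter extends to correlated multi-modal noise.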
The clinical algorithm nosology: a method for comparing algorithmic guidelines.
Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K
1992-01-01
Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.
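The interrater agreement figure quoted above is a weighted kappa; with the three ordinal categories ("identical", "similar", "different") it can be computed as below. Linear weights are an assumption on our part, since the abstract does not state the weighting scheme.

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Linearly weighted kappa for two raters over ordered categories:
    1 minus the ratio of observed to chance-expected disagreement,
    each weighted by ordinal distance between the assigned categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    obs = sum(abs(idx[a] - idx[b]) for a, b in zip(ratings_a, ratings_b)) / n
    pa = [sum(1 for r in ratings_a if r == c) / n for c in categories]
    pb = [sum(1 for r in ratings_b if r == c) / n for c in categories]
    exp = sum(pa[i] * pb[j] * abs(i - j) for i in range(k) for j in range(k))
    return 1.0 - obs / exp
```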
Ego depletion increases risk-taking.
Fischer, Peter; Kastenmüller, Andreas; Asal, Kathrin
2012-01-01
We investigated how the availability of self-control resources affects risk-taking inclinations and behaviors. We proposed that risk-taking often occurs from suboptimal decision processes and heuristic information processing (e.g., when a smoker suppresses or neglects information about the health risks of smoking). Research revealed that depleted self-regulation resources are associated with reduced intellectual performance and reduced abilities to regulate spontaneous and automatic responses (e.g., control aggressive responses in the face of frustration). The present studies transferred these ideas to the area of risk-taking. We propose that risk-taking is increased when individuals find themselves in a state of reduced cognitive self-control resources (ego-depletion). Four studies supported these ideas. In Study 1, ego-depleted participants reported higher levels of sensation seeking than non-depleted participants. In Study 2, ego-depleted participants showed higher levels of risk-tolerance in critical road traffic situations than non-depleted participants. In Study 3, we ruled out two alternative explanations for these results: neither cognitive load nor feelings of anger mediated the effect of ego-depletion on risk-taking. Finally, Study 4 clarified the underlying psychological process: ego-depleted participants feel more cognitively exhausted than non-depleted participants and thus are more willing to take risks. Discussion focuses on the theoretical and practical implications of these findings. PMID:22931000
NASA Astrophysics Data System (ADS)
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is
NASA Astrophysics Data System (ADS)
Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.
2015-05-01
In this paper, we present two signal processing algorithms implemented using an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra for 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation for 10k pulses. Efficient implementation of each of these two signal processing algorithms on an FPGA is challenging because it requires tradeoffs among retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these tradeoffs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained through these two embedded programming techniques are also presented.
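The first algorithm's gate-FFT-accumulate pipeline can be sketched offline as follows. This is a plain NumPy illustration of the signal flow, not the FPGA implementation, and the array shapes are assumptions.

```python
import numpy as np

def accumulate_power_spectra(returns, gate_start, gate_len):
    # returns: array of shape (n_pulses, n_samples) of digitized return
    # signals. Time-gate each pulse, FFT the gate, take the square modulus
    # to form a power spectrum, and accumulate over all pulses.
    acc = np.zeros(gate_len)
    for pulse in returns:
        gate = pulse[gate_start:gate_start + gate_len]
        acc += np.abs(np.fft.fft(gate)) ** 2
    return acc
```

Feeding it pulses containing a tone of exactly 5 cycles per 64-sample gate concentrates the accumulated power in FFT bin 5.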
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
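A minimal mark-and-sweep sketch shows why termination holds even for list structures containing loops: already-marked nodes are never revisited. This is a generic single-collector illustration, not either of the article's multiprocessor algorithms.

```python
def mark_and_sweep(heap, roots):
    # heap: dict node -> list of child nodes; roots: live entry nodes.
    # Marking with an explicit stack terminates even on cyclic structures,
    # because a node already in `marked` is skipped on revisit.
    marked = set()
    stack = list(roots)
    while stack:
        node = stack.pop()
        if node in marked:
            continue
        marked.add(node)
        stack.extend(heap.get(node, []))
    # Sweep: keep only reachable nodes; everything else is garbage.
    return {n: c for n, c in heap.items() if n in marked}
```

In the example below the cycle a <-> b is reachable and survives, while the self-loop c is collected.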
Implementing a self-structuring data learning algorithm
NASA Astrophysics Data System (ADS)
Graham, James; Carson, Daniel; Ternovskiy, Igor
2016-05-01
In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the consideration and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.
Equilibrium points in the full three-body problem
NASA Astrophysics Data System (ADS)
Woo, Pamela; Misra, Arun K.
2014-06-01
The orbital motion of a spacecraft in the vicinity of a binary asteroid system can be modelled as the full three-body problem. The circular restricted case is considered here. Taking into account the shape, size, and mass distribution of arbitrarily shaped primary bodies, the locations of the equilibrium points are computed and are found to be offset from those of the classical CR3BP with point-masses. Through numerical computations, it was found that in cases with highly aspherical primaries, additional collinear and noncollinear equilibrium points exist. Examples include systems with pear-shaped and peanut-shaped bodies.
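For contrast with the full-body results, the collinear L1 point of the classical point-mass CR3BP can be found by bisection on the rotating-frame equilibrium condition. The Earth-Moon mass parameter in the test is only an illustrative value.

```python
def l1_location(mu, tol=1e-12):
    # Classical point-mass CR3BP in rotating, nondimensional coordinates:
    # primaries at (-mu, 0) and (1-mu, 0). L1 lies between them, where the
    # centrifugal and the two gravitational accelerations balance.
    def f(x):
        return x - (1 - mu) / (x + mu) ** 2 + mu / (1 - mu - x) ** 2

    # f -> -inf near the larger primary and +inf near the smaller one,
    # so bisection on the sign change converges to the root.
    lo, hi = -mu + 1e-6, 1 - mu - 1e-6
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the Earth-Moon value mu ≈ 0.01215 this recovers the familiar L1 location near x ≈ 0.837.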
Traffic sharing algorithms for hybrid mobile networks
NASA Technical Reports Server (NTRS)
Arcand, S.; Murthy, K. M. S.; Hafez, R.
1995-01-01
In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to total terminals is defined as gamma. It is assumed that call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms that manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma) given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well over a wide range of gamma.
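Evaluating blocking probability for a given offered load per cell conventionally uses the Erlang B formula; the standard recursion below is background for this kind of capacity analysis, not one of the paper's G or S algorithms.

```python
def erlang_b(traffic_erl, channels):
    # Iterative Erlang B recursion: B(0) = 1,
    # B(n) = a*B(n-1) / (n + a*B(n-1)) for offered traffic a in erlangs.
    # Returns the blocking probability on `channels` trunks.
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erl * b / (n + traffic_erl * b)
    return b
```

For example, 2 erlangs offered to 2 channels gives the textbook blocking probability of 0.4, and adding channels at fixed load always reduces blocking.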
Image enhancement based on edge boosting algorithm
NASA Astrophysics Data System (ADS)
Ngernplubpla, Jaturon; Chitsobhuk, Orachat
2015-12-01
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high quality image from a single low resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low resolution input image may not be sufficient to generate effective solutions. To achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select appropriate enhancement weights: larger weights are applied to the higher frequency details while the low frequency details are smoothed. The experimental results illustrate significant performance improvement, both quantitatively and perceptually. The proposed edge boosting algorithm demonstrates high quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
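The weight-selection idea, larger enhancement weights where gradients are strong, can be caricatured with a gradient-magnitude map. The gain values and the use of np.gradient here are assumptions for illustration, not the paper's priority-map construction.

```python
import numpy as np

def edge_weight_map(img, high_gain=1.8, low_gain=1.0):
    # Hypothetical weighting rule in the spirit of edge boosting: scale
    # enhancement gain between low_gain (flat regions) and high_gain
    # (strong edges) according to normalized gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    priority = mag / (mag.max() + 1e-12)   # crude "priority map" stand-in
    return low_gain + (high_gain - low_gain) * priority
```

On a step image the map assigns the low gain in flat areas and the high gain along the edge.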
NASA Astrophysics Data System (ADS)
Strzałka, Dominik; Grabowski, Franciszek
Tsallis entropy, introduced in 1988, is considered to have opened new possibilities for constructing a generalized thermodynamical basis for statistical physics, extending classical Boltzmann-Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors present a new view of the computational complexity analysis of algorithms, using as an example a possible thermodynamical basis of the sorting process and its dynamical behavior. The classical approach to analyzing the resources needed for algorithmic computation assumes that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered in order to minimize the dependency on specific instances. This article shows, however, that the process can be governed by long-range dependencies with a thermodynamical basis expressed by the specific shapes of probability distributions. The classical approach cannot describe all properties of the processes (especially the dynamical behavior of algorithms) that can appear during algorithmic processing, even if one takes the average-case analysis of computational complexity into account. The importance of this problem is still neglected, especially in light of two observations. First, computer systems nowadays also work in an interactive mode, and a proper thermodynamical basis is needed to better understand their possible behavior. Second, computers are, from a mathematical point of view, Turing machines, but in reality they have physical implementations that need energy for processing, so the problem of entropy production appears. That is why a thermodynamical analysis of the possible behavior of the simple insertion sort algorithm is given here.
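For reference, the q-generalized entropy in question is S_q = (1 - sum_i p_i^q) / (q - 1), which recovers the Boltzmann-Gibbs-Shannon entropy (in nats) as q -> 1:

```python
import math

def tsallis_entropy(p, q):
    # S_q = (1 - sum p_i^q) / (q - 1) for a probability distribution p.
    # At q = 1 the limit is the ordinary Shannon entropy -sum p_i ln p_i.
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For the uniform distribution over 4 states, q = 2 gives (1 - 0.25)/1 = 0.75, while q = 1 gives ln 4.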
2D Multicomponent Time Domain Elastic Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Silva, R. U.; De Basabe, J. D.; Gallardo, L. A.
2015-12-01
The search for hydrocarbon reservoirs within the finest stratigraphic and structural traps relies on detailed surveying and interpretation of multicomponent seismic waves. This need makes Full Waveform Inversion (FWI) one of the most active topics in seismic exploration research, yet only a limited number of FWI algorithms undertake the elastic approach required to model these multicomponent data. We developed an iterative Gauss-Newton 2D time-domain elastic FWI scheme that reproduces the vertical and horizontal particle velocities as measured by common seismic surveys and simultaneously obtains the distribution of three elastic parameters of the subsurface model (density ρ and the Lamé parameters λ and μ). The elastic wave is propagated in heterogeneous elastic media using a time-domain 2D velocity-stress staggered-grid finite difference method. Our code observes the necessary stability conditions and includes absorbing boundary conditions and basic multi-thread parallelization. The same forward modeling code is also used to calculate the Fréchet derivatives with respect to the three model parameters, following the sensitivity-equation approach and perturbation theory. We regularized our FWI algorithm with two different criteria: (1) first-order Tikhonov regularization (maximum smoothness) and (2) Minimum Gradient Support (MGS), which adopts an approximate zero-norm of the property gradients. We applied the algorithm to various test models and demonstrated that the recovered structures closely resemble those of the original synthetic models for all three parameters (λ, μ and ρ). Finally, we compared the roles of the two regularization criteria in terms of data fit, model stability and structural resemblance.
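A single Tikhonov-regularized Gauss-Newton model update, the building block of such an iterative scheme, can be written in a few lines of NumPy. This sketch solves the normal equations directly for a tiny dense problem; it is a generic illustration, not the authors' FWI code.

```python
import numpy as np

def gauss_newton_step(J, residual, m, alpha, L):
    # One regularized Gauss-Newton update for min ||d - F(m)||^2 + alpha*||L m||^2:
    #   (J^T J + alpha L^T L) dm = J^T r - alpha L^T L m
    # J: sensitivity (Frechet-derivative) matrix, residual r = d_obs - d_pred,
    # L: e.g. a first-difference operator for first-order Tikhonov smoothing.
    A = J.T @ J + alpha * (L.T @ L)
    b = J.T @ residual - alpha * (L.T @ (L @ m))
    return np.linalg.solve(A, b)
```

For a linear forward model and alpha = 0, a single step from m = 0 recovers the true model exactly, which is a convenient sanity check.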
Assessing allowable take of migratory birds
Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.
2009-01-01
Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520–212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7–14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 × rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was <3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be
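The potential biological removal rule used above has the simple form N_min × (r_max/2) × F_r. The helper below is a generic sketch of that formula with illustrative numbers, not the paper's vulture estimates.

```python
def potential_biological_removal(n_min, r_max, recovery_factor=1.0):
    # PBR-style allowable take: a conservative population estimate N_min,
    # half the intrinsic growth rate r_max (to stay on the conservative
    # side of the yield curve), and an optional recovery factor F_r.
    return n_min * 0.5 * r_max * recovery_factor
```

For instance, a conservative population estimate of 1,000 birds with r_max = 0.10 yields an allowable take of 50 birds per year (halved again if F_r = 0.5).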
Unquenched Studies Using the Truncated Determinant Algorithm
A. Duncan, E. Eichten and H. Thacker
2001-11-29
A truncated determinant algorithm is used to study the physical effects of the quark eigenmodes associated with eigenvalues below 420 MeV. This initial high-statistics study focuses on coarse (6⁴) lattices (with O(a²)-improved gauge action), light internal quark masses and large physical volumes. Three features of full QCD are examined: topological charge distributions, string breaking as observed in the static energy, and the eta prime mass.
Development of a Distributed Routing Algorithm for a Digital Telephony Switch.
NASA Astrophysics Data System (ADS)
Al-Wakeel, Sami Saleh
This research has developed a distributed routing algorithm and distributed control software to be implemented in modular digital telephone switching systems. The routing algorithm allows the routing information and the computer calculations for determining the route of switch calls to be divided evenly among the individual units of the digital switch, thus eliminating the need for centralized complex routing logic. In addition, a "routing language" for the storage of routing information has been developed that both compresses the routing information to conserve computer memory and speeds up the search through it. A fully modular microprocessor-based digital switch that takes advantage of the routing algorithm was designed. The switch design achieves several objectives, including reducing the cost of digital telephone switches by taking full advantage of VLSI technology, enabling manufacture by developing countries. By exploiting the technical advantages of the distributed routing algorithm, the modular switch can easily reach a capacity of 400,000 lines without degrading system call processing or exceeding the system loading limits. Distributed control software was also designed to provide the main software protocols and routines necessary for a fully modular telephone switch. The design has several advantages over conventional stored-program-control switches: it eliminates the need for centralized control software and allows the switch units to operate in any signaling environment. As a result, the possibility of total system breakdown is reduced, the switch software can be easily tested or modified, and the switch can interface with any of the currently available communication technologies; namely, cable, VHF, satellite, R-1 or R-2 trunks, and trunked radio phones. A second contribution of this research is a mathematical scheme to evaluate the performance of microprocessor-based digital telephone switches. The scheme evaluates various
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The representation min t subject to F_i(x) - t <= 0 for all i is examined. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
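The minimax objective min_x max_i F_i(x) behind this formulation can be illustrated, for convex one-dimensional F_i, with a simple ternary search on the max envelope. This toy is unrelated to the paper's active-set and trust-region machinery.

```python
def minimize_max(fs, lo, hi, iters=200):
    # Ternary search on g(x) = max_i f_i(x): valid when every f_i is
    # convex, since the pointwise max of convex functions is convex
    # (hence unimodal) on [lo, hi].
    def g(x):
        return max(f(x) for f in fs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    x = 0.5 * (lo + hi)
    return x, g(x)
```

For f1(x) = (x-2)² and f2(x) = (x+1)², the minimax point is where the two curves cross, x = 0.5 with value 2.25.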
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
MLP iterative construction algorithm
NASA Astrophysics Data System (ADS)
Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.
1997-04-01
The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.
Advanced signal separation and recovery algorithms for digital x-ray spectroscopy
NASA Astrophysics Data System (ADS)
Mahmoud, Imbaby I.; El Tokhy, Mohamed S.
2015-02-01
X-ray spectroscopy is widely used in situ for sample analysis. Accurate spectrum drawing and assessment for x-ray spectroscopy is therefore the main scope of this paper. A lithium-drifted silicon Si(Li) detector, cooled with liquid nitrogen, is used for signal extraction. The ADC has a resolution of 12 bits and a sampling rate of 5 MHz. Several algorithms are implemented: signal preprocessing, signal separation and recovery, and spectrum drawing. These algorithms were run on a personal computer with a 3.20 GHz Intel Core i5-3470 CPU, and statistical measurements are used for their evaluation. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction uses the minimum value of the radiation signal, while signal de-noising was implemented using fourth-order finite impulse response (FIR) filter, linear-phase least-squares FIR filter, complex wavelet transform (CWT) and Kalman filter methods. We observed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes much longer to execute. Moreover, three algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. The effect of the signal separation and recovery algorithms on spectrum drawing is measured and the algorithms are compared. The results confirm that the second-derivative peak search algorithm and the extrema algorithm both have very small error in comparison with the 1D non-derivative peak search algorithm, but the second-derivative algorithm takes much longer to execute. The extrema algorithm therefore gives the best overall results. It has the advantage of recovering and
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
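The strict-priority selection rule, where a goal is never pre-empted by a lower-priority one, can be sketched as a greedy admission pass over a single shared resource. The data layout below is hypothetical and far simpler than VML goal sets.

```python
def select_goals(goals, capacity):
    # goals: list of (priority, resource_need), lower number = higher
    # priority. Walk the goals best-first and admit each one that still
    # fits; an admitted high-priority goal is never displaced by a
    # lower-priority one, even if skipping it would pack the resource
    # more fully.
    selected = []
    remaining = capacity
    for pri, need in sorted(goals, key=lambda g: g[0]):
        if need <= remaining:
            selected.append((pri, need))
            remaining -= need
    return selected
```

In the example below, the priority-1 goal consumes most of the resource, the priority-2 goal no longer fits and is skipped, and the small priority-3 goal is still admitted.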
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
ERIC Educational Resources Information Center
Cotton, P. L.
1987-01-01
Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)
The Weaknesses of Full-Text Searching
ERIC Educational Resources Information Center
Beall, Jeffrey
2008-01-01
This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves…
Inspiring a Life Full of Learning
ERIC Educational Resources Information Center
Nasse, Saul
2010-01-01
After being appointed as Controller of BBC Learning, this author reflected on how the BBC had inspired his own love of learning. He realised that unlocking the learning potential of the full range of BBC outputs would be the key to inspiring a "life full of learning" for all its audiences. In this article, the author describes four new programme…
About Reformulation in Full-Text IRS.
ERIC Educational Resources Information Center
Debili, Fathi; And Others
1989-01-01
Analyzes different kinds of reformulations used in information retrieval systems where full text databases are accessed through natural language queries. Tests of these reformulations on large full text databases managed by the Syntactic and Probabilistic Indexing and Retrieval of Information in Texts (SPIRIT) system are described, and an expert…
[Risk-taking behaviors among young people].
Le Breton, David
2004-01-01
Risk-taking behaviors are often an ambivalent way of calling for help from close friends or family, those who count. It is an ultimate means of finding meaning and a system of values; it is a sign of an adolescent's active resistance and attempts to re-establish his or her place in the world. It contrasts with the far more incisive risk of depression and the radical collapse of meaning. In spite of the suffering it engenders, risk-taking nevertheless has a positive side, fostering independence in adolescents and a search for reference points. It leads to a better self-image and is a means of developing one's identity. It is nonetheless painful in its repercussions: injuries, death or addiction. The turbulence caused by risk-taking behaviors illustrates a determination to be rid of one's suffering and to fight on so that life can, at last, be lived. PMID:15918660
Full inclusion and students with autism.
Mesibov, G B; Shea, V
1996-06-01
The concept of "full inclusion" is that students with special needs can and should be educated in the same settings as their normally developing peers with appropriate support services, rather than being placed in special education classrooms or schools. According to advocates, the benefits of full inclusion are increased expectations by teachers, behavioral modeling of normally developing peers, more learning, and greater self-esteem. Although the notion of full inclusion has appeal, especially for parents concerned about their children's rights, there is very little empirical evidence for this approach, especially as it relates to children with autism. This manuscript addresses the literature on full inclusion and its applicability for students with autism. Although the goals and values underlying full inclusion are laudable, neither the research literature nor thoughtful analysis of the nature of autism supports elimination of smaller, highly structured learning environments for some students with autism. PMID:8792264
Automated Simplification of Full Chemical Mechanisms
NASA Technical Reports Server (NTRS)
Norris, A. T.
1997-01-01
A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables; in effect, generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms, and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
Acute marijuana effects on human risk taking.
Lane, Scott D; Cherek, Don R; Tcheremissine, Oleg V; Lieving, Lori M; Pietras, Cynthia J
2005-04-01
Previous studies have established a relationship between marijuana use and risky behavior in natural settings. A limited number of laboratory investigations of marijuana effects on human risk taking have been conducted. The present study was designed to examine the acute effects of smoked marijuana on human risk taking, and to identify behavioral mechanisms that may be involved in drug-induced changes in the probability of risky behavior. Using a laboratory measure of risk taking designed to address acute drug effects, 10 adults were administered placebo cigarettes and three doses of active marijuana cigarettes (half placebo and half 1.77%; 1.77%; and 3.58% Delta9-THC) in a within-subject repeated-measures experimental design. The risk-taking task presented subjects with a choice between two response options operationally defined as risky and nonrisky. Data analyses examined cardiovascular and subjective effects, response rates, distribution of choices between the risky and nonrisky option, and first-order transition probabilities of trial-by-trial data. The 3.58% THC dose increased selection of the risky response option, and uniquely shifted response probabilities following both winning and losing outcomes following selection of the risky option. Acute marijuana administration thereby produced measurable changes in risky decision making under laboratory conditions. Consistent with previous risk-taking studies, shifts in trial-by-trial response probabilities at the highest dose suggested a change in sensitivity to both reinforced and losing risky outcomes. Altered sensitivity to consequences may be a mechanism in drug-induced changes in risk taking. Possible neurobiological sites of action related to THC are discussed.
NASA Technical Reports Server (NTRS)
Shull, Forrest; Godfrey, Sally; Bechtel, Andre; Feldmann, Raimund L.; Regardie, Myrna; Seaman, Carolyn
2008-01-01
A viewgraph presentation describing the NASA Software Assurance Research Program (SARP) project, with a focus on full life-cycle defect management, is provided. The topics include: defect classification, data set and algorithm mapping, inspection guidelines, and tool support.
[Algorithm for assessment of exposure to asbestos].
Martines, V; Fioravanti, M; Anselmi, A; Attili, F; Battaglia, D; Cerratti, D; Ciarrocca, M; D'Amelio, R; De Lorenzo, G; Ferrante, E; Gaudioso, F; Mascia, E; Rauccio, A; Siena, S; Palitti, T; Tucci, L; Vacca, D; Vigliano, R; Zelano, V; Tomei, F; Sancini, A
2010-01-01
There is no universally approved method in the scientific literature to identify subjects exposed to asbestos and divide them into classes according to intensity of exposure. The aim of our work is to develop an algorithm based on occupational anamnestic information provided by a large group of workers. The algorithm discriminates, in a probabilistic way, the risk of exposure by attributing a code to each worker (ELSA code: work-estimated exposure to asbestos). The ELSA code has been obtained through a synthesis of the information that the international scientific literature identifies as most predictive for the onset of asbestos-related abnormalities. Four dimensions are analyzed and described: 1) present and/or past occupation; 2) type of materials and equipment used in performing the working activity; 3) environment where these activities are carried out; 4) period of time when the activities are performed. Although the information is collected subjectively, the decisional procedure is objective and is based on a systematic evaluation of asbestos exposure. From the combination of the four dimensions it is possible to obtain 108 ELSA codes, divided into three typological profiles of estimated exposure risk. The application of the algorithm offers some advantages over other methods used for identifying individuals exposed to asbestos: 1) it can be computed for both present and past exposure to asbestos; 2) the classification of workers exposed to asbestos using the ELSA code is more detailed than the one obtained with a Job Exposure Matrix (JEM), because the ELSA code takes into account other risk indicators besides those considered in the JEM. This algorithm was developed for a project sponsored by the Italian Armed Forces and is also adaptable to other work conditions in which it could be necessary to assess the risk of asbestos exposure.
Should I take this course online?
O'Neil, Carol; Fisher, Cheryl
2008-02-01
As the number of online nursing courses increases, students are faced with the daunting question, "Should I take this course online?" Although online courses are convenient, convenience should not be the sole factor for making this decision. Students and their advisors should discuss the characteristics of successful online students before deciding to take a course online. A study was conducted in which the same face-to-face and online version of a course were compared using Ragan's framework. The results of the study describe characteristics that can serve as useful criteria for predicting student success in an online course. PMID:18320955
ERIC Educational Resources Information Center
Votruba-Drzal, Elizabeth; Li-Grining, Christine P.; Maldonado-Carreno, Carolina
2008-01-01
Children's kindergarten experiences are increasingly taking place in full- versus part-day programs, yet important questions remain about whether there are significant and meaningful benefits to full-day kindergarten. Using the Early Childhood Longitudinal Study's Kindergarten Cohort (N= 13,776), this study takes a developmental approach to…
An efficient parallel algorithm for matrix-vector multiplication
Hendrickson, B.; Leland, R.; Plimpton, S.
1993-03-01
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best performance on this benchmark published to date for a massively parallel supercomputer.
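The block decomposition behind such a hypercube matrix-vector multiply can be sketched serially: the n x n matrix is split into a √p x √p grid of blocks, each "processor" multiplies its block by its slice of x, and partial results are summed across each block row (the communication step whose cost is O(n/√p + log p)). This is a serial sketch of the data layout only, not the paper's communication algorithm.

```python
import numpy as np

def block_matvec(A, x, q):
    """Serial emulation of a q x q processor grid (p = q*q); q must divide n."""
    n = A.shape[0]
    b = n // q
    y = np.zeros(n)
    for i in range(q):          # block row of the processor grid
        for j in range(q):      # block column: each (i, j) is one "processor"
            y[i*b:(i+1)*b] += A[i*b:(i+1)*b, j*b:(j+1)*b] @ x[j*b:(j+1)*b]
    return y                    # the += across j plays the role of the row-sum communication

A = np.arange(16.0).reshape(4, 4)
x = np.ones(4)
assert np.allclose(block_matvec(A, x, 2), A @ x)
```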
An improved harmony search algorithm with dynamically varying bandwidth
NASA Astrophysics Data System (ADS)
Kalivarapu, J.; Jain, S.; Bag, S.
2016-07-01
The present work demonstrates a new variant of the harmony search (HS) algorithm in which the bandwidth (BW) is one of the deciding factors for the time complexity and performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search the full domain and to adjust the BW dynamically as the search closes in on the optimal solution. After a series of approaches was tried, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.
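A minimal harmony search with a shrinking bandwidth can be sketched as follows. The simple exponential decay below is a stand-in for the paper's low-pass-filter-inspired update, whose exact form is not given here; all parameter values are illustrative.

```python
import random

random.seed(0)

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Minimize f on [lo, hi] with a bandwidth that narrows over time."""
    memory = [random.uniform(lo, hi) for _ in range(hms)]  # harmony memory
    bw = (hi - lo) / 2                 # start wide: explore the full domain
    for _ in range(iters):
        if random.random() < hmcr:     # draw from memory...
            x = random.choice(memory)
            if random.random() < par:  # ...with occasional pitch adjustment
                x += random.uniform(-bw, bw)
        else:                          # or sample the domain at random
            x = random.uniform(lo, hi)
        x = min(max(x, lo), hi)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(x) < f(memory[worst]):    # replace the worst harmony if better
            memory[worst] = x
        bw *= 0.999                    # dynamic narrowing: exploit near the optimum
    return min(memory, key=f)

best = harmony_search(lambda x: (x - 3.0) ** 2, -10, 10)
```

The decay factor controls the explore/exploit trade-off the abstract describes: a slower decay keeps the search global for longer.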
An Exact Quantum Search Algorithm with Arbitrary Database
NASA Astrophysics Data System (ADS)
Liu, Yang
2014-08-01
In standard Grover's algorithm for quantum searching, the probability of finding a marked state is not exactly 1, and modified versions of Grover's algorithm that search a marked state from an evenly distributed database with a full success rate have been presented. In this article, we present a generalized quantum search algorithm that searches M marked states in an arbitrarily distributed N-item quantum database with a zero theoretical failure rate, where N need not be a power of 2. Analyzing the general properties of our search algorithm, we find that it is periodic with a period of 2J + 1, and that it succeeds with certainty after J + (2J + 1)m iterations, where m is an arbitrary nonnegative integer.
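The inexactness of the standard algorithm is easy to see in a statevector simulation. The sketch below runs plain Grover iterations for N = 8 with one marked item (not the paper's generalized exact algorithm): even at the optimal iteration count, the success probability is high but strictly below 1.

```python
import numpy as np

N, marked = 8, 3                                 # 8-item database, item 3 marked
psi = np.full(N, 1 / np.sqrt(N))                 # uniform initial superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                      # flip the phase of the marked state
diffuser = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

iters = int(np.floor(np.pi / 4 * np.sqrt(N)))    # optimal Grover iteration count
for _ in range(iters):
    psi = diffuser @ (oracle @ psi)

p_success = psi[marked] ** 2                     # ~0.945 for N = 8: close to, but not, 1
```

Exact variants like the one above modify the phases so this residual failure probability vanishes.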
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 for the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the number of linearly independent vectors necessary to form the solution space. While doing this, it ensures that all possible combinations of solutions are checked, if needed, and handles complications that arise in particular cases. In addition, two particular cases that cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in terms of how they are recognized and how they are handled by the new code. Finally, the new code was tested using both isolated movements and complex trajectories on various mobile manipulators.
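The underlying idea, that an underspecified system A x = b has a whole solution space spanned by linearly independent vectors, can be sketched with standard linear algebra. This uses the SVD rather than FSP's recursive search, which is specific to the original code; the small system is illustrative only.

```python
import numpy as np

# An underspecified system: 2 equations, 3 unknowns.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 2.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[len(s):]                     # rows spanning the null space of A

# Every x = x_p + t * n, with n a null-space vector, also solves the system:
for t in (-1.0, 0.0, 2.5):
    assert np.allclose(A @ (x_p + t * null_basis[0]), b)
```

For a 2 x 3 full-rank system the null space is one-dimensional, so a single extra vector parameterizes the full solution space.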
Fast full resolution saliency detection based on incoherent imaging system
NASA Astrophysics Data System (ADS)
Lin, Guang; Zhao, Jufeng; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-08-01
Image saliency detection is widely applied in many tasks in the field of computer vision. In this paper, we combine saliency detection with Fourier optics to accelerate the saliency detection algorithm. An actual optical saliency detection system is constructed within the framework of an incoherent imaging system. Additionally, the application of our system with a dual-resolution camera to implement the bottom-up rapid pre-saliency process of primate visual saliency is discussed. A set of experiments on our system is conducted and discussed, and we compare our method with purely computational methods. The results show that our system can produce full-resolution saliency maps faster and more effectively.
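For reference, the kind of Fourier-domain computation such an optical system performs "at the speed of light" can be sketched in software with the spectral-residual saliency baseline. This is a standard computational method, not the authors' optical setup, and the filter size is an arbitrary choice.

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    """Spectral-residual saliency: suppress the smooth part of the log spectrum."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-9)
    phase = np.angle(F)
    # circular box filter approximating the local average of the log spectrum
    avg = sum(
        np.roll(np.roll(log_amp, i, axis=0), j, axis=1)
        for i in range(-k, k + 1) for j in range(-k, k + 1)
    ) / (2 * k + 1) ** 2
    residual = log_amp - avg               # what remains is the "unexpected" content
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()                 # normalized full-resolution saliency map

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                    # a single salient square on a flat background
saliency = spectral_residual_saliency(img)
```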
Planck 2015 results. XII. Full focal plane simulations
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10^4 mission realizations reduced to about 10^6 maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.
A full 2D IDCT with extreme low complexity
NASA Astrophysics Data System (ADS)
Navarro, Antonio; Silva, Antonio; Reznik, Yuriy
2007-09-01
In the context of a Call for Proposals for integer IDCTs issued by MPEG in July 2005, a full 2D integer IDCT based on earlier work by Feig and Winograd has been proposed. It achieves high precision, meeting all IEEE 1180 conditions, and is suitable for hardware implementation since it can be performed with only shifts and additions. Furthermore, it can be useful in high-resolution video scenarios such as 720p/1080i/p due to its feedforward operation, without the loops usual in row-column implementations. The proposed transformation can be implemented without changing other functional blocks at either the encoder or the decoder, or alternatively as a scaled version that incorporates the scaling factors into the dequantization stage. Our algorithm uses only 1328 operations for 8x8 blocks, including the scaling factors.
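The shift-and-add trick that lets an integer IDCT avoid multipliers is simple to illustrate: a constant multiplication is decomposed into a sum of shifted copies of the input. The particular constant and decomposition below are illustrative, not taken from the proposed transform.

```python
def mul_by_181(x):
    """Multiply an integer by 181 using only shifts and additions.

    181/256 approximates 1/sqrt(2) = 0.7071..., a constant common in
    fixed-point DCT/IDCT arithmetic; 181 = 128 + 32 + 16 + 4 + 1.
    """
    return (x << 7) + (x << 5) + (x << 4) + (x << 2) + x

assert mul_by_181(7) == 7 * 181
```

Operation counts like the 1328 quoted above come from aggressively sharing such shift-add terms across the whole transform.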
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
Yang, Chao; Sternberg, Philip; Maris, Pieter; Ng, Esmond; Sosonkina, Masha; Le, Hung Viet; Vary, James; Yang, Chao
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers, either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors of very large, sparse, and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code, MFDn (Many Fermion Dynamics - nuclear), as well as techniques we recently developed to enhance its computational efficiency. We demonstrate the current capability of MFDn, report the latest performance improvements we have achieved, and outline future research directions.
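Extracting the lowest eigenvalues of a huge sparse symmetric matrix from matrix-vector products alone is the job of Krylov methods such as Lanczos, the family of solvers codes like MFDn rely on. The toy sketch below is illustrative (the matrix is a stand-in, not a nuclear Hamiltonian, and MFDn's actual solver and parallelization are far more sophisticated).

```python
import numpy as np

def lanczos_lowest(matvec, n, m=60, seed=0):
    """Lowest Ritz value after m Lanczos steps, using only matvec(v)."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    for _ in range(m):
        w = matvec(Q[-1])
        alphas.append(Q[-1] @ w)
        for v in Q:                      # full reorthogonalization (fine for small m)
            w -= (v @ w) * v
        b = np.linalg.norm(w)
        if b < 1e-12:                    # Krylov space exhausted
            break
        betas.append(b)
        Q.append(w / b)
    k = len(alphas)                      # small tridiagonal projection of the matrix
    T = (np.diag(alphas)
         + np.diag(betas[:k - 1], 1)
         + np.diag(betas[:k - 1], -1))
    return np.linalg.eigvalsh(T)[0]

n = 200
H = np.diag(np.arange(n, dtype=float))   # diagonal "Hamiltonian"...
H[0, 1] = H[1, 0] = 0.5                  # ...with one off-diagonal coupling
lowest = lanczos_lowest(lambda v: H @ v, n)
```

The point is that only matrix-vector products with H are ever needed, which is what makes the method scale to matrices far too large to store densely.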
Minimising biases in full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2015-03-14
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two-determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective, and we recommend its use as a post-processing step. PMID:25770522
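The central observation, that such a stochastic process is a Markov chain whose stationary distribution is the eigenvector of the transition matrix with eigenvalue 1, can be illustrated on a tiny example. The 3-state matrix below is hypothetical, not the actual FCIQMC Markov matrix constructed in the paper.

```python
import numpy as np

# A hypothetical row-stochastic transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

# The stationary distribution pi satisfies pi P = pi, i.e. it is the
# left eigenvector of P (right eigenvector of P^T) with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                       # normalize to a probability distribution

assert np.allclose(pi @ P, pi)       # stationarity
```

Once the stationary distribution is in hand, expectation values under the chain's long-time dynamics (and hence biases in them) can be computed exactly, which is what the two-determinant analysis exploits.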
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Streamlining Shor's algorithm for potential hardware savings
NASA Astrophysics Data System (ADS)
Nam, Y. S.; Blümel, R.
2013-06-01
We constructed a virtual quantum computer by running a complete, scaling, quantum-gate-by-quantum-gate implementation of Shor's algorithm on a 128-core classical cluster computer. In mode A [quantum period finding (PF) only, supplied with classical results for the modular exponentiation (ME) part of Shor's algorithm], factoring semiprimes up to N=557993 with up to n=39 qubits, we confirm earlier, smaller-n results concerning the performance scaling of Shor's algorithm equipped with a truncated (banded) quantum Fourier transform. Running our virtual quantum computer in mode B (full quantum implementation of ME and PF), we find that a large number of gates may be discarded in a scalable way in both the ME and PF parts of Shor's algorithm in exchange for only a small reduction in performance. We explicitly state the associated scaling laws. Implying significant savings in quantum gates, we suggest that these results are of importance for future experimental and technical large-n implementations of quantum computers.
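For orientation, the number theory that Shor's algorithm accelerates can be sketched classically: find the period r of a^x mod N, then derive factors of N from gcd(a^(r/2) ± 1, N). The quantum circuit (ME followed by PF) replaces the brute-force period search below; this sketch is only the classical skeleton, not the simulated implementation of the paper.

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), by brute force."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Try to extract a nontrivial factor of N from the period of a."""
    r = find_period(a, N)
    if r % 2:
        return None                      # need an even period
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if 1 < f < N else None

assert shor_classical(15, 7) == 3        # period of 7 mod 15 is 4; 15 = 3 * 5
```

On a quantum computer the period emerges from the quantum Fourier transform of the modular-exponentiation register, which is exactly the PF stage whose gates the study truncates.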
NASA Technical Reports Server (NTRS)
Delaat, J. C.
1984-01-01
An advanced sensor failure detection, isolation, and accommodation algorithm has been developed by NASA for the F100 turbofan engine. The algorithm takes advantage of the analytical redundancy of the sensors to improve the reliability of the sensor set. The method enables the controls computer to determine when a sensor failure has occurred without the help of redundant hardware sensors in the control system. The controls computer provides an estimate of the correct value of the output of the failed sensor. The algorithm has been programmed in FORTRAN on a real-time microprocessor-based controls computer. A detailed description of the algorithm and its implementation on a microprocessor is given.
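The analytical-redundancy idea can be sketched in a few lines: estimate a sensor's value from a model driven by the other measurements, flag a failure when the residual exceeds a threshold, and substitute the estimate. The threshold and numbers below are hypothetical stand-ins, not the F100 algorithm.

```python
def monitor(measured, estimate, threshold=5.0):
    """Accommodate one sensor: pass it through if healthy, else substitute the estimate."""
    residual = abs(measured - estimate)   # disagreement between sensor and model
    failed = residual > threshold         # detection: residual exceeds the threshold
    return (estimate if failed else measured), failed

value, failed = monitor(measured=612.0, estimate=540.0)
assert failed and value == 540.0          # failed sensor replaced by the estimate
value, failed = monitor(measured=541.2, estimate=540.0)
assert not failed and value == 541.2      # healthy sensor passed through
```

In the real system the estimate comes from an engine model fed by the remaining sensors, which is what makes the redundancy "analytical" rather than hardware-based.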
Full employment maintenance in the private sector
NASA Technical Reports Server (NTRS)
Young, G. A.
1976-01-01
Operationally, full employment can be accomplished by applying modern computer capabilities, game and decision concepts, and communication feedback possibilities, rather than accepted economic tools, to the problem of assuring invariant full employment. The government must provide positive direction to individual firms concerning the net number of employees that each firm must hire or refrain from hiring to assure national full employment. To preserve free enterprise and the decision making power of the individual manager, this direction must be based on each private firm's own numerical employment projections.
New principles in nuclear medicine imaging: a full aperture stereoscopic imaging technique.
Strocovsky, Sergio G; Otero, Dino
2010-01-01
In nuclear medicine, images from planar scintigraphy and single photon emission computerized tomography (SPECT) obtained through a gamma camera (GC) appear blurred. Alternatively, coded aperture imaging (CAI) can surpass the quality of GC images, but it is still not extensively used due to the decoding complexity of some images and the difficulty of controlling the noise. In short, images obtained through a GC are of low quality, and the CAI technique remains difficult to implement. Here we present a full aperture imaging (FAI) technique that overcomes the problems of ordinary CAI systems. The gamma radiation transmitted through a large single aperture is edge-encoded, taking advantage of the fact that nuclear radiation is spatially incoherent. The novel technique is tested by means of the Monte Carlo method with simple and complex sources. Spatial resolution and parallax tests of GC versus FAI were made, and the three-dimensional capacities of GC versus FAI were analyzed. The simulations allowed comparison of both techniques under ideal, identical conditions. The results show that the FAI technique has greater sensitivity (approximately 100 times) and greater spatial resolution (>2.6 times at 40 cm source-detector distance) than the GC. The FAI technique yields images at long source-detector distances with the resolution a GC achieves only at short distances. The FAI decoding algorithm simultaneously reconstructs four different projections, while a GC produces only one projection per acquisition. Our results show that it is possible to apply an extremely simple encoded imaging technique and obtain three-dimensional radioactivity information. GC-based systems could thus be substituted, given that the FAI technique is simple and produces four images that may feed stereoscopic systems, replacing tomographic reconstructions in some cases.
3D elastic full waveform inversion: case study from a land seismic survey
NASA Astrophysics Data System (ADS)
Kormann, Jean; Marti, David; Rodriguez, Juan-Esteban; Marzan, Ignacio; Ferrer, Miguel; Gutierrez, Natalia; Farres, Albert; Hanzich, Mauricio; de la Puente, Josep; Carbonell, Ramon
2016-04-01
Full Waveform Inversion (FWI) is one of the most advanced processing methods, recently reaching a mature state after years of work on theoretical and technical issues such as the non-uniqueness of the solution and harnessing the huge computational power required by realistic scenarios. BSIT (Barcelona Subsurface Imaging Tools, www.bsc.es/bsit) includes an FWI algorithm that can tackle very complex problems involving large datasets. We present here the application of this system to a 3D dataset acquired to constrain the shallow subsurface. This is where the wavefield is most complicated, because most wavefield conversions take place in the shallow region and the medium is much more laterally heterogeneous. With this in mind, at least an isotropic elastic approximation is required as the kernel engine for FWI. The current study explores the possibility of applying elastic isotropic FWI using only the vertical component of the recorded seismograms. The survey covers an area of 500×500 m2 and consists of a 10 m×20 m receiver grid combined with a 250 kg accelerated weight-drop source on a displaced 20 m×20 m grid. One of the main challenges in this case study is the costly 3D modeling, which includes topography and substantial free-surface effects. FWI is applied to a data subset (shooting lines 4 to 12) and is performed for 3 frequencies ranging from 15 to 25 Hz. The starting models are obtained from travel-time tomography, and the whole computation ran on 75 nodes of the MareNostrum supercomputer for 3 days. The resulting models provide a higher resolution of the subsurface structures and show a good correlation with the available borehole measurements. FWI thus allows this 1D (borehole) knowledge to be reliably extended to 3D.
Conservative Patch Algorithm and Mesh Sequencing for PAB3D
NASA Technical Reports Server (NTRS)
Pao, S. P.; Abdol-Hamid, K. S.
2005-01-01
A mesh-sequencing algorithm and a conservative patched-grid-interface algorithm (hereafter "patch algorithm") have been incorporated into the PAB3D code, a computer program that solves the Navier-Stokes equations for the simulation of subsonic, transonic, or supersonic flows surrounding an aircraft or other complex aerodynamic shapes. These algorithms are efficient and flexible and have added tremendously to the capabilities of PAB3D. The mesh-sequencing algorithm makes it possible to perform preliminary computations using only a fraction of the grid cells (provided the original cell count is divisible by an integer) along any grid coordinate axis, independently of the other axes. The patch algorithm addresses another critical need in multi-block grid situations, where the cell faces of adjacent grid blocks may not coincide, leading to errors in calculating fluxes of conserved physical quantities across interfaces between the blocks. The patch algorithm, based on the Stokes integral formulation of the applicable conservation laws, effectively matches each of the interfacial cells on one side of the block interface to the corresponding fractional cell-area pieces on the other side. This approach is comprehensive and unified, so that all interface topology is processed automatically, without user intervention. The algorithm is implemented in a preprocessing code that creates a cell-by-cell database that maintains flux conservation at any level of full or reduced grid density the user may choose by way of the mesh-sequencing algorithm. These two algorithms have enhanced the numerical accuracy of the code, reduced the time and effort for grid preprocessing, and given users the flexibility of performing computations at any desired full or reduced grid resolution to suit their specific computational requirements.
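The conservative matching of mismatched interface cells can be sketched in 1D: each cell's flux is split according to the fractional overlap of its span with the cells on the other side, so the total across the interface is preserved. This is an illustration of the overlap bookkeeping only, not the Stokes-integral machinery of PAB3D.

```python
def overlap_weights(edges_a, edges_b):
    """For each cell on side A, the fraction of its span covered by each B cell."""
    w = {}
    for i in range(len(edges_a) - 1):
        a0, a1 = edges_a[i], edges_a[i + 1]
        for j in range(len(edges_b) - 1):
            o = min(a1, edges_b[j + 1]) - max(a0, edges_b[j])  # overlap length
            if o > 0:
                w[(i, j)] = o / (a1 - a0)
    return w

# Side A has cells [0,1] and [1,2]; side B has mismatched cells [0,0.5] and [0.5,2].
w = overlap_weights([0.0, 1.0, 2.0], [0.0, 0.5, 2.0])

# Conservation: the fractions for each A cell sum to exactly 1.
assert abs(sum(v for (i, _), v in w.items() if i == 0) - 1.0) < 1e-12
```

A preprocessing pass over all interfaces can store exactly such a cell-by-cell weight table, which is what makes the flux transfer conservative at any grid level.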
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
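The wavelet fusion rule described above can be sketched with a single-level Haar transform: decompose both images, average the low-pass bands, keep the larger-magnitude detail coefficients, and invert. Real pipelines use deeper decompositions and smoother wavelets; this toy version only shows the principle, on hypothetical inputs.

```python
import numpy as np

def haar2(x):
    """Single-level 2D Haar transform: approximation + 3 detail subbands."""
    x00, x01 = x[0::2, 0::2], x[0::2, 1::2]
    x10, x11 = x[1::2, 0::2], x[1::2, 1::2]
    a = (x00 + x01 + x10 + x11) / 4
    h = (x00 + x01 - x10 - x11) / 4
    v = (x00 - x01 + x10 - x11) / 4
    d = (x00 - x01 - x10 + x11) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a + h - v - d
    x[1::2, 0::2] = a - h + v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse(img1, img2):
    c1, c2 = haar2(img1), haar2(img2)
    a = (c1[0] + c2[0]) / 2                      # average the low-pass bands
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # keep the stronger detail
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2(a, *details)
```

Keeping the stronger detail coefficient at each location is what lets the fused image preserve spatial structure from both sources, the behavior the wavelet approach was credited with above.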
37 CFR 41.157 - Taking testimony.
Code of Federal Regulations, 2013 CFR
2013-07-01
... OF COMMERCE PRACTICE BEFORE THE PATENT TRIAL AND APPEAL BOARD Contested Cases § 41.157 Taking... deposition must be filed at least two business days before a deposition. The notice limits the scope of the... the scope and nature of the testimony to be elicited. (5) Motion to quash. Objection to a defect...
37 CFR 41.157 - Taking testimony.
Code of Federal Regulations, 2014 CFR
2014-07-01
... OF COMMERCE PRACTICE BEFORE THE PATENT TRIAL AND APPEAL BOARD Contested Cases § 41.157 Taking... deposition must be filed at least two business days before a deposition. The notice limits the scope of the... the scope and nature of the testimony to be elicited. (5) Motion to quash. Objection to a defect...
Taking the Steam off Pressure Groups.
ERIC Educational Resources Information Center
Ledell, Marjorie A.
1993-01-01
School administrators must speak out when single-issue or "stealth" groups threaten to take over a school board. Administrators can help ensure that election campaigns stimulate community debate, discussion, and consensus about educational directions. They must know how to remove the cover from stealth candidates, respond to the public, and keep…
Promoting Knowledge Transfer with Electronic Note Taking
ERIC Educational Resources Information Center
Katayama, Andrew D.; Shambaugh, R. Neal; Doctor, Tasneem
2005-01-01
We investigated the differences between (a) copying-and-pasting text versus typing as methods of constructing study notes, combined with (b) vertically scaffolded versus horizontally scaffolded notes, on knowledge transfer. Forty-seven undergraduate educational psychology students participated. Materials included 2 electronic…
Teen Risk-Taking: A Statistical Portrait.
ERIC Educational Resources Information Center
Lindberg, Laura Duberstein; Boggess, Scott; Porter, Laura; Williams, Sean
This report provides a statistical portrait of teen participation in 10 of the most prevalent risk behaviors. It focuses on the overall participation in each behavior and in multiple risk taking. The booklet presents the overall incidence and patterns of teen involvement in the following risk behaviors: (1) regular alcohol use; (2) regular tobacco…
Taking Math Anxiety out of Math Instruction
ERIC Educational Resources Information Center
Shields, Darla J.
2007-01-01
To take math anxiety out of math instruction, teachers need to first know how to easily diagnose it in their students and second, how to analyze causes. Results of a recent study revealed that while students believed that their math anxiety was largely related to a lack of mathematical understanding, they often blamed their teachers for causing…
ERIC Educational Resources Information Center
Rosow, La Vergne
1992-01-01
Describes a literacy expert's frustrating experience with Harry, an intelligent, resourceful, and illiterate man who was more afraid of tackling reading and writing than of jumping out of a helicopter. Unfortunately, some adults who have been mistreated in school are eager to become literate but lack courage to take the first step. (six…
Take the Search out of Research.
ERIC Educational Resources Information Center
Giese, Ronald N.; And Others
1992-01-01
Provides a model that maps out five stages of relating library and scientific research: (1) establish an interest; (2) narrow a topic; (3) clarify the variables; (4) refine the procedures; and (5) interpret the unexpected. Provides a student questionnaire for selecting a topic and a format for general note taking. (MDH)
Mehta, Neil B; Atreja, Ashish; Jain, Anil
2008-08-01
Although e-mail is supposed to help save time and increase efficiency, for many it has become a burden. You can fight e-mail overload by taking steps to decrease the amount of unwanted e-mail you receive and by managing your in-box in an organized manner.
What Predicts Skill in Lecture Note Taking?
ERIC Educational Resources Information Center
Peverly, Stephen T.; Ramaswamy, Vivek; Brown, Cindy; Sumowski, James; Alidoost, Moona; Garner, Joanna
2007-01-01
Despite the importance of good lecture notes to test performance, very little is known about the cognitive processes that underlie effective lecture note taking. The primary purpose of the 2 studies reported (a pilot study and Study 1) was to investigate 3 processes hypothesized to be significantly related to quality of notes: transcription…
Note Taking in Multi-Media Settings
ERIC Educational Resources Information Center
Black, Kelly; Yao, Guangming
2014-01-01
We provide a preliminary exploration of the use of note taking combined with video examples. Student volunteers were divided into three groups and asked to work two problems: the first was explored in a classroom setting, and the second was a novel problem. Furthermore,…
ERIC Educational Resources Information Center
Fornaciari, James
2016-01-01
As legendary Cubs manager Joe Maddon did with his players, seeing students as people first works for teachers who hope to build cohesive classes that achieve. Maddon's strength was his emphasis on cultivating positive relationships among his players. Taking a tip from Maddon's strategy, Fornaciari, an Advanced Placement history teacher, shares…
NASA Technical Reports Server (NTRS)
2008-01-01
Hundreds of children participated in the annual Take Our Children to Work Day at Stennis Space Center on July 29. During the day, children of Stennis employees received a tour of facilities and took part in various activities, including demonstrations in cryogenics and robotics.
Picture THIS: Taking Human Impact Seriously
ERIC Educational Resources Information Center
Patrick, Patricia; Patrick, Tammy
2010-01-01
Unfortunately, middle school students often view human impact as an abstract idea over which they have no control and do not see themselves as contributing to the Earth's environmental decline. How better to uncover students' ideas concerning human impact in their local community than to have them take photographs. With this objective in mind, the…
Renew! Take a Break in Kindergarten
ERIC Educational Resources Information Center
Charlesworth, Rosalind
2005-01-01
A university child development/early childhood education professor renews her relationship with young children and with current public school teaching by spending 5 weeks in kindergarten. This article describes some highlights of her experience: the children's daily journal writing, an in-class and take-home math activity, and teaching the…
How to Prepare for and Take Examinations.
ERIC Educational Resources Information Center
Association of American Publishers, New York, NY.
This guide to preparing for, and taking, examinations was designed for college students. The booklet recommends a series of techniques for proper preparation. These include making a semester, or quarter, study plan; and appropriate scheduling of study time and determination of a study area. Good review techniques involve planning a systematic…
Teachable Moment: Google Earth Takes Us There
ERIC Educational Resources Information Center
Williams, Ann; Davinroy, Thomas C.
2015-01-01
In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…
Taking Perspective: Context, Culture, and History
ERIC Educational Resources Information Center
Suárez-Orozco, Marcelo M.; Suárez-Orozco, Carola
2013-01-01
There are important lessons to be learned from taking a comparative perspective in considering migration. Comparative examination of immigration experiences provides a way to glean common denominators of adaptation while considering the specificity of sending and receiving contexts and cultures. Equally important is a historical perspective that…
NASA Technical Reports Server (NTRS)
Schrenk, Martin
1933-01-01
As a result of previous reports, it was endeavored to obtain, along with the truest possible comprehension of the course of thrust, a complete, simple and clear formula for the whole take-off distance up to a certain altitude, which shall give the correct relative weight to all the factors.
Disentangling Adolescent Pathways of Sexual Risk Taking
ERIC Educational Resources Information Center
Brookmeyer, Kathryn A.; Henrich, Christopher C.
2009-01-01
Using data from the National Longitudinal Survey of Youth, the authors aimed to describe the pathways of risk within sexual risk taking, alcohol use, and delinquency, and then identify how the trajectory of sexual risk is linked to alcohol use and delinquency. Risk trajectories were measured with adolescents aged 15-24 years (N = 1,778). Using…
Please Take Note: Teaching Low Level Notetaking.
ERIC Educational Resources Information Center
Stanley, Karen
An introductory course in notetaking for low intermediate students of English as a second language (ESL) is described. The course is designed to give practice in notetaking techniques to college-bound ESL students before they are required to take notes with a competence equal to that of native speakers. The class begins with a discussion of common…
Role Taking in Childhood: Some Methodological Considerations
ERIC Educational Resources Information Center
Rubin, Kenneth H.
1978-01-01
Examines the convergent and discriminant validity of six widely used measures of role-taking skill. The Borke; Rothenberg; Miller, Kessel, and Flavell; Chandler; DeVries; and Glucksberg and Krauss tasks were administered to children in preschool and grades 1, 3, and 5. (Author/JMB)
String theorist takes over as Lucasian Professor
NASA Astrophysics Data System (ADS)
Banks, Michael
2009-11-01
String theorist Michael Green will be the next Lucasian Professor of Mathematics at Cambridge University. Green, 63, will succeed Stephen Hawking, who held the chair from 1980 before retiring last month at the age of 67 and taking up a distinguished research chair at the Perimeter Institute for Theoretical Physics in Canada (see above).
Perspective Taking Promotes Action Understanding and Learning
ERIC Educational Resources Information Center
Lozano, Sandra C.; Martin Hard, Bridgette; Tversky, Barbara
2006-01-01
People often learn actions by watching others. The authors propose and test the hypothesis that perspective taking promotes encoding a hierarchical representation of an actor's goals and subgoals-a key process for observational learning. Observers segmented videos of an object assembly task into coarse and fine action units. They described what…
Note taking, review, memory, and comprehension.
Bohay, Mark; Blakely, Daniel P; Tamplin, Andrea K; Radvansky, Gabriel A
2011-01-01
In previous work assessing memory at various levels of representation, namely the surface form, textbase, and situation model levels, participants read texts but were otherwise not actively engaged with the texts. The current study tested the influence of active engagement with the material via note taking, along with the opportunity to review such notes, and the modality of presentation (text vs. spoken). The influence of these manipulations was assessed both immediately and 1 week later. In Experiment 1 participants read a text, whereas in Experiment 2 participants watched a video recording of the material being read as a lecture. For each experiment the opportunity to take notes was manipulated within participants, and the opportunity to review these notes before the test was manipulated between participants. Note taking improved performance at the situation model level in both experiments, although there was also some suggestion of benefit for the surface form. Thus, active engagement with material, such as note taking, appears to have the greatest benefit at the deeper levels of understanding.
Taking It Online, and Making It Pay.
ERIC Educational Resources Information Center
Online & CD-ROM Review, 1996
1996-01-01
Discusses taking content online and payment models online based on sessions at the 1996 Internet World International conference in London (England). Highlights include publishers' decisions to reproduce materials on the World Wide Web; designing Web sites; guidelines for online content; online pricing; and the pros and cons of charging online…
Kenojuak Ashevak: "Young Owl Takes a Ride."
ERIC Educational Resources Information Center
Schwartz, Bernard
1988-01-01
Describes a lesson plan used to introduce K-3 students to a Canadian Inuit artist, to the personal and cultural context of the artwork, and to a simple printmaking technique. Includes background information on the artist, instructional strategies, and a print of the artist's "Young Owl Takes a Ride." (GEA)
Risk-Taking Behavior in Children.
ERIC Educational Resources Information Center
Kopfstein, Donald
The effects of experimenter sex and of a child's cognitive style on risk-taking behavior are reported. The subjects were 30 boys and 30 girls in the fourth grade. An adult female experimenter administered Kagan's Matching Familiar Figures task to half the children of each sex to obtain a measure of the children's reflective or…
Taking a Pulse on Your Practice.
Hoagland-Smith, Leanne
2015-01-01
Each medical practice, like a living organism, occasionally requires reading of its vital signs. As with human beings, one of those vital signs is the pulse. For your medical practice, just like your patients, there are numerous places from which to take that reading. This article reviews seven key pulses that provide insight into what is happening within the workplace culture of your practice.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, fast multipole methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
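As a hedged illustration of the baseline method these fast variants accelerate, the following sketch implements ordinary kriging with a Gaussian covariance model and a dense solve; the covariance model, length scale, and sill here are illustrative assumptions, and the paper replaces the dense solve with iterative solvers plus tapering and FMM.

```python
import numpy as np

def ordinary_kriging(xs, ys, zs, x0, y0, length=1.0, sill=1.0):
    """Estimate z at (x0, y0) from scattered samples via ordinary kriging.

    Gaussian covariance model (an assumption for this sketch); the fast
    versions described in the abstract swap the dense solve below for a
    sparse SYMMLQ iterative solver with covariance tapering and FMM.
    """
    pts = np.column_stack([xs, ys])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = sill * np.exp(-(d / length) ** 2)      # sample-sample covariances
    n = len(xs)
    # Ordinary-kriging system: covariance block bordered by the
    # unbiasedness constraint (weights sum to 1) via a Lagrange multiplier.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    d0 = np.linalg.norm(pts - np.array([x0, y0]), axis=1)
    b = np.append(sill * np.exp(-(d0 / length) ** 2), 1.0)
    w = np.linalg.solve(A, b)                    # kriging weights + multiplier
    return float(w[:n] @ zs)                     # the BLUE estimate

# Kriging is an exact interpolator: estimating at a sample location
# reproduces the sample value.
z = ordinary_kriging([0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 2.0, 3.0], 0.0, 0.0)
```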
NASA Astrophysics Data System (ADS)
Neta, B.; Mansager, B.
1992-08-01
Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual detection and acquisition sensors should be directed. Generally, use of audio cueing will shorten the times required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as for providing estimates of its velocity. The Janus combat simulation is the premier high resolution model used by the Army and other agencies to conduct research. This model has a visual detection component which essentially incorporates the algorithms described by Hartman (1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.
Fighting Censorship with Algorithms
NASA Astrophysics Data System (ADS)
Mahdian, Mohammad
In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people, k of whom are adversaries. We will also discuss how trust networks can be used in this context.
Astronaut Eileen Collins in Full Fuselage Trainer
NASA Technical Reports Server (NTRS)
1993-01-01
Astronaut Eileen M. Collins, pilot for the STS-63 mission, participates in STS-63 training at JSC's Shuttle mockup and integration laboratory. Collins is seated at the pilot's station in the Full Fuselage Trainer (FFT).
Full CI benchmark calculations on CH3
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Taylor, Peter R.
1987-01-01
Full CI calculations have been performed on the CH3 radical. The full CI results are compared to those obtained using CASSCF/multireference CI and coupled-pair functional methods, both at the equilibrium CH distance and at geometries with the three CH bonds extended. In general, the performance of the approximate methods is similar to that observed in calculations on other molecules in which one or two bonds were stretched.
Ozone Uncertainties Study Algorithm (OUSA)
NASA Technical Reports Server (NTRS)
Bahethi, O. P.
1982-01-01
An algorithm to carry out sensitivities, uncertainties and overall imprecision studies to a set of input parameters for a one dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, besides, varying the incident solar flux. This algorithm is operational on IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).
Advanced methods in global gyrokinetic full f particle simulation of tokamak transport
Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.
2006-11-30
A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can treat electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas, including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise; the effect of this noise on relevant quantities is analyzed. Turbulence spectra of FT-2 plasma have been calculated with ELMFIRE, obtaining results consistent with experimental data.
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
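The reported 1.58 bits/base should be read against the 2 bits/base floor of fixed-rate coding for a four-letter alphabet. As a hedged illustration (the actual DNABIT variable-length bit-assignment scheme for repeat fragments is not reproduced here), a minimal 2-bit packing baseline looks like this:

```python
# A fixed-rate 2-bit packing baseline for the alphabet {A, C, G, T}.
# DNABIT Compress goes further, assigning variable-length bit codes to
# repeated fragments; this sketch only shows the 2 bits/base starting
# point that repeat-aware schemes must beat to approach 1.58 bits/base.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    # Prepend a 1 as a length sentinel so leading A's (code 00)
    # survive the round trip.
    bits |= 1 << (2 * len(seq))
    return bits.to_bytes((bits.bit_length() + 7) // 8, "big")

def unpack(data: bytes) -> str:
    bits = int.from_bytes(data, "big")
    out = []
    while bits > 1:                      # stop at the sentinel bit
        out.append(BASE[bits & 0b11])
        bits >>= 2
    return "".join(reversed(out))

seq = "GATTACA"
assert unpack(pack(seq)) == seq          # lossless round trip, 2 bits/base
```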
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
Redefining Full-Time in College: Evidence on 15-Credit Strategies
ERIC Educational Resources Information Center
Klempin, Serena
2014-01-01
Because federal financial aid guidelines stipulate that students must be enrolled in a minimum of 12 credits per semester in order to receive the full amount of aid, many colleges and universities define full-time enrollment as 12 credits per semester. Yet, if a student takes only 12 credits each fall and spring term, it is impossible to complete…
Full-Polarization 3D Metasurface Cloak with Preserved Amplitude and Phase.
Yang, Yihao; Jing, Liqiao; Zheng, Bin; Hao, Ran; Yin, Wenyan; Li, Erping; Soukoulis, Costas M; Chen, Hongsheng
2016-08-01
A full-polarization arbitrarily shaped 3D metasurface cloak with preserved amplitude and phase at microwave frequencies is experimentally demonstrated. By exploiting the unique features of metasurfaces, it is shown that the cloak can completely restore the polarization, amplitude, and phase of light for full polarization, as if the light were incident on a flat mirror. PMID:27218885
A computational study of routing algorithms for realistic transportation networks
Jacob, R.; Marathe, M.V.; Nagel, K.
1998-12-01
The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions, and their associated data structures, affected the computational performance of software developed especially for realistic transportation networks. For this purpose the authors used the Dallas-Fort Worth road network at a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include (i) time-dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of the various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
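One common "modification" of Dijkstra's algorithm for one-to-one queries is simply to terminate as soon as the destination is settled rather than computing the full shortest-path tree. A minimal sketch of that idea (the toy graph and early-exit variant are illustrative assumptions, not the TRANSIMS implementation):

```python
import heapq

def dijkstra(graph, src, dst):
    """One-to-one shortest path with early termination at dst.
    graph: {node: [(neighbor, edge_weight), ...]} adjacency lists.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                      # stop as soon as the target settles
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                      # skip stale queue entries
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []               # dst unreachable

# Tiny hypothetical road graph: the two-hop route a-b-c (cost 2.5)
# beats the direct edge a-c (cost 4.0).
roads = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.5)], "c": []}
cost, path = dijkstra(roads, "a", "c")
```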
A quantum search algorithm for future spacecraft attitude determination
NASA Astrophysics Data System (ADS)
Tsai, Jack; Hsiao, Fu-Yuen; Li, Yi-Ju; Shen, Jen-Fu
2011-04-01
In this paper we study the potential application of a quantum search algorithm to spacecraft navigation, with a focus on attitude determination. Traditionally, attitude determination is achieved by recognizing the relative position/attitude with respect to the background stars using sun sensors, earth limb sensors, or star trackers. However, due to the massive celestial database, star pattern recognition is a complicated and power-consuming job. We propose a new method of attitude determination that applies the quantum search algorithm to the search for a specific star or star pattern. The quantum search algorithm, proposed by Grover in 1996, can find a specific item in an unstructured database of N entries in only O(√N) steps, compared to an average of N/2 steps on conventional computers. As a result, by taking advantage of matching a particular star in a vast celestial database in very few steps, we derive a new algorithm for attitude determination, combining Grover's search algorithm with star catalogues of apparent magnitude and absorption spectra. Numerical simulations and examples are also provided to demonstrate the feasibility and robustness of our new algorithm.
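The O(√N) behavior can be seen in a small classical simulation of Grover's amplitude dynamics (a sketch, not the paper's algorithm); the marked index stands in for the hypothetical catalogue entry matching an observed star signature:

```python
import numpy as np

def grover_success_probability(n_qubits: int, marked: int) -> float:
    """Simulate Grover iterations classically and return the probability
    of measuring the marked state after ~(pi/4)*sqrt(N) iterations."""
    N = 2 ** n_qubits
    amp = np.full(N, 1.0 / np.sqrt(N))             # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # O(sqrt(N)) steps
    for _ in range(iters):
        amp[marked] *= -1.0                        # oracle: flip marked phase
        amp = 2.0 * amp.mean() - amp               # diffusion: invert about mean
    return float(amp[marked] ** 2)

# N = 256 database entries: only 12 iterations, versus ~128 classical probes.
p = grover_success_probability(8, marked=42)
```

The success probability after this many iterations is close to 1, which is why matching one catalogue star out of N entries becomes cheap.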
An adaptive phase alignment algorithm for cartesian feedback loops
NASA Astrophysics Data System (ADS)
Gimeno-Martin, A.; Pardo-Martin, J.; Ortega-Gonzalez, F.
2010-01-01
An adaptive algorithm to correct phase misalignments in Cartesian feedback linearization loops for power amplifiers is presented. It yields an error smaller than 0.035 rad between the forward and feedback loop signals once convergence is reached. Because this algorithm enables a feedback system to process forward and feedback samples belonging to almost the same algorithm iteration, it is suitable for improving the performance not only of power amplifiers but also of any other digital feedback system for communications systems and circuits, such as all-digital phase-locked loops. Synchronizing the forward and feedback paths of a Cartesian feedback loop takes a small period of time after the system starts up: the phase alignment algorithm needs to converge before the feedback Cartesian loop can begin its ideal behavior. However, once the steady state is reached, both paths can be considered synchronized, and the Cartesian feedback loop will depend only on the loop parameters (open-loop gain, loop bandwidth, etc.). This means that the linearization process will also depend only on these parameters, since the misalignment effect disappears. Therefore, this algorithm relieves the power amplifier linearizer circuit design of any task required for solving the phase misalignment effects inherent to Cartesian feedback systems. Furthermore, when a feedback Cartesian loop has to be designed, the designer can consider the forward and feedback paths synchronized, since the phase alignment algorithm will perform this task. This reduces the simulation complexity: all efforts can be applied to determining the loop parameters that make the linearization process most efficient.
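As an illustrative sketch (not the paper's adaptive loop), the phase rotation between forward and feedback complex-baseband streams can be estimated by correlating the two and taking the angle of the result, then de-rotating the feedback path:

```python
import numpy as np

def estimate_phase_offset(forward: np.ndarray, feedback: np.ndarray) -> float:
    """Return the phase (rad) by which `feedback` is rotated from `forward`.

    np.vdot conjugates its first argument, so the correlation's angle is
    the magnitude-weighted average rotation between the two streams.
    """
    return float(np.angle(np.vdot(forward, feedback)))

# Hypothetical I/Q sample streams: the feedback path is the forward path
# rotated by 0.7 rad (a stand-in for the loop's phase misalignment).
rng = np.random.default_rng(0)
fwd = rng.standard_normal(256) + 1j * rng.standard_normal(256)
fb = fwd * np.exp(1j * 0.7)
phi = estimate_phase_offset(fwd, fb)
fb_aligned = fb * np.exp(-1j * phi)        # de-rotate before loop comparison
```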
Audit, Benjamin; Baker, Antoine; Chen, Chun-Long; Rappailles, Aurélien; Guilbaud, Guillaume; Julienne, Hanna; Goldar, Arach; d'Aubenton-Carafa, Yves; Hyrien, Olivier; Thermes, Claude; Arneodo, Alain
2013-01-01
In this protocol, we describe the use of the LastWave open-source signal-processing command language (http://perso.ens-lyon.fr/benjamin.audit/LastWave/) for analyzing cellular DNA replication timing profiles. LastWave makes use of a multiscale, wavelet-based signal-processing algorithm that is based on a rigorous theoretical analysis linking timing profiles to fundamental features of the cell's DNA replication program, such as the average replication fork polarity and the difference between replication origin density and termination site density. We describe the flow of signal-processing operations to obtain interactive visual analyses of DNA replication timing profiles. We focus on procedures for exploring the space-scale map of apparent replication speeds to detect peaks in the replication timing profiles that represent preferential replication initiation zones, and for delimiting U-shaped domains in the replication timing profile. In comparison with the generally adopted approach that involves genome segmentation into regions of constant timing separated by timing transition regions, the present protocol enables the recognition of more complex patterns of the spatio-temporal replication program and has a broader range of applications. Completing the full procedure should not take more than 1 h, although learning the basics of the program can take a few hours and achieving full proficiency in the use of the software may take days.
Developing A Navier-Stokes Algorithm For Supercomputers
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1992-01-01
Report discusses development of algorithm for solution of Navier-Stokes equations of flow on parallel-processing supercomputers. Involves combination of prior techniques to form algorithm to compute flows in complicated three-dimensional configurations. Includes explicit finite-difference numerical-integration scheme applicable to flows represented by hierarchy of mathematical models ranging from Euler to full Navier-Stokes. Of interest to researchers looking for ways to structure problems for greater computational efficiency.
Internal quantum efficiency analysis of solar cell by genetic algorithm
Xiong, Kanglin; Yang, Hui; Lu, Shulong; Zhou, Taofei; Wang, Rongxin; Qiu, Kai; Dong, Jianrong; Jiang, Desheng
2010-11-15
To investigate factors limiting the performance of a GaAs solar cell, a genetic algorithm is employed to fit the experimentally measured internal quantum efficiency (IQE) over the full spectral range. Device parameters such as diffusion lengths and surface recombination velocities are extracted. Electron beam induced current (EBIC) measurements are performed in the base region of the cell, with the obtained diffusion length agreeing with the fit result. The advantage of the genetic algorithm is illustrated. (author)
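A toy real-coded genetic algorithm of this kind is sketched below. The two-parameter model y = exp(-x/L)·(1 - s·x) is a hypothetical stand-in (L and s loosely playing the roles of a diffusion length and a surface-recombination term); the actual IQE model in the paper is more involved.

```python
import numpy as np

def fit_ga(x, y, bounds, pop=60, gens=120, seed=1):
    """Recover parameters (L, s) of y = exp(-x/L)*(1 - s*x) by a simple
    real-coded GA: truncation selection, blend crossover, Gaussian
    mutation, and elitism. An illustrative sketch, not the paper's GA."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))        # random initial pop

    def err(p):
        return np.sum((np.exp(-x / p[0]) * (1 - p[1] * x) - y) ** 2)

    for _ in range(gens):
        f = np.array([err(p) for p in P])
        order = np.argsort(f)
        elite = P[order[0]].copy()
        parents = P[order][: pop // 2]                  # truncation selection
        a = parents[rng.integers(0, len(parents), pop)]
        b = parents[rng.integers(0, len(parents), pop)]
        w = rng.uniform(size=(pop, 1))
        P = w * a + (1 - w) * b                         # blend crossover
        P += rng.normal(0, 0.01, P.shape) * (hi - lo)   # Gaussian mutation
        P = np.clip(P, lo, hi)
        P[0] = elite                                    # elitism: keep the best
    f = np.array([err(p) for p in P])
    return P[np.argmin(f)]

# Noiseless synthetic "measurement" with L = 0.8, s = 0.2.
x = np.linspace(0.1, 2.0, 40)
true = np.exp(-x / 0.8) * (1 - 0.2 * x)
L, s = fit_ga(x, true, bounds=[(0.1, 2.0), (0.0, 0.5)])
```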
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by…
The Social Perspective Taking Process: What Motivates Individuals to Take Another's Perspective?
ERIC Educational Resources Information Center
Gehlbach, Hunter; Brinkworth, Maureen E.; Wang, Ming-Te
2012-01-01
Background/Context: A growing literature describes multiple benefits of social perspective taking--many of which are particularly important for schools. Despite these potential benefits for administrators, counselors, teachers, and students, little is known about social perspective taking as a process. Purpose/Research Question: If educational…
Teaching Test-Taking and Note-Taking Skills to Learning Disabled High School Students.
ERIC Educational Resources Information Center
Anderman, Robert C.; Williams, Jane M.
The materials were developed to help prepare eleventh and twelfth graders to be successful in an academic environment when their school history indicated little chance for success. The booklet includes instructional materials to teach test-taking and note-taking, two skills many failing students lack. A syllabus is included for each unit along…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
...NMFS received an application from ConocoPhillips Company (COP) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to COP to take, by…
ERIC Educational Resources Information Center
Gehlbach, Hunter; Brinkworth, Maureen E.
2012-01-01
Background/Context: Research indicates that social perspective taking--the capacity to discern the thoughts and feelings of others--plays a role in many important outcomes in schools. Despite the potential benefits for students and educators, little is known about social perspective taking (SPT) as a process. Purpose/Research Question: If…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-20
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration RIN 0648-XC564 Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to Marine Seismic Survey in the Beaufort Sea,...
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may emphasize either accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition, and the MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
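The select/subtract loop that MPD++ accelerates can be sketched in a few lines. This is a bare-bones matching pursuit over an orthonormal toy dictionary (an assumption for illustration; real dictionaries are large, overcomplete, and parameterized):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iters=3):
    """Classical matching pursuit.
    dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    Returns the chosen atom indices, their coefficients, and the residual.
    """
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iters):
        corr = dictionary @ residual              # cross-correlate every atom
        k = int(np.argmax(np.abs(corr)))          # pick the best-fit atom
        atoms.append(k)
        coeffs.append(float(corr[k]))
        residual -= corr[k] * dictionary[k]       # subtract its contribution
    return atoms, coeffs, residual

# With an orthonormal dictionary, a 2-term signal is recovered exactly
# in 2 iterations (residual goes to zero).
D = np.eye(4)
sig = 3.0 * D[1] + 1.0 * D[3]
atoms, coeffs, res = matching_pursuit(sig, D, n_iters=2)
```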
ICESat-2 / ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.
2013-12-01
This Simulator makes it possible to check all logic paths that could be encountered by the Algorithms on orbit. In addition, the NASA airborne instrument MABEL is collecting data with characteristics similar to what ATLAS will see; MABEL data is being used to test the ATLAS Receiver Algorithms. Further verification will be performed during integration and testing of the ATLAS instrument and during environmental testing of the full ATLAS instrument. Results from testing to date show the Receiver Algorithms can handle a wide range of signal and noise levels, with very good sensitivity at relatively low signal-to-noise ratios. In addition, preliminary tests have demonstrated, using the ICESat-2 Science Team's selected land ice and sea ice test cases, the capability of the Algorithms to successfully find and telemeter the surface echoes. In this presentation we describe the ATLAS Flight Science Receiver Algorithms and the Software Simulator, and present results of the testing to date. The onboard databases (DEM, DRM and the Surface Reference Mask) are being developed at the University of Texas at Austin as part of the ATLAS Flight Science Receiver Algorithms. Verification of the onboard databases is being performed by ATLAS Receiver Algorithms team members Claudia Carabajal and Jack Saba.
Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)
2001-01-01
An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.
Large-scale sequential quadratic programming algorithms
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
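As a minimal illustration of the basic SQP structure the abstract builds on (not the paper's large-scale reduced-Hessian algorithm), the following toy problem is solved by full-space SQP steps: each iteration forms the KKT system of the QP subproblem with the exact Hessian of the Lagrangian and solves it directly.

```python
# Toy problem:  minimize f(x, y) = (x - 2)^2 + (y - 1)^2
#               subject to c(x, y) = x^2 + y^2 - 1 = 0
# Each SQP iteration solves the QP subproblem's KKT system with the exact
# Hessian of the Lagrangian. (The paper instead maintains a quasi-Newton
# approximation to the *reduced* Hessian; this sketch only shows the
# overall iteration.)

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sqp(x, y, lam=0.0, iters=20):
    for _ in range(iters):
        gf = [2 * (x - 2), 2 * (y - 1)]   # gradient of f
        c = x * x + y * y - 1             # constraint value
        a = [2 * x, 2 * y]                # constraint Jacobian
        h = 2 + 2 * lam                   # Hessian of Lagrangian (scaled identity)
        # KKT system of the QP subproblem: [H A^T; A 0][d; mu] = [-g; -c]
        K = [[h, 0, a[0]], [0, h, a[1]], [a[0], a[1], 0]]
        d0, d1, mu = solve3(K, [-gf[0], -gf[1], -c])
        x, y, lam = x + d0, y + d1, mu
    return x, y

x, y = sqp(1.0, 1.0)   # converges to the closest point on the unit circle to (2, 1)
```

The paper's contribution is to make this iteration viable when the KKT system is huge: sparse data structures, a reduced-Hessian quasi-Newton update in place of `h`, and inexact QP solves.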
A Two-Tier Full-Information Item Factor Analysis Model with Applications
ERIC Educational Resources Information Center
Cai, Li
2010-01-01
Motivated by Gibbons et al.'s (Appl. Psychol. Meas. 31:4-19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and Rijmen, Vansteelandt, and De Boeck's (Psychometrika 73:167-182, 2008) work on constructing computationally efficient estimation algorithms for latent variable models, a two-tier item…
Using Nearest-Neighbour Searching Techniques to Access Full-Text Documents.
ERIC Educational Resources Information Center
Al-Hawamdeh, Suliman; And Others
1991-01-01
Describes a project at the University of Sheffield that is investigating the use of nearest-neighbor retrieval algorithms for full-text searching. Nearest-neighbor searching is compared with Boolean retrieval and hypertext, and an experimental text retrieval system called INSTRUCT (Interactive System for Teaching Retrieval Using Computational…
A full multigrid method for eigenvalue problems
NASA Astrophysics Data System (ADS)
Chen, Hongtao; Xie, Hehu; Xu, Fei
2016-10-01
In this paper, a full (nested) multigrid scheme is proposed to solve eigenvalue problems. The idea is to use a correction method that transforms the solution of the eigenvalue problem into a series of solutions of corresponding boundary value problems, together with eigenvalue problems defined on a very low-dimensional finite element space. The boundary value problems, which are defined on a sequence of multilevel finite element spaces, can be solved by a few multigrid iteration steps. The computational work of this new scheme can reach the same optimal order as solving the corresponding boundary value problem by the full multigrid method. Therefore, this type of full multigrid method improves the overall efficiency of eigenvalue problem solving.
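The correction idea, replacing an eigenvalue solve by repeated boundary-value solves plus a cheap eigenvalue update, can be sketched on the 1-D Laplacian. In the sketch below a direct tridiagonal (Thomas) solve stands in for the multigrid cycles of the actual method, and a Rayleigh-quotient update stands in for the low-dimensional eigenproblem; both substitutions are assumptions made to keep the example short.

```python
import math

# Model problem: -u'' = lambda * u on (0, 1), u(0) = u(1) = 0, discretized
# on n interior points. Each outer step solves one boundary value problem
# A v = u (here directly; a multigrid cycle in the paper's scheme) and then
# updates the eigenvalue via the Rayleigh quotient.

def thomas_solve(n, h2, rhs):
    """Solve (1/h^2) * tridiag(-1, 2, -1) u = rhs by the Thomas algorithm."""
    a, b, c = -1.0 / h2, 2.0 / h2, -1.0 / h2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def rayleigh(u, h2):
    """Rayleigh quotient u^T A u for a normalized vector u."""
    n = len(u)
    Au = [(2 * u[i] - (u[i - 1] if i > 0 else 0.0)
           - (u[i + 1] if i < n - 1 else 0.0)) / h2 for i in range(n)]
    return sum(x * y for x, y in zip(u, Au))

def smallest_eigenvalue(n=200, iters=25):
    h2 = (1.0 / (n + 1)) ** 2
    u = [1.0] * n                        # crude initial guess
    for _ in range(iters):
        u = thomas_solve(n, h2, u)       # one boundary-value solve per step
        norm = math.sqrt(sum(x * x for x in u))
        u = [x / norm for x in u]
    return rayleigh(u, h2)

lam = smallest_eigenvalue()   # should approach pi^2 for this model problem
```

The point of the paper's scheme is that the linear solves dominate the cost, so doing them by multigrid gives the whole eigenvalue computation optimal-order complexity.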
Full-duplex optical communication system
NASA Technical Reports Server (NTRS)
Shay, Thomas M. (Inventor); Hazzard, David A. (Inventor); Horan, Stephen (Inventor); Payne, Jason A. (Inventor)
2004-01-01
A method of full-duplex electromagnetic communication wherein a pair of data modulation formats are selected for the forward and return data links respectively such that the forward data electro-magnetic beam serves as a carrier for the return data. A method of encoding optical information is used wherein right-hand and left-hand circular polarizations are assigned to optical information to represent binary states. An application for an earth to low earth orbit optical communications system is presented which implements the full-duplex communication and circular polarization keying modulation format.
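The circular polarization keying format can be sketched with Jones vectors: each bit selects one circular polarization, and the receiver decides by the handedness of the received field. The bit-to-handedness mapping and the sign convention of the discriminant below are assumptions for illustration, not taken from the patent.

```python
# Circular polarization keying (CPK) sketch. Each bit is transmitted as one
# circular polarization state, represented by its Jones vector (Ex, Ey).
# The receiver decides by the sign of a Stokes-V-like handedness
# discriminant; mapping and sign convention are assumed here.
R = 2 ** -0.5
RHCP = (R, -1j * R)   # one circular handedness
LHCP = (R,  1j * R)   # the opposite handedness

def modulate(bits):
    return [RHCP if b else LHCP for b in bits]

def handedness(e):
    ex, ey = e
    return (ex * ey.conjugate()).imag   # +1/2 for RHCP above, -1/2 for LHCP

def demodulate(symbols):
    return [1 if handedness(e) > 0 else 0 for e in symbols]

bits = [1, 0, 1, 1, 0, 0, 1]
recovered = demodulate(modulate(bits))
```

Because the two symbols differ only in polarization handedness, the same optical beam can simultaneously carry the forward data, which is what enables the full-duplex arrangement described above.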
First Months of Data Taking at LHC
Parodi, Fabrizio
2005-10-12
The ATLAS and CMS detectors will start taking data at the LHC (a proton-proton collider operating at a center-of-mass energy of 14 TeV) in summer 2007. In this article I will review the commissioning of the two detectors before the start of the LHC and the analysis of the first pp collision data (10 pb-1), devoted mainly to calibration purposes. I will also briefly review the first physics measurements aimed at understanding the detectors' performance.
Astronaut Jack Lousma taking hot bath
NASA Technical Reports Server (NTRS)
1973-01-01
A closeup view of Astronaut Jack R. Lousma, Skylab 3 pilot, taking a hot bath in the crew quarters of the Orbital Workshop (OWS) of the Skylab space station cluster in Earth orbit. In deploying the shower facility, the shower curtain is pulled up from the floor and attached to the ceiling. The water comes through a push-button shower head attached to a flexible hose. Water is drawn off by a vacuum system.
Full-color 3D display using binary phase modulation and speckle reduction
NASA Astrophysics Data System (ADS)
Matoba, Osamu; Masuda, Kazunobu; Harada, Syo; Nitta, Kouichi
2016-06-01
A 3D display system for full-color reconstruction using binary phase modulation is presented. The quality of the reconstructed objects is improved by optimizing the binary phase modulation and by accumulating speckle patterns obtained with different random phase distributions. The binary phase pattern is optimized by the modified Fresnel ping-pong algorithm. Numerical and experimental demonstrations of full-color reconstruction are presented.
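The ping-pong optimization is a Gerchberg-Saxton-style iteration: propagate between the hologram and image planes, binarize the phase in one plane, and impose the target amplitude in the other. The 1-D sketch below uses a plain DFT in place of the Fresnel propagation of the paper's 2-D algorithm; that substitution, and all parameter choices, are assumptions for brevity.

```python
import cmath
import math

# Gerchberg-Saxton-style "ping-pong" sketch for a binary phase hologram in
# 1-D. A plain DFT stands in for Fresnel propagation (an assumption); the
# hologram-plane constraint is a binary (0 or pi) phase with unit amplitude,
# and the image-plane constraint is the target amplitude.

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(s * 2j * math.pi * k * m / n) for m in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def binary_phase_hologram(target_amp, iters=20):
    n = len(target_amp)
    field = [complex(a) for a in target_amp]        # start from the target
    phase = [0.0] * n
    for _ in range(iters):
        h = dft(field, inverse=True)                # back to hologram plane
        # binarize: quantize to phase 0 or pi, unit amplitude
        phase = [0.0 if v.real >= 0 else math.pi for v in h]
        h_bin = [cmath.exp(1j * p) for p in phase]
        img = dft(h_bin)                            # forward propagation
        # image-plane constraint: impose target amplitude, keep the phase
        field = [a * (v / abs(v) if abs(v) > 1e-12 else 1.0)
                 for a, v in zip(target_amp, img)]
    return phase

target = [1.0 if 8 <= i < 16 else 0.0 for i in range(32)]
phase = binary_phase_hologram(target)
```

Accumulating reconstructions from several runs with different random initial phases, as the abstract describes, averages the residual speckle that any single binary hologram produces.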
Algorithm Engineering - An Attempt at a Definition
NASA Astrophysics Data System (ADS)
Sanders, Peter
This paper defines algorithm engineering as a general methodology for algorithmic research. The main process in this methodology is a cycle consisting of algorithm design, analysis, implementation and experimental evaluation that resembles Popper’s scientific method. Important additional issues are realistic models, algorithm libraries, benchmarks with real-world problem instances, and a strong coupling to applications. Algorithm theory with its process of subsequent modelling, design, and analysis is not a competing approach to algorithmics but an important ingredient of algorithm engineering.
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
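The CUMPOIS source is not shown here; the sketch below achieves the same goal by working in log space, with `lgamma` supplying log-factorials and a log-sum-exp replacing CUMPOIS's temporary scaling factors (that substitution is an assumption).

```python
import math

# Cumulative Poisson distribution  P(X <= k) = exp(-lam) * sum_{n=0}^{k} lam^n / n!
# evaluated in log space so that neither lam^n nor n! underflows or overflows,
# the failure mode the abstract describes for naive implementations.

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def cum_poisson(k, lam):
    """P(X <= k) for X ~ Poisson(lam), k >= 0, lam > 0."""
    terms = [n * math.log(lam) - lam - math.lgamma(n + 1) for n in range(k + 1)]
    return math.exp(log_sum_exp(terms))

small = cum_poisson(2, 1.0)       # tractable by hand: (1 + 1 + 1/2) / e
large = cum_poisson(900, 1000.0)  # naive lam^n / n! would overflow here
```

For `lam = 1000`, the individual terms involve numbers like 1000^900 and 900!, both far outside double-precision range, yet their log-space combination stays well conditioned.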
Interpolation algorithms for machine tools
Burleson, R.R.
1981-08-01
There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher-order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
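The digital differential analyzer, the first interpolator type listed, can be sketched as accumulator-overflow pulse generation: each axis adds its commanded distance into a fixed-width register every clock tick, and each overflow emits one step pulse. The register width and the two-axis linear case below are illustrative assumptions.

```python
# Digital differential analyzer (DDA) sketch for straight-line interpolation.
# Each axis has an n-bit accumulator; every tick the commanded distance is
# added, and an overflow emits one step pulse on that axis, so the pulse
# rates are proportional to dx and dy and the axes finish together.

def dda_line(dx, dy, nbits=16):
    """Generate (step_x, step_y) pulses moving from (0, 0) to (dx, dy)."""
    full = 1 << nbits
    acc_x = acc_y = 0
    steps = []
    for _ in range(full):          # one interpolation cycle per clock tick
        acc_x += dx
        acc_y += dy
        sx = acc_x >= full
        sy = acc_y >= full
        if sx:
            acc_x -= full
        if sy:
            acc_y -= full
        if sx or sy:
            steps.append((int(sx), int(sy)))
    return steps

steps = dda_line(5, 3)   # exactly 5 x-pulses and 3 y-pulses, interleaved
```

The pulse-rate and binary-rate multipliers achieve the same effect with different hardware: they gate a clock by the bits of the commanded distance rather than accumulating it.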
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
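The abstract does not enumerate the individual routines, so as a representative example of the kind of enhancement function such a package contains, here is a minimal linear contrast stretch; the function name and behavior are illustrative assumptions, not taken from the NASA code.

```python
# Linear contrast stretch: remap pixel values so the darkest pixel becomes
# `lo` and the brightest becomes `hi`, a staple enhancement for low-contrast
# nondestructive-evaluation imagery.

def contrast_stretch(img, lo=0, hi=255):
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:                     # flat image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round(lo + (p - mn) * scale) for p in row] for row in img]

out = contrast_stretch([[10, 20], [30, 40]])   # -> [[0, 85], [170, 255]]
```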
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
Are Full-Time MBAs Performing?
ERIC Educational Resources Information Center
Rowland, Caroline Ann; Hall, Roger David
2012-01-01
Full-time MBA students amount to about one-third of the 26,000 students enrolled on MBA programmes at UK universities. The programmes have become increasingly international in student composition and concerns have been expressed about performance, quality and comparability between programmes. Research into predictors of MBA success has been…
Treatment of Childhood Encopresis: Full Cleanliness Training
ERIC Educational Resources Information Center
Arnold, Susan; Doleys, Daniel M.
1975-01-01
Full Cleanliness Training (a procedure in which the trainee is required to correct the results of inappropriate toileting behavior by cleaning himself and his clothing) was used in combination with positive reinforcement to deal with a trainable retarded 8 year old boy with encopresis and a toilet phobia. (Author/CL)
Full Inclusion: The Least Restrictive Environment
ERIC Educational Resources Information Center
Mullings, Shirley E.
2011-01-01
The purpose of the phenomenological study was to examine elementary educators' perceptions of full inclusion as the least restrictive environment for students with disabilities. Thirty-six teachers and administrators participated in interviews and responded to multiple-choice survey items. The recorded data from the interviews were…
Full Text Journal Subscriptions: An Evolutionary Process.
ERIC Educational Resources Information Center
Luther, Judy
1997-01-01
Provides an overview of companies offering Web accessible subscriptions to full text electronic versions of scientific, technical, and medical journals (Academic Press, Blackwell, EBSCO, Elsevier, Highwire Press, Information Quest, Institute of Physics, Johns Hopkins University Press, OCLC, OVID, Springer, and SWETS). Also lists guidelines for…
Towards Full Employment in a Modern Society.
ERIC Educational Resources Information Center
Department for Education and Employment, London (England).
This document outlines how the government of the United Kingdom intends to achieve and sustain full employment and social justice across the country. Chapter 1 discusses the United Kingdom's economic, educational, and social problems and details plans to solve them through a policy based on the following principles: (1) building an economy with…
Full Disclosure: New and Responsible Attitudes.
ERIC Educational Resources Information Center
Arnold, Joanne E.
Broad influences impinge upon the question of "full disclosure," a question that asks what information about the conduct of public affairs should be made available to the public. Increasingly, state laws require the disclosure of all information about the conduct of public business, the receipt and expenditure of public funds, and the outcomes of…
Keeping Rural Schools up to Full Speed
ERIC Educational Resources Information Center
Beesley, Andrea
2011-01-01
Rural schools are long accustomed to meeting challenges in innovative ways. For them, the challenge is not so much a lack of technology as it is adequate internet access, which affects both teachers and students. In this article, the author discusses how to keep rural schools up to full speed. The author suggests that the best approach when…
Full and Partial Cloaking in Electromagnetic Scattering
NASA Astrophysics Data System (ADS)
Deng, Youjun; Liu, Hongyu; Uhlmann, Gunther
2016-08-01
In this paper, we consider two regularized transformation-optics cloaking schemes for electromagnetic (EM) waves. Both schemes are based on the blowup construction with the generating sets being, respectively, a generic curve and a planar subset. We derive sharp asymptotic estimates in assessing the cloaking performances of the two constructions in terms of the regularization parameters and the geometries of the cloaking devices. The first construction yields an approximate full-cloak, whereas the second construction yields an approximate partial-cloak. Moreover, by incorporating properly chosen conducting layers, both cloaking constructions are capable of nearly cloaking arbitrary EM contents. This work complements the existing results in Ammari et al. (SIAM J Appl Math 73:2055-2076, 2013), Bao and Liu (SIAM J Appl Math 74:724-742, 2014), Bao et al. (J Math Pure Appl (9) 101:716-733, 2014) on approximate EM cloaks with the generating set being a singular point, and it also extends Deng et al. (On regularized full- and partial-cloaks in acoustic scattering. Preprint, arXiv:1502.01174, 2015), Li et al. (Commun Math Phys 335:671-712, 2015) on regularized full and partial cloaks for acoustic waves governed by the Helmholtz system to the more challenging EM case governed by the full Maxwell system.
Strontium Removal: Full-Scale Ohio Demonstrations
The objectives of this presentation are to present a brief overview of past bench-scale research evaluating the impact of lime softening on strontium removal from drinking water, and to present full-scale drinking water treatment studies on the impact of lime softening and ion exchange sof...
Aircraft Engineering Conference 1934 - Full Scale Tunnel
NASA Technical Reports Server (NTRS)
1934-01-01
Gathered together in the only facility big enough to hold them, attendees at Langley's 1934 Aircraft Engineering Conference pose in the Full Scale Wind Tunnel underneath a Boeing P-26A Peashooter. Present, among other notables, were Orville Wright, Charles Lindbergh, and Howard Hughes.
Astronaut Eileen Collins in Full Fuselage Trainer
NASA Technical Reports Server (NTRS)
1993-01-01
Astronaut Eileen M. Collins, pilot for the STS-63 mission, participates in STS-63 training at JSC's Shuttle mockup and integration laboratory. Collins is seated at the pilot's station in the Full Fuselage Trainer (FFT) (48403-4); Collins looks out the aft flight deck window in the Shuttle mockup trainer (48405).
Reconfigurable Full-Page Braille Displays
NASA Technical Reports Server (NTRS)
Garner, H. Douglas
1994-01-01
Electrically actuated braille display cells of proposed type arrayed together to form full-page braille displays. Like other braille display cells, these provide changeable patterns of bumps driven by digitally recorded text stored on magnetic tapes or in solid-state electronic memories. Proposed cells contain electrorheological fluid. Viscosity of such fluid increases in strong electrostatic field.
The case for full practice authority.
Holmes, Olivia; Kinsey-Weathers, Shanieka
2016-03-01
The Institute of Medicine (IOM) recommended in its 2010 report on the future of nursing that advanced practice registered nurses (APRNs) should factor prominently in providing care to the millions of Americans who access healthcare services under the Affordable Care Act (ACA). The IOM also recommended that APRNs practice to the full extent of their education and training. However, many states have laws in place that limit full practice authority for APRNs, specifically NPs, in providing basic health services such as primary care. These laws place restrictions on independent practice and Medicaid and Medicare reimbursement, which prevent nurses from “responding effectively to rapidly changing health care settings and an evolving health care system.” Fewer than half of U.S. states have adopted full practice authority licensure and practice laws (see APRN practice authority at a glance). This article discusses how the primary care needs of millions of Americans can be met by granting full practice authority to APRNs nationwide and provides evidence to support the high level of care these practitioners can provide independently. PMID:26910092
Community Schools: A Full-Spectrum Resource
ERIC Educational Resources Information Center
Gómez, David; Gonzales, Lisa; Niebuhr, Deanna; Villarreal, Lisa
2012-01-01
Meeting the needs of the whole child is the goal of community schools, which partner with other agencies to offer a range of services and opportunities. In this article the authors discuss community schools as a full-spectrum resource for families and children, and give an example of how community schools are able to provide for the needs of…
Generalized Full-Information Item Bifactor Analysis
ERIC Educational Resources Information Center
Cai, Li; Yang, Ji Seung; Hansen, Mark
2011-01-01
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single-group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of…
MAMA Full Field Sensitivity Monitor & PSF Check
NASA Astrophysics Data System (ADS)
Proffitt, Charles
2009-07-01
The purpose of this program is to monitor the sensitivity of the MAMA detectors over the full field. This is achieved by observing globular cluster NGC 6681 once during Cycle 17. The data can be directly compared with similar data obtained in Cycles 7, 8, 9, 10, 11, and 12.
Adaptive, full-spectrum solar energy system
Muhs, Jeffrey D.; Earl, Dennis D.
2003-08-05
An adaptive full spectrum solar energy system having at least one hybrid solar concentrator, at least one hybrid luminaire, at least one hybrid photobioreactor, and a light distribution system operably connected to each hybrid solar concentrator, each hybrid luminaire, and each hybrid photobioreactor. A lighting control system operates each component.