A sparse matrix based full-configuration interaction algorithm.
Rolik, Zoltán; Szabados, Ágnes; Surján, Péter R.
2008-04-14
We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) an iteration procedure that avoids the storage of FCI-size vectors and (ii) an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations, which may otherwise be a bottleneck of the procedure, can be skipped. At point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits FCI calculations on single-node workstations for systems previously accessible only with supercomputers.
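A minimal sketch of the central operation, a Hamiltonian-times-vector product that keeps the result sparse via a dropout threshold, is shown below; the dictionary-based storage and the names `sparse_matvec` and `drop_tol` are illustrative assumptions, not the authors' implementation.

```python
def sparse_matvec(h_elements, c, drop_tol=1e-8):
    """Apply a symmetric Hamiltonian stored as {(i, j): value} (upper
    triangle) to a sparse coefficient vector c stored as {i: value},
    keeping the product sparse by dropping entries below drop_tol."""
    result = {}
    for (i, j), h_ij in h_elements.items():
        if j in c:
            result[i] = result.get(i, 0.0) + h_ij * c[j]
        if i != j and i in c:  # symmetric counterpart of the element
            result[j] = result.get(j, 0.0) + h_ij * c[i]
    # dropout step: this is what keeps memory bounded between iterations
    return {k: v for k, v in result.items() if abs(v) >= drop_tol}
```

Tightening `drop_tol` trades memory for accuracy, mirroring the abstract's statement that the SFCI error depends only on the dropout thresholds.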
Full motion video geopositioning algorithm integrated test bed
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Braun, Aaron; Theiss, Henry; Gurson, Adam
2015-05-01
In order to better understand the issues associated with Full Motion Video (FMV) geopositioning and to develop corresponding strategies and algorithms, an integrated test bed is required. It is used to evaluate the performance of various candidate algorithms associated with registration of the video frames and subsequent geopositioning using the registered frames. Major issues include reliable error propagation or predicted solution accuracy; optimal vs. suboptimal vs. divergent solutions; robust processing in the presence of poor or non-existent a priori estimates of sensor metadata; difficulty in the measurement of tie points between adjacent frames; poor imaging geometry, including small field-of-view and little vertical relief; and the absence of control points. The test bed modules must be integrated with appropriate data flows between them. The test bed must also ingest or generate real and simulated data and support evaluation of the corresponding performance based on module-internal metrics as well as comparisons to real or simulated "ground truth". Selection of the appropriate modules and algorithms must be operator-specifiable as well as available in a fully automatic mode. An FMV test bed with the above characteristics has been developed and continues to be improved. The paper describes its overall design as well as key underlying algorithms, including a recent update to "A matrix" generation, which allows computation of arbitrary inter-frame error cross-covariance matrices associated with Kalman filter (KF) registration under a dynamic state vector definition; this is necessary for rigorous error propagation when the contents or definition of the KF state vector changes due to added or dropped tie points. Performance for a tested scenario is also presented.
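The test bed's dynamic-state Kalman filter is beyond a short example, but the scalar measurement update underlying any KF registration step can be sketched as follows; this is the textbook formula, not the test bed's code.

```python
def kf_update(x, p, z, r):
    """Scalar Kalman filter measurement update: prior state x with
    variance p, measurement z with variance r (textbook form)."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # updated state estimate
    p_new = (1.0 - k) * p    # updated (reduced) state variance
    return x_new, p_new
```

The error-propagation issues the abstract raises arise because, in the real system, `x` and `p` become a vector and covariance matrix whose dimensions change as tie points are added or dropped.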
Complex algorithm of optical flow determination by weighted full search
NASA Astrophysics Data System (ADS)
Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.
2016-11-01
An optical flow determination algorithm is proposed, developed, and tested in this article. The algorithm is aimed at improving the accuracy of displacement determination at the boundaries of scene elements (objects). The results show that the proposed algorithm is quite promising for stereo vision applications. Varying the calculation parameters made it possible to determine their rational values and to reduce the average absolute error of the end-point displacement determination (AEE). A peculiarity of the proposed algorithm is that calculations are performed within local regions, which makes it possible to carry them out simultaneously (i.e., to exploit parallel computation).
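A minimal sketch of full-search displacement estimation within a local region, the kind of exhaustive block matching the abstract builds on, assuming a plain sum-of-absolute-differences cost; the weighting scheme of the actual algorithm is omitted.

```python
def full_search_displacement(ref, cur, x, y, block=3, radius=2):
    """Estimate the displacement of the block centered at (x, y) in ref
    by exhaustive (full) search over cur within +/-radius, minimizing
    the sum of absolute differences. Images are lists of lists of
    intensities; border handling is omitted for brevity."""
    half = block // 2
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = 0
            for j in range(-half, half + 1):
                for i in range(-half, half + 1):
                    cost += abs(ref[y + j][x + i] - cur[y + j + dy][x + i + dx])
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

Because each local region is processed independently, calls like this can run in parallel over the image, which is the parallelism the article points out.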
Full design of fuzzy controllers using genetic algorithms
NASA Technical Reports Server (NTRS)
Homaifar, Abdollah; McCormick, Ed
1992-01-01
This paper examines the applicability of genetic algorithms (GAs) to the complete design of fuzzy logic controllers. While GAs have previously been used to develop rule sets or high-performance membership functions, the interdependence between these two components dictates that they should be designed simultaneously. A GA is fully capable of creating a complete fuzzy controller given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.
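A toy real-coded GA loop of the kind that could evolve both membership-function parameters and rule consequents in a single chromosome, so that the two interdependent parts are optimized together; the encoding, operators, and parameter values here are illustrative, not those of the paper.

```python
import random

def evolve(fitness, n_genes, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA (minimization). In a fuzzy-controller
    application, each chromosome would concatenate membership-function
    parameters and rule consequents, so both are designed at once."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # averaging crossover plus Gaussian mutation
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=fitness)
```

For a real controller the fitness would simulate the plant (e.g., the cart) under the candidate controller and score the trajectory.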
Algorithm for registration of full Scanning Laser Ophthalmoscope video sequences.
Mariño, C; Ortega, M; Barreira, N; Penedo, M G; Carreira, M J; González, F
2011-04-01
Fluorescein angiography is an established technique for examining the functional integrity of the retinal microcirculation for early detection of changes due to retinopathy. This paper describes a new method for the registration of large Scanning Laser Ophthalmoscope (SLO) sequences, where the patient has been injected with a fluorescent dye. This allows the measurement of parameters such as the arteriovenous passage time. Due to the long time needed to acquire these sequences, there will inevitably be eye movement, which must be corrected prior to the application of quantitative analysis. The algorithm described here combines mutual-information-based registration and landmark-based registration. The former allows the alignment of the darkest frames of the sequence, where the dye has not yet arrived at the retina, because of its ability to work with images without preprocessing or segmentation, while the latter uses relevant features (the vessels), extracted by means of a robust creaseness operator, to achieve a very fast and accurate registration. The algorithm only detects rigid transformations but proves robust against the slight alterations caused by the eye location perspective during acquisition. Results were validated by expert clinicians.
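The mutual information criterion used for the dark early frames follows directly from joint and marginal intensity histograms; the bin count and intensity range below are illustrative choices, not the paper's.

```python
from math import log

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Mutual information (in nats) between two equally sized grayscale
    images given as flat lists of ints in [0, max_val). Higher values
    indicate better alignment, with no preprocessing required."""
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        key = (a * bins // max_val, b * bins // max_val)
        joint[key] = joint.get(key, 0) + 1
    pa, pb = {}, {}
    for (i, j), c in joint.items():  # marginal histograms
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        p_ij = c / n
        mi += p_ij * log(p_ij * n * n / (pa[i] * pb[j]))
    return mi
```

A registration loop would evaluate this measure over candidate rigid transforms and keep the one maximizing it.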
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between the model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
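The standard performance metrics such comparisons rest on, speedup and parallel efficiency, plus Amdahl's upper bound, can be computed as follows (a generic sketch, not the paper's evaluation code):

```python
def parallel_metrics(t_serial, t_parallel, n_procs):
    """Speedup S = T1/Tp and parallel efficiency E = S/p."""
    speedup = t_serial / t_parallel
    efficiency = speedup / n_procs
    return speedup, efficiency

def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: upper bound on speedup when a fraction of the
    work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)
```

Comparing measured speedup against the Amdahl bound indicates how much of the remaining cost is serial or communication overhead (relevant when mixing MPI with CUDA).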
The wavenumber algorithm for full-matrix imaging using an ultrasonic array.
Hunter, Alan J; Drinkwater, Bruce W; Wilcox, Paul D
2008-11-01
Ultrasonic imaging using full-matrix capture, e.g., via the total focusing method (TFM), has been shown to increase angular inspection coverage and improve sensitivity to small defects in nondestructive evaluation. In this paper, we develop a Fourier-domain approach to full-matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension to the wavenumber algorithm for full-matrix data is described and the performance of the new algorithm compared with the TFM, which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data.
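The TFM benchmark is simple enough to sketch: delay-and-sum of the full matrix capture data over every transmitter-receiver pair for each image point. The nearest-sample lookup and the default wave speed below are simplifying assumptions, not details from the paper.

```python
from math import hypot

def tfm_image(fmc, elem_x, times, grid, c=6300.0):
    """Total focusing method sketch. fmc[tx][rx][sample] holds the
    full matrix capture A-scans for a linear array with element
    x-positions elem_x; grid is a list of (px, pz) image points.
    Nearest-sample lookup, no interpolation or apodization; the
    default wave speed (m/s) is an assumed longitudinal speed."""
    dt = times[1] - times[0]
    image = []
    for px, pz in grid:
        val = 0.0
        for tx, x_tx in enumerate(elem_x):
            d_tx = hypot(px - x_tx, pz)  # transmitter-to-pixel path
            for rx, x_rx in enumerate(elem_x):
                t = (d_tx + hypot(px - x_rx, pz)) / c  # total time of flight
                k = int(round((t - times[0]) / dt))
                if 0 <= k < len(times):
                    val += fmc[tx][rx][k]
        image.append(abs(val))
    return image
```

The cost is O(pixels x elements^2) per image, which is why the Fourier-domain wavenumber algorithm's speed advantage grows with array and image size.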
The Wavenumber Algorithm: Fast Fourier-Domain Imaging Using Full Matrix Capture
NASA Astrophysics Data System (ADS)
Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.
2009-03-01
We develop a Fourier-domain approach to full matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension to the wavenumber algorithm for full matrix capture is described and the performance of the new algorithm is compared to the total focusing method (TFM), which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data.
Fast, Conservative Algorithm for Solving the Transonic Full-Potential Equation
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1980-01-01
A fast, fully implicit approximate factorization algorithm designed to solve the conservative, transonic, full-potential equation in either two or three dimensions is described. The algorithm uses an upwind bias of the density coefficient for stability in supersonic regions. This provides an effective upwind difference of the streamwise terms for any orientation of the velocity vector (i.e., rotated differencing), thereby greatly enhancing the reliability of the present algorithm. A numerical transformation is used to establish an arbitrary body-fitted, finite-difference mesh. Computed results for both airfoils and simplified wings demonstrate substantial improvement in convergence speed for the new algorithm relative to standard successive-line over-relaxation algorithms.
NASA Technical Reports Server (NTRS)
Steger, J. L.; Caradonna, F. X.
1980-01-01
An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.
Bramble, J.H.; Pasciak, J.E.
1992-03-01
In this paper, we provide uniform estimates for V-cycle algorithms with one smoothing on each level. This theory is based on some elliptic regularity but does not require a smoother interaction hypothesis (sometimes referred to as a strengthened Cauchy-Schwarz inequality) assumed in other theories. Thus, it is a natural extension of the full-regularity V-cycle estimates provided by Braess and Hackbusch.
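A V-cycle with a single pre- and post-smoothing on each level, the setting these estimates cover, can be sketched for a 1D model Poisson problem; weighted Jacobi smoothing, full-weighting restriction, and linear interpolation are illustrative choices, not part of the paper's theory.

```python
def smooth(u, f, h):
    """One sweep of weighted Jacobi for the 1D model problem -u'' = f
    with homogeneous Dirichlet boundary values."""
    w = 2.0 / 3.0
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return new

def v_cycle(u, f, h):
    """One multigrid V-cycle with a single pre- and post-smoothing
    on each level (grid sizes are powers of two)."""
    n = len(u) - 1
    if n == 2:  # coarsest grid: one unknown, solve exactly
        u = u[:]
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h)                 # one pre-smoothing
    r = [0.0] * (n + 1)                 # residual of -u'' = f
    for i in range(1, n):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    rc = [0.0] * (n // 2 + 1)           # full-weighting restriction
    for i in range(1, n // 2):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = v_cycle([0.0] * (n // 2 + 1), rc, 2 * h)  # coarse-grid correction
    u = u[:]
    for i in range(1, n):               # linear interpolation of the correction
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return smooth(u, f, h)              # one post-smoothing
```

The paper's contribution is the convergence theory for exactly this kind of one-smoothing cycle, without a strengthened Cauchy-Schwarz assumption.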
Application of a Chimera Full Potential Algorithm for Solving Aerodynamic Problems
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1997-01-01
A numerical scheme utilizing a chimera zonal grid approach for solving the three dimensional full potential equation is described. Special emphasis is placed on describing the spatial differencing algorithm around the chimera interface. Results from two spatial discretization variations are presented; one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The presentation is highlighted with a number of transonic wing flow field computations.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2008-04-01
This paper presents a full algorithm to compute the solution of the unsupervised unmixing problem based on positive matrix factorization. The algorithm estimates the number of endmembers as the rank of the matrix and has an initialization stage using the SVD subset selection algorithm. Testing and validation with real and simulated data show the effectiveness of the method, and an application of the approach to environmental remote sensing is presented.
NASA Technical Reports Server (NTRS)
Biermann, David; Hartman, Edwin P
1938-01-01
Tests were made of eight full-scale propellers of different shape at various tip speeds up to about 1,000 feet per second. The range of blade-angle settings investigated was from 10 degrees to 30 degrees at the 0.75 radius. The results indicate that a loss in propulsive efficiency occurred at tip speeds from 0.5 to 0.7 the velocity of sound for the take-off and climbing conditions. As the tip speed increased beyond these critical values, the loss rapidly increased and amounted, in some instances, to more than 20 percent of the thrust power for tip-speed values of 0.8 the speed of sound. In general, as the blade-angle setting was increased, the loss started to occur at lower tip speeds. The maximum loss for a given tip speed occurred at a blade-angle setting of about 20 degrees for the take-off and 25 degrees for the climbing condition. A simplified method for correcting propellers for the effect of compressibility is given in an appendix.
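The tip-speed ratios discussed above follow from elementary kinematics; a sketch of the helical tip Mach number, assuming a sea-level speed of sound of roughly 1116 ft/s (the propeller geometry and speeds here are illustrative, not from the tests):

```python
from math import pi, hypot

def tip_mach(rpm, diameter_ft, forward_speed_fps=0.0,
             speed_of_sound_fps=1116.0):
    """Helical tip Mach number of a propeller: rotational tip speed
    combined vectorially with the forward (axial) speed."""
    rotational = pi * diameter_ft * rpm / 60.0  # tip circumference per second
    return hypot(rotational, forward_speed_fps) / speed_of_sound_fps
```

The report's critical region of 0.5 to 0.7 of the speed of sound corresponds to `tip_mach` values of 0.5 to 0.7.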
NASA Astrophysics Data System (ADS)
Xiang, Shiming; Zhang, Haijiang
2016-11-01
It is known that full-waveform inversion (FWI) is generally ill-conditioned, and various strategies, including pre-conditioning and regularizing the inversion system, have been proposed to obtain a reliable estimate of the velocity model. Here, we propose a new edge-guided strategy for frequency-domain FWI to efficiently and reliably estimate velocity models with structures of a size similar to the seismic wavelength. The edges of the velocity model at the current iteration are first detected by the Canny edge detection algorithm, which is widely used in image processing. The detected edges are then used to guide the calculation of the FWI gradient and to enforce edge-preserving total variation (TV) regularization for the next FWI iteration. Bilateral filtering is further applied to remove noise from the FWI gradient while keeping its edges. The proposed edge-guided frequency-domain FWI with edge-guided TV regularization and bilateral filtering is designed to preserve model edges recovered from previous iterations as well as from lower-frequency waveforms when FWI proceeds from lower to higher frequencies. The new FWI method is validated using the complex Marmousi model, which contains several steeply dipping fault zones and hundreds of horizons. Compared to FWI without edge guidance, the proposed edge-guided FWI recovers velocity anomalies and edges much better. Unlike previous image-guided FWI or edge-guided TV regularization strategies, our method does not require migrating the seismic data and is therefore more efficient for real applications.
Chlorophyll fluorescence: implementation in the full physics RemoTeC algorithm
NASA Astrophysics Data System (ADS)
Hahne, Philipp; Frankenberg, Christian; Hasekamp, Otto; Landgraf, Jochen; Butz, André
2014-05-01
Several operating and future satellite missions are dedicated to enhancing our understanding of the carbon cycle. They infer the atmospheric concentrations of carbon dioxide and methane from shortwave infrared absorption spectra of sunlight backscattered from Earth's atmosphere and surface. Exhibiting high spatial and temporal resolution, the inferred gas concentration databases provide valuable information for inverse modelling of source and sink processes at the Earth's surface. However, the inversion of sources and sinks requires highly accurate total column CO2 (XCO2) and CH4 (XCH4) measurements, which remains a challenge. Recently, Frankenberg et al. (2012) showed that, besides XCO2 and XCH4, chlorophyll fluorescence can be retrieved from sounders such as GOSAT by exploiting Fraunhofer lines in the vicinity of the O2 A-band. This has two implications: (a) chlorophyll fluorescence, itself a proxy for photosynthetic activity, yields new information on carbon cycle processes, and (b) neglect of the fluorescence signal can induce errors in the retrieved greenhouse gas concentrations. Our RemoTeC full physics algorithm iteratively retrieves the target gas concentrations XCO2 and XCH4 along with atmospheric scattering properties and other auxiliary parameters. The radiative transfer model (RTM) LINTRAN provides RemoTeC with the single- and multiple-scattered intensity field and its analytically calculated derivatives. Here, we report on the implementation of a fluorescence light source at the lower boundary of our RTM. Processing three years of GOSAT data, we evaluate the performance of the refined retrieval method. To this end, we compare different retrieval configurations, using the s- and p-polarization detectors independently and combined, and validate against independent data sources.
Liu, Zhongyi; Sun, Wenyu; Tian, Fangbao
2009-10-15
This paper proposes an infeasible interior-point algorithm with full-Newton step for linear programming, which is an extension of the work of Roos (SIAM J. Optim. 16(4):1110-1136, 2006). The main iteration of the algorithm consists of a feasibility step and several centrality steps. We introduce a kernel function into the algorithm to induce the feasibility step. For a parameter p ∈ [0, 1], polynomial complexity can be proved, and the result coincides with the best known bound for infeasible interior-point methods, namely O(n log(n/ε)).
NASA Astrophysics Data System (ADS)
Adhikari, Loknath; Xie, Feiqin; Haase, Jennifer S.
2016-10-01
With a GPS receiver on board an airplane, the airborne radio occultation (ARO) technique provides dense lower-tropospheric soundings over target regions. Large variations in water vapor in the troposphere cause strong signal multipath, which could lead to systematic errors in RO retrievals with the geometric optics (GO) method. The spaceborne GPS RO community has successfully developed the full-spectrum inversion (FSI) technique to solve the multipath problem. This paper is the first to adapt the FSI technique to retrieve atmospheric properties (bending and refractivity) from ARO signals, where it is necessary to compensate for the receiver traveling on a non-circular trajectory inside the atmosphere, and its use is demonstrated using an end-to-end simulation system. The forward-simulated GPS L1 (1575.42 MHz) signal amplitude and phase are used to test the modified FSI algorithm. The ARO FSI method is capable of reconstructing the fine vertical structure of the moist lower troposphere in the presence of severe multipath, which otherwise leads to large retrieval errors in the GO retrieval. The sensitivity of the modified FSI-retrieved bending angle and refractivity to errors in signal amplitude and errors in the measured refractivity at the receiver is presented. Accurate bending angle retrievals can be obtained from the surface up to ˜ 250 m below the receiver at typical flight altitudes above the tropopause, above which the retrieved bending angle becomes highly sensitive to the phase measurement noise. Abrupt changes in the signal amplitude that are a challenge for receiver tracking and geometric optics bending angle retrieval techniques do not produce any systematic bias in the FSI retrievals when the SNR is high. For very low SNR, the FSI performs as expected from theoretical considerations. The 1 % in situ refractivity measurement errors at the receiver height can introduce a maximum refractivity retrieval error of 0.5 % (1 K) near the receiver, but
Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A
2014-03-06
The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms, pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB), implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as the reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique with 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated, with the same number of monitor units (MUs) and identical field settings, using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, against which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated, with the same number of MUs and identical field settings, with the AXB algorithm and then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement in the light of 3D gamma analysis and DVH comparison against the full MC simulation, especially for large PTVs, but larger discrepancies were found for smaller PTVs. Gamma agreement index (GAI) values between 95.5% and 99.6% for all the plans with the threshold criteria 3%/3 mm were achieved, but 2%/2 mm
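The gamma analysis used for these comparisons combines a dose-difference and a distance-to-agreement criterion. A 1D sketch of the gamma index and the gamma agreement index (GAI) is given below, with an absolute rather than percentage dose tolerance for simplicity; clinical implementations work in 3D on interpolated dose grids.

```python
from math import sqrt, inf

def gamma_index(ref_dose, eval_dose, positions, dose_tol, dist_tol):
    """1D gamma analysis: for each reference point, the minimum
    combined dose-difference / distance-to-agreement metric over all
    evaluated points. gamma <= 1 means the point passes."""
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        best = inf
        for x_e, d_e in zip(positions, eval_dose):
            g = sqrt(((d_e - d_r) / dose_tol) ** 2
                     + ((x_e - x_r) / dist_tol) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

def gamma_agreement_index(gammas):
    """Fraction of points with gamma <= 1 (the GAI reported above)."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)
```

A criterion of 3%/3 mm corresponds to `dose_tol` equal to 3% of the reference dose and `dist_tol` of 3 mm.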
Full-Featured Search Algorithm for Negative Electron-Transfer Dissociation.
Riley, Nicholas M; Bern, Marshall; Westphall, Michael S; Coon, Joshua J
2016-08-05
Negative electron-transfer dissociation (NETD) has emerged as a premier tool for peptide anion analysis, offering access to acidic post-translational modifications and regions of the proteome that are intractable with traditional positive-mode approaches. Whole-proteome scale characterization is now possible with NETD, but proper informatic tools are needed to capitalize on advances in instrumentation. Currently only one database search algorithm (OMSSA) can process NETD data. Here we implement NETD search capabilities into the Byonic platform to improve the sensitivity of negative-mode data analyses, and we benchmark these improvements using 90 min LC-MS/MS analyses of tryptic peptides from human embryonic stem cells. With this new algorithm for searching NETD data, we improved the number of successfully identified spectra by as much as 80% and identified 8665 unique peptides, 24 639 peptide spectral matches, and 1338 proteins in activated-ion NETD analyses, more than doubling identifications from previous negative-mode characterizations of the human proteome. Furthermore, we reanalyzed our recently published large-scale, multienzyme negative-mode yeast proteome data, improving peptide and peptide spectral match identifications and considerably increasing protein sequence coverage. In all, we show that new informatics tools, in combination with recent advances in data acquisition, can significantly improve proteome characterization in negative-mode approaches.
Analysis of full charge reconstruction algorithms for x-ray pixelated detectors
Baumbaugh, A.; Carini, G.; Deptuch, G.; Grybos, P.; Hoff, J.; Maj, P.; Siddons, D. P.; Szczygiel, R.; Trimpl, M.; Yarema, R. (Fermilab)
2011-11-01
The natural diffusive spread of charge carriers during their drift towards the collecting electrodes of planar, segmented detectors divides the original cloud of carriers between neighboring channels. This paper presents an analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation for the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.
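One family of charge reconstruction algorithms sums a cluster's charge back onto the pixel holding the local maximum, recovering the full deposited charge despite sharing; the sketch below is a simplified software illustration, not the circuit-level solutions analyzed in the paper.

```python
def reconstruct_hits(frame, threshold):
    """Charge-summing sketch for a pixelated detector: a hit is
    registered at a local charge maximum, and the charge of its 3x3
    neighborhood is summed back to recover the full photon signal
    lost to charge sharing. Ties between equal neighboring maxima
    are an unhandled corner case in this simplified version."""
    ny, nx = len(frame), len(frame[0])
    hits = []
    for y in range(ny):
        for x in range(nx):
            q = frame[y][x]
            if q <= 0:
                continue
            neigh = [frame[j][i]
                     for j in range(max(0, y - 1), min(ny, y + 2))
                     for i in range(max(0, x - 1), min(nx, x + 2))]
            if q == max(neigh):  # the local maximum owns the cluster
                total = sum(c for c in neigh if c > 0)
                if total >= threshold:
                    hits.append((x, y, total))
    return hits
```

Without the neighborhood sum, a photon split across two pixels could fall below threshold in both, degrading detective quantum efficiency, which is exactly the failure the paper's algorithms are designed to prevent.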
Analysis of Full Charge Reconstruction Algorithms for X-Ray Pixelated Detectors
Baumbaugh, A.; Carini, G.; Deptuch, G.; Grybos, P.; Hoff, J.; Maj, P.; Siddons, D. P.; Szczygiel, R.; Trimpl, M.; Yarema, R. (Fermilab)
2012-05-21
The natural diffusive spread of charge carriers during their drift towards the collecting electrodes of planar, segmented detectors divides the original cloud of carriers between neighboring channels. This paper presents an analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation for the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.
Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images
NASA Astrophysics Data System (ADS)
Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas
2015-03-01
The diaphragm is a sheet of muscle which separates the thorax from the abdomen, and it acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then properly connecting the relevant parts of their outlines. More specifically, the bottom surface of the lungs and heart, the spine borders, and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface that passes through these points. The algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
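The Hausdorff distance used to validate the segmentation can be sketched directly from its definition; this is the brute-force point-set version (the paper quotes an averaged variant, mentioned in the comment), not the authors' implementation.

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets: the
    largest distance from any point of one set to the nearest point
    of the other. Averaging the per-point minima instead of taking
    the max gives the average Hausdorff distance quoted in the text."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def directed(p, q):
        return max(min(dist(a, b) for b in q) for a in p)

    return max(directed(points_a, points_b), directed(points_b, points_a))
```

For surfaces sampled as point clouds, a small Hausdorff distance certifies that no part of the automatic segmentation strays far from the manual one.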
Full Waveform 3D Synthetic Seismic Algorithm for 1D Layered Anelastic Models
NASA Astrophysics Data System (ADS)
Schwaiger, H. F.; Aldridge, D. F.; Haney, M. M.
2007-12-01
Numerical calculation of synthetic seismograms for 1D layered earth models remains a significant aspect of amplitude-offset investigations, surface wave studies, microseismic event location approaches, and reflection interpretation or inversion processes. Compared to 3D finite-difference algorithms, memory demand and execution time are greatly reduced, enabling rapid generation of seismic data within workstation or laptop computational environments. We have developed a frequency-wavenumber forward modeling algorithm adapted to realistic 1D geologic media, for the purpose of calculating seismograms accurately and efficiently. The earth model consists of N layers bounded by two halfspaces. Each layer/halfspace is a homogeneous and isotropic anelastic (attenuative and dispersive) solid, characterized by a rectangular relaxation spectrum of absorption mechanisms. Compressional and shear phase speeds and quality factors are specified at a particular reference frequency. The solution methodology involves 3D Fourier transforming the three coupled, second-order, integro-differential equations for particle displacements to the frequency-horizontal-wavenumber domain. An analytic solution of the resulting ordinary differential system is obtained. Imposition of welded interface conditions (continuity of displacement and stress) at all interfaces, as well as radiation conditions in the two halfspaces, yields a system of 6(N+1) linear algebraic equations for the coefficients in the ODE solution. An optimized inverse 2D Fourier transform to the space domain gives the seismic wavefield on a horizontal plane. Finally, three-component seismograms are obtained by accumulating frequency spectra at designated receiver positions on this plane, followed by a 1D inverse FFT from angular frequency ω to time. Stress-free conditions may be applied at the top or bottom interfaces, and seismic waves are initiated by force or moment density sources. Examples reveal that including attenuation
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-Dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute force enumeration method, and (ii) a constrained least square minimization algorithm proposed by us. Next, since 2D spectra fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. In order to demonstrate the feasibility of parallel computing in the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, allowing domain scientists to grasp percentage differences easily without inspecting the raw data.
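The per-pixel fit described above can be illustrated with a minimal sketch of the brute-force enumeration approach (method (i)) for the two-component case: scan candidate weights on a grid, constrain them to sum to 1, and keep the pair minimizing the squared misfit. The function name and grid step are illustrative, not the authors' code.

```python
def fit_pixel_brute(spectrum, standards, step=0.01):
    """Brute-force enumeration for two chemical components: try weights
    w1 on a grid, set w2 = 1 - w1, and keep the pair that minimizes the
    squared misfit between the weighted sum of standard spectra and the
    pixel's measured attenuation spectrum."""
    s1, s2 = standards
    best_w, best_err = 0.0, float("inf")
    n = int(round(1.0 / step))
    for k in range(n + 1):
        w1 = k * step
        err = sum((w1 * a + (1.0 - w1) * b - y) ** 2
                  for a, b, y in zip(s1, s2, spectrum))
        if err < best_err:
            best_w, best_err = w1, err
    return best_w, 1.0 - best_w
```

Since each pixel is fitted independently, this loop is trivially parallel over pixels, which is what the cluster implementation in the paper exploits.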
NASA Astrophysics Data System (ADS)
Stratoudaki, Theodosia; Clark, Matt; Wilcox, Paul D.
2017-02-01
Laser ultrasonics is a technique where lasers are used for the generation and detection of ultrasound instead of conventional piezoelectric transducers. The technique is broadband, non-contact, and couplant free, suitable for large stand-off distances, inspection of components of complex geometries and hazardous environments. In this paper, array imaging is presented by obtaining the full matrix of all possible laser generation, laser detection combinations in the array (Full Matrix Capture), at the nondestructive, thermoelastic regime. An advanced imaging technique developed for conventional ultrasonic transducers, the Total Focusing Method (TFM), is adapted for laser ultrasonics and then applied to the captured data, focusing at each point of the reconstruction area. In this way, the beamforming and steering of the ultrasound is done during the post processing. A 1-D laser induced ultrasonic phased array is synthesized with significantly improved spatial resolution and defect detectability. In this study, shear waves are used for the imaging, since they are more efficiently produced than longitudinal waves in the nondestructive, thermoelastic regime. Experimental results are presented from nondestructive, laser ultrasonic inspection of aluminum samples with side drilled holes and slots at depths varying between 5 and 20 mm from the surface.
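The TFM delay-and-sum reconstruction described above can be sketched in a few lines: for each image point, sum every transmit-receive trace of the full matrix at the time-of-flight from transmitter to point to receiver. This is a simplified single-speed, direct-path form (no interpolation or envelope detection), not the authors' implementation.

```python
import math

def tfm_image(fmc, elem_x, grid, c, fs):
    """Total Focusing Method over Full Matrix Capture data.
    fmc[t][r] is the sampled time trace for transmitter t / receiver r,
    elem_x are element x-positions on the surface (z = 0), grid is a list
    of (x, z) image points, c is the wave speed, fs the sampling rate."""
    image = []
    for x, z in grid:
        acc = 0.0
        for t, xt in enumerate(elem_x):
            d_tx = math.hypot(x - xt, z)       # transmitter -> image point
            for r, xr in enumerate(elem_x):
                d_rx = math.hypot(x - xr, z)   # image point -> receiver
                idx = int(round((d_tx + d_rx) / c * fs))
                trace = fmc[t][r]
                if idx < len(trace):
                    acc += trace[idx]
        image.append(abs(acc))
    return image
```

All beamforming happens in this post-processing loop, which is why the array can be synthesized from pointwise laser generation/detection positions.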
Determination of full piezoelectric complex parameters using gradient-based optimization algorithm
NASA Astrophysics Data System (ADS)
Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.
2016-02-01
At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by the knowledge of the material properties. In the case of piezoelectric ceramics, the full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful solution to obtaining piezoceramic properties consists of comparing the experimental measurement of the impedance curve and the results of a numerical model by using the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full piezoelectric complex parameters in the FEM model. Once implemented, the method only requires the experimental data (impedance modulus and phase data acquired by an impedometer), material density, geometry, and initial values for the properties. This method combines a FEM routine implemented using an 8-noded axisymmetric element with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is minimizing the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to consider resonance and antiresonance frequencies). To assure the convergence of the optimization procedure, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or an unfeasible solution. Two experimental examples using PZ27 and APC850 samples are presented to test the precision of the method and to check the dependency of the frequency range used, respectively.
Hayer, Katharina E.; Pizarro, Angel; Lahens, Nicholas F.; Hogenesch, John B.; Grant, Gregory R.
2015-01-01
Motivation: Because of the advantages of RNA sequencing (RNA-Seq) over microarrays, it is gaining widespread popularity for highly parallel gene expression analysis. For example, RNA-Seq is expected to be able to provide accurate identification and quantification of full-length splice forms. A number of informatics packages have been developed for this purpose, but short reads make it a difficult problem in principle. Sequencing error and polymorphisms add further complications. It has become necessary to perform studies to determine which algorithms perform best and which if any algorithms perform adequately. However, there is a dearth of independent and unbiased benchmarking studies. Here we take an approach using both simulated and experimental benchmark data to evaluate their accuracy. Results: We conclude that most methods are inaccurate even using idealized data, and that no method is highly accurate once multiple splice forms, polymorphisms, intron signal, sequencing errors, alignment errors, annotation errors and other complicating factors are present. These results point to the pressing need for further algorithm development. Availability and implementation: Simulated datasets and other supporting information can be found at http://bioinf.itmat.upenn.edu/BEERS/bp2 Supplementary information: Supplementary data are available at Bioinformatics online. Contact: hayer@upenn.edu PMID:26338770
Optimized MPPT algorithm for boost converters taking into account the environmental variables
NASA Astrophysics Data System (ADS)
Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel
2016-07-01
This paper presents a study on the specific behavior of the boost DC-DC converters generally used for power conversion of PV panels connected to a HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, due to the physical dependency of parameters such as the input voltage, the output voltage, and the duty cycle of the PWM switching control of the MPPT. As a direct consequence, many converters connected together on the same load perturb each other through the output voltage variations induced by fluctuations on the HVDC bus, essentially due to non-negligible bus impedance. In this paper we show that it is possible to include an internally computed variable responsible for compensating local and external variations, thereby taking the environmental variables into account.
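For context, a baseline MPPT update can be sketched as a perturb-and-observe step that hill-climbs the duty cycle; the paper's compensation variable for bus-induced perturbations is not part of this sketch, which only shows the tracker being optimized.

```python
def po_step(p, prev_p, duty, direction, delta=0.005):
    """One perturb-and-observe MPPT step on a boost converter's duty cycle:
    keep perturbing in the same direction if panel power increased since
    the last step, otherwise reverse direction. Clamps duty to a safe range.
    (Illustrative baseline, not the algorithm proposed in the paper.)"""
    if p < prev_p:
        direction = -direction          # last perturbation reduced power
    duty = min(max(duty + direction * delta, 0.05), 0.95)
    return duty, direction
```

In steady state the tracker oscillates around the maximum power point with an amplitude set by `delta`; it is exactly this operating point that output-voltage fluctuations on a shared HVDC bus disturb.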
Madsen, Niels K; Godtliebsen, Ian H; Christiansen, Ove
2017-04-07
Vibrational coupled-cluster (VCC) theory provides an accurate method for calculating vibrational spectra and properties of small to medium-sized molecules. Obtaining these properties requires the solution of the non-linear VCC equations which can in some cases be hard to converge depending on the molecule, the basis set, and the vibrational state in question. We present and compare a range of different algorithms for solving the VCC equations ranging from a full Newton-Raphson method to approximate quasi-Newton models using an array of different convergence-acceleration schemes. The convergence properties and computational cost of the algorithms are compared for the optimization of VCC states. This includes both simple ground-state problems and difficult excited states with strong non-linearities. Furthermore, the effects of using tensor-decomposed solution vectors and residuals are investigated and discussed. The results show that for standard ground-state calculations, the conjugate residual with optimal trial vectors algorithm has the shortest time-to-solution although the full Newton-Raphson method converges in fewer macro-iterations. Using decomposed tensors does not affect the observed convergence rates in our test calculations as long as the tensors are decomposed to sufficient accuracy.
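The full Newton-Raphson scheme compared above can be illustrated in its simplest scalar form; the paper's quasi-Newton variants replace the exact Jacobian with cheaper approximations. Names and tolerances here are illustrative.

```python
def newton_raphson(residual, deriv, x0, tol=1e-10, max_iter=50):
    """Full Newton-Raphson for a scalar nonlinear equation r(x) = 0:
    repeatedly correct x by the residual divided by its exact derivative,
    the 1-D analogue of solving the non-linear VCC equations."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / deriv(x)   # quasi-Newton methods approximate deriv(x)
    return x
```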
NASA Astrophysics Data System (ADS)
Brossier, R.
2011-04-01
Full waveform inversion (FWI) is an appealing seismic data-fitting procedure for the derivation of high-resolution quantitative models of the subsurface at various scales. Full modelling and inversion of visco-elastic waves from multiple seismic sources allow for the recovering of different physical parameters, although they remain computationally challenging tasks. An efficient massively parallel, frequency-domain FWI algorithm is implemented here on large-scale distributed-memory platforms for imaging two-dimensional visco-elastic media. The resolution of the elastodynamic equations, as the forward problem of the inversion, is performed in the frequency domain on unstructured triangular meshes, using a low-order finite element discontinuous Galerkin method. The linear system resulting from discretization of the forward problem is solved with a parallel direct solver. The inverse problem, which is presented as a non-linear local optimization problem, is solved in parallel with a quasi-Newton method, and this allows for reliable estimation of multiple classes of visco-elastic parameters. Two levels of parallelism are implemented in the algorithm, based on message passing interfaces and multi-threading, for optimal use of computational time and the core-memory resources available on modern distributed-memory multi-core computational platforms. The algorithm allows for imaging of realistic targets at various scales, ranging from near-surface geotechnic applications to crustal-scale exploration.
ERIC Educational Resources Information Center
Philadelphia Youth Network, 2006
2006-01-01
The title of this year's annual report has particular meaning for all of the staff at the Philadelphia Youth Network. The phrase derives from Philadelphia Youth Network's (PYN's) new vision statement, developed as part of its recent strategic planning process, which reads: All of our city's young people take their rightful places as full and…
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
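The gradient scaling by the diagonal Hessian mentioned above amounts to a preconditioned gradient update; a one-line illustrative form (not the authors' code) is:

```python
def scaled_gradient_update(model, grad, diag_hess, step, eps=1e-12):
    """One gradient-method iteration where the diagonal of the Hessian
    scales the gradient, equalizing sensitivity across parameters; eps
    guards against division by vanishing curvature."""
    return [m - step * g / (h + eps)
            for m, g, h in zip(model, grad, diag_hess)]
```

For a purely quadratic misfit this scaling makes a unit step exact in each parameter, which is why it provides a suitable normalization of the gradient in practice.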
Klymenko, M. V.; Remacle, F.
2014-10-28
A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex logic ternary operations are directly implemented on an extremely simple device, characterized by small sizes and low-energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.
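The objective function described above — the maximal absolute error of device outputs against the ideal base-3 truth tables — is easy to state in code. This sketch assumes two trit inputs with no carry-in, which is one plausible reading of the two-gate-electrode encoding.

```python
def max_abs_error(device_out, ideal_out):
    """Objective minimized by the genetic algorithm: the worst-case
    absolute deviation of the device's logic outputs from the ideal
    truth-table values."""
    return max(abs(d - i) for d, i in zip(device_out, ideal_out))

def ideal_sum_carry():
    """Ideal base-3 sum and carry-out truth tables over the 9 input
    combinations of two trits (carry-in omitted in this sketch)."""
    pairs = [(a, b) for a in range(3) for b in range(3)]
    return ([(a + b) % 3 for a, b in pairs],
            [(a + b) // 3 for a, b in pairs])
```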
Spencer, W.A.; Goode, S.R.
1997-10-01
ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix-matched standards. Information needed to track and correct the matrix errors is contained in the emission spectrum. But most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and the nebulization rate are reflected by changes in the hydrogen line widths, the oxygen emission, and neutral ion line ratios. Argon and off-line emissions provide a measure to correct the power level and the background scattering occurring in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses, and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP that has an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improved peak intensity measurement instead of the common pixel averaging approach used in the commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that utilizes standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.
Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo
2016-06-01
New version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system has a model for the 4 MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with the most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD, and also in regions of large dose gradients. Still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23.
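The "3% or 2 mm" acceptance criterion used above can be sketched as a per-point pass/fail test: a point passes if its dose agrees within 3% of the reference maximum, or if a reference point within 2 mm carries an agreeing dose. This is a simplified 1-D distance-to-agreement check, not the authors' evaluation code or the full gamma index.

```python
def passes_3pct_2mm(x, d_ref, d_eval, dose_tol=0.03, dist_tol=2.0):
    """Per-point '3% or 2 mm' comparison of an evaluated dose profile
    against a reference. x: positions in mm; doses in the same units,
    with the tolerance taken relative to the reference maximum."""
    dmax = max(d_ref)
    result = []
    for xi, dr, de in zip(x, d_ref, d_eval):
        if abs(de - dr) <= dose_tol * dmax:      # direct dose agreement
            result.append(True)
            continue
        # distance-to-agreement: any nearby reference point with this dose?
        ok = any(abs(xj - xi) <= dist_tol and abs(rj - de) <= dose_tol * dmax
                 for xj, rj in zip(x, d_ref))
        result.append(ok)
    return result
```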
Ahn, Hye Shin; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
Objective To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. Materials and Methods During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige®), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Results Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. Conclusion The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software. PMID:24843234
Kress, R.L.; Jansen, J.F.; Noakes, M.W.
1994-05-01
When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purposes of this paper are to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.
Zhukov, V A; Shishkina, L N; Safatov, A S; Sergeev, A A; P'iankov, O V; Petrishchenko, V A; Zaĭtsev, B N; Toporkov, V S; Sergeev, A N; Nesvizhskiĭ, Iu V; Vorob'ev, A A
2010-01-01
The paper presents results of testing a modified algorithm for predicting virus ID50 values in a host of interest by extrapolation from a model host taking into account immune neutralizing factors and thermal inactivation of the virus. The method was tested for A/Aichi/2/68 influenza virus in SPF Wistar rats, SPF CD-1 mice and conventional ICR mice. Each species was used as a host of interest while the other two served as model hosts. Primary lung and trachea cells and secretory factors of the rats' airway epithelium were used to measure parameters needed for the purpose of prediction. Predicted ID50 values were not significantly different (p = 0.05) from those experimentally measured in vivo. The study was supported by ISTC/DARPA Agreement 450p.
Felton, Mistique C; Cashin, Cheryl E; Brown, Timothy T
2010-10-01
The need to move mental health systems toward more recovery-oriented treatment modes is well established. Progress has been made to define needed changes but evidence is lacking about the resources required to implement them. The Mental Health Services Act (MHSA) in California was designed to implement more recovery-oriented treatment modes. We use data from county funding requests and annual updates to examine how counties budgeted for recovery-oriented programs targeted to different age groups under MHSA. Findings indicate that initial per-client budgeting for Full Services Partnerships under MHSA was maintained in future cycles and counties budgeted less per client for children. With this analysis, we begin to benchmark resource allocation for programs that are intended to be recovery-oriented, which should be evaluated against appropriate outcome measures in the future to determine the degree of recovery-orientation.
Taking Full Advantage of Children's Literature
ERIC Educational Resources Information Center
Serafini, Frank
2012-01-01
Teachers need a deeper understanding of the texts being discussed, in particular the various textual and visual aspects of picturebooks themselves, including the images, written text and design elements, to support how readers made sense of these texts. As teachers become familiar with aspects of literary criticism, art history, visual grammar,…
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies, proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into three main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute in parallel the gradient of the cost function. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor
... remembering to take them. Some over-the-counter products, supplements, or natural remedies can interfere with the effectiveness of your prescribed medicines. Tell your diabetes educator about ANY supplements you are taking so ...
ERIC Educational Resources Information Center
Merson, Martha, Ed.; Reuys, Steve, Ed.
1999-01-01
Following an introduction on "Taking Risks" (Martha Merson), this journal contains 11 articles on taking risks in teaching adult literacy, mostly by educators in the Boston area. The following are included: "My Dreams Are Bigger than My Fears Now" (Sharon Carey); "Making a Pitch for Poetry in ABE [Adult Basic…
A full field, 3-D velocimeter for microgravity crystallization experiments
NASA Technical Reports Server (NTRS)
Brodkey, Robert S.; Russ, Keith M.
1991-01-01
The programming and algorithms needed for implementing a full-field, 3-D velocimeter for laminar flow systems, and the appropriate hardware to fully implement this ultimate system, are discussed. It appears that imaging using a synched pair of video cameras and digitizer boards with synched rails for camera motion will provide a viable solution to the laminar tracking problem. The algorithms given here are simple, which should speed processing. On a heavily loaded VAXstation 3100 the particle identification can take 15 to 30 seconds, with the tracking taking less than one second. It seems reasonable to assume that four image pairs can thus be acquired and analyzed in under one minute.
... magnesium may cause diarrhea. Brands with calcium or aluminum may cause constipation. Rarely, brands with calcium may ... you take large amounts of antacids that contain aluminum, you may be at risk for calcium loss, ...
ERIC Educational Resources Information Center
Educational Leadership, 2011
2011-01-01
This paper begins by discussing the results of two studies recently conducted in Australia. According to the two studies, taking a gap year between high school and college may help students complete a degree once they return to school. The gap year can involve such activities as travel, service learning, or work. Then, the paper presents links to…
ERIC Educational Resources Information Center
Hopkins, Brian
2010-01-01
Two people take turns selecting from an even number of items. Their relative preferences over the items can be described as a permutation, then tools from algebraic combinatorics can be used to answer various questions. We describe each person's optimal selection strategies including how each could make use of knowing the other's preferences. We…
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms. Several improvement strategies are also adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is enhanced with a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters. This is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm while giving full play to the advantages of each. The method is tested on widely used benchmark sequences, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the individual algorithms in the accuracy of the computed protein energy values, making it an effective way to predict the structure of proteins.
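The PSO component with its stochastic-disturbance factor can be sketched as follows; the GA and tabu-search stages of PGATS are omitted, and all coefficients here are generic textbook values rather than the paper's settings.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimizer with an added stochastic
    disturbance term in the velocity update (the 'improved PSO' idea);
    returns the best position and objective value found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d])
                             + 0.001 * rng.gauss(0.0, 1.0))  # disturbance
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the full hybrid, the swarm's output would seed the GA population, whose best individuals are then refined by tabu search.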
Is searching full text more effective than searching abstracts?
Lin, Jimmy
2009-01-01
Background With the growing availability of full-text articles online, scientists and other consumers of the life sciences literature now have the ability to go beyond searching bibliographic records (title, abstract, metadata) to directly access full-text content. Motivated by this emerging trend, I posed the following question: is searching full text more effective than searching abstracts? This question is answered by comparing text retrieval algorithms on MEDLINE® abstracts, full-text articles, and spans (paragraphs) within full-text articles using data from the TREC 2007 genomics track evaluation. Two retrieval models are examined: bm25 and the ranking algorithm implemented in the open-source Lucene search engine. Results Experiments show that treating an entire article as an indexing unit does not consistently yield higher effectiveness compared to abstract-only search. However, retrieval based on spans, or paragraph-sized segments of full-text articles, consistently outperforms abstract-only search. Results suggest that the highest overall effectiveness may be achieved by combining evidence from spans and full articles. Conclusion Users searching full text are more likely to find relevant articles than searching only abstracts. This finding affirms the value of full-text collections for text retrieval and provides a starting point for future work in exploring algorithms that take advantage of rapidly-growing digital archives. Experimental results also highlight the need to develop distributed text retrieval algorithms, since full-text articles are significantly longer than abstracts and may require the computational resources of multiple machines in a cluster. The MapReduce programming model provides a convenient framework for organizing such computations. PMID:19192280
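The bm25 model compared above is the standard Okapi weighting; a compact sketch of the textbook scoring formula (not Lucene's exact variant, whose length normalization differs) is:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, docs, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (a list of terms) against a query,
    using the small corpus `docs` for document frequencies and the average
    document length. Longer units (full articles) are penalized through
    the length-normalization term, which is one reason paragraph-sized
    spans can outperform whole-article indexing."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms):
        df = sum(1 for d in docs if t in d)
        if df == 0:
            continue
        idf = math.log(1.0 + (N - df + 0.5) / (df + 0.5))
        f = tf[t]
        norm = f + k1 * (1.0 - b + b * len(doc_terms) / avgdl)
        score += idf * f * (k1 + 1.0) / norm
    return score
```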
Transitional Division Algorithms.
ERIC Educational Resources Information Center
Laing, Robert A.; Meyer, Ruth Ann
1982-01-01
A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
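The abstract names the key multigrid components: full-weighting restriction, interpolation-based prolongation, and smoothing on each grid level. FMG3D itself is Fortran 90; as a language-neutral illustration, here is a minimal two-grid correction cycle for the 1-D Poisson problem (function names and parameters are illustrative, and this is a plain linear correction scheme, not the program's full approximation scheme or FMG cycling):

```python
import math

def smooth(u, f, h, sweeps=3, w=2.0/3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def restrict(v):
    """Full weighting, fine grid -> next coarser grid."""
    m = (len(v) - 1) // 2
    return [v[0]] + [0.25*v[2*i-1] + 0.5*v[2*i] + 0.25*v[2*i+1]
                     for i in range(1, m)] + [v[-1]]

def prolong(v, n_fine):
    """Linear interpolation, coarse grid -> fine grid."""
    out = [0.0] * n_fine
    for i, x in enumerate(v):
        out[2*i] = x
    for i in range(1, n_fine - 1, 2):
        out[i] = 0.5 * (out[i-1] + out[i+1])
    return out

def two_grid(u, f, h):
    """Pre-smooth, coarse-grid correction on the restricted residual,
    prolong the correction, post-smooth."""
    u = smooth(u, f, h)
    rc = restrict(residual(u, f, h))
    e = [0.0] * len(rc)
    for _ in range(50):                  # approximate coarse solve
        e = smooth(e, rc, 2*h, sweeps=1)
    u = [a + b for a, b in zip(u, prolong(e, len(u)))]
    return smooth(u, f, h)

# Solve -u'' = pi^2 sin(pi x) on [0, 1]; exact solution u = sin(pi x).
n, h = 17, 1.0 / 16
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(8):
    u = two_grid(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n))
```

After a handful of cycles the remaining error is dominated by the discretization, which is the behavior a multigrid scheme is designed to deliver.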
A Full Bayesian Approach for Boolean Genetic Network Inference
Han, Shengtong; Wong, Raymond K. W.; Lee, Thomas C. M.; Shen, Linghao; Li, Shuo-Yen R.; Fan, Xiaodan
2014-01-01
Boolean networks are a simple but efficient model for describing gene regulatory systems. A number of algorithms have been proposed to infer Boolean networks. However, these methods do not take full account of the effects of noise and model uncertainty. In this paper, we propose a full Bayesian approach to infer Boolean genetic networks. Markov chain Monte Carlo algorithms are used to obtain the posterior samples of both the network structure and the related parameters. In addition to regular link addition and removal moves, which can guarantee the irreducibility of the Markov chain for traversing the whole network space, carefully constructed mixture proposals are used to improve the Markov chain Monte Carlo convergence. Both simulations and a real application on cell-cycle data show that our method is more powerful than existing methods for the inference of both the topology and logic relations of the Boolean network from observed data. PMID:25551820
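The link addition/removal moves can be illustrated with a toy Metropolis-Hastings search over network topologies; the OR update rule, the scoring function, and all constants below are stand-ins for illustration, not the paper's full Bayesian posterior:

```python
import math
import random

def simulate(adj, state):
    """One synchronous Boolean update; as a stand-in logic rule each
    gene becomes the OR of its parents (real models fit per-gene
    Boolean functions)."""
    n = len(state)
    return tuple(int(any(state[j] for j in range(n) if adj[j][i]))
                 for i in range(n))

def score(adj, transitions):
    """Number of observed state transitions the network reproduces."""
    return sum(simulate(adj, s) == t for s, t in transitions)

def mcmc_structure(transitions, n, iters=4000, beta=2.0, seed=1):
    """Metropolis-Hastings over topologies using link add/remove moves."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    cur = score(adj, transitions)
    best, best_adj = cur, [row[:] for row in adj]
    for _ in range(iters):
        i, j = rng.randrange(n), rng.randrange(n)
        adj[i][j] ^= 1                             # propose toggling a link
        new = score(adj, transitions)
        if new >= cur or rng.random() < math.exp(beta * (new - cur)):
            cur = new                              # accept move
            if cur > best:
                best, best_adj = cur, [row[:] for row in adj]
        else:
            adj[i][j] ^= 1                         # reject: undo toggle
    return best_adj, best

# Data from a hidden 3-gene chain 0 -> 1 -> 2, from all 8 start states.
true_adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
transitions = [(s, simulate(true_adj, s)) for s in states]
found_adj, found_score = mcmc_structure(transitions, 3)
```

Toggling a single link is the minimal irreducible move set the abstract refers to; mixture proposals would add larger jumps on top of it.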
License plate detection algorithm
NASA Astrophysics Data System (ADS)
Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds
2013-12-01
A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture, camera location, and others, and the parameters of the algorithm were also determined.
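One plausible building block of such intensity-transition analysis is counting strong horizontal gradients per image row, since plate characters produce dense dark/light transitions; the threshold and the synthetic image below are illustrative, and the paper's actual algorithm is more elaborate:

```python
def plate_row_scores(img, thresh=40):
    """Per-row count of strong horizontal intensity transitions.
    Rows crossing a license plate (dark characters on a light
    background) show many such transitions, so the band of rows
    with the highest counts is a plate candidate."""
    return [sum(abs(row[x+1] - row[x]) > thresh for x in range(len(row)-1))
            for row in img]

# Tiny synthetic 6x12 gray image: rows 2-3 carry a striped "plate".
flat  = [128] * 12
plate = [250, 30] * 6
img = [flat, flat, plate, plate, flat, flat]
scores = plate_row_scores(img)
best_row = max(range(len(scores)), key=scores.__getitem__)
```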
Incremental full configuration interaction
NASA Astrophysics Data System (ADS)
Zimmerman, Paul M.
2017-03-01
The incremental expansion provides a polynomial scaling method for computing electronic correlation energies. This article details a new algorithm and implementation for the incremental expansion of full configuration interaction (FCI), called iFCI. By dividing the problem into n-body interaction terms, accurate correlation energies can be recovered at low n in a highly parallel computation. Additionally, relatively low-cost approximations are possible in iFCI by solving for each incremental energy to within a specified threshold. Herein, systematic tests show that FCI-quality energies can be asymptotically reached for cases where dynamic correlation is dominant as well as where static correlation is vital. To further reduce computational costs and allow iFCI to reach larger systems, a select-CI approach (heat-bath CI) requiring two parameters is incorporated. Finally, iFCI provides the first estimate of FCI energies for hexatriene with a polarized double zeta basis set, which has 32 electrons correlated in 118 orbitals, corresponding to an FCI dimension of over 10^38.
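The n-body bookkeeping of the incremental expansion can be illustrated with a toy "solver"; the increments below are exact at n = 2 only because the toy energy has purely pairwise structure (names and values are illustrative, not iFCI itself):

```python
from itertools import combinations

def incremental_energy(groups, solver, max_n=2):
    """Many-body incremental expansion truncated at max_n:
    eps_i = E(i); eps_ij = E(ij) - eps_i - eps_j; E ~ sum of increments."""
    eps1 = {g: solver((g,)) for g in groups}     # one-body increments
    total = sum(eps1.values())
    if max_n >= 2:
        for a, b in combinations(groups, 2):     # two-body corrections
            total += solver((a, b)) - eps1[a] - eps1[b]
    return total

# Toy "correlation energy": single-group terms plus pair couplings.
e = {0: -1.0, 1: -0.5, 2: -0.8}
J = {(0, 1): -0.1, (0, 2): -0.05, (1, 2): -0.2}
def solver(s):
    s = tuple(sorted(s))
    return sum(e[g] for g in s) + sum(J[p] for p in combinations(s, 2))

approx = incremental_energy([0, 1, 2], solver, max_n=2)
exact = solver((0, 1, 2))
```

In iFCI each `solver` call is itself an expensive CI solve, which is why truncating at low n and screening small increments pays off.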
Taking multiple medicines safely
ERIC Educational Resources Information Center
Brown, Marshall A.
2013-01-01
Today's work world is full of uncertainty. Every day, people hear about another organization going out of business, downsizing, or rightsizing. To prepare for these uncertain times, one must take charge of one's own career. This article presents some tips for surviving in today's world of work: (1) Be self-managing; (2) Know what you…
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
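The basic concepts named above (selection, crossover, mutation) fit in a few lines; this is a minimal illustrative GA on the OneMax toy problem, with all parameters chosen arbitrarily:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30,
                      generations=60, p_mut=0.02, seed=7):
    """Minimal GA: tournament selection, one-point crossover,
    per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                      # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)    # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the number of 1 bits; fitness is simply sum().
best = genetic_algorithm(sum)
```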
ERIC Educational Resources Information Center
Bennett, Robert B., Jr.
2010-01-01
Legal studies faculty need to take the long view in their academic and professional lives. Taking the long view would seem to be a clichéd piece of advice, but too frequently legal studies faculty, like their students, get focused on meeting the next short-term hurdle--getting through the next class, grading the next stack of papers, making it…
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver, compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides an optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned for the worst-case scenario. This tuning is based on trial and error and is affected by the experience of drivers and programmers, making it the most significant obstacle to full motion platform utilisation. It leads to an inflexible structure, produces false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take the minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different classical washout filter parameters on those cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and the correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software package. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
JWST Full Scale Model Being Built
: The full-scale model of the James Webb Space Telescope is constructed for the 2010 World Science Festival in Battery Park, NY. The model takes about five days to construct. This video contains a ...
Implicit, nonswitching, vector-oriented algorithm for steady transonic flow
NASA Technical Reports Server (NTRS)
Lottati, I.
1983-01-01
A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes such calculations economically feasible. However, to fully utilize the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.
Teachers Taking Professional Abuse
ERIC Educational Resources Information Center
Normore, Anthony H.; Floyd, Andrea
2005-01-01
Preservice teachers get their first teaching position hoping to take the first step toward becoming professional educators and expecting support from experienced colleagues and administrators, who often serve as their mentors. In this article, the authors present the story of Kristine (a pseudonym), who works at a middle school in a large U.S.…
NASA Astrophysics Data System (ADS)
White, Patrick; Smith, Emma
2016-10-01
A new study of the long-term employment prospects of UK science and engineering students suggests that talk of a skills shortage is overblown, with most graduates in these disciplines taking jobs outside science. Researchers Patrick White and Emma Smith discuss their findings and what they mean for current physics students.
ERIC Educational Resources Information Center
Engelhardt, Lucas M.
2015-01-01
In this article, the author presents a price-takers' market simulation geared toward principles-level students. This simulation demonstrates that price-taking behavior is a natural result of the conditions that create perfect competition. In trials, there is a significant degree of price convergence in just three or four rounds. Students find this…
ERIC Educational Resources Information Center
Spitzer, Greg; Ogurek, Douglas J.
2009-01-01
Performing-arts centers can provide benefits at the high school and collegiate levels, and administrators can take steps now to get the show started. When a new performing-arts center comes to town, local businesses profit. Events and performances draw visitors to the community. Ideally, a performing-arts center will play many roles: entertainment…
ERIC Educational Resources Information Center
McNiff, J.
2011-01-01
In this article I argue for higher education practitioners to take focused action to contribute to transforming their societies into open and democratically negotiated forms of living, and why they should do so. The need is especially urgent in South Africa, whose earlier revolutionary spirit led to massive social change. The kind of social…
Taking Library Leadership Personally
ERIC Educational Resources Information Center
Davis, Heather; Macauley, Peter
2011-01-01
This paper outlines the emerging trends for leadership in the knowledge era. It discusses these within the context of leading, creating and sustaining the performance development cultures that libraries require. The first step is to recognise that we all need to take leadership personally no matter whether we see ourselves as leaders or followers.…
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Teresa H.; Derov, John S.
2010-04-01
EGO is an evolutionary, data-adaptive algorithm which can be useful for optimization problems with expensive cost functions. Many antenna design problems qualify since complex computational electromagnetics (CEM) simulations can take significant resources. This makes evolutionary algorithms such as genetic algorithms (GA) or particle swarm optimization (PSO) problematic since iterations of large populations are required. In this paper we discuss multiparameter optimization of a wideband, single-element antenna over a metamaterial ground plane and the interfacing of EGO (optimization) with a full-wave CEM simulation (cost function evaluation).
Martinson, Eric; Brock, Derek
2013-06-01
Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners.
Thakur, C P; Sharma, D
1984-01-01
The incidence of crimes reported to three police stations in different towns (one rural, one urban, one industrial) was studied to see if it varied with the day of the lunar cycle. The period of the study covered 1978-82. The incidence of crimes committed on full moon days was much higher than on all other days, new moon days, and seventh days after the full moon and new moon. A small peak in the incidence of crimes was observed on new moon days, but this was not significant when compared with crimes committed on other days. The incidence of crimes on equinox and solstice days did not differ significantly from those on other days, suggesting that the sun probably does not influence the incidence of crime. The increased incidence of crimes on full moon days may be due to "human tidal waves" caused by the gravitational pull of the moon. PMID:6440656
ERIC Educational Resources Information Center
Lawton, Rebecca
2008-01-01
In this essay, the author recalls several of her experiences in which she successfully pulled her boats out of river holes by throwing herself into the water as a sea anchor. She learned this trick from her senior guides at a spring training. Her guides told her, "When you're stuck in a hole, take the 'C' train." "Meaning?" the author asked her…
ERIC Educational Resources Information Center
Matuskey, Patricia Varan; Tango, Robert
The "Care-Full" teaching process described in this report is an assessment-oriented procedure which monitors the student's specific rate of growth toward defined learning objectives. First, the report briefly delineates eight steps in the process, indicating that teachers and counselors: (1) become aware of the need for assessment; (2) transform…
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
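The dispatch approach can be sketched as an earliest-deadline-first rule for unit-length jobs with time windows; this toy model of window-constrained packing is illustrative, not the paper's implementation:

```python
import heapq

def dispatch_schedule(jobs):
    """Greedy dispatch for unit-length jobs with [release, deadline)
    windows on a single resource: at each step run the pending job
    with the earliest deadline (EDF dispatch rule)."""
    jobs = sorted(jobs, key=lambda j: j[0])       # sort by release time
    scheduled, pending = [], []
    t, i = 0, 0
    while i < len(jobs) or pending:
        if not pending:
            t = max(t, jobs[i][0])                # idle until next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(pending, (jobs[i][1], jobs[i]))  # key: deadline
            i += 1
        deadline, job = heapq.heappop(pending)
        if t < deadline:                          # job still fits its window
            scheduled.append((t, job))
            t += 1                                # unit processing time
        # else: window missed, job is dropped
    return scheduled

# (release, deadline) windows; one job is over-subscribed and dropped.
jobs = [(0, 2), (0, 1), (1, 3), (2, 3)]
plan = dispatch_schedule(jobs)
```

A look-ahead or genetic scheduler would instead search over orderings; the dispatch rule commits immediately, which is what makes it fast but only fairly accurate.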
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
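A generic simulated-annealing loop of the kind used for such combinatorial optimization can be shown on a toy binary objective standing in for a pedigree likelihood; the cooling schedule, move set, and constants are assumptions, not the paper's method:

```python
import math
import random

def simulated_annealing(cost, n, iters=5000, t0=2.0, seed=3):
    """Anneal a binary vector: flip one entry per step, accept
    worse moves with Boltzmann probability under linear cooling."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    c = cost(x)
    for k in range(iters):
        temp = t0 * (1 - k / iters) + 1e-9        # linear cooling
        i = rng.randrange(n)
        x[i] ^= 1                                 # propose one flip
        c_new = cost(x)
        if c_new <= c or rng.random() < math.exp((c - c_new) / temp):
            c = c_new                             # accept
        else:
            x[i] ^= 1                             # reject: undo flip
    return x, c

# Toy objective: Hamming distance to a hidden target vector,
# standing in for the (negated) pedigree likelihood.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
cost = lambda v: sum(a != b for a, b in zip(v, target))
x, c = simulated_annealing(cost, len(target))
```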
SUPERFUND TREATABILITY CLEARINGHOUSE: FULL ...
This treatability study reports on the results of one of a series of field trials using various remedial action technologies that may be capable of restoring Herbicide Orange (HO)/dioxin-contaminated sites. A full-scale field trial using a rotary kiln incinerator capable of processing up to 6 tons per hour of dioxin-contaminated soil was conducted at the Naval Construction Battalion Center, Gulfport, MS.
NASA Technical Reports Server (NTRS)
1931-01-01
Construction of motor fairing for the fan motors of the Full-Scale Tunnel (FST). The motors and their supporting structures were enclosed in aerodynamically smooth fairings to minimize resistance to the air flow. Close examination of this photograph reveals the complicated nature of constructing a wind tunnel. This motor fairing, like almost every other structure in the FST, represents a one-of-a-kind installation.
NASA Technical Reports Server (NTRS)
1929-01-01
Interior view of Full-Scale Tunnel (FST) model. (Small human figures have been added for scale.) On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow.'
Categorizing Variations of Student-Implemented Sorting Algorithms
ERIC Educational Resources Information Center
Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri
2012-01-01
In this study, we examined freshmen students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…
Routing Algorithm Exploits Spatial Relations
NASA Technical Reports Server (NTRS)
Okino, Clayton; Jennings, Esther
2004-01-01
A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
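The relative-neighborhood graph has a compact definition that can be computed directly; a brute-force O(n^3) sketch for illustration (the flight algorithm itself would combine this with power levels and distances):

```python
from math import dist  # Python 3.8+

def relative_neighborhood_graph(points):
    """Edge (u, v) is in the RNG iff no third point w is closer to
    both endpoints than they are to each other, i.e. there is no w
    with max(d(u, w), d(v, w)) < d(u, v)."""
    n = len(points)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            duv = dist(points[u], points[v])
            blocked = any(max(dist(points[u], points[w]),
                              dist(points[v], points[w])) < duv
                          for w in range(n) if w not in (u, v))
            if not blocked:
                edges.append((u, v))
    return edges

# Four node locations; the long edges are pruned by nearer neighbors.
pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
rng_edges = relative_neighborhood_graph(pts)
```

The pruning condition is exactly the spatial relationship the abstract describes: a broadcast over a long edge is redundant when a relay node sits "between" the two endpoints.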
Full Tolerant Archiving System
NASA Astrophysics Data System (ADS)
Knapic, C.; Molinaro, M.; Smareglia, R.
2013-10-01
The archiving system at the Italian center for Astronomical Archives (IA2) manages data from external sources such as telescopes, observatories, and surveys, and handles them to guarantee preservation, dissemination, and reliability, in most cases in a Virtual Observatory (VO) compliant manner. A dynamic metadata model constructor and a data archive manager are new concepts aimed at automating the management of different astronomical data sources in a fault-tolerant environment. The goal is a fully tolerant archiving system, a goal complicated by the presence of varied and time-changing data models, file formats (FITS, HDF5, ROOT, PDS, etc.), and metadata content, even within the same project. To avoid this unpleasant scenario, a novel approach is proposed to guarantee data ingestion, backward compatibility, and information preservation.
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)
Taking action against violence.
Kunz, K
1996-05-01
A significant increase in violent crimes in recent years has driven Icelandic men to take action against violence. Television was seen as a major contributory factor in increasing violence. Surveys indicate that 10-15 years after television broadcasting commences in a particular society, the incidence of crime can be expected to double. While the majority of individuals arrested for violent crimes are men, being male does not necessarily mean being violent. The Men's Committee of the Icelandic Equal Rights Council initiated a week-long information and education campaign under the theme "Men Against Violence". This campaign involved several events, including an art exhibit, speeches on violence in families and on treatment sought by those likely to resort to violence, booklet distribution among students in secondary schools, and a mass media campaign to raise public awareness of this pressing problem.
NASA Technical Reports Server (NTRS)
2007-01-01
This image of Jupiter is produced from a 2x2 mosaic of photos taken by the New Horizons Long Range Reconnaissance Imager (LORRI), and assembled by the LORRI team at the Johns Hopkins University Applied Physics Laboratory. The telescopic camera snapped the images during a 3-minute, 35-second span on February 10, when the spacecraft was 29 million kilometers (18 million miles) from Jupiter. At this distance, Jupiter's diameter was 1,015 LORRI pixels -- nearly filling the imager's entire (1,024-by-1,024 pixel) field of view. Features as small as 290 kilometers (180 miles) are visible.
Both the Great Red Spot and Little Red Spot are visible in the image, on the left and lower right, respectively. The apparent 'storm' on the planet's right limb is a section of the south tropical zone that has been detached from the region to its west (or left) by a 'disturbance' that scientists and amateur astronomers are watching closely.
At the time LORRI took these images, New Horizons was 820 million kilometers (510 million miles) from home -- nearly 5 1/2 times the distance between the Sun and Earth. This is the last full-disk image of Jupiter LORRI will produce, since Jupiter is appearing larger as New Horizons draws closer, and the imager will start to focus on specific areas of the planet for higher-resolution studies.
Full Color Holographic Endoscopy
NASA Astrophysics Data System (ADS)
Osanlou, A.; Bjelkhagen, H.; Mirlis, E.; Crosby, P.; Shore, A.; Henderson, P.; Napier, P.
2013-02-01
The ability to produce color holograms of human tissue represents a major medical advance, specifically in the areas of diagnosis and teaching. This has been achieved at Glyndwr University. In cooperation with partners at Gooch & Housego, Moor Instruments, Vivid Components and Peninsula Medical School, Exeter, UK, for the first time we have produced full color holograms of human cell samples in which the cell boundary and the nuclei inside the cells can be clearly focused at different depths - something impossible with a two-dimensional photographic image. This was the main objective set by Peninsula Medical School at Exeter, UK. Achieving this objective means that clinically useful images essentially indistinguishable from the object human cells could be routinely recorded. This could potentially be done at the tip of a holo-endoscopic probe inside the body. Optimised recording exposure and development processes for the holograms were defined for bulk exposures. This included the optimisation of in-house recording emulsions for coating onto polymer substrates (rather than glass plates), a key step for large-volume commercial exploitation. At Glyndwr University, we also developed a new version of our in-house holographic (world-leading resolution) emulsion.
NASA Technical Reports Server (NTRS)
1930-01-01
Installation of Full Scale Tunnel (FST) power plant. Virginia Public Service Company could not supply adequate electricity to run the wind tunnels being built at Langley. (The Propeller Research Tunnel was powered by two submarine diesel engines.) This led to the consideration of a number of different ideas for generating electric power to drive the fan motors in the FST. The main proposition involved two 3000 hp and two 1000 hp diesel engines with directly connected generators. Another proposition suggested 30 Liberty motors driving 600 hp DC generators in pairs. For a month, engineers at Langley were hopeful they could secure additional diesel engines from decommissioned Navy T-boats but the Navy could not offer a firm commitment regarding the future status of the submarines. By mid-December 1929, Virginia Public Service Company had agreed to supply service to the field at the north end of the King Street Bridge connecting Hampton and Langley Field. Thus, new plans for the FST powerplant and motors were made. Smith DeFrance described the motors in NACA TR No. 459: 'The most commonly used power plant for operating a wind tunnel is a direct-current motor and motor-generator set with Ward Leonard control system. For the FST it was found that alternating current slip-ring induction motors, together with satisfactory control equipment, could be purchased for approximately 30 percent less than the direct-current equipment. Two 4000-horsepower slip-ring induction motors with 24 steps of speed between 75 and 300 r.p.m. were therefore installed.'
ERIC Educational Resources Information Center
Schuster, Dwight
2008-01-01
Physical models in the classroom "cannot be expected to represent the full-scale phenomenon with complete accuracy, not even in the limited set of characteristics being studied" (AAAS 1990). Therefore, by modifying a popular classroom activity called a "planet walk," teachers can explore upper elementary students' current understandings; create an…
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that quickly produce solutions that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
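As an illustration of the kind of performance guarantee described above (an editorial example, not an algorithm from this paper), the classic matching-based vertex cover heuristic produces a cover at most twice the optimal size:

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover: repeatedly
    pick an edge with both endpoints uncovered and add both endpoints.
    The chosen edges form a matching, so every cover must contain at
    least one endpoint per chosen edge; hence the result is at most
    twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover = vertex_cover_2approx(edges)
```

For this small graph the optimum cover is {1, 3} (size 2), and the heuristic returns a cover of size 4, matching the factor-2 bound.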
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
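The alternating Gaussian/Cauchy offspring generation at the niche level can be sketched roughly as follows; the niche array layout, the per-dimension scaling, and the alternation flag are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_offspring(niche, n_offspring, use_cauchy):
    """Generate offspring around one niche, estimated by its per-dimension
    mean and standard deviation.  Gaussian steps favor exploitation;
    Cauchy steps have heavier tails and favor exploration, mirroring the
    alternation described in the abstract.

    niche: (n_members, n_dims) array of the niche's current solutions."""
    mu = niche.mean(axis=0)
    sigma = niche.std(axis=0) + 1e-12  # avoid zero scale in collapsed niches
    if use_cauchy:
        step = rng.standard_cauchy(size=(n_offspring, niche.shape[1]))
    else:
        step = rng.standard_normal(size=(n_offspring, niche.shape[1]))
    return mu + sigma * step
```

A driving loop would flip `use_cauchy` every generation (or per niche) so that both distributions contribute offspring over time.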
Viscoacoustic anisotropic full waveform inversion
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli
2017-01-01
A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and anisotropy of media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations, for which the attenuation terms are solved in the wavenumber domain, and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during the backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that our proposed viscoacoustic VTI FWI can produce accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test our method's sensitivity to velocity, Q, and anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and anisotropic parameters. As such, our proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.
Quantum algorithm for data fitting.
Wiebe, Nathan; Braun, Daniel; Lloyd, Seth
2012-08-03
We provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon an algorithm for solving systems of linear equations efficiently [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)]. In many cases, our algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data are pure quantum states, the algorithm can be used to provide an efficient parametric estimation of the quantum state and therefore can be applied as an alternative to full quantum-state tomography given a fault tolerant quantum computer.
NASA Astrophysics Data System (ADS)
1998-11-01
HAMLET (Highly Automated Multimedia Light Enhanced Theatre) was the star performance at the recent finals of the 'Young Engineer for Britain' competition, held at the Commonwealth Institute in London. This state-of-the-art computer-controlled theatre lighting system won the title 'Young Engineers for Britain 1998' for David Kelnar, Jonathan Scott, Ramsay Waller and John Wyllie (all aged 16) from Merchiston Castle School, Edinburgh. HAMLET replaces conventional manually-operated controls with a special computer program, and should find use in the thousands of small theatres, schools and amateur drama productions that operate with limited resources and without specialist expertise. The four students received a £2500 prize between them, along with £2500 for their school, and in addition they were invited to spend a special day with the Royal Engineers. A project designed to improve car locking systems enabled Ian Robinson of Durham University to take the 'Working in industry award' worth £1000. He was also given the opportunity of a day at sea with the Royal Navy. Other prizewinners with their projects included: Jun Baba of Bloxham School, Banbury (a cardboard armchair which converts into a desk and chair); Kobika Sritharan and Gemma Hancock, Bancroft's School, Essex (a rain warning system for a washing line); and Alistair Clarke, Sam James and Ruth Jenkins, Bishop of Llandaff High School, Cardiff (a mechanism to open and close the retractable roof of the Millennium Stadium in Cardiff). The two principal national sponsors of the competition, which is organized by the Engineering Council, are Lloyd's Register and GEC. Industrial companies, professional engineering institutions and educational bodies also provided national and regional prizes and support. During this year's finals, various additional activities took place, allowing the students to surf the Internet and navigate individual engineering websites on a network of computers. They also visited the
Chambers, Tod; Ahmad, Ayesha; Crow, Sheila; Davis, Dena S; Dresser, Rebecca; Harter, Thomas D; Jordan, Sara R; Kaposy, Chris; Lanoix, Monique; Lee, K Jane; Scully, Jackie Leach; Taylor, Katherine A; Watson, Katie
2013-01-01
This narrative symposium examines the relationship of bioethics practice to personal experiences of illness. A call for stories was developed by Tod Chambers, the symposium editor, and editorial staff and was sent to several commonly used bioethics listservs and posted on the Narrative Inquiry in Bioethics website. The call asked authors to relate a personal story of being ill or caring for a person who is ill, and to describe how this affected how they think about bioethical questions and the practice of medicine. Eighteen individuals were invited to submit full stories based on review of their proposals. Twelve stories are published in this symposium, and six supplemental stories are published online only through Project MUSE. Authors explore themes of vulnerability, suffering, communication, voluntariness, cultural barriers, and flaws in local healthcare systems through stories about their own illnesses or about caring for children, partners, parents and grandparents. Commentary articles by Arthur Frank, Bradley Lewis, and Carol Taylor follow the collection of personal narratives.
Take-off mechanics in hummingbirds (Trochilidae).
Tobalske, Bret W; Altshuler, Douglas L; Powers, Donald R
2004-03-01
Initiating flight is challenging, and considerable effort has focused on understanding the energetics and aerodynamics of take-off for both machines and animals. For animal flight, the available evidence suggests that birds maximize their initial flight velocity using leg thrust rather than wing flapping. The smallest birds, hummingbirds (Order Apodiformes), are unique in their ability to perform sustained hovering but have proportionally small hindlimbs that could hinder generation of high leg thrust. Understanding the take-off flight of hummingbirds can provide novel insight into the take-off mechanics that will be required for micro-air vehicles. During take-off by hummingbirds, we measured hindlimb forces on a perch mounted with strain gauges and filmed wingbeat kinematics with high-speed video. Whereas other birds obtain 80-90% of their initial flight velocity using leg thrust, the leg contribution in hummingbirds was 59% during autonomous take-off. Unlike other species, hummingbirds beat their wings several times as they thrust using their hindlimbs. In a phylogenetic context, our results show that reduced body and hindlimb size in hummingbirds limits their peak acceleration during leg thrust and, ultimately, their take-off velocity. Previously, the influence of motivational state on take-off flight performance has not been investigated for any one organism. We studied the full range of motivational states by testing performance as the birds took off: (1) to initiate flight autonomously, (2) to escape a startling stimulus or (3) to aggressively chase a conspecific away from a feeder. Motivation affected performance. Escape and aggressive take-off featured decreased hindlimb contribution (46% and 47%, respectively) and increased flight velocity. When escaping, hummingbirds foreshortened their body movement prior to onset of leg thrust and began beating their wings earlier and at higher frequency. Thus, hummingbirds are capable of modulating their leg and
NASA Technical Reports Server (NTRS)
1990-01-01
One of three U.S. Air Force SR-71 reconnaissance aircraft originally retired from operational service and loaned to NASA for a high-speed research program retracts its landing gear after taking off from NASA's Ames-Dryden Flight Research Facility (later Dryden Flight Research Center), Edwards, California, on a 1990 research flight. One of the SR-71As was later returned to the Air Force for active duty in 1995. Data from the SR-71 high-speed research program will be used to aid designers of future supersonic/hypersonic aircraft and propulsion systems. Two SR-71 aircraft have been used by NASA as testbeds for high-speed and high-altitude aeronautical research. The aircraft, an SR-71A and an SR-71B pilot trainer aircraft, have been based here at NASA's Dryden Flight Research Center, Edwards, California. They were transferred to NASA after the U.S. Air Force program was cancelled. As research platforms, the aircraft can cruise at Mach 3 for more than one hour. For thermal experiments, this can produce heat soak temperatures of over 600 degrees Fahrenheit (F). This operating environment makes these aircraft excellent platforms to carry out research and experiments in a variety of areas -- aerodynamics, propulsion, structures, thermal protection materials, high-speed and high-temperature instrumentation, atmospheric studies, and sonic boom characterization. The SR-71 was used in a program to study ways of reducing sonic booms or overpressures that are heard on the ground, much like sharp thunderclaps, when an aircraft exceeds the speed of sound. Data from this Sonic Boom Mitigation Study could eventually lead to aircraft designs that would reduce the 'peak' overpressures of sonic booms and minimize the startling effect they produce on the ground. One of the first major experiments to be flown in the NASA SR-71 program was a laser air data collection system. It used laser light instead of air pressure to produce airspeed and attitude reference data, such as angle of
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
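A minimal sketch of the sampling-based stopping idea, under the simplifying assumption that i.i.d. optimality-gap estimates are available (the actual sample-size and termination rules in this work are more refined):

```python
import math
import statistics

def gap_confidence_bound(gap_samples, z=1.645):
    """One-sided (approximately 95%, by the normal approximation) upper
    confidence bound on the expected optimality gap, estimated from
    i.i.d. sampled gap estimates.  A sampling-based decomposition
    algorithm would stop once this bound falls below a prespecified
    tolerance."""
    n = len(gap_samples)
    mean = statistics.fmean(gap_samples)
    s = statistics.stdev(gap_samples) if n > 1 else float("inf")
    return mean + z * s / math.sqrt(n)
```

Usage: after each batch of scenario samples, recompute the bound; terminate when `gap_confidence_bound(samples) <= tol`.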
Taking Sides on "Takings": Rhetorical Resurgence of the Sagebrush Rebellion.
ERIC Educational Resources Information Center
Chiaviello, Tony
The "Takings Clause" of the Fifth Amendment to the United States Constitution seems clear enough: when the government takes an individual's property, it must pay him or her for it. The "Sagebrush Rebellion" refers to the numerous incarnations of a movement to privatize public lands and contain environmental regulation. This…
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and more simplicity. PMID:24688389
A distributed Canny edge detector: algorithm and FPGA implementation.
Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J
2014-07-01
The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
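A rough sketch of deriving per-block hysteresis thresholds from a local gradient-magnitude histogram, in the spirit of the block-level adaptation described above; the percentile knob `p_high` and the low/high ratio are hypothetical stand-ins for the paper's block-type classification rule and nonuniform histogram:

```python
import numpy as np

def block_hysteresis_thresholds(grad_mag, p_high=0.7, ratio=0.4):
    """Pick high/low hysteresis thresholds from the distribution of
    gradient magnitudes within one image block, rather than from
    frame-level statistics.

    grad_mag: 1-D array of gradient magnitudes for the block.
    p_high:   fraction of pixels assumed to be non-edge (illustrative).
    ratio:    low threshold as a fraction of the high threshold."""
    hist, bin_edges = np.histogram(grad_mag, bins=64)
    cdf = np.cumsum(hist) / hist.sum()
    # smallest bin whose cumulative mass exceeds p_high -> high threshold
    hi_idx = int(np.searchsorted(cdf, p_high))
    t_high = bin_edges[min(hi_idx + 1, len(bin_edges) - 1)]
    t_low = ratio * t_high
    return t_low, t_high
```

Because each block computes its own thresholds from local statistics, blocks can be processed independently, which is what removes the frame-level latency dependence.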
Personal pronouns and perspective taking in toddlers.
Ricard, M; Girouard, P C; Décarie, T G
1999-10-01
This study examined the evolution of visual perspective-taking skills in relation to the comprehension and production of first, second and third person pronouns. Twelve French-speaking and 12 English-speaking children were observed longitudinally from 1.6 until they had acquired all pronouns and succeeded on all tasks. Free-play sessions and three tasks were used to test pronominal competence. Four other tasks assessed Level-1 perspective-taking skills: two of these tasks required the capacity to consider two visual perspectives, and two others tested the capacity to coordinate three such perspectives. The results indicated that children's performance on perspective-taking tasks was correlated with full pronoun acquisition. Moreover, competence at coordinating two visual perspectives preceded the full mastery of first and second person pronouns, and competence at coordinating three perspectives preceded the full mastery of third person pronouns when a strict criterion was adopted. However, with less stringent criteria, the sequence from perspective taking to pronoun acquisition varied either slightly or considerably. These findings are discussed in the light of the 'specificity hypothesis' concerning the links between cognition and language, and also in the context of the recent body of research on the child's developing theory of mind.
A DRAM compiler algorithm for high performance VLSI embedded memories
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1992-01-01
In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.
ERIC Educational Resources Information Center
Grabowski, Carl
2008-01-01
Taking over a broken program can be one of the hardest tasks to take on. However, working towards a vision and a common goal--and eventually getting there--makes it all worth it in the end. In this article, the author shares the lessons she learned as the new director for the Bright Horizons Center in Ashburn, Virginia. She suggests that new…
Taking Chances in Romantic Relationships
ERIC Educational Resources Information Center
Elliott, Lindsey; Knox, David
2016-01-01
A 64 item Internet questionnaire was completed by 381 undergraduates at a large southeastern university to assess taking chances in romantic relationships. Almost three fourths (72%) self-identified as being a "person willing to take chances in my love relationship." Engaging in unprotected sex, involvement in a "friends with…
Survey cover pages: to take or not to take.
Sansone, Randy A; Lam, Charlene; Wiederman, Michael W
2010-01-01
In survey research, the elements of informed consent, including contact information for the researchers and the Institutional Review Board, may be located on a cover page, which participants are advised they may take. To date, we are not aware of any studies examining the percentage of research participants who actually take these cover pages, which was the purpose of this study. Among a consecutive sample of 419 patients in an internal medicine setting, 16% removed the cover page. There were no demographic predictors regarding who took versus did not take the cover page.
Source Estimation by Full Wave Form Inversion
Sjögreen, Björn; Petersson, N. Anders
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem for estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, the start time, and frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth order accurate finite difference method, developed in [12], to evolve the waves forwards in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugate gradient minimization algorithm. A new point moment source discretization is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
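The minimizer family used here can be sketched generically; the misfit `f` and gradient `grad` below are placeholders for the waveform misfit and its adjoint-computed gradient, and the simple backtracking line search is an illustrative choice:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=200, tol=1e-8):
    """Minimal Polak-Ribiere (with restart) nonlinear conjugate gradient
    with a backtracking Armijo line search.  f maps a parameter vector
    to a scalar misfit; grad returns its gradient."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # backtracking line search along the search direction d
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PR+, restarts if negative
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

In the source-estimation setting, each `f`/`grad` evaluation costs a forward and an adjoint wave simulation, which is why Hessian-based scaling to speed convergence matters.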
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1994-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
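For contrast, the reduced scalar model mentioned above, the nonlinear Schrodinger equation, is commonly integrated with the standard split-step Fourier method; the following is a minimal sketch of that baseline (in normalized units, not the paper's full-vector Maxwell solver):

```python
import numpy as np

def split_step_nlse(A0, z_steps, dz, t_span):
    """Split-step Fourier integrator for the normalized focusing NLSE
        A_z = (i/2) A_tt + i |A|^2 A,
    using the symmetric (Strang) splitting: half a dispersion step in
    the Fourier domain, a full nonlinear phase rotation, then another
    half dispersion step.  A fundamental soliton A(0,t) = sech(t)
    should propagate with |A| essentially unchanged."""
    n = len(A0)
    dt = t_span / n
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    half_disp = np.exp(-0.5j * omega**2 * dz / 2)  # exp(-i w^2 (dz/2) / 2)
    A = A0.astype(complex)
    for _ in range(z_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))
        A = A * np.exp(1j * np.abs(A) ** 2 * dz)   # nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))
    return A
```

This envelope model discards the optical carrier entirely, which is exactly the approximation the full-vector Maxwell approach described above avoids.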
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Silberberg, Yaron; Kwak, Dochan (Technical Monitor)
1995-01-01
This paper will present results in computational nonlinear optics. An algorithm will be described that solves the full vector nonlinear Maxwell's equations exactly without the approximations that are currently made. Present methods solve a reduced scalar wave equation, namely the nonlinear Schrodinger equation, and neglect the optical carrier. Also, results will be shown of calculations of 2-D electromagnetic nonlinear waves computed by directly integrating in time the nonlinear vector Maxwell's equations. The results will include simulations of 'light bullet' like pulses. Here diffraction and dispersion will be counteracted by nonlinear effects. The time integration efficiently implements linear and nonlinear convolutions for the electric polarization, and can take into account such quantum effects as Kerr and Raman interactions. The present approach is robust and should permit modeling 2-D and 3-D optical soliton propagation, scattering, and switching directly from the full-vector Maxwell's equations.
Using DFX for Algorithm Evaluation
Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.
1998-10-20
Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a
Intelligent decision support algorithm for distribution system restoration.
Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod
2016-01-01
The distribution system is the means of revenue for an electric utility. It needs to be restored at the earliest if any feeder or the complete system trips out due to a fault or any other cause. Further, uncertainties in the loads result in variations in the distribution network's parameters. Thus, an intelligent algorithm incorporating hybrid fuzzy-grey relation, which can take the uncertainties into account and compare the sequences, is discussed to analyse and restore the distribution system. Simulation studies are carried out to show the utility of the method by ranking the restoration plans for a typical distribution system. This algorithm also meets smart grid requirements in terms of an automated restoration plan for partial/full blackout of the network.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
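The bang-off-bang parameterization described above reduces trajectory design to a search over a few maneuver parameters. A minimal 1-D sketch of that idea follows, with invented numbers and function names; it is not the RCA flight software, which searches precomputed look-up tables indexed by collision geometry.

```python
# Illustrative 1-D bang-off-bang maneuver search. The craft burns at full
# acceleration `a` for t1 seconds, coasts for t2 seconds, then burns for
# t1 seconds to stop, so lateral displacement is d = a * t1 * (t1 + t2)
# and "fuel" is proportional to total burn time 2 * t1.

def displacement(a, t1, t2):
    """Lateral displacement of an accelerate/coast/decelerate maneuver."""
    return a * t1 * (t1 + t2)

def cheapest_maneuver(a, d_min, t_grid):
    """Grid-search (t1, t2), keeping the feasible pair with least burn time."""
    best = None
    for t1 in t_grid:
        for t2 in t_grid:
            if displacement(a, t1, t2) >= d_min:
                cost = 2.0 * t1              # fuel ~ total thrusting time
                if best is None or cost < best[0]:
                    best = (cost, t1, t2)
    return best

grid = [0.5 * k for k in range(1, 41)]       # candidate times, 0.5..20 s
cost, t1, t2 = cheapest_maneuver(a=0.1, d_min=50.0, t_grid=grid)
print(cost, t1, t2)
```

In the real algorithm this offline search is tabulated once, so the onboard code only performs a table look-up, which is what makes real-time operation possible.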
LRO Takes the Moon's Temperature
During the June 2011 lunar eclipse, scientists will be able to get a unique view of the moon. While the sun is blocked by the Earth, LRO's Diviner instrument will take the temperature on the lunar ...
Brazilian physicists take centre stage
NASA Astrophysics Data System (ADS)
Curtis, Susan
2014-06-01
With the FIFA World Cup taking place in Brazil this month, Susan Curtis travels to South America's richest nation to find out how its physicists are exploiting recent big increases in science funding.
LRO Takes the Moon's Temperature
During the December 2011 lunar eclipse, LRO's Diviner instrument will take the temperature on the lunar surface. Since different rock sizes cool at different rates, scientists will be able to infer...
NASA's Commercial Crew Program (CCP) is taking America to new heights with its Commercial Crew Development Round 2 (CCDev2) partners. In 2011, NASA entered into funded Space Act Agreements (SAAs) w...
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Full Duplex, Spread Spectrum Radio System
NASA Technical Reports Server (NTRS)
Harvey, Bruce A.
2000-01-01
The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
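The packet sizing quoted above follows from quick arithmetic, sketched here for clarity (the interpretation of the 4-6 ms range as two to three packet times is an assumption, not stated in the abstract):

```python
# Check of the packet timing quoted above (values from the abstract).
SAMPLES_PER_PACKET = 16
SAMPLE_RATE_HZ = 8000

packet_ms = SAMPLES_PER_PACKET * 1000.0 / SAMPLE_RATE_HZ
print(packet_ms)            # 2.0 ms of voice per packet

# Assumed reading: round-trip feedback spans two to three packet times,
# giving the 4-6 ms delay range cited.
feedback_range_ms = (2 * packet_ms, 3 * packet_ms)
print(feedback_range_ms)
```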
Full Employment in Industrialized Countries.
ERIC Educational Resources Information Center
Britton, Andrew
1997-01-01
Argues that full employment must be acceptable on both social and economic grounds. Examines profound changes in industrialized economies since the 1970s and the diversity of employment contracts. Suggests that difficult policy decisions surround full employment. (SK)
Effects of visualization on algorithm comprehension
NASA Astrophysics Data System (ADS)
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
Online Databases. ASCII Full Texts.
ERIC Educational Resources Information Center
Tenopir, Carol
1995-01-01
Defines the American Standard Code for Information Interchange (ASCII) full text, and reviews its past, present, and future uses in libraries. Discusses advantages, disadvantages, and uses of searchable and nonsearchable full-text databases. Also comments on full-text CD-ROM products and on technological advancements made by library vendors. (JMV)
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
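The "full replication" technique described above can be illustrated with a short sketch: each thread accumulates into its own private copy of the reduction object and the copies are merged after the scan, so no locks are needed. This is an illustrative example only, assuming a simple frequency-count reduction; it is not the authors' actual runtime interface.

```python
# Sketch of "full replication" for a shared-memory reduction: each thread
# updates a private Counter (the reduction object), and the replicas are
# merged at the end, so the scan itself needs no locking.
from collections import Counter
from threading import Thread

def count_items(chunk, local):
    """Per-thread work: accumulate into a private Counter (no locks)."""
    for item in chunk:
        local[item] += 1

def parallel_count(data, n_threads=4):
    chunks = [data[i::n_threads] for i in range(n_threads)]
    locals_ = [Counter() for _ in range(n_threads)]
    threads = [Thread(target=count_items, args=(c, l))
               for c, l in zip(chunks, locals_)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    merged = Counter()          # final reduction: merge the replicas
    for l in locals_:
        merged.update(l)
    return merged

counts = parallel_count(["a", "b", "a", "c", "a", "b"] * 100)
print(counts["a"])
```

The trade-off the paper studies is exactly this one: replication avoids all synchronization but multiplies memory use, while the locking variants share one reduction object at the cost of contention.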
Full-Scale Tests of NACA Cowlings
NASA Technical Reports Server (NTRS)
Theodorsen, Theodore; Brevoort, M J; Stickle, George W
1937-01-01
A comprehensive investigation has been carried on with full-scale models in the NACA 20-foot wind tunnel, the general purpose of which is to furnish information in regard to the physical functioning of the composite propeller-nacelle unit under all conditions of take-off, taxiing, and normal flight. This report deals exclusively with the cowling characteristics under condition of normal flight and includes the results of tests of numerous combinations of more than a dozen nose cowlings, about a dozen skirts, two propellers, two sizes of nacelle, as well as various types of spinners and other devices.
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Sorting on STAR. [CDC computer algorithm timing comparison]
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
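The two algorithm families compared above can be sketched in serial Python: Hoare's Quicksort, and Batcher's odd-even merge sort, whose O(N (log N)^2) compare-exchanges come in fixed, data-independent patterns, which is what maps well onto vector hardware like the STAR. This is illustrative code under that reading of the abstract, not the CDC STAR implementations; the network version assumes a power-of-two input length.

```python
# Serial sketches of Quicksort and Batcher's odd-even merge sort.

def quicksort(x):
    if len(x) <= 1:
        return x
    pivot = x[len(x) // 2]
    return (quicksort([v for v in x if v < pivot])
            + [v for v in x if v == pivot]
            + quicksort([v for v in x if v > pivot]))

def oddeven_merge(x):
    """Merge a list whose two halves are sorted (length a power of two)."""
    n = len(x)
    if n == 2:
        return [min(x), max(x)]
    evens = oddeven_merge(x[0::2])   # recursively merge even subsequence
    odds = oddeven_merge(x[1::2])    # ... and odd subsequence
    out = [evens[0]] + [None] * (n - 2) + [odds[-1]]
    for i in range(1, n // 2):       # final compare-exchange stage
        out[2 * i - 1] = min(odds[i - 1], evens[i])
        out[2 * i] = max(odds[i - 1], evens[i])
    return out

def batcher_sort(x):
    if len(x) <= 1:
        return x
    half = len(x) // 2
    return oddeven_merge(batcher_sort(x[:half]) + batcher_sort(x[half:]))

data = [5, 3, 8, 1, 7, 2, 6, 4]
print(quicksort(data))
print(batcher_sort(data))
```

Note how every comparison position in `oddeven_merge` depends only on indices, never on the data, so each stage can be executed as one vector compare-exchange, which is the bias in favor of Batcher's method that the timing study observed.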
Full wave-field reflection coefficient inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2007-12-01
This paper develops a Bayesian inversion for recovering multilayer geoacoustic (velocity, density, attenuation) profiles from a full wave-field (spherical-wave) seabed reflection response. The reflection data originate from acoustic time series windowed for a single bottom interaction, which are processed to yield reflection coefficient data as a function of frequency and angle. Replica data for inversion are computed using a wave number-integration model to calculate the full complex acoustic pressure field, which is processed to produce a commensurate seabed response function. To address the high computational cost of calculating short range acoustic fields, the inversion algorithms are parallelized and frequency averaging is replaced by range averaging in the forward model. The posterior probability density is interpreted in terms of optimal parameter estimates, marginal distributions, and credibility intervals. Inversion results for the full wave-field seabed response are compared to those obtained using plane-wave reflection coefficients. A realistic synthetic study indicates that the plane-wave assumption can fail, producing erroneous results with misleading uncertainty bounds, whereas excellent results are obtained with the full-wave reflection inversion.
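For reference, the plane-wave reflection coefficient that the full-wave result is compared against has a simple closed form for a single fluid-fluid interface (impedance contrast plus Snell's law). The sketch below uses invented water/sediment values for illustration; a real seabed inversion involves multilayer profiles and attenuation.

```python
# Plane-wave reflection coefficient at a fluid-fluid interface, the
# quantity the full-wave (spherical-wave) seabed response is compared
# against. Water/sediment values below are illustrative only.
import cmath
import math

def plane_wave_R(rho1, c1, rho2, c2, theta1_deg):
    """Reflection coefficient for incidence angle theta1 (from normal)."""
    th1 = math.radians(theta1_deg)
    sin2 = (c2 / c1) * cmath.sin(th1)      # Snell's law
    cos2 = cmath.sqrt(1 - sin2 ** 2)       # complex beyond the critical angle
    Z1 = rho1 * c1 / cmath.cos(th1)        # normal-direction impedances
    Z2 = rho2 * c2 / cos2
    return (Z2 - Z1) / (Z2 + Z1)

# Water over a sand-like sediment (illustrative values)
R0 = plane_wave_R(1000.0, 1500.0, 1900.0, 1650.0, 0.0)
print(abs(R0))
```

Beyond the critical angle the square root goes complex and |R| reaches unity for this lossless model; the paper's point is that for spherical wavefronts at short range this plane-wave idealization can fail.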
Algorithms for Disconnected Diagrams in Lattice QCD
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos; Yoon, Boram; Gupta, Rajan; Syritsyn, Sergey
2016-11-01
Computing disconnected diagrams in Lattice QCD (operator insertion in a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
Algorithmic Perspectives on Problem Formulations in MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.
A full-chip DSA correction framework
NASA Astrophysics Data System (ADS)
Wang, Wei-Long; Latypov, Azat; Zou, Yi; Coskun, Tamer
2014-03-01
The graphoepitaxy DSA process relies on lithographically created confinement wells to perform directed self-assembly in the thin film of the block copolymer. These self-assembled patterns are then etch transferred into the substrate. The conventional DUV immersion or EUV lithography is still required to print these confinement wells, and the lithographic patterning residual errors propagate to the final patterns created by the DSA process. DSA proximity correction (PC), in addition to OPC, is essential to obtain accurate confinement well shapes that resolve the final DSA patterns precisely. In this study, we proposed a novel correction flow that integrates our co-optimization algorithms, rigorous 2-D DSA simulation engine, and OPC tool. This flow enables us to optimize our process and integration as well as provides guidance for design optimization. We also showed that novel RET techniques such as DSA-aware assist feature generation can be used to improve the process window. The feasibility of our DSA correction framework on large layouts with promising correction accuracy has been demonstrated. A robust and efficient correction algorithm is also determined by rigorous verification studies. We also explored how knowledge of DSA natural pitches and lithography printing constraints provides good guidance for establishing DSA-friendly designs. Finally, application of our DSA full-chip computational correction framework to several real designs of contact-like holes is discussed. We also summarize the challenges associated with computational DSA technology.
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei
2016-09-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.
NASA Astrophysics Data System (ADS)
Lauroesch, T. J.; Edinger, J. R., Jr.; Lauroesch, J. T.
1996-01-01
The hypothesis that weather is influenced by the occurrence of the full moon has been explored with respect to cloud coverage. Statistical analysis of 44 years of data has shown no apparent correlation between a clear sky and the occurrence of the full moon.
Professionalism: Teachers Taking the Reins
ERIC Educational Resources Information Center
Helterbran, Valeri R.
2008-01-01
It is essential that teachers take a proactive look at their profession and themselves to strengthen areas of professionalism over which they have control. In this article, the author suggests strategies that include collaborative planning, reflectivity, growth in the profession, and the examination of certain personal characteristics.
Taking Stands for Social Justice
ERIC Educational Resources Information Center
Lindley, Lorinda; Rios, Francisco
2004-01-01
In this paper the authors describe efforts to help students take a stand for social justice in the College of Education at one predominantly White institution in the western Rocky Mountain region. The authors outline the theoretical frameworks that inform this work and the context of our work. The focus is on specific pedagogical strategies used…
ERIC Educational Resources Information Center
Rebell, Michael A.; Odden, Allan; Rolle, Anthony; Guthrie, James W.
2012-01-01
Educational Leadership talks with four experts in the fields of education policy and finance about how schools can weather the current financial crisis. Michael A. Rebell focuses on the recession and students' rights; Allan Odden suggests five steps schools can take to improve in tough times; Anthony Rolle describes the tension between equity and…
Experiencing discrimination increases risk taking.
Jamieson, Jeremy P; Koslov, Katrina; Nock, Matthew K; Mendes, Wendy Berry
2013-02-01
Prior research has revealed racial disparities in health outcomes and health-compromising behaviors, such as smoking and drug abuse. It has been suggested that discrimination contributes to such disparities, but the mechanisms through which this might occur are not well understood. In the research reported here, we examined whether the experience of discrimination affects acute physiological stress responses and increases risk-taking behavior. Black and White participants each received rejecting feedback from partners who were either of their own race (in-group rejection) or of a different race (out-group rejection, which could be interpreted as discrimination). Physiological (cardiovascular and neuroendocrine) changes, cognition (memory and attentional bias), affect, and risk-taking behavior were assessed. Significant participant race × partner race interactions were observed. Cross-race rejection, compared with same-race rejection, was associated with lower levels of cortisol, increased cardiac output, decreased vascular resistance, greater anger, increased attentional bias, and more risk-taking behavior. These data suggest that perceived discrimination is associated with distinct profiles of physiological reactivity, affect, cognitive processing, and risk taking, implicating direct and indirect pathways to health disparities.
ERIC Educational Resources Information Center
Fain, Paul
2008-01-01
College presidents have long gotten flak for refusing to take controversial stands on national issues. A large group of presidents opened an emotionally charged national debate on the drinking age. In doing so, they triggered an avalanche of news-media coverage and a fierce backlash. While the criticism may sting, the prime-time fracas may help…
Taking control of anorexia together.
Cole, Elaine
2015-02-27
Many people with anorexia receive inadequate treatment for what is a debilitating, relentless and life-threatening illness. In Lincolnshire an innovative nurse-led day programme is helping people stay out of hospital and take back control from the illness. Peer support is crucial to the programme's success.
Full Dynamic Compound Inverse Method: Extension to General and Rayleigh damping
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2017-04-01
The present paper takes from the original output-only identification approach named Full Dynamic Compound Inverse Method (FDCIM), recently published on this journal by the authors, and proposes an innovative, much enhanced version, in the description of more general forms of structural damping, including for classically adopted Rayleigh damping. This has led to an extended FDCIM formulation, which offers superior performance, on all the targeted identification parameters, namely: modal properties, Rayleigh damping coefficients, structural features at the element-level and input seismic excitation time history. Synthetic earthquake-induced structural response signals are adopted as input channels for the FDCIM approach, towards comparison and validation. The identification algorithm is run first on a benchmark 3-storey shear-type frame, and then on a realistic 10-storey frame, also by considering noise added to the response signals. Consistency of the identification results is demonstrated, with definite superiority of this latter FDCIM proposal.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
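The conventional SA loop described above (random neighbor, temperature-dependent acceptance of worse moves, shrinking search region) can be sketched in a few lines. This is a minimal baseline-SA illustration with invented parameter names and a toy 1-D objective, not the RBSA code.

```python
# Minimal conventional simulated-annealing loop: accept improvements
# always, accept worse moves with probability exp(-df/t), and shrink
# the sampling region as the temperature cools.
import math
import random

def anneal(objective, x0, lo, hi, t0=1.0, cooling=0.995, steps=4000, seed=1):
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    for _ in range(steps):
        radius = (hi - lo) * t / t0            # region shrinks with temperature
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        fc = objective(cand)
        # always accept improvements; accept worse moves with prob e^(-df/t)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# A 1-D objective with several local minima (toy example)
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
x_best, f_best = anneal(f, x0=4.0, lo=-5.0, hi=5.0)
print(x_best, f_best)
```

RBSA's contribution, per the abstract, is replacing this single sequential chain with a recursive-branching structure so that many such searches explore the parameter space in parallel.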
On Approximate Factorization Schemes for Solving the Full Potential Equation
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1997-01-01
An approximate factorization scheme based on the AF2 algorithm is presented for solving the three-dimensional full potential equation for the transonic flow about isolated wings. Two spatial discretization variations are presented, one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The present algorithm utilizes a C-H grid topology to map the flow field about the wing. One version of the AF2 iteration scheme is used on the upper wing surface and another slightly modified version is used on the lower surface. These two algorithm variations are then connected at the wing leading edge using a local iteration technique. The resulting scheme has improved linear stability characteristics and improved time-like damping characteristics relative to previous implementations of the AF2 algorithm. The presentation is highlighted with a grid refinement study and a number of numerical results.
Zero deadtime spectroscopy without full charge collection
Odell, D.M.C.; Bushart, B.S.; Harpring, L.J.; Moore, F.S.; Riley, T.N.
1998-10-01
The Savannah River Technology Center has built a remote gamma monitoring instrument which employs data sampling techniques rather than full charge collection to perform energy spectroscopy without instrument dead time. The raw, unamplified anode output of a photomultiplier tube is directly coupled to the instrument to generate many digital samples during the charge collection process, so that all pulse processing is done in the digital domain. The primary components are a free-running, 32 MSPS, 10-bit A/D, a field programmable gate array, FIFO buffers, and a digital signal processor (DSP). Algorithms for pulse integration, pile-up rejection, and other shape based criteria are being developed in DSP code for migration into the gate array. Spectra taken with a two-inch NaI detector have been obtained at rates as high as 59,000 counts per second without dead time with peak resolution at 662 keV measuring 7.3%.
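The digital-domain pulse processing described above can be illustrated with a toy sketch: trigger on a threshold crossing in the ADC sample stream, integrate the samples of the pulse, and reject pile-up when a fresh rising edge appears inside the integration window. Thresholds, window lengths, and the pile-up criterion here are invented for illustration, not the instrument's actual DSP algorithms.

```python
# Toy digital pulse processing: threshold trigger, windowed integration,
# and a simple pile-up veto (a second large jump inside the window).
def process_pulses(samples, trigger=10, window=8):
    good, rejected = [], []
    i, n = 0, len(samples)
    while i < n:
        # rising-edge trigger: first sample at or above threshold
        if samples[i] >= trigger and (i == 0 or samples[i - 1] < trigger):
            seg = samples[i:i + window]
            # pile-up: a fresh rising edge inside the integration window
            pileup = any(seg[j] > seg[j - 1] + trigger
                         for j in range(2, len(seg)))
            (rejected if pileup else good).append(sum(seg))
            i += window
        else:
            i += 1
    return good, rejected

clean = [0, 0, 12, 20, 15, 9, 5, 2, 1, 0]     # single decaying pulse
piled = [0, 12, 20, 8, 25, 14, 6, 3, 1, 0]    # second pulse mid-decay
good, rejected = process_pulses(clean + piled)
print(good, rejected)
```

In the real instrument this logic runs continuously on 32 MSPS samples, which is why accepted-pulse integration can proceed with no dead time.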
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
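The BR algorithm itself is too involved to reproduce from an abstract; as a hedged point of comparison, here is the plain unshifted QR iteration it is benchmarked against, via classical Gram-Schmidt on small dense matrices. Repeated A -> RQ drives a symmetric matrix toward diagonal form, with eigenvalues on the diagonal.

```python
# Sketch only: the production QR algorithm uses shifts and Householder
# bulge-chasing; this minimal version shows the A -> RQ similarity idea.

def qr_decompose(A):
    """Classical Gram-Schmidt QR of a square matrix (list of rows).
    Returns Q as a list of orthonormal columns and R upper triangular."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for k, q in enumerate(Q):
            R[k][j] = sum(qi * vi for qi, vi in zip(q, v))
            w = [wi - R[k][j] * qi for wi, qi in zip(w, q)]
        R[j][j] = sum(wi * wi for wi in w) ** 0.5
        Q.append([wi / R[j][j] for wi in w])
    return Q, R

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration; diagonal converges to the eigenvalues."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        # A <- R * Q (Q is stored column-wise: Q[j][k] is entry (k, j))
        A = [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return [A[i][i] for i in range(n)]
```

For [[2, 1], [1, 2]] the diagonal converges to the exact eigenvalues 3 and 1.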
Sleep Deprivation and Advice Taking
Häusser, Jan Alexander; Leder, Johannes; Ketturat, Charlene; Dresler, Martin; Faber, Nadira Sophie
2016-01-01
Judgements and decisions in many political, economic, or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by more or less qualified advisors. We asked whether sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants after one night of total sleep deprivation to participants after a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep-deprived participants show increased advice taking. An interaction of condition and competency of advisor, together with further post-hoc analyses, revealed that this effect was more pronounced for the medium-competency advisor than for the high-competency advisor. Furthermore, sleep-deprived participants benefited more from an advisor of high competency, in terms of stronger improvement in judgmental accuracy, than well-rested participants. PMID:27109507
Improving the medical 'take sheet'.
Reed, Oliver
2014-01-01
The GMC states that "Trainees in hospital posts must have well organised handover arrangements, ensuring continuity of patient care[1]". In the Belfast City Hospital throughout the day there can be multiple new medical admissions. These can be via the GP Unit, transfers for tertiary care, and transfers due to bed shortages in other hospitals. Over the course of 24 hours there can be up to four medical SHOs and three registrars who fill in the take sheet. Due to the variety of admission routes and the number of doctors looking after the medical take, information can be lost during handover between SHOs. In the current format there is little room to write key and relevant information on the medical take sheet about new and transferring patients. I felt that this handover sheet could be improved. An initial questionnaire demonstrated that 47% found the old proforma easy to use and 28.2% felt that it allowed them to identify sick patients. 100% of SHOs and Registrars surveyed felt that it could be improved from its current form. From feedback from my colleagues I created a new template and trialled it in the hospital. A repeat questionnaire demonstrated that 92.3% of responders felt the new format had improved medical handover and that 92.6% felt that it allowed safe handover most of the time/always. The success of this new proforma resulted in it being implemented on a permanent basis for new medical admissions and transfers to the hospital.
NASA Technical Reports Server (NTRS)
1930-01-01
Steam pile driver for foundation of Full-Scale Tunnel (FST). In 1924, George Lewis, Max Munk and Fred Weick began to discuss an idea for a wind tunnel large enough to test a full-scale propeller. Munk sketched out a design for a tunnel with a 20-foot test section. The rough sketches were presented to engineers at Langley for comment. Elliott Reid was especially enthusiastic and he wrote a memorandum in support of the proposed 'Giant Wind Tunnel.' At the end of the memorandum, he appended the recommendation that the tunnel test section should be increased to a 30-foot diameter so as to allow full-scale testing of entire airplanes (not just propellers). Reid's idea for a full-scale tunnel excited many at Langley but the funds and support were not available in 1924. Nonetheless, Elliott Reid's idea would eventually become reality. In 1928, NACA engineers began making plans for a full-scale wind tunnel. In February 1929, Congress approved of the idea and appropriated $900,000 for construction. Located just a few feet from the Back River, pilings to support the massive building's foundation had to be driven deep into the earth. This work began in the spring of 1929 and cost $11,293.22.
NASA Technical Reports Server (NTRS)
1930-01-01
Pile driving for foundation of Full-Scale Tunnel (FST). In 1924, George Lewis, Max Munk and Fred Weick began to discuss an idea for a wind tunnel large enough to test a full-scale propeller. Munk sketched out a design for a tunnel with a 20-foot test section. The rough sketches were presented to engineers at Langley for comment. Elliott Reid was especially enthusiastic and he wrote a memorandum in support of the proposed 'Giant Wind Tunnel.' At the end of the memorandum, he appended the recommendation that the tunnel test section should be increased to a 30-foot diameter so as to allow full-scale testing of entire airplanes (not just propellers). Reid's idea for a full-scale tunnel excited many at Langley but the funds and support were not available in 1924. Nonetheless, Elliott Reid's idea would eventually become reality. In 1928, NACA engineers began making plans for a full-scale wind tunnel. In February 1929, Congress approved of the idea and appropriated $900,000 for construction. Located just a few feet from the Back River, pilings to support the massive building's foundation had to be driven deep into the earth. This work began in the spring of 1929 and cost $11,293.22.
NASA Technical Reports Server (NTRS)
1930-01-01
General view of concrete column base for Full-Scale Tunnel (FST). In 1924, George Lewis, Max Munk and Fred Weick began to discuss an idea for a wind tunnel large enough to test a full-scale propeller. Munk sketched out a design for a tunnel with a 20-foot test section. The rough sketches were presented to engineers at Langley for comment. Elliott Reid was especially enthusiastic and he wrote a memorandum in support of the proposed 'Giant Wind Tunnel.' At the end of the memorandum, he appended the recommendation that the tunnel test section should be increased to a 30-foot diameter so as to allow full-scale testing of entire airplanes (not just propellers). Reid's idea for a full-scale tunnel excited many at Langley but the funds and support were not available in 1924. Nonetheless, Elliott Reid's idea would eventually become reality. In 1928, NACA engineers began making plans for a full-scale wind tunnel. In February 1929, Congress approved of the idea and appropriated $900,000 for construction. Work on the foundation began in the spring of 1929 and cost $11,293.22.
NASA Astrophysics Data System (ADS)
Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji
2011-03-01
There are a number of reports of seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and location of earthquakes when investigating seismicity pattern changes before large earthquakes. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan as test cases, namely, the Hyogo-ken Nanbu earthquake in 1995 (M_JMA 7.3), the Noto Hanto earthquake in 2007 (M_JMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (M_JMA 7.2). Because this algorithm uses several parameters to characterize the weighted coefficients, multiparameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this point, all surveyed parameters are selected empirically. We have to consider the physical meaning of the "best fit" parameters, such as their relation to ΔCFS, among others, in future analyses.
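An RTL-type statistic can be illustrated schematically: distance, time, and magnitude weights are combined into a single product whose dips flag quiescence. The functional forms, the absent trend removal, and the parameters r0 and t0 below are simplified placeholders for illustration, not the weights used in the paper.

```python
# Toy RTL-style product; the published RTL/RTM definitions include trend
# subtraction and different magnitude weighting, omitted here.
import math

def rtl_statistic(catalog, x, y, t, r0=50.0, t0=1.0):
    """catalog: list of (xi, yi, ti, mi) earthquakes. Combines weights for
    events before time t around location (x, y); a drop toward zero would
    indicate quiescence in this simplified picture."""
    R = T = L = 0.0
    for xi, yi, ti, mi in catalog:
        if ti >= t:
            continue
        r = math.hypot(x - xi, y - yi)
        R += math.exp(-r / r0)           # closer events weigh more
        T += math.exp(-(t - ti) / t0)    # recent events weigh more
        L += mi                          # larger events weigh more (simplified)
    return R * T * L
```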
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by the designer of the evolutionary computation algorithm, rather than by following a fixed algorithm rule. Algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to reliably achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamentals from this perspective. This paper is the first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms; the speed limitation in turn restricts their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
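SANDstorm's internal structure is not given in this abstract; the generic idea behind parallelizable hashing, namely hashing independent chunks concurrently and then hashing the concatenated digests, can be sketched with standard primitives. This two-level tree is an illustration only, not the SANDstorm construction.

```python
# Generic tree-hash sketch: leaf digests are independent (parallelizable),
# a single combining hash ties them together. SHA-256 stands in for the
# compression function; chunk_size is arbitrary.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data: bytes, chunk_size: int = 64) -> str:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:    # leaf hashes run concurrently
        digests = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
    return hashlib.sha256(b"".join(digests)).hexdigest()   # combining level
```

The result is deterministic regardless of how many workers compute the leaves, which is what makes the scheme safely parallel.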
ALFA: Automated Line Fitting Algorithm
NASA Astrophysics Data System (ADS)
Wesson, R.
2015-12-01
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
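ALFA's approach, optimizing the parameters of a synthetic spectrum with a genetic algorithm, can be caricatured on a single Gaussian emission line. The selection, crossover, and mutation operators and all rates below are invented for illustration and are much cruder than ALFA's.

```python
# Toy GA fit of one emission line's (amplitude, center); ALFA fits hundreds
# of catalog lines with uncertainty estimates, none of which is shown here.
import math, random

def gaussian(x, amp, mu, sigma=1.0):
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_line(xs, ys, generations=60, pop_size=30, seed=1):
    rng = random.Random(seed)

    def misfit(p):
        return sum((y - gaussian(x, p[0], p[1])) ** 2 for x, y in zip(xs, ys))

    pop = [(rng.uniform(0.0, 10.0), rng.uniform(min(xs), max(xs)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        pop = pop[:pop_size // 2]            # elitist truncation selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            pop.append(((a[0] + b[0]) / 2 + rng.gauss(0.0, 0.1),  # crossover
                        (a[1] + b[1]) / 2 + rng.gauss(0.0, 0.1))) # + mutation
    return min(pop, key=misfit)
```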
Cluster Algorithm Special Purpose Processor
NASA Astrophysics Data System (ADS)
Talapov, A. L.; Shchur, L. N.; Andreichenko, V. B.; Dotsenko, Vl. S.
We describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for a pure Ising model and for Ising model with random bonds. Its data also agree with the Nishimori exact results for spin glass. Only minor changes of the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.
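The Wolff single-cluster update that the processor hard-wires is compact in software. Below is a sketch for a small 2D Ising lattice with J = 1 and periodic boundaries; the lattice size and temperature are illustrative, and none of the hardware specifics are reproduced.

```python
# Wolff cluster update for the 2D Ising model: grow a cluster of aligned
# spins from a random seed with bond probability 1 - exp(-2*beta), flip it.
import math, random

def wolff_step(spins, L, beta, rng):
    """spins: dict mapping (x, y) -> +1/-1. Returns the flipped cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nb = (nx % L, ny % L)             # periodic boundaries
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:                      # flip the whole cluster
        spins[site] = -s0
    return len(cluster)
```

Near criticality the clusters become large, which is why a single Wolff step decorrelates configurations far faster than local spin flips.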
Photoacoustic imaging taking into account thermodynamic attenuation
NASA Astrophysics Data System (ADS)
Acosta, Sebastián; Montalto, Carlos
2016-11-01
In this paper we consider a mathematical model for photoacoustic imaging which takes into account attenuation due to thermodynamic dissipation. The propagation of acoustic (compressional) waves is governed by a scalar wave equation coupled to the heat equation for the excess temperature. We seek to recover the initial acoustic profile from knowledge of acoustic measurements at the boundary. We recognize that this inverse problem is a special case of boundary observability for a thermoelastic system. This leads to the use of control/observability tools to prove the unique and stable recovery of the initial acoustic profile in the weak thermoelastic coupling regime. This approach is constructive, yielding a solvable equation for the unknown acoustic profile. Moreover, the solution to this reconstruction equation can be approximated numerically using the conjugate gradient method. If certain geometrical conditions for the wave speed are satisfied, this approach is well-suited for variable media and for measurements on a subset of the boundary. We also present a numerical implementation of the proposed reconstruction algorithm.
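The reconstruction equation is solved with the conjugate gradient method; as a stand-in for the paper's operator equation, here is textbook CG on a small symmetric positive-definite system. The dense-list representation is for illustration only.

```python
# Textbook conjugate gradient for A x = b, A symmetric positive definite.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps, which is why it suits the well-conditioned reconstruction equations the paper derives.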
"Don't take diabetes for granted."
... please turn Javascript on. Feature: Diabetes Stories "Don't take diabetes for granted." Past Issues / Fall 2009 ... regularly, and take your medicines on time. Don't take diabetes for granted! Fall 2009 Issue: Volume ...
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five uses of algorithms in clinical education and patient care are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of clinical usefulness. Three objections to clinical algorithms are answered, including the objection that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
NASA Astrophysics Data System (ADS)
2007-08-01
New Wide Field Near-Infrared Imager for ESO's Very Large Telescope. Europe's flagship ground-based astronomical facility, the ESO VLT, has been equipped with a new 'eye' to study the Universe. Working in the near-infrared, the new instrument - dubbed HAWK-I - covers about 1/10th the area of the Full Moon in a single exposure. It is uniquely suited to the discovery and study of faint objects, such as distant galaxies or small stars and planets. [ESO PR Photo 36a/07: HAWK-I on the VLT] After three years of hard work, HAWK-I (High Acuity, Wide field K-band Imaging) saw First Light on Yepun, Unit Telescope number 4 of ESO's VLT, on the night of 31 July to 1 August 2007. The first images obtained impressively demonstrate its potential. "HAWK-I is a credit to the instrument team at ESO who designed, built and commissioned it," said Catherine Cesarsky, ESO's Director General. "No doubt, HAWK-I will allow rapid progress in very diverse areas of modern astronomy by filling a niche of wide-field, well-sampled near-infrared imagers on 8-m class telescopes." "It's wonderful; the instrument's performance has been terrific," declared Jeff Pirard, the HAWK-I Project Manager. "We could not have hoped for a better start, and look forward to scientifically exciting and beautiful images in the years to come." During this first commissioning period all instrument functions were checked, confirming that the instrument performance is at the level expected. Different astronomical objects were observed to test different characteristics of the instrument. For example, during one period of good atmospheric stability, images were taken towards the central bulge of our Galaxy. Many thousands of stars were visible over the field and allowed the astronomers to obtain stellar images only 3.4 pixels (0.34 arcsecond) wide, uniformly over the whole field of view, confirming the excellent optical quality of HAWK-I. [ESO PR Photos 36b-c/07: Nebula in Serpens]
NASA Technical Reports Server (NTRS)
1931-01-01
Full-Scale Tunnel (FST). Construction of balance housing. Smith DeFrance noted the need for this housing in his NACA TR No. 459: 'The entire floating frame and scale assembly is enclosed in a room for protection from air currents and the supporting struts are shielded by streamlined fairings which are secured to the roof of the balance room and free from the balance.'
An improved simulated annealing algorithm for standard cell placement
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1988-01-01
Simulated annealing is a general-purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements available from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.
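A toy version of annealing-based placement conveys the accept/reject structure: cells on a single row, cost equal to total wire length over a netlist of cell pairs. The netlist, schedule, and parameters below are made up; the real algorithm places cells on a 2D chip.

```python
# Simulated-annealing sketch for 1D cell placement. A proposed swap is
# always accepted if it helps, and accepted with Boltzmann probability
# exp(-dCost/T) if it hurts; T decays geometrically.
import math, random

def wirelength(order, nets):
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def anneal_placement(cells, nets, seed=0, t0=5.0, cooling=0.95, steps=2000):
    rng = random.Random(seed)
    order = list(cells)
    cost = wirelength(order, nets)
    t = t0
    for _ in range(steps):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]        # propose a swap
        new_cost = wirelength(order, nets)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                            # accept
        else:
            order[i], order[j] = order[j], order[i]    # reject: undo swap
        t *= cooling
    return order, cost
```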
NASA Technical Reports Server (NTRS)
Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.
2010-01-01
NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10(exp 5) preselected stellar targets. The majority of the Kepler field, comprising 4 x 10(exp 6) m_v < 20 sources, is sampled at much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler Team employ these images for astrometric and photometric reference but make the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely-available long-cadence legacy of photometric variation across a swathe of the Galactic disk.
Simplified calculation of distance measure in DP algorithm
NASA Astrophysics Data System (ADS)
Hu, Tao; Ren, Xian-yi; Lu, Yu-ming
2014-01-01
Distance measurement from a point to a segment is one of the determinants of the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of only the perpendicular distance, was proposed by Dan Sunday [1] to improve on the original DP algorithm. A new, efficient zone-divided distance measure method is proposed in this paper. Firstly, a rotated coordinate system is established based on the two endpoints of the curve. Secondly, the new coordinate value in the rotated system is computed for each point. Finally, the new coordinate values are used to divide points into three zones and to calculate distance: Manhattan distance is adopted in zones I and III, perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method can take full advantage of the computation results for the previous point. The calculation amount remains essentially unchanged for points in zones I and III, and is reduced significantly for points in zone II, which account for the highest proportion. Experimental results show that the proposed distance measure method improves the efficiency of the original DP algorithm.
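For reference, the baseline DP recursion with the plain point-to-segment distance can be sketched as follows. The paper's contribution, the zone-divided metric computed in a rotated frame with Manhattan distance in the end zones, is not reproduced here.

```python
# Classic Douglas-Peucker: keep the endpoints, recurse on the farthest
# point if it exceeds the tolerance, otherwise drop everything between.
import math

def seg_distance(p, a, b):
    """Distance from p to segment a-b (clamped to the endpoints)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, tol):
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    d_max, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = seg_distance(points[i], a, b)
        if d > d_max:
            d_max, idx = d, i
    if d_max <= tol:
        return [a, b]                       # everything within tolerance
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right                # merge, dropping duplicate pivot
```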
Algorithm for efficient elastic transport calculations for arbitrary device geometries
NASA Astrophysics Data System (ADS)
Mason, Douglas J.; Prendergast, David; Neaton, Jeffrey B.; Heller, Eric J.
2011-10-01
With the growth in interest in graphene, controlled nanoscale device geometries with complex form factors are now being studied and characterized. There is a growing need to understand new techniques to handle efficient electronic transport calculations for these systems. We present an algorithm that dramatically reduces the computational time required to find the local density of states and transmission matrix for open systems regardless of their topology or boundary conditions. We argue that the algorithm, which generalizes the recursive Green's function method by incorporating the reverse Cuthill-McKee algorithm for connected graphs, is ideal for calculating transmission through devices with multiple leads of unknown orientation and becomes a computational necessity when the input and output leads overlap in real space. This last scenario takes the Landauer-Buttiker formalism to general scattering theory in a computational framework that makes it tractable to perform full-spectrum calculations of the quantum scattering matrix in mesoscopic systems. We demonstrate the efficacy of these approaches on graphene stadiums, a system of recent scientific interest, and contribute to a physical understanding of Fano resonances which appear in these systems.
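The reverse Cuthill-McKee ordering the algorithm incorporates can be sketched as a degree-sorted BFS over an adjacency list; its effect shows up directly in the bandwidth of the reordered matrix. This is a minimal version with none of the paper's Green's-function machinery.

```python
# Minimal reverse Cuthill-McKee (RCM): BFS from a low-degree node,
# visiting neighbors in order of increasing degree, then reverse.
from collections import deque

def rcm_order(adj):
    """adj: {node: set(neighbors)}. Returns the RCM node ordering."""
    order, seen = [], set()
    for start in sorted(adj, key=lambda n: len(adj[n])):  # lowest degree first
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for nb in sorted(adj[node], key=lambda n: len(adj[n])):
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return order[::-1]                                    # the "reverse" in RCM

def bandwidth(adj, order):
    """Matrix bandwidth implied by labeling nodes in the given order."""
    pos = {n: i for i, n in enumerate(order)}
    return max(abs(pos[a] - pos[b]) for a in adj for b in adj[a])
```

On a path graph with scrambled labels, RCM recovers a labeling with bandwidth 1, which is what makes the subsequent recursive Green's-function sweep cheap.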
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. The software described here advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it naturally decomposes across parallel architectures.
Langevin simulation of the full QCD hadron mass spectrum on a lattice
Fukugita, M.; Oyanagi, Y.; Ukawa, A.
1987-08-01
Langevin simulation of quantum chromodynamics (QCD) on a lattice is carried out fully taking into account the effect of the quark vacuum polarization. It is shown that the Langevin method works well for full QCD and that simulation on a large lattice is practically feasible. A careful study is made of systematic errors arising from a finite Langevin time-step size. The magnitude of the error is found to be significant for light quarks, but the well-controlled extrapolation allows a separation of the values at the vanishing time-step size. As another important ingredient for the feasibility of Langevin simulation the advantage of the matrix inversion algorithm of the preconditioned conjugate residual method is described, as compared with various other algorithms. The results of a hadron-mass-spectrum calculation on a 9^3 x 18 lattice at β = 5.5 with the Wilson quark action having two flavors are presented. It is shown that the contribution of vacuum quark loops significantly modifies the hadron masses in lattice units, but that the dominant part can be absorbed into a shift of the gauge coupling constant at least for the ground-state hadrons. Some suggestion is also presented for the physical effect of vacuum quark loops for excited hadrons.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
NASA Technical Reports Server (NTRS)
1931-01-01
Modification of entrance cone Full-Scale Tunnel (FST). Smith DeFrance describes the entrance cone in NACA TR 459 as follows: 'The entrance cone is 75 feet in length and in this distance the cross section changes from a rectangle 72 by 110 feet to a 30 by 60 foot elliptic section. The area reduction in the entrance cone is slightly less than 5:1. The shape of the entrance cone was chosen to give as far as possible a constant acceleration to the air stream and to retain a 9-foot length of nozzle for directing the flow.' (p. 293)
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and on the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
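The blind algorithm itself alternates between image and blur estimates; to convey the iterative structure that makes such algorithms slow (and worth parallelizing), here is only the classic non-blind Richardson-Lucy update in 1D, with a known symmetric PSF assumed for brevity.

```python
# Richardson-Lucy sketch: multiplicative updates keep the estimate
# non-negative; each iteration costs two convolutions, which is the part
# a parallel implementation would distribute.

def convolve(x, h):
    """Circular 'same' convolution with a centered odd-length kernel."""
    n, c = len(x), len(h) // 2
    return [sum(x[(i + k - c) % n] * h[k] for k in range(len(h)))
            for i in range(n)]

def richardson_lucy(blurred, psf, iters=50):
    estimate = [1.0] * len(blurred)
    psf_flip = psf[::-1]                  # adjoint of the blur operator
    for _ in range(iters):
        reblurred = convolve(estimate, psf)
        ratio = [b / max(r, 1e-12) for b, r in zip(blurred, reblurred)]
        correction = convolve(ratio, psf_flip)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```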
Efficient 2d full waveform inversion using Fortran coarray
NASA Astrophysics Data System (ADS)
Ryu, Donghyun; Kim, Ahreum; Ha, Wansoo
2016-04-01
We developed a time-domain seismic inversion program using the coarray feature of the Fortran 2008 standard to parallelize the algorithm. We converted a 2D acoustic parallel full waveform inversion program based on the Message Passing Interface (MPI) to a coarray program and compared the performance of the two inversion programs. The results show that the coarray version of the waveform inversion program is slightly faster than the MPI version. The standard coarray feature lacks support for collective communication; however, since coarrays were introduced only recently, this can be improved in subsequent revisions of the standard. The parallel algorithm can also be applied to 3D seismic data processing.
NASA Technical Reports Server (NTRS)
1931-01-01
Wing and nacelle set-up in Full-Scale Tunnel (FST). The NACA conducted drag tests in 1931 on a P3M-1 nacelle which were presented in a special report to the Navy. Smith DeFrance described this work in the report's introduction: 'Tests were conducted in the full-scale wind tunnel on a five to four geared Pratt and Whitney Wasp engine mounted in a P3M-1 nacelle. In order to simulate the flight conditions the nacelle was assembled on a 15-foot span of wing from the same airplane. The purpose of the tests was to improve the cooling of the engine and to reduce the drag of the nacelle combination. Thermocouples were installed at various points on the cylinders and temperature readings were obtained from these by the power plants division. These results will be reported in a memorandum by that division. The drag results, which are covered by this memorandum, were obtained with the original nacelle condition as received from the Navy with the tail of the nacelle modified, with the nose section of the nacelle modified, with a Curtiss anti-drag ring attached to the engine, with a Type G ring developed by the N.A.C.A., and with a Type D cowling which was also developed by the N.A.C.A.' (p. 1)
Achieving and sustaining full employment.
Rosen, S M
1995-01-01
Human rights and public health considerations provide strong support for policies that maximize employment. Ample historical and conceptual evidence supports the feasibility of full employment policies. New factors affecting the labor force, the rate of technological change, and the globalization of economic activity require appropriate policies--international as well as national--but do not invalidate the ability of modern states to apply the measures needed. Among these the most important include: (1) systematic reduction in working time with no loss of income, (2) active labor market policies, (3) use of fiscal and monetary measures to sustain the needed level of aggregate demand, (4) restoration of equal bargaining power between labor and capital, (5) social investment in neglected and outmoded infrastructure, (6) accountability of corporations for decisions to shift or reduce capital investment, (7) major reductions in military spending, to be replaced by socially needed and economically productive expenditures, (8) direct public sector job creation, (9) reform of monetary policy to restore emphasis on minimizing unemployment and promoting full employment. None are without precedent in modern economies. The obstacles are ideological and political. To overcome them will require intellectual clarity and effective advocacy.
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
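The contrast between conflict-aware and conflict-free scheduling can be sketched in a few lines. The request fields, antenna names, and priority rule below are invented for illustration and are not the DSN's.

```python
# Conflict-aware sketch: every request is placed, and overlapping requests
# on the same antenna are recorded as conflict pairs rather than dropped.
# A conflict-free schedule is then derived by removing lower-priority members.

def conflict_aware_schedule(requests):
    """requests: list of (mission, antenna, start, end), sorted by priority
    (highest first). Returns (schedule, conflicts) with all requests kept."""
    schedule, conflicts = [], []
    for req in requests:
        mission, antenna, start, end = req
        for other in schedule:
            if other[1] == antenna and start < other[3] and other[2] < end:
                conflicts.append((other[0], mission))   # keep both, flag pair
        schedule.append(req)
    return schedule, conflicts

def conflict_free(schedule, conflicts):
    """Drop the lower-priority (later-listed) member of each conflict pair."""
    losers = {b for _, b in conflicts}
    return [r for r in schedule if r[0] not in losers]
```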
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which is poor at local search. The heuristic returned by the FSM can guide the GA towards good solutions; the idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
Software For Genetic Algorithms
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steve E.
1992-01-01
The SPLICER computer program is a genetic-algorithm software tool used to solve search and optimization problems. It provides the underlying framework and structure for building a genetic-algorithm application program. Written in Think C.
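SPLICER itself is written in Think C and its actual interfaces are not reproduced here; the following is a generic sketch of the kind of bit-string genetic algorithm such a framework supports, with truncation selection, one-point crossover, and bit-flip mutation. All parameter names and defaults are illustrative.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal bit-string GA: evolve a population toward high fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            for i in range(n_bits):                # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

For example, passing `fitness=sum` maximizes the number of ones in the string (the classic OneMax problem), which the skeleton solves easily at these settings.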
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.
Taking charge: a personal responsibility.
Newman, D M
1987-01-01
Women can adopt health practices that will help them to maintain good health throughout their various life stages. Women can take charge of their health by maintaining a nutritionally balanced diet, exercising, and using common sense. Women can also employ known preventive measures against osteoporosis, stroke, lung and breast cancer and accidents. Because women experience increased longevity and may require long-term care with age, the need for restructuring the nation's care system for the elderly becomes an important women's health concern. Adult day care centers, home health aides, and preventive education will be necessary, along with sufficient insurance to maintain quality care and self-esteem without depleting a person's resources. PMID:3120224
Taking advantage of natural biodegradation
Butler, W.A.; Bartlett, C.L.
1995-12-31
A chemical manufacturing facility in central New Jersey evaluated alternatives to address low levels of volatile organic compounds (VOCs) in groundwater. Significant natural attenuation of VOCs was observed in groundwater, and is believed to be the result of natural biodegradation, commonly referred to as intrinsic bioremediation. A study consisting of groundwater sampling and analysis, field monitoring, and transport modeling was conducted to evaluate and confirm this phenomenon. The primary conclusion that can be drawn from the study is that observed natural attenuation of VOCs in groundwater is due to natural biodegradation. Based on the concept that natural biodegradation will minimize contaminant migration, bioventing has been implemented to remove the vadose-zone source of VOCs to groundwater. Taking advantage of natural biodegradation has resulted in significant cost savings compared to implementing a conventional groundwater pump-and-treat system, while still protecting human health and the environment.
NASA Technical Reports Server (NTRS)
1929-01-01
Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
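The specific schemes of the paper are not reproduced here, but the flavor of a high-order finite difference approximation can be illustrated with the standard sixth-order central stencil for a first derivative, whose coefficients are (-1, 9, -45, 0, 45, -9, 1)/(60h):

```python
import math

def d1_central_6(f, x, h):
    """Sixth-order central finite-difference approximation of f'(x),
    using the standard 7-point stencil. Truncation error is O(h^6)."""
    return (-f(x - 3*h) + 9*f(x - 2*h) - 45*f(x - h)
            + 45*f(x + h) - 9*f(x + 2*h) + f(x + 3*h)) / (60*h)
```

Halving the step size should reduce the error by roughly 2^6 = 64, which is easy to verify numerically against a function with a known derivative.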
Quantum algorithms: an overview
NASA Astrophysics Data System (ADS)
Montanaro, Ashley
2016-01-01
Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.
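As a concrete taste of the quantum algorithms surveyed, the following is a plain-statevector simulation of Deutsch's algorithm (not taken from the paper), which decides with a single oracle call whether a one-bit function f is constant or balanced; a classical algorithm needs two evaluations.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def deutsch(f):
    """Simulate Deutsch's algorithm on a 2-qubit statevector.
    Basis ordering: index 2*x + y for |x>|y>."""
    state = np.kron([1.0, 0.0], [0.0, 1.0])     # start in |0>|1>
    state = np.kron(H, H) @ state               # Hadamard on both qubits
    U = np.zeros((4, 4))                        # oracle |x,y> -> |x, y XOR f(x)>
    for x in (0, 1):
        for y in (0, 1):
            U[2*x + (y ^ f(x)), 2*x + y] = 1.0
    state = U @ state
    state = np.kron(H, np.eye(2)) @ state       # Hadamard on first qubit
    p1 = state[2]**2 + state[3]**2              # P(first qubit measures 1)
    return "balanced" if p1 > 0.5 else "constant"
```

The first qubit ends deterministically in |0> for constant functions and |1> for balanced ones, illustrating the interference effects that quantum algorithms exploit.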
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
Gianola, M.
1988-10-01
For purposes of both final verification and optimization of TG 20 and TG 50 combustion systems, test programs have been carried out directly on full engines operating in the field, as well as in the test bench. These programs were carried out in two separate phases: the first one directed to determine the behavior at load by means of experimental data acquisition, including temperature distribution on the combustor exit plane for different burner arrangements, and the second one directed to optimize the ignition process and the acceleration sequence. This paper, after a brief description of the instrumentation used for each test, reports the most significant results burning both fuel oil and natural gas. Moreover, some peculiar operational problems are mentioned, along with their diagnosis and the corrections applied to the combustion system to solve them.
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full-Scale Tunnel (FST): 120-Foot Truss hoisting, one and two point suspension. In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293)
NASA Technical Reports Server (NTRS)
1930-01-01
Construction of Full-Scale Tunnel (FST). In November 1929, Smith DeFrance submitted his recommendations for the general design of the Full Scale Wind Tunnel. The last on his list concerned the division of labor required to build this unusual facility. He believed the job had five parts and described them as follows: 'It is proposed that invitations be sent out for bids on five groups of items. The first would be for one contract on the complete structure; second the same as first, including the erection of the cones but not the fabrication, since this would be more of a shipyard job; third would cover structural steel, cover, sash and doors, but not cones or foundation; fourth, foundations; and fifth, fabrication of cones.' DeFrance's memorandum prompted the NACA to solicit estimates from a large number of companies. Preliminary designs and estimates were prepared and submitted to the Bureau of the Budget and Congress appropriated funds on February 20, 1929. The main construction contract with the J.A. Jones Company of Charlotte, North Carolina was signed one year later on February 12, 1930. It was a peculiar structure as the building's steel framework is visible on the outside of the building. DeFrance described this in NACA TR No. 459: 'The entire equipment is housed in a structure, the outside walls of which serve as the outer walls of the return passages. The over-all length of the tunnel is 434 feet 6 inches, the width 222 feet, and the maximum height 97 feet. The framework is of structural steel....' (pp. 292-293).
NASA Astrophysics Data System (ADS)
Graf, Norman A.
2001-07-01
An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
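The paper's actual interface definitions are not reproduced here, but the shape of such a framework can be sketched as follows; the class and method names are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class Cell(ABC):
    """Abstract calorimeter cell: deposited energy plus a position."""
    @abstractmethod
    def energy(self): ...
    @abstractmethod
    def position(self): ...

class SimpleCell(Cell):
    """Concrete cell for illustration."""
    def __init__(self, e, pos):
        self._e, self._p = e, pos
    def energy(self):
        return self._e
    def position(self):
        return self._p

class Cluster:
    """A cluster aggregates cells; its energy is the sum and its
    position the energy-weighted centroid of its cells."""
    def __init__(self):
        self.cells = []
    def add(self, cell):
        self.cells.append(cell)
    def energy(self):
        return sum(c.energy() for c in self.cells)
    def position(self):
        e = self.energy()
        dims = len(self.cells[0].position())
        return tuple(sum(c.energy() * c.position()[i] for c in self.cells) / e
                     for i in range(dims))

class ClusteringAlgorithm(ABC):
    """Interface each clustering algorithm under study would implement,
    so algorithms can be swapped while the jet-reconstruction study
    stays unchanged."""
    @abstractmethod
    def cluster(self, cells):
        """Return a list of Clusters built from the input cells."""
```

Keeping `ClusteringAlgorithm` abstract is what lets the framework compare different clustering strategies against the same cells and the same downstream efficiency metrics.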
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
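The report's semidefinite-programming formulation is not reproduced here; as a simpler stand-in, the same bounded allocation problem can be sketched as box-constrained least squares solved by projected gradient descent. `A` maps individual actuator forces/torques to the total, and `lo`/`hi` are the actuator limits.

```python
import numpy as np

def allocate(A, w_cmd, lo, hi, iters=200):
    """Minimize ||A u - w_cmd||^2 subject to lo <= u <= hi via projected
    gradient descent (a simple stand-in for the report's semidefinite-
    programming approach, which also handles fuel-optimal tie-breaking)."""
    A = np.asarray(A, float)
    w_cmd = np.asarray(w_cmd, float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step from spectral norm
    u = np.clip(np.linalg.pinv(A) @ w_cmd, lo, hi)  # clipped least-squares start
    for _ in range(iters):
        u = np.clip(u - step * A.T @ (A @ u - w_cmd), lo, hi)
    return u
```

When the command is feasible the residual goes to zero; when it exceeds the actuators' combined authority, the solution saturates at the bounds, which is the qualitative behavior the constraints in the report enforce.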
NASA Astrophysics Data System (ADS)
Tramacere, A.; Paraficz, D.; Dubath, P.; Kneib, J.-P.; Courbin, F.
2016-12-01
We present a study of galaxy detection and shape classification using topometric clustering algorithms. We first use the DBSCAN algorithm to extract, from CCD frames, groups of adjacent pixels with significant fluxes, and we then apply the DENCLUE algorithm to separate the contributions of overlapping sources. The DENCLUE separation is based on the localization of patterns of local maxima, through an iterative algorithm which associates each pixel with the closest local maximum. Our main classification goal is to separate elliptical from spiral galaxies. We introduce new sets of features derived from the computation of geometrical invariant moments of the pixel-group shape and from the statistics of the spatial distribution of the DENCLUE local-maxima patterns. Ellipticals are characterized by a single group of local maxima, related to the galaxy core, while spiral galaxies have additional groups related to segments of spiral arms. We use two different supervised ensemble classification algorithms: Random Forest and Gradient Boosting. Using a sample of ≃24 000 galaxies taken from the Galaxy Zoo 2 main sample with spectroscopic redshifts, we test our classification against the Galaxy Zoo 2 catalogue. We find that features extracted from our pipeline give, on average, an accuracy of ≃93 per cent when testing on a test set with a size of 20 per cent of our full data set, with the features derived from the angular distribution of density-attractor local maxima ranking at the top in discrimination power.
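The first stage of the pipeline, grouping adjacent significant pixels, can be illustrated with a minimal connected-component flood fill. This is a sketch of the idea only; the paper uses the full DBSCAN algorithm (available, e.g., in scikit-learn), which additionally handles minimum-neighbor thresholds and noise labeling.

```python
def group_significant_pixels(image, threshold):
    """Group 8-connected pixels above `threshold` into sources.
    Returns one list of (row, col) coordinates per detected group."""
    rows, cols = len(image), len(image[0])
    seen, groups = set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] <= threshold or (r0, c0) in seen:
                continue
            stack, group = [(r0, c0)], []
            seen.add((r0, c0))
            while stack:                       # iterative flood fill
                r, c = stack.pop()
                group.append((r, c))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and (rr, cc) not in seen
                                and image[rr][cc] > threshold):
                            seen.add((rr, cc))
                            stack.append((rr, cc))
            groups.append(group)
    return groups
```

Each returned group corresponds to one candidate source; overlapping sources within a single group are what the DENCLUE stage then separates.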
Full-color holographic 3D printer
NASA Astrophysics Data System (ADS)
Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio
2003-05-01
A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelengths of red (λ=633nm), green (λ=533nm), and blue (λ=442nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing in a 2D array the multiple exposures made with these 3 wavelengths on each 250mm elementary hologram, moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take the approach of a digital processing technique based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step toward the realization of a natural color 3D image produced by the holographic color 3D printer.
Integrated powerhead demonstration full flow cycle development
NASA Astrophysics Data System (ADS)
Jones, J. Mathew; Nichols, James T.; Sack, William F.; Boyce, William D.; Hayes, William A.
1998-01-01
The Integrated Powerhead Demonstration (IPD) is a 1,112,000 N (250,000 lbf) thrust (at sea level) LOX/LH2 demonstration of a full flow cycle in an integrated system configuration. Aerojet and Rocketdyne are on contract to the Air Force Research Laboratory to design, develop, and deliver the required components, and to provide test support to accomplish the demonstration. Rocketdyne is on contract to provide a fuel and oxygen turbopump, a gas-gas injector, and system engineering and integration. Aerojet is on contract to provide a fuel and oxygen preburner, a main combustion chamber, and a nozzle. The IPD components are being designed with Military Spaceplane (MSP) performance and operability requirements in mind. These requirements include: lifetime >=200 missions, mean time between overhauls >=100 cycles, and a capability to throttle from 20% to 100% of full power. These requirements bring new challenges both in designing and testing the components. This paper will provide some insight into these issues. Lessons learned from operating and supporting the space shuttle main engine (SSME) have been reviewed and incorporated where applicable. The IPD program will demonstrate phase I goals of the Integrated High Payoff Rocket Propulsion Technology (IHPRPT) program while demonstrating key propulsion technologies that will be available for MSP concepts. The demonstration will take place on Test Stand 2A at the Air Force Research Laboratory at Edwards AFB. The component tests will begin in 1999 and the integrated system tests will be completed in 2002.
A full-scale STOVL ejector experiment
NASA Technical Reports Server (NTRS)
Barankiewicz, Wendy S.
1993-01-01
The design and development of thrust augmenting short take-off and vertical landing (STOVL) ejectors has typically been an iterative process. In this investigation, static performance tests of a full-scale vertical lift ejector were performed at primary flow temperatures up to 1560 R (1100 F). Flow visualization (smoke generators, yarn tufts and paint dots) was used to assess inlet flowfield characteristics, especially around the primary nozzle and end plates. Performance calculations are presented for ambient temperatures close to 480 R (20 F) and 535 R (75 F) which simulate 'seasonal' aircraft operating conditions. Resulting thrust augmentation ratios are presented as functions of nozzle pressure ratio and temperature. Full-scale experimental tests such as this are expensive, and difficult to implement at engine exhaust temperatures. For this reason the utility of using similarity principles -- in particular, the Munk and Prim similarity principle for isentropic flow -- was explored. At different primary temperatures, exit pressure contours are compared for similarity. A nondimensional flow parameter is then shown to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, then properly chosen performance parameters should be similar for both hot flow and cold flow model tests.
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for the reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post-reconstruction filter to remove out-of-focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as on a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of reconstructed particle-position accuracy, but produces more elongated particles. The major advantage of the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
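Computational refocusing itself can be illustrated with a 1D shift-and-sum toy model, which is not the paper's implementation: a point at a given depth appears displaced in each sub-aperture view in proportion to that view's aperture offset, so shifting each view back and averaging brings that depth into focus. The filtering step then keeps only pixels near the slice maximum.

```python
import numpy as np

def refocus(views, offsets, depth):
    """Shift-and-sum refocusing of 1D sub-aperture views (toy model).
    A point at `depth` appears in view k displaced by depth*offsets[k];
    undoing that shift and averaging focuses that depth."""
    acc = np.zeros(len(views[0]))
    for v, off in zip(views, offsets):
        acc += np.roll(v, -int(round(depth * off)))
    return acc / len(views)

def filter_in_focus(slice_, frac=0.9):
    """Post-reconstruction filter: keep only pixels close to the slice
    maximum, suppressing blurred out-of-focus contributions."""
    return np.where(slice_ >= frac * slice_.max(), slice_, 0.0)
```

Refocusing at the particle's true depth stacks its images into a sharp peak, while refocusing at the wrong depth spreads the energy across several low pixels that the filter removes.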
Full Stokes polarization imaging camera
NASA Astrophysics Data System (ADS)
Vedel, M.; Breugnot, S.; Lechocinski, N.
2011-10-01
Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, i.e. including the fourth parameter S3, related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter, such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP), and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a division-of-time polarimeter using a custom ferroelectric liquid crystal cell. A description of the method used to calculate the Data Reduction Matrix (DRM) linking intensity measurements and the Stokes parameters is given. The calibration was developed in order to optimize the condition number of the DRM. It also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of standard polarization parameters is described. We further present the standard features of the dedicated software that was developed to operate the camera. It provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted are presented, including indoor laboratory and outdoor measurements. This new camera will be a useful tool for many applications such as biomedical imaging, remote sensing, metrology, and material studies.
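The data-reduction step can be sketched generically: each analyzer state measures an intensity I = W·S for a known row of the measurement matrix W, and the Stokes vector is recovered by applying the pseudoinverse of W (the data reduction matrix). The four ideal analyzer states below are an illustrative assumption, not the camera's calibrated states.

```python
import numpy as np

def ideal_four_state_W():
    """Assumed ideal analyzer set: horizontal, vertical, +45 deg linear,
    and right-circular analyzers (each passes half the matched light)."""
    return 0.5 * np.array([[1.0,  1.0, 0.0, 0.0],
                           [1.0, -1.0, 0.0, 0.0],
                           [1.0,  0.0, 1.0, 0.0],
                           [1.0,  0.0, 0.0, 1.0]])

def reconstruct_stokes(W, intensities):
    """Recover S = (S0, S1, S2, S3) from I = W @ S via the pseudoinverse
    of W; pinv(W) plays the role of the data reduction matrix."""
    return np.linalg.pinv(W) @ np.asarray(intensities, float)

def dolp(S):
    return np.hypot(S[1], S[2]) / S[0]   # degree of linear polarization

def docp(S):
    return abs(S[3]) / S[0]              # degree of circular polarization
```

A well-conditioned W keeps noise amplification through the pseudoinverse small, which is why the calibration optimizes the condition number of the DRM.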
Wahbeh, V.N.; Clark, J.H.; Naydo, W.R.; Horii, R.S.
1993-09-01
The high-purity-oxygen activated sludge process will be used to expand secondary treatment capacity and improve water quality in Santa Monica Bay. The facility is operated by the city of Los Angeles Department of Public Works' Bureau of Sanitation. The overall Hyperion Full Secondary Project is 30% complete, including a new headworks, a new primary clarifier battery, an electrical switch yard, and additional support facilities. The upgrading of secondary facilities is 50% complete, and construction of the digester facilities, the waste-activated sludge thickening facility, and the second phase of the three-phase modification to existing primary clarifier batteries has just begun. The expansion program will provide a maximum monthly design capacity of 19,723 L/s (450 mgd). Hyperion's expansion program uses industrial treatment techniques rarely attempted in a municipal facility, particularly on such a large scale, including: a user-friendly intermediate pumping station featuring 3.8-m Archimedes screw pumps with a capacity of 5479 L/s each; space-efficient, high-purity-oxygen reactors; a one-of-a-kind, 777-Mg/d oxygen-generating facility incorporating several innovative features that not only save money and energy, but reduce noise; design improvements in 36 new final clarifiers to enhance settling and provide high effluent quality; and egg-shaped digesters to respond to technical and aesthetic design parameters.
NASA Technical Reports Server (NTRS)
1931-01-01
Modification of entrance cone of the Full-Scale Tunnel (FST). To the left are the FST guide vanes which Smith DeFrance described in NACA TR No. 459: 'The air is turned at the four corners of each return passage by guide vanes. The vanes are of the curved-airfoil type formed by two intersecting arcs with a rounded nose. The arcs were so chosen as to give a practically constant area through the vanes.' (p. 295) These vanes 'have chords of 3 feet 6 inches and are spaced at 0.41 of a chord length. By a proper adjustment of the angular setting of the vanes, a satisfactory velocity distribution has been obtained and no honeycomb has been found necessary.' (p. 295). Close inspection of the photograph will reveal a number of workers on the scaffolding. The heights were great and the work was quite dangerous. In October 1930, one construction worker working on the roof of the tunnel would die when he stepped off the planking to fetch a tool and fell through an unsupported piece of Careystone to the floor some 70 feet below.
NASA Technical Reports Server (NTRS)
1931-01-01
Construction of Full-Scale Tunnel (FST) balance. Smith DeFrance described the 6-component type balance in NACA TR No. 459 (which also includes a schematic diagram of the balance and its various parts). 'Ball and socket fittings at the top of each of the struts hold the axles of the airplane to be tested; the tail is attached to the triangular frame. These struts are secured to the turntable, which is attached to the floating frame. This frame rests on the struts (next to the concrete piers on all four corners), which transmit the lift forces to the scales (partially visible on the left). The drag linkage is attached to the floating frame on the center line and, working against a known counterweight, transmits the drag force to the scale (center, face out). The cross-wind force linkages are attached to the floating frame on the front and rear sides at the center line. These linkages, working against known counterweights, transmit the cross-wind force to scales (two front scales, face in). In the above manner the forces in three directions are measured and by combining the forces and the proper lever arms, the pitching, rolling, and yawing moments can be computed. The scales are of the dial type and are provided with solenoid-operated printing devices. When the proper test condition is obtained, a push-button switch is momentarily closed and the readings on all seven scales are recorded simultaneously, eliminating the possibility of personal errors.'
Microgravity Smoldering Combustion Takes Flight
NASA Technical Reports Server (NTRS)
1996-01-01
The Microgravity Smoldering Combustion (MSC) experiment lifted off aboard the Space Shuttle Endeavour in September 1995 on the STS-69 mission. This experiment is part of series of studies focused on the smolder characteristics of porous, combustible materials in a microgravity environment. Smoldering is a nonflaming form of combustion that takes place in the interior of combustible materials. Common examples of smoldering are nonflaming embers, charcoal briquettes, and cigarettes. The objective of the study is to provide a better understanding of the controlling mechanisms of smoldering, both in microgravity and Earth gravity. As with other forms of combustion, gravity affects the availability of air and the transport of heat, and therefore, the rate of combustion. Results of the microgravity experiments will be compared with identical experiments carried out in Earth's gravity. They also will be used to verify present theories of smoldering combustion and will provide new insights into the process of smoldering combustion, enhancing our fundamental understanding of this frequently encountered combustion process and guiding improvement in fire safety practices.
Apollo - Lunar Take Off Simulator
NASA Technical Reports Server (NTRS)
1961-01-01
Lunar Take Off Simulator: This simulator is used by scientists at the Langley Research Center ... to help determine human ability to control a lunar launch vehicle in vertical alignment during takeoff from the moon for rendezvous with a lunar satellite vehicle on the return trip to earth. The three-axis chair, a concept which allows the pilot to sit upright during launch, gives the navigator angular motion (pitch, roll, and yaw) cues as he operates the vehicle through a sidearm control system. The sight apparatus in front of the pilot's face enables him to align the vehicle on a course toward a chosen star, which will be followed as a guidance reference during the lunar launch. The pilot's right hand controls angular motions, while his left hand manipulates the thrust lever. The simulator is designed for operation inside an artificial planetarium, where a star field will be projected against the ceiling during 'flights'. The tests are part of an extensive NASA program at Langley in the study of problems relating to a manned lunar mission. (From a NASA Langley photo release caption.)
A new frame-based registration algorithm
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
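A basic building block for registering a frame of straight rods is a least-squares fit of a 3D line to the localized rod points. The sketch below (illustrative, not the paper's weighted formulation) uses the singular value decomposition of the centered point cloud, whose principal right-singular vector is the total-least-squares rod direction:

```python
import numpy as np

def fit_rod_axis(points):
    """Total-least-squares fit of a straight 3D line to rod points.
    Returns (centroid, unit_direction); the direction is the principal
    right-singular vector of the centered point cloud."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    return centroid, vt[0]   # direction is defined only up to sign
```

Fitting each rod this way, and then weighting the residuals as in the paper's least-squares scheme, yields the rod geometry needed for the registration solve without assuming N structures or an exact frame model.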
BN-600 full MOX core benchmark analysis.
Kim, Y. I.; Hill, R. N.; Grimm, K.; Rimpault, G.; Newton, T.; Li, Z. H.; Rineiski, A.; Mohanakrishan, P.; Ishikawa, M.; Lee, K. B.; Danilytchev, A.; Stogov, V.; Nuclear Engineering Division; International Atomic Energy Agency; CEA SERCO Assurance; China Inst. of Atomic Energy; Forschungszentrum Karlsruhe; Indira Gandhi Centre for Atomic Research; Japan Nuclear Cycle Development Inst.; Korea Atomic Energy Research Inst.; Inst. of Physics and Power Engineering
2004-01-01
As a follow-up to the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of the main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises from uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core configuration of interest (hybrid core vs. fully MOX-fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally, the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not significantly affect the simulation of the transient. The comparison of the transient analysis results led to the conclusion that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients for understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics, because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.
Addiss, John W.; Collins, Adam; Proud, William G.
2009-12-28
Digital Speckle Radiography (DSR) is a technique that allows full-field displacement maps in a plane within an opaque material to be determined. The displacements are determined by tracking the motions of small sub-sections of a deforming speckle pattern, produced by seeding an internal layer with lead and taking flash x-ray images. An improved DSR algorithm is discussed which can improve the often poor contrast in DSR images, such that the mean and variance of the speckle pattern are uniform. This considerably improves the correlation success relative to other similar algorithms for DSR experiments. A series of experiments involving the penetration of granular media by long-rod projectiles, and the improved correlation achieved using this new algorithm, are discussed.
Full-field vibrometry with digital Fresnel holography
Leval, Julien; Picart, Pascal; Boileau, Jean Pierre; Pascal, Jean Claude
2005-09-20
A setup that permits full-field vibration amplitude and phase retrieval with digital Fresnel holography is presented. Full reconstruction of the vibration is achieved with a three-step stroboscopic holographic recording, and an extraction algorithm is proposed. The finite temporal width of the illuminating light is considered in an investigation of the distortion of the measured amplitude and phase. In particular, a theoretical analysis is proposed and compared with numerical simulations that show good agreement. Experimental results are presented for a loudspeaker under sinusoidal excitation; the mean quadratic velocity extracted from amplitude evaluation under two different measuring conditions is presented. Comparison with time averaging validates the full-field vibrometer.
Integrated Resilient Aircraft Control Project Full Scale Flight Validation
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2009-01-01
Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and provide adaptive control as a viable design solution for increased aircraft resilience.
Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining
NASA Astrophysics Data System (ADS)
Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.
2016-11-01
This paper presents the results obtained by the implementation of an artificial immune algorithm to optimize standard multi-axis tool-paths applied to machine free-form surfaces. The investigation of its applicability was based on a full factorial experimental design addressing the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated by taking into consideration surface deviation and tool-path time, both objectives assessed directly from the computer-aided manufacturing environment. A standard sculptured part was developed from scratch according to its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formulated when dynamically inclining a toroidal end-mill and guiding it along the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluates. It was found that the artificial immune algorithm employed is able to attain optimal values for the inclination angles, thus reducing the complexity of this manufacturing process and ensuring full potential in multi-axis machining modelling operations for producing enhanced CNC manufacturing programs. Results suggested that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%.
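As a rough illustration of how an artificial immune (clonal selection) optimizer of this kind works, here is a minimal sketch; the function names, population sizes, and mutation schedule are all assumptions for illustration, not the authors' implementation:

```python
import math
import random

def clonalg_minimize(fitness, bounds, pop=20, n_clones=5, gens=60, seed=1):
    """Clonal-selection sketch: clone the best candidate solutions
    ("antibodies"), mutate the clones with a step that shrinks as the
    fitness improves, and keep the best survivors; random newcomers
    ("receptor editing") maintain diversity."""
    rng = random.Random(seed)
    def random_ab():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    P = [random_ab() for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        new = P[:pop // 2]                       # selection: keep the best half
        for ab in P[:5]:                         # clonal expansion of the elite
            sigma = 0.5 * math.sqrt(fitness(ab) + 1e-12)
            for _ in range(n_clones):
                clone = [min(max(x + rng.gauss(0.0, sigma), lo), hi)
                         for x, (lo, hi) in zip(ab, bounds)]
                new.append(clone)
        while len(new) < pop + 5 * n_clones:     # newcomers for diversity
            new.append(random_ab())
        P = new
    return min(P, key=fitness)
```

For the paper's problem, the two variables would be the lead and tilt angles and the fitness a weighted sum of surface deviation and tool-path time.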
Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements
Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel
2010-11-10
In this work, we have developed an algorithm different from the classical ones of phase-shifting interferometry. Those algorithms typically use constant or homogeneous phase displacements, and they can be quite accurate and insensitive to detuning when appropriate weight factors are taken in the formula used to recover the wrapped phase. However, such algorithms have not been considered with variable or inhomogeneous displacements. We have generalized these formulas, obtaining expressions for an implementation with variable displacements, along with ways to derive algorithms partially insensitive to these arbitrary error shifts.
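The generalization to arbitrary known shifts can be sketched as a per-pixel linear least-squares fit (a standard approach given here for illustration, not the authors' exact formulas):

```python
import numpy as np

def phase_from_shifts(I, delta):
    """Recover the wrapped phase from intensities with arbitrary known
    phase shifts delta_k.

    Model: I_k = A + B*cos(phi + delta_k). Expanding,
    I_k = A + (B cos phi) cos(delta_k) - (B sin phi) sin(delta_k),
    which is linear in the unknowns [A, B cos phi, B sin phi] and is
    solved by least squares for any set of (three or more) shifts.
    """
    delta = np.asarray(delta, float)
    M = np.column_stack([np.ones_like(delta), np.cos(delta), -np.sin(delta)])
    coef, *_ = np.linalg.lstsq(M, np.asarray(I, float), rcond=None)
    _offset, bc, bs = coef
    return np.arctan2(bs, bc)  # wrapped phase in (-pi, pi]
```

Because the shifts enter only through the design matrix M, they need not be constant steps; each pixel (or each frame) can use its own inhomogeneous displacement values.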
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
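A minimal sketch of such a generated dynamic-programming check under finite-trace semantics, with memory independent of the trace length (the tuple encoding of formulas and the operator set are assumptions for illustration):

```python
# Formulas as tuples: ('ap', name), ('not', f), ('and', f, g),
# ('next', f), ('until', f, g).  The trace is walked backwards while
# keeping only the subformula truth values at positions i and i+1,
# so memory is O(|formula|) and time is O(|trace| * |formula|).

def subformulas(f):
    """Postorder list of subformulas: children before parents."""
    out = []
    def walk(g):
        for child in g[1:]:
            if isinstance(child, tuple):
                walk(child)
        if g not in out:
            out.append(g)
    walk(f)
    return out

def check(formula, trace):
    """True iff the finite trace (a list of sets of atomic propositions)
    satisfies the formula at its first position."""
    subs = subformulas(formula)
    nxt = {}  # values at position i+1; empty past the end of the trace
    for event in reversed(trace):
        now = {}
        for g in subs:
            op = g[0]
            if op == 'ap':
                now[g] = g[1] in event
            elif op == 'not':
                now[g] = not now[g[1]]
            elif op == 'and':
                now[g] = now[g[1]] and now[g[2]]
            elif op == 'next':
                # finite-trace semantics: X f is false at the last event
                now[g] = nxt.get(g[1], False)
            elif op == 'until':
                now[g] = now[g[2]] or (now[g[1]] and nxt.get(g, False))
        nxt = now
    return nxt[formula]
```

A code generator in the spirit of the paper would unroll this inner loop into straight-line updates of one boolean per subformula, giving the linear-time, constant-memory algorithm described above.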
Primary Care Sports Medicine: A Full-Timer's Perspective.
ERIC Educational Resources Information Center
Moats, William E.
1988-01-01
This article describes the history and structure of a sports medicine facility, the patient care services it offers, and the types of injuries treated at the center. Opportunities and potentials for physicians who wish to enter the field of sports medicine on a full-time basis are described, as are steps to take to prepare to do so. (Author/JL)
Reconstruction algorithm for limited-angle diffraction tomography for microwave NDE
Paladhi, P. Roy; Klaser, J.; Tayebi, A.; Udpa, L.; Udpa, S.
2014-02-18
Microwave tomography is becoming a popular imaging modality in nondestructive evaluation and medicine. A commonly encountered challenge in tomography is that in many practical situations full 360° angular access is not possible, and with limited access the quality of the reconstructed image is compromised. This paper presents an approach for reconstruction with limited angular access in diffraction tomography. The algorithm takes advantage of redundancies in the image Fourier-space data obtained from diffracted field measurements and couples them to an error minimization technique using constrained total variation (CTV) minimization. Initial results from simulated data are presented to validate the approach.
Aerodynamics of a beetle in take-off flights
NASA Astrophysics Data System (ADS)
Lee, Boogeon; Park, Hyungmin; Kim, Sun-Tae
2015-11-01
In the present study, we investigate the aerodynamics of a beetle in its take-off flights based on the three-dimensional kinematics of the inner (hindwing) and outer (elytron) wings and the body postures, which are measured with three high-speed cameras at 2000 fps. To track the highly deformable wing motions, we distribute 21 morphological markers and use the modified direct linear transform algorithm to reconstruct the measured wing motions. To realize different take-off conditions, we consider two types of take-off flight: one from flat ground and the other from a vertical rod mimicking a tree branch. We first find that the elytron, which is flapped passively by the motion of the hindwing, also has non-negligible wing-kinematic parameters. With the ground, the flapping amplitude of the elytron is reduced and the hindwing changes its flapping angular velocity during the up- and downstrokes. On the other hand, the angle of attack on the elytron and the hindwing increases and decreases, respectively, due to the ground. These changes in the wing motion are critically related to aerodynamic force generation, which will be discussed in detail. Supported by a grant to the Bio-Mimetic Robot Research Center funded by the Defense Acquisition Program Administration (UD130070ID).
Taking medicines - what to ask your doctor
... medicine you take. Know what medicines, vitamins, and herbal supplements you take. Make a list of your medicines ... Will this medicine change how any of my herbal or dietary supplements work? Ask if your new medicine interferes with ...
Taking your blood pressure at home (image)
... sure you are taking your blood pressure correctly. Compare your home machine with the one at your ...
Take Care of Your Teeth and Gums
... En español Take Care of Your Teeth and Gums Browse Sections The Basics Overview Take Action! Brushing ... only in moderation. What causes tooth decay and gum disease? Plaque (“plak”) is a sticky substance that ...
Taking Medicines Safely: Ask Your Pharmacist
... this page please turn Javascript on. Feature: Taking Medicines Safely Ask Your Pharmacist Past Issues / Summer 2013 ... brand name medicine. What About Over-The-Counter Medicines? Be careful when taking an OTC drug. For ...
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning, and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of the defects on an optical element's surface and then extract the geometric size and position of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting, and bilinear interpolation, combined with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to digitally judge the surface defect information obtained from the SDES against the American military standard MIL-PRF-13830B, a standard-based digital evaluation algorithm is proposed, whose main component is a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates the defect concentration from the overlap of the weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
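The overlap-of-weight-regions idea can be sketched with a small accumulator matrix (a hypothetical simplification: square weight regions of a fixed `radius`; the actual standard prescribes its own region sizes):

```python
import numpy as np

def defect_concentration(defects, shape, radius):
    """Each defect stamps a square weight region into an accumulator
    matrix; the maximum cell value is the largest number of defects
    whose regions overlap, i.e. the local defect concentration."""
    acc = np.zeros(shape, int)
    for r, c in defects:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, shape[1])
        acc[r0:r1, c0:c1] += 1   # matrix slice update, no pairwise loops
    return acc.max()
```

Expressing the overlap test as slice additions on one matrix is what makes this kind of concentration check cheap compared with pairwise defect comparisons.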
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, is improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
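The two weighted averaging steps (objects → users → objects) can be sketched on a small rating matrix; the normalization details are an assumption for illustration, since the paper's exact weighting is not reproduced here:

```python
import numpy as np

def whc_scores(W, user):
    """Weighted heat conduction on a user-object bipartite network.

    W[u, o] is the edge weight (e.g. a rating; 0 means no link).
    Scores for the target user are obtained by two averaging steps,
    each a weighted mean over a node's neighbors; the plain HC
    algorithm is recovered with 0/1 weights.
    """
    A = (W > 0).astype(float)          # unweighted adjacency
    h0 = A[user]                       # unit heat on the user's objects
    # Step 1: each user receives the weighted mean over their objects
    user_heat = (W * h0).sum(axis=1) / np.maximum(W.sum(axis=1), 1e-12)
    # Step 2: each object receives the weighted mean over its users
    return (W.T * user_heat).sum(axis=1) / np.maximum(W.sum(axis=0), 1e-12)
```

Objects the target user has not collected receive nonzero scores through shared neighbors, and those scores form the recommendation list.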
Fairness algorithm of the resilient packet ring
NASA Astrophysics Data System (ADS)
Tu, Lai; Huang, Benxiong; Zhang, Fan; Wang, Xiaoling
2004-04-01
Resilient Packet Ring (RPR) is a newly developed Layer 2 access technology for high-speed networks with a ring topology. Its Fairness Algorithm (FA), one of the key technologies, is responsible for regulating each station's access to the ring. Since different methods emphasize different aspects, the RPR Working Group has tabled several proposals. This paper discusses two of them and proposes an improved algorithm, which can be seen as a generalization of the two schemes proposed in [1] and [2]. The new algorithm is distributed and uses a multi-level feedback mechanism. Each station calculates its own fair rate to regulate its access to the ring, and sends a fairness control message (FCM) with its bandwidth demand information to the whole ring. All stations keep a bandwidth demand image, which is updated periodically based on the information in received FCMs. This image is used for local fair-rate calculation to achieve fair access. In the properties study section of this paper, we compare our algorithm with the two existing ones, both theoretically and in scenario simulations. Our algorithm successfully resolves the lack of awareness of multiple congestion points in [1] and the weak fault tolerance of [2].
Taking Aspirin to Protect Your Heart
Toolkit No. 23 Taking Aspirin to Protect Your Heart What can taking aspirin do for me? If you are at high risk for or if you have heart disease, taking a low dose aspirin every day may help. Aspirin can also help ...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
50 CFR 18.11 - Prohibited taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PLANTS (CONTINUED) MARINE MAMMALS Prohibitions § 18.11 Prohibited taking. Except as otherwise provided in... subject to the jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction...
50 CFR 216.11 - Prohibited taking.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF COMMERCE MARINE MAMMALS REGULATIONS GOVERNING THE TAKING AND IMPORTING OF MARINE MAMMALS... jurisdiction of the United States to take any marine mammal on the high seas, or (b) Any person, vessel, or conveyance to take any marine mammal in waters or on lands under the jurisdiction of the United States, or...
Distributed Minimum Hop Algorithms
1982-01-01
acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in Pidgin Algol...precise behavior of the algorithm under these circumstances is described by the Pidgin Algol program in the appendix which is executed by each node. The...l) < N!(2) for each neighbor j, and thus by induction,J -1 N!(2-1) < n-i + (Z-1) + N!(Z-1), completing the proof. Algorithm D1 in Pidgin Algol It is
Full waveform inversion of solar interior flows
Hanasoge, Shravan M.
2014-12-10
The inference of flows of material in the interior of the Sun is a subject of major interest in helioseismology. Here, we apply techniques of full waveform inversion (FWI) to synthetic data to test flow inversions. In this idealized setup, we do not model seismic realization noise, keeping the focus entirely on the problem of whether a chosen supergranulation flow model can be seismically recovered. We define the misfit functional as a sum of L2-norm deviations in travel times between prediction and observation, as measured using short-distance filtered f and p1 modes and large-distance unfiltered p modes. FWI allows for the introduction of measurements of choice and iterative improvement of the background model, while monitoring the evolution of the misfit in all desired categories. Although the misfit is seen to uniformly reduce in all categories, convergence to the true model is very slow, possibly because the inversion is trapped in a local minimum. The primary source of error is inaccurate depth localization, which, due to density stratification, leads to wrong ratios of horizontal and vertical flow velocities ("cross talk"). In the present formulation, the lack of sufficient temporal frequency and spatial resolution makes it difficult to accurately localize flow profiles at depth. We therefore suggest that the most efficient way to discover the global minimum is to perform a probabilistic forward search, involving calculating the misfit associated with a broad range of models (generated, for instance, by a Monte Carlo algorithm) and locating the deepest minimum. Such techniques possess the added advantage of being able to quantify model uncertainty as well as realization noise (data uncertainty).
Modified Cholesky factorizations in interior-point algorithms for linear programming.
Wright, S.; Mathematics and Computer Science
1999-01-01
We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
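A minimal dense sketch of such a modified Cholesky factorization (the large-pivot fix shown is one common interior-point variant; production LP codes work on sparse matrices with fill-reducing orderings, which this sketch omits):

```python
import numpy as np

def modified_cholesky(A, delta=1e-12, big=1e64):
    """Cholesky with the pivot fix typical of interior-point LP codes.

    When a diagonal pivot becomes tiny or negative -- as happens for the
    ill-conditioned normal-equation matrices that arise near an interior
    point solution -- it is replaced by a very large value, which
    effectively zeroes the corresponding component of the computed step
    instead of aborting the factorization.
    """
    A = np.array(A, float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        pivot = A[j, j] - L[j, :j] @ L[j, :j]
        if pivot < delta:
            pivot = big          # skip the unreliable pivot
        L[j, j] = np.sqrt(pivot)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

On a well-conditioned matrix this reduces to the ordinary factorization; on a singular one it completes without breakdown, which is the robustness property the paper analyzes.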
Computational and methodological developments towards 3D full waveform inversion
NASA Astrophysics Data System (ADS)
Etienne, V.; Virieux, J.; Hu, G.; Jia, Y.; Operto, S.
2010-12-01
Full waveform inversion (FWI) is one of the most promising techniques for seismic imaging. It relies on a formalism taking into account every piece of information contained in the seismic data, as opposed to more classical techniques such as travel-time tomography. As a result, FWI is a high-resolution imaging process able to reach a spatial accuracy equal to half a wavelength. FWI is based on a local optimization scheme, and therefore its main limitation concerns the starting model, which has to be close enough to the real one for the inversion to converge to the global minimum. Another drawback of FWI is the computational resources required when considering models and frequencies of interest. The task becomes even more demanding when one attempts to perform the inversion using the elastic equation instead of the acoustic approximation. This is the reason why, until recently, most studies were limited to 2D cases. In the last few years, due to the increase in available computational power, FWI has attracted a lot of interest, and continuous efforts towards the inversion of 3D models have led to remarkable applications up to the continental scale. We investigate the computational burden induced by FWI in 3D elastic media and propose some strategic features leading to a reduction of the numerical cost while providing great flexibility in the inversion parametrization. First, in order to reduce the memory requirements, we developed our FWI algorithm in the frequency domain and take advantage of the wave-number redundancy in the seismic data to process a much reduced number of frequencies. To do so, we extract frequency solutions from time-marching techniques, which are efficient for 3D structures. Moreover, this frequency approach permits a multi-resolution strategy proceeding from low to high frequencies: the final model at one frequency is used as the starting model for the next frequency. This procedure partially overcomes the non-linear behavior of the inversion.
Distributed sensor data compression algorithm
NASA Astrophysics Data System (ADS)
Ambrose, Barry; Lin, Freddie
2006-04-01
Theoretically it is possible for two sensors to reliably send data at rates smaller than the sum of the necessary data rates for sending the data independently, essentially taking advantage of the correlation of sensor readings to reduce the data rate. In 2001, Caltech researchers Michelle Effros and Qian Zhao developed new techniques for data compression code design for correlated sensor data, which were published in a paper at the 2001 Data Compression Conference (DCC 2001). These techniques take advantage of correlations between two or more closely positioned sensors in a distributed sensor network. Given two signals, X and Y, the X signal is sent using standard data compression. The goal is to design a partition tree for the Y signal. The Y signal is sent using a code based on the partition tree. At the receiving end, if ambiguity arises when using the partition tree to decode the Y signal, the X signal is used to resolve the ambiguity. We have extended this work to increase the efficiency of the code search algorithms. Our results have shown that development of a highly integrated sensor network protocol that takes advantage of a correlation in sensor readings can result in 20-30% sensor data transport cost savings. In contrast, the best possible compression using state-of-the-art compression techniques that did not take into account the correlation of the incoming data signals achieved only 9-10% compression at most. This work was sponsored by MDA, but has very widespread applicability to ad hoc sensor networks, hyperspectral imaging sensors and vehicle health monitoring sensors for space applications.
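The partition-tree decoding idea — send Y coarsely and let the correlated reading X resolve the ambiguity at the receiver — can be illustrated with simple modular binning (a stand-in for the actual partition-tree codes of Effros and Zhao; constants are illustrative):

```python
import numpy as np

NUM_BINS = 8  # coset size: only log2(8) = 3 bits are sent for Y

def encode_y(y):
    """Send only the bin (coset) index of Y -- fewer bits than Y itself."""
    return y % NUM_BINS

def decode_y(coset, x, y_max):
    """Resolve the coset ambiguity with the correlated side reading X:
    among all values sharing this coset index, pick the one closest
    to X.  Decoding is exact whenever |x - y| < NUM_BINS / 2."""
    candidates = np.arange(coset, y_max + 1, NUM_BINS)
    return int(candidates[np.argmin(np.abs(candidates - x))])
```

The closer the two sensors' readings are correlated, the larger the bins (and the fewer the transmitted bits) can be without decoding errors, which is the source of the transport-cost savings quoted above.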
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... Incidental to Commercial Fishing Operations; Bottlenose Dolphin Take Reduction Plan AGENCY: National Marine... Bottlenose Dolphin Take Reduction Plan (BDTRP) and implementing regulations by permanently continuing medium... April 30. Members of the Bottlenose Dolphin Take Reduction Team (BDTRT) recommended these regulations...
Algorithms to Automate LCLS Undulator Tuning
Wolf, Zachary
2010-12-03
Automation of the LCLS undulator tuning offers many advantages to the project. Automation can substantially reduce the amount of time the tuning takes. Undulator tuning is fairly complex, and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written to automate LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ T·m. The second field integral must be below 50 × 10⁻⁶ T·m². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements; the field integrals are checked using long-coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing the tuning algorithms is described; the algorithms that determine the required number and placement of the shims are discussed in detail.
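The field-integral checks can be sketched as cumulative trapezoidal sums over Hall-probe samples (an illustrative computation, not the note's Matlab code; variable names are assumptions):

```python
import numpy as np

def field_integrals(z, B):
    """Cumulative first and second field integrals from Hall-probe data.

    I1(z) = integral of B dz'  (units T·m), related to the exit angle;
    I2(z) = integral of I1 dz' (units T·m²), related to the trajectory
    offset.  Both use the trapezoidal rule on the measured samples.
    """
    z, B = np.asarray(z, float), np.asarray(B, float)
    dz = np.diff(z)
    I1 = np.concatenate([[0.0], np.cumsum(0.5 * (B[1:] + B[:-1]) * dz)])
    I2 = np.concatenate([[0.0], np.cumsum(0.5 * (I1[1:] + I1[:-1]) * dz)])
    return I1, I2
```

The end values `I1[-1]` and `I2[-1]` are what the 40 × 10⁻⁶ T·m and 50 × 10⁻⁶ T·m² specifications above constrain, and the cumulative profiles drive the shim placement.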
Conjugate gradient algorithms using multiple recursions
Barth, T.; Manteuffel, T.
1996-12-31
Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
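For reference, the single short recursion of the classical method looks like this in the symmetric positive definite case (a textbook sketch, not tied to the unitary-matrix extensions discussed above):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Classical CG for symmetric positive definite A.  The direction
    vectors obey the single short recursion p <- r + beta * p, so only
    a fixed handful of length-n vectors ever needs to be stored."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(5 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # the short recursion for directions
        rs = rs_new
    return x
```

The multiple-recursion algorithms discussed above replace this single `p` update with a small fixed set of coupled recursions, enlarging the class of matrices (e.g. unitary and shifted unitary) for which such a memory-bounded iteration exists.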
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise are used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and the regional impact of global warming and rising sea levels on low-lying areas. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can assist science education. The algorithms used to generate these natural phenomena give scientists a different approach to analyzing our world: the random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
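The diamond-square step structure mentioned above can be sketched as follows (a generic implementation; the parameter names and the roughness schedule are illustrative choices):

```python
import random

def diamond_square(n, roughness=1.0, seed=None):
    """Generate a (2^n + 1) x (2^n + 1) heightmap.  Seeded corner values
    control the overall shape; the random perturbation halves at each
    level, giving the characteristic fractal terrain."""
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    for r in (0, size - 1):          # seed the four corners
        for c in (0, size - 1):
            h[r][c] = rng.uniform(-1, 1)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: centers of squares get the corner average
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (h[r - half][c - half] + h[r - half][c + half] +
                       h[r + half][c - half] + h[r + half][c + half]) / 4
                h[r][c] = avg + rng.uniform(-scale, scale)
        # square step: edge midpoints get the average of their neighbors
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                total, count = 0.0, 0
                for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < size and 0 <= cc < size:
                        total += h[rr][cc]
                        count += 1
                h[r][c] = total / count + rng.uniform(-scale, scale)
        step, scale = half, scale * 0.5
    return h
```

The seeded corners (and any cells fixed before the loop) are how an experimenter controls the gross shape of the terrain while the noise fills in plausible detail.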
Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel
2012-09-01
In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though the nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in segmenting large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before determining the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal by mean-shift and median filtering, edges are extracted with a Canny edge detector. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
Compressive full-waveform LIDAR with low-cost sensor
NASA Astrophysics Data System (ADS)
Yang, Weiyi; Ke, Jun
2016-10-01
Full-waveform LiDAR is a method that digitizes the complete waveform of backscattered pulses to obtain range information for multiple targets. To avoid the expensive sensors of conventional full-waveform LiDAR systems, a new system based on compressive sensing is presented in this paper. A non-coherent continuous-wave laser is modulated by an electro-optic modulator with pseudo-random sequences. A low-bandwidth detector and a low-bandwidth analog-to-digital converter are used to acquire the returned signal. An orthogonal matching pursuit (OMP) algorithm is employed to reconstruct the high-resolution range information.
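As a sketch of the reconstruction stage, a minimal orthogonal matching pursuit fits in a few lines of NumPy. The sensing matrix and the two-target range profile below are toy stand-ins, not the paper's actual modulation sequences:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.
    Greedily pick the column most correlated with the residual, then
    re-fit all selected coefficients by least squares."""
    residual, support = y.astype(float), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Toy setting: a pseudo-random sensing matrix and a range profile with
# two reflecting targets (2-sparse in the sample basis).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[12, 57]] = [1.0, 0.6]
x_hat = omp(A, A @ x_true, k=2)
```

With 40 pseudo-random measurements of a 100-sample, 2-sparse profile, the greedy selection recovers both target ranges exactly, which is the sense in which the low-bandwidth acquisition loses no range information.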
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
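One of the subalgorithm families described (shift each key, then mask with a set of isolating bits) can be illustrated as a brute-force search for a collision-free mapping. The key set and all names are hypothetical; this is a sketch of the idea, not the synthesized code itself:

```python
def find_shift_mask(keys, max_shift=32, mask_bits=8):
    """Search for a (shift, mask) pair such that (k >> shift) & mask is
    unique for every key, giving a collision-free membership test."""
    for shift in range(max_shift):
        for width in range(1, mask_bits + 1):
            mask = (1 << width) - 1
            slots = {(k >> shift) & mask for k in keys}
            if len(slots) == len(keys):  # every key lands in its own slot
                return shift, mask
    return None

keys = [0x10, 0x2F, 0x83, 0xC4]
shift, mask = find_shift_mask(keys)
table = {(k >> shift) & mask: k for k in keys}

def member(x):
    # No probing, no secondary hash: one shift, one mask, one compare.
    return table.get((x >> shift) & mask) == x
```

Once a `(shift, mask)` pair is found, membership is a single shift, mask, and compare, which is what makes the synthesized test constant-time with no search or secondary key generation.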
Quantum Algorithms, Symmetry, and Fourier Analysis
NASA Astrophysics Data System (ADS)
Denney, Aaron
I describe the role of symmetry in two quantum algorithms, with a focus on how that symmetry is made manifest by the Fourier transform. The Fourier transform can be considered in a wider context than the familiar one of functions on
Ultrametric Hierarchical Clustering Algorithms.
ERIC Educational Resources Information Center
Milligan, Glenn W.
1979-01-01
Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)
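The ultrametric inequality the abstract refers to is easy to state and check directly. A minimal sketch, with illustrative distance tables rather than data from the study:

```python
from itertools import permutations

def is_ultrametric(d, points):
    """Check the ultrametric inequality d(x,z) <= max(d(x,y), d(y,z))
    for every ordered triple of points."""
    return all(d[x][z] <= max(d[x][y], d[y][z])
               for x, y, z in permutations(points, 3))

# Cophenetic distances induced by a hierarchical clustering: every
# triangle is isosceles with the two longest sides equal.
coph = {"a": {"a": 0, "b": 2, "c": 4},
        "b": {"a": 2, "b": 0, "c": 4},
        "c": {"a": 4, "b": 4, "c": 0}}

# An ordinary metric need not be ultrametric: here 4 > max(2, 3).
plain = {"a": {"a": 0, "b": 2, "c": 3},
         "b": {"a": 2, "b": 0, "c": 4},
         "c": {"a": 3, "b": 4, "c": 0}}
```

`is_ultrametric(coph, "abc")` holds while `is_ultrametric(plain, "abc")` does not, which is exactly the property Johnson's result guarantees for single and complete linkage but that other hierarchical methods can violate.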
The Training Effectiveness Algorithm.
ERIC Educational Resources Information Center
Cantor, Jeffrey A.
1988-01-01
Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Branfoot, T
1994-01-01
Injured motorcyclists may have a damaged and unstable cervical spine (C-spine). This paper asks whether a helmet can be safely removed and, if so, how and when this should be done. The literature is reviewed and the recommendations of the Trauma Working Party of the Joint Colleges Ambulance Liaison Committee are presented. PMID:7921566
A new algorithm for agile satellite-based acquisition operations
NASA Astrophysics Data System (ADS)
Bunkheila, Federico; Ortore, Emiliano; Circi, Christian
2016-06-01
Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first, it performs a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second, it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of Periodic Sun-Synchronous Orbits.
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out repeatedly from the ROI that contains the cornea and pupil (but not from the rest of the image). One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult and proposes a technique for addressing it.
A limited-memory algorithm for bound-constrained optimization
Byrd, R.H.; Peihuang, L.; Nocedal, J. |
1996-03-01
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
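The gradient projection idea at the heart of the method can be sketched without the limited-memory BFGS machinery. The following is a bare projected-gradient illustration under assumed simple bounds, not the authors' L-BFGS-B implementation:

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, lr=0.1, iters=500):
    """Gradient projection for min f(x) subject to lower <= x <= upper.
    A bare-bones sketch of the projection step only; the paper's algorithm
    additionally builds a limited-memory BFGS model of the Hessian to
    choose much better search directions."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - lr * grad(x), lower, upper)  # step, then project
    return x

# Quadratic whose unconstrained minimum (3, -2) lies outside the box [0, 1]^2,
# so the constrained solution sits on the boundary at (1, 0).
grad = lambda x: 2 * (x - np.array([3.0, -2.0]))
x_star = projected_gradient(grad, [0.5, 0.5], 0.0, 1.0)
```

The projection (here a simple `np.clip`) is what keeps every iterate feasible; the limited-memory approximation in the paper replaces the raw gradient step with a quasi-Newton direction while preserving this structure.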
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
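The viscous friction-like alignment term described above can be sketched as follows; `radius` and `gain` are illustrative parameters, not values from the paper:

```python
import numpy as np

def alignment_term(velocities, positions, radius=5.0, gain=0.5):
    """Viscous friction-like alignment: each agent feels an acceleration
    proportional to the mean velocity difference of the neighbours found
    within `radius`, pulling neighbouring velocities parallel."""
    n = len(positions)
    acc = np.zeros_like(velocities)
    for i in range(n):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        nbrs = (dists < radius) & (np.arange(n) != i)
        if nbrs.any():
            acc[i] = gain * (velocities[nbrs].mean(axis=0) - velocities[i])
    return acc

# Two nearby agents with crossed velocities, plus one isolated agent.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [100.0, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
acc = alignment_term(vel, pos)
```

The two nearby agents are accelerated toward each other's velocity while the isolated agent is untouched; in the paper this damping is what suppresses the oscillations introduced by sensor noise and communication delay.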
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.
Simultaneous Inversion of Full Data Bandwidth by Tomographic Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Almomin, A. A.; Biondi, B. C.
2015-12-01
The convergence of full-waveform inversion can be improved by extending the velocity model along either the subsurface-offset axis or the time-lag axis. The extension of the velocity model along the time-lag axis enables us to linearly model large time shifts caused by velocity perturbations. This linear modeling was based on a new linearization of the scalar wave equation in which perturbation of the extended slowness squared was convolved in time with the second time derivative of the background wavefield. The linearization was accurate for reflected events and transmitted events. We determined that it can effectively model conventional reflection data as well as modern long-offset data containing diving waves. It also enabled the simultaneous inversion of reflections and diving waves, even when the starting velocity model was far from being accurate. We solved the optimization problem related to the inversion with a nested algorithm. The inner iterations were based on the proposed linearization and on a mixing of scales between the short- and long-wavelength components of the velocity model. We significantly improved the convergence rate by preconditioning the extended model to balance the amplitude-versus-angle behavior of the wave-equation and by imposing wavelength continuation of the gradient in the outer loop. Numerical tests performed on synthetic data modeled on the Marmousi model and on Chevron's FWI blind-test data demonstrated the global convergence properties as well as the high-resolution potential of the proposed method.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
NASA Astrophysics Data System (ADS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
Improved algorithms for parsing ESLTAGs: a grammatical model suitable for RNA pseudoknots.
Rajasekaran, Sanguthevar; Al Seesi, Sahar; Ammar, Reda A
2010-01-01
Formal grammars have been employed in biology to solve various important problems. In particular, grammars have been used to model and predict RNA structures. Two such grammars are Simple Linear Tree Adjoining Grammars (SLTAGs) and Extended SLTAGs (ESLTAGs). The performance of techniques that employ grammatical formalisms critically depends on the efficiency of the underlying parsing algorithms. In this paper, we present efficient algorithms for parsing SLTAGs and ESLTAGs. Our algorithm for SLTAG parsing takes O(min{m,n⁴}) time and O(min{m,n⁴}) space, where m is the number of entries that will ever be made in the matrix M (that is normally used by TAG parsing algorithms). Our algorithm for ESLTAG parsing takes O(min{m,n⁴}) time and O(min{m,n⁴}) space. We show that these algorithms perform better, in practice, than the algorithms of Uemura et al.
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines (SVMs). The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. SVMs have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Implementation of Parallel Algorithms
1993-06-30
their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization-based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Parallel Wolff Cluster Algorithms
NASA Astrophysics Data System (ADS)
Bae, S.; Ko, S. H.; Coddington, P. D.
The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
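A serial version of the Wolff single-cluster update is short; the parallel implementations discussed in the paper distribute exactly this cluster growth. A minimal sketch for the 2D Ising model (illustrative, not the authors' code):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """Grow and flip one Wolff cluster on an L x L Ising lattice with
    periodic boundaries. An aligned neighbour joins the cluster with
    probability p = 1 - exp(-2*beta), which makes the update rejection-free."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster, frontier = {seed}, [seed]
    while frontier:
        x, y = frontier.pop()
        for nbr in (((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L)):
            if nbr not in cluster and spins[nbr] == s0 and rng.random() < p_add:
                cluster.add(nbr)
                frontier.append(nbr)
    for site in cluster:  # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

L, rng = 8, random.Random(1)
spins = {(x, y): rng.choice((-1, 1)) for x in range(L) for y in range(L)}
size = wolff_step(spins, L, beta=0.6, rng=rng)
```

The irregular, data-dependent shape of `cluster` is precisely what makes an efficient parallel decomposition of this loop non-trivial.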
Men: Take Charge of Your Health
... they get treatment. Ask your doctor about taking aspirin every day. If you are age 50 to 59, taking aspirin every day can lower your risk of heart ... cancer. Talk with your doctor about whether daily aspirin is right for you. Next section Cost and ...
Take Home Tests: An Experimental Study.
ERIC Educational Resources Information Center
Weber, Larry J.; And Others
1983-01-01
Data gathered on three kinds of tests (closed-book, open-book, and take-home) covered possible differential achievement on knowledge and cognitive-skill items, student attitudes, and cheating. On take-home tests, scores on knowledge items were found to be higher, anxiety level was lower, and cheating was not a problem. (MSE)
Test Taking Skills. A SORD Project.
ERIC Educational Resources Information Center
Phillips, Art
This pamphlet, prepared by the Southern Oregon Research and Development Committee (SORD), offers suggestions for students and teachers for improving students' test-taking skills. Among the skills that students should possess to be prepared for taking tests are knowing the purposes of testing, having experience and practice in testing and following…
50 CFR 18.11 - Prohibited taking.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 9 2014-10-01 2014-10-01 false Prohibited taking. 18.11 Section 18.11... PLANTS (CONTINUED) MARINE MAMMALS Prohibitions § 18.11 Prohibited taking. Except as otherwise provided in subpart C, D, or H of this part 18, or part 403, it is unlawful for: (a) Any person, vessel, or...
Take Steps Toward a Healthier Life | Poster
The National Institutes of Health (NIH) is promoting wellness by encouraging individuals to take the stairs. In an effort to increase participation in this program, NIH has teamed up with Occupational Health Services (OHS). OHS is placing NIH-sponsored “Take the Stairs” stickers on stair entrances, stair exits, and elevators.
Does Anticipation Training Affect Drivers' Risk Taking?
ERIC Educational Resources Information Center
McKenna, Frank P.; Horswill, Mark S.; Alexander, Jane L.
2006-01-01
Skill and risk taking are argued to be independent and to require different remedial programs. However, it is possible to contend that skill-based training could be associated with an increase, a decrease, or no change in risk-taking behavior. In 3 experiments, the authors examined the influence of a skill-based training program (hazard…
Cost Discrepancy, Signaling, and Risk Taking
ERIC Educational Resources Information Center
Lemon, Jim
2005-01-01
If risk taking is in some measure a signal to others by the person taking risks, the model of "costly signaling" predicts that the more the apparent cost of the risk to others exceeds the perceived cost of the risk to the risk taker, the more attractive that risk will be as a signal. One hundred and twelve visitors to youth…
Taking Decisions: Assessment for University Entry
ERIC Educational Resources Information Center
Plassmann, Sibylle; Zeidler, Beate
2014-01-01
Language testing means taking decisions: about the test taker's results, but also about the test construct and the measures taken in order to ensure quality. This article takes the German test "telc Deutsch C1 Hochschule" as an example to illustrate this decision-making process in an academic context. The test is used for university…
TakeTwo: an indexing algorithm suited to still images with known crystal parameters
Ginn, Helen Mary; Roedig, Philip; Kuo, Anling; Evans, Gwyndaf; Sauter, Nicholas K.; Ernst, Oliver; Meents, Alke; Mueller-Werkmeister, Henrike; Miller, R. J. Dwayne; Stuart, David Ian
2016-01-01
The indexing methods currently used for serial femtosecond crystallography were originally developed for experiments in which crystals are rotated in the X-ray beam, providing significant three-dimensional information. On the other hand, shots from both X-ray free-electron lasers and serial synchrotron crystallography experiments are still images, in which the few three-dimensional data available arise only from the curvature of the Ewald sphere. Traditional synchrotron crystallography methods are thus less well suited to still image data processing. Here, a new indexing method is presented with the aim of maximizing information use from a still image given the known unit-cell dimensions and space group. Efficacy for cubic, hexagonal and orthorhombic space groups is shown, and for images showing some evidence of diffraction, the indexing rate ranged from 90% (hexagonal space group) to 151% (cubic space group). Here, the indexing rate refers to the number of lattices indexed per image. PMID:27487826
Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao
2015-07-01
Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of low-mobility parallel robots, few research efforts are directed towards asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first-order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with the goal of minimizing the global comprehensive performance index. The dimension parameters that give the mechanism optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for prototype building of this kind of parallel robot.
An Algorithm for Linearly Constrained Nonlinear Programming Problems.
1980-01-01
AN ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS, Mokhtar S. Bazaraa and Jamie J. Goode. In this paper an algorithm for solving a linearly... distance programming, as in the works of Bazaraa and Goode [2], and Wolfe [16], can be used for solving this problem. Special methods that take advantage of... Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966. 2. M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a
Full reconstruction of a 14-qubit state within four hours
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhong, Han-Sen; Tian, Ye; Dong, Daoyi; Qi, Bo; Li, Li; Wang, Yuanlong; Nori, Franco; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can
2016-08-01
Full quantum state tomography (FQST) plays a unique role in the estimation of the state of a quantum system without a priori knowledge or assumptions. Unfortunately, since FQST requires informationally (over)complete measurements, both the number of measurement bases and the computational complexity of data processing suffer exponential growth with the size of the quantum system. A 14-qubit entangled state has already been experimentally prepared in an ion trap, yet the data processing capability for FQST of a 14-qubit state has seemed far from practical. In this paper, the computational capability of FQST is pushed forward to reconstruct a 14-qubit state with a run time of only 3.35 hours using the linear regression estimation (LRE) algorithm, even when informationally overcomplete Pauli measurements are employed. The computational complexity of the LRE algorithm is first reduced from ∼10^19 to ∼10^15 for a 14-qubit state by dropping all the zero elements, and its computational efficiency is further improved by fully exploiting the parallelism of the LRE algorithm with parallel Graphic Processing Unit (GPU) programming. Our result demonstrates the effectiveness of using parallel computation to speed up the postprocessing for FQST, and can play an important role in quantum information technologies with large quantum systems.
Fast Density Inversion Solution for Full Tensor Gravity Gradiometry Data
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Wei, Xiaohui; Huang, Danian
2016-02-01
We modify the classical preconditioned conjugate gradient method for full tensor gravity gradiometry data. The resulting parallelized algorithm is implemented on a cluster to achieve rapid density inversions for various scenarios, overcoming the problems of computation time and memory requirements caused by too many iterations. The proposed approach is mainly based on parallel programming using the Message Passing Interface, supplemented by Open Multi-Processing. Our implementation is efficient and scalable, enabling its use with large-scale data. We consider two synthetic models and real survey data from Vinton Dome, US, and demonstrate that our solutions are reliable and feasible.
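The classical method being parallelized can be sketched in miniature. Below is a minimal, pure-Python Jacobi-preconditioned conjugate gradient solver; it is an illustrative sketch of the standard PCG iteration, not the authors' MPI/OpenMP implementation, and all names are invented:

```python
def pcg(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, Jacobi preconditioning."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x (x starts at 0)
    M_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner: diag(A)^-1
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

In the inversion setting described above, the matrix-vector product inside the loop dominates the cost and is the natural piece to distribute across cluster nodes.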
How taking photos increases enjoyment of experiences.
Diehl, Kristin; Zauberman, Gal; Barasch, Alixandra
2016-08-01
Experiences are vital to the lives and well-being of people; hence, understanding the factors that amplify or dampen enjoyment of experiences is important. One such factor is photo-taking, which has gone unexamined by prior research even as it has become ubiquitous. We identify engagement as a relevant process that influences whether photo-taking will increase or decrease enjoyment. Across 3 field and 6 lab experiments, we find that taking photos enhances enjoyment of positive experiences across a range of contexts and methodologies. This occurs when photo-taking increases engagement with the experience, which is less likely when the experience itself is already highly engaging, or when photo-taking interferes with the experience. As further evidence of an engagement-based process, we show that photo-taking directs greater visual attention to aspects of the experience one may want to photograph. Lastly, we also find that this greater engagement due to photo-taking results in worse evaluations of negative experiences.
A heterogeneous nonlinear attenuating full-wave model of ultrasound.
Pinton, Gianmarco F; Dahl, Jeremy; Rosenzweig, Stephen; Trahey, Gregg E
2009-03-01
A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). Three-dimensional solutions of the equation are verified with water tank measurements of a commercial diagnostic ultrasound transducer and are shown to be in excellent agreement in terms of the fundamental and harmonic acoustic fields and the power spectrum at the focus. The linear and nonlinear components of the algorithm are also verified independently. In the linear nonattenuating regime solutions match results from Field II, a well established software package used in transducer modeling, to within 0.3 dB. Nonlinear plane wave propagation is shown to closely match results from the Galerkin method up to 4 times the fundamental frequency. In addition to thermoviscous attenuation we present a numerical solution of the relaxation attenuation laws that allows modeling of arbitrary frequency dependent attenuation, such as that observed in tissue. A perfectly matched layer (PML) is implemented at the boundaries with a numerical implementation that allows the PML to be used with high-order discretizations. A -78 dB reduction in the reflected amplitude is demonstrated. The numerical algorithm is used to simulate a diagnostic ultrasound pulse propagating through a histologically measured representation of human abdominal wall with spatial variation in the speed of sound, attenuation, nonlinearity, and density. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beam-forming algorithm to generate an image. The resulting harmonic image exhibits characteristic improvement in lesion boundary definition and contrast when compared with the fundamental image. We demonstrate a mechanism of harmonic image quality
Applying the take-grant protection model
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The models are extended to include theft of rights and sharing information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.
NASA Technical Reports Server (NTRS)
Hall, Albert W.
1961-01-01
The take-off distances over a 35-foot obstacle have been determined for a supersonic transport configuration characterized by a low maximum lift coefficient at a high angle of attack and by high drag due to lift. These distances were determined analytically by means of an electronic digital computer. The effects of rotation speed, rotation angle, and rotation time were determined. A few configuration changes were made to determine the effects of thrust-weight ratio, wing loading, maximum lift coefficient, and induced drag on the take-off distance. The required runway lengths based on Special Civil Air Regulation No. SR-422B were determined for various values of rotation speed and compared with those based on full engine power. Increasing or decreasing the rotation speed as much as 5 knots from the value at which the minimum take-off distance occurred increased the distance only slightly more than 1 percent for the configuration studied. Under-rotation by 1 deg to 1.5 deg increased the take-off distance by 9 to 15 percent. Increasing the time required for rotation from 3 to 5 seconds had a rather small effect on the take-off distance when the values of rotation speed were near the values which result in the shortest take-off distance. When the runway length is based on full engine power rather than on SR-422B, the rotation speed which results in the shortest required runway length is 10 knots lower and the runway length is 4.3 percent less.
A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)
NASA Astrophysics Data System (ADS)
Cantó, J.; Curiel, S.; Martínez-Gómez, E.
2009-07-01
Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm in optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
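The two distinguishing features named above (an unencoded, real-valued population and asexual reproduction) can be sketched as follows. This is a hedged illustration of the idea, not the authors' AGA; the population size, number of parents, and shrink factor are invented:

```python
import random

def aga_maximize(f, bounds, pop_size=20, n_parents=4, generations=60,
                 shrink=0.9, seed=0):
    """Maximize f on [lo, hi] with real-valued individuals (no encoding)
    and asexual reproduction: offspring are mutated copies of the fittest."""
    rng = random.Random(seed)
    lo, hi = bounds
    span = hi - lo
    pop = [lo + rng.random() * span for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        parents = pop[:n_parents]
        pop = parents[:]                   # elitism: the parents survive
        while len(pop) < pop_size:         # asexual offspring: mutated copies
            p = rng.choice(parents)
            child = min(hi, max(lo, p + rng.uniform(-span, span)))
            pop.append(child)
        span *= shrink                     # narrow the search around survivors
    return max(pop, key=f)
```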
Taking Care of Your Diabetes Means Taking Care of Your Heart (Tip Sheet)
... Your Heart Diabetes & Your Heart Infographic (English) Taking Care of Your Diabetes Means Taking Care of Your Heart Diabetes and Heart Disease For ... What you can do now Ask your health care team these questions: What can I do to ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-14
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Taking and Importing Marine Mammals; Taking Marine Mammals Incidental to Operation and Maintenance of the Neptune Liquefied Natural Gas Facility off...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
... Part 217 Taking and Importing Marine Mammals; Taking Marine Mammals Incidental to Columbia River... Incidental to Columbia River Crossing Project, Washington and Oregon AGENCY: National Marine Fisheries... Transit Authority (FTA) and Federal Highway Administration (FHWA), on behalf of the Columbia...
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
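The step-size adaptation that defines EPSAs can be illustrated with the classic expand-on-success, contract-on-failure rule (in the spirit of Rechenberg's 1/5 success rule). The sketch below illustrates that adaptation idea only and is not Hart's algorithm; the factors 1.5 and 0.9 are invented:

```python
import random

def adaptive_search(f, x0, step=1.0, iters=500, seed=1):
    """Minimize f by mutation only, adapting the mutation step size:
    expand it after a success, contract it after a failure."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.5        # success: be bolder
        else:
            step *= 0.9        # failure: shrink toward a stationary point
    return x, step
```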
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our inquiry into algorithms and applications that would benefit by a latency tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.
Algorithmization in Learning and Instruction.
ERIC Educational Resources Information Center
Landa, L. N.
An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
Full-Text Databases in Medicine.
ERIC Educational Resources Information Center
Sievert, MaryEllen C.; And Others
1995-01-01
Describes types of full-text databases in medicine; discusses features for searching full-text journal databases available through online vendors; reviews research on full-text databases in medicine; and describes the MEDLINE/Full-Text Research Project at the University of Missouri (Columbia) which investigated precision, recall, and relevancy.…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-31
... Incidental to Commercial Fishing Operations; Bottlenose Dolphin Take Reduction Plan AGENCY: National Marine... Dolphin Take Reduction Plan (BDTRP) and its implementing regulations by permanently continuing nighttime... November 1 through April 30. Members of the Bottlenose Dolphin Take Reduction Team (Team) recommended...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... Air Warfare Center Weapons Division, U.S. Navy (Navy), to take three species of seals and sea lions... such taking. Regulations governing the taking of northern elephant seals (Mirounga angustirostris), Pacific harbor seals (Phoca vitulina richardsi), and California sea lions (Zalophus californianus),...
LRO's Diviner Takes the Eclipse's Temperature
During the June 15, 2011, total lunar eclipse, LRO's Diviner instrument will take temperature measurements of eclipsed areas of the moon, giving scientists a new look at rock distribution on the su...
5 CFR 1201.75 - Taking depositions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCEDURES Procedures for Appellate Cases Discovery § 1201.75 Taking depositions. Depositions may be taken by any method agreed upon by the parties. The person providing information is subject to penalties...
Taking medicine at home - create a routine
... page: //medlineplus.gov/ency/patientinstructions/000613.htm Taking medicine at home - create a routine To use the ... teeth. Find Ways to Help You Remember Your Medicines You can: Set the alarm on your clock, ...
Gateway to New Atlantis Attraction Takes Shape
The home of space shuttle Atlantis continues taking shape at the Kennedy Space Center Visitor Complex. Crews placed the nose cone atop the second of a replica pair of solid rocket boosters. A life-...
When and How to Take Antibiotics
... complete dose, and they will not work to kill all your disease causing bacteria. Taking partial doses ... dose of the appropriate antibiotic is needed to kill all the harmful bacteria. How safe are antibiotics? ...
The Solar Constant: A Take Home Lab
ERIC Educational Resources Information Center
Eaton, B. G.; And Others
1977-01-01
Describes a method that uses energy from the sun, absorbed by aluminum discs, to melt ice, and allows the determination of the solar constant. The take-home equipment includes Styrofoam cups, a plastic syringe, and aluminum discs. (MLH)
The calculation of take-off run
NASA Technical Reports Server (NTRS)
Diehl, Walter S
1934-01-01
A comparatively simple method of calculating length of take-off run is developed from the assumption of a linear variation in net accelerating force with air speed and it is shown that the error involved is negligible.
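Under the linear-force assumption stated above, the take-off run has a closed form. The following derivation uses my own notation (net force $F_0$ at rest, $F_1$ at take-off speed $V_1$), not the report's:

```latex
% Net accelerating force assumed linear in airspeed: F(V) = F_0 + kV,
% with k = (F_1 - F_0)/V_1. From m\,V\,\mathrm{d}V/\mathrm{d}s = F(V):
s \;=\; m\int_0^{V_1} \frac{V\,\mathrm{d}V}{F_0 + kV}
  \;=\; \frac{m}{k}\left[\, V_1 - \frac{F_0}{k}\,\ln\frac{F_1}{F_0} \right]
```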
Taking Medicines Safely: At Your Doctor's Office
... on. Feature: Taking Medicines Safely At Your Doctor's Office Past Issues / Summer 2013 Table of Contents Download ... Articles Medicines: Use Them Safely / At Your Doctor's Office / Ask Your Pharmacist / Now, It's Your Turn: How ...
Take Steps to Prevent Type 2 Diabetes
... En español Take Steps to Prevent Type 2 Diabetes Browse Sections The Basics Overview Types of Diabetes ... 1 of 9 sections The Basics: Types of Diabetes What is diabetes? Diabetes is a disease. People ...
Take Care of Your Child's Teeth
... This Topic En español Take Care of Your Child’s Teeth Browse Sections The Basics Overview Tooth Decay ... can cause cavities (holes) in teeth. Is my child at risk for tooth decay? Tooth decay is ...
Taking Statins May Boost Heart Surgery Outcomes
... taking your statin for even one day before cardiac surgery may increase your risk of death after surgery," ... cause-and-effect relationship. SOURCE: The Annals of Thoracic Surgery , news release, March 16, 2017 HealthDay Copyright (c) ...
Fever and Taking Your Child's Temperature
... instructions before putting it back in its case. Electronic ear thermometers measure the tympanic temperature (the amount ... a digital thermometer to take a rectal temperature. Electronic ear thermometers aren't recommended for infants younger ...
Algorithms for Labeling Focus Regions.
Fink, M; Haunert, Jan-Henrik; Schulz, A; Spoerhase, J; Wolff, A
2012-12-01
In this paper, we investigate the problem of labeling point sites in focus regions of maps or diagrams. This problem occurs, for example, when the user of a mapping service wants to see the names of restaurants or other POIs in a crowded downtown area but keep an overview of a larger area. Our approach is to place the labels at the boundary of the focus region and connect each site with its label by a linear connection, which is called a leader. In this way, we move labels from the focus region to the less valuable context region surrounding it. In order to make the leader layout well readable, we present algorithms that rule out crossings between leaders and optimize other characteristics such as total leader length and distance between labels. This yields a new variant of the boundary labeling problem, which has been studied in the literature. Unlike in traditional boundary labeling, where leaders are usually schematized polylines, we focus on leaders that are either straight-line segments or Bézier curves. Further, we present algorithms that, given the sites, find a position of the focus region that optimizes the above characteristics. We also consider a variant of the problem where we have more sites than space for labels. In this situation, we assume that the sites are prioritized by the user. Alternatively, we take a new facility-location perspective which yields a clustering of the sites. We label one representative of each cluster. If the user wishes, we apply our approach to the sites within a cluster, giving details on demand.
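A toy version of one subproblem above can make the setup concrete: assign sites to label slots on the boundary so that total straight-line leader length is minimized (for straight leaders, a minimum-total-length assignment is known to be crossing-free). The brute-force search below is an illustration for small inputs, not the authors' algorithms:

```python
from itertools import permutations
from math import hypot

def label_sites(sites, slots):
    """Assign each site a boundary label slot, minimizing total straight-line
    leader length. Brute force over permutations; fine only for small n."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(slots))):
        cost = sum(hypot(sx - slots[perm[i]][0], sy - slots[perm[i]][1])
                   for i, (sx, sy) in enumerate(sites))
        if cost < best_cost:
            best, best_cost = perm, cost
    return {i: best[i] for i in range(len(sites))}  # site index -> slot index
```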
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimate of the spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
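Maximum entropy spectral estimation is equivalent to fitting an autoregressive (AR) model; a common route solves the Yule-Walker equations with the Levinson-Durbin recursion. The pure-Python sketch below illustrates that route; it is not the FORTRAN 77 code described, and the model order is left to the caller:

```python
from cmath import exp as cexp, pi

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for AR coefficients a[1..order]
    from autocorrelations r[0..order]; returns (a, prediction_error)."""
    a = [1.0]
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e                       # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        e *= (1.0 - k * k)                 # update the prediction-error power
    return a, e

def ar_psd(a, e, f):
    """Maximum-entropy PSD at normalized frequency f (cycles/sample)."""
    denom = sum(a[j] * cexp(-2j * pi * f * j) for j in range(len(a)))
    return e / abs(denom) ** 2
```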
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
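The mixing step can be sketched as a one-line inversion: if the observed effective emissivity is modeled as a linear mix of ice and water emissivities, the ice concentration follows directly. This is a schematic of the linear-mixing idea only, not the Bootstrap algorithm itself, and the emissivity values in the test are invented:

```python
def ice_concentration(e_obs, e_ice, e_water):
    """Invert the linear mixing e_obs = C*e_ice + (1 - C)*e_water for the
    ice concentration C, clipped to the physical range [0, 1]."""
    c = (e_obs - e_water) / (e_ice - e_water)
    return min(1.0, max(0.0, c))
```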
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
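The first algorithm's trade-off (variance reduction versus rise time) can be illustrated with a first-order recursive smoother applied to finite differences. The filter below is a generic sketch, not the report's filter; alpha is an invented smoothing coefficient:

```python
def recursive_differentiator(samples, dt, alpha=0.8):
    """First-difference rate estimate smoothed by a one-pole recursive filter.
    Larger alpha lowers rate noise (better VRF) at the cost of a longer rise time."""
    rate, out = 0.0, []
    prev = samples[0]
    for x in samples[1:]:
        raw = (x - prev) / dt                      # noisy instantaneous rate
        rate = alpha * rate + (1.0 - alpha) * raw  # recursive smoothing
        out.append(rate)
        prev = x
    return out
```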
Kernel Affine Projection Algorithms
NASA Astrophysics Data System (ADS)
Liu, Weifeng; Príncipe, José C.
2008-12-01
The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
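The KLMS building block that KAPA extends can be sketched compactly: the estimate is a growing kernel expansion, and each new sample adds one center weighted by the step size times the prediction error. The kernel width and step size below are illustrative choices, not values from the paper:

```python
from math import exp

class KLMS:
    """Minimal kernel least-mean-square sketch with a Gaussian kernel."""
    def __init__(self, step=0.5, width=0.5):
        self.step, self.width = step, width
        self.centers, self.weights = [], []

    def _kernel(self, a, b):
        return exp(-((a - b) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return sum(w * self._kernel(x, c)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        self.centers.append(x)                 # allocate a new kernel unit at x
        self.weights.append(self.step * err)   # LMS-style weight on that unit
        return err
```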
Ego depletion increases risk-taking.
Fischer, Peter; Kastenmüller, Andreas; Asal, Kathrin
2012-01-01
We investigated how the availability of self-control resources affects risk-taking inclinations and behaviors. We proposed that risk-taking often occurs from suboptimal decision processes and heuristic information processing (e.g., when a smoker suppresses or neglects information about the health risks of smoking). Research revealed that depleted self-regulation resources are associated with reduced intellectual performance and reduced abilities to regulate spontaneous and automatic responses (e.g., control aggressive responses in the face of frustration). The present studies transferred these ideas to the area of risk-taking. We propose that risk-taking is increased when individuals find themselves in a state of reduced cognitive self-control resources (ego-depletion). Four studies supported these ideas. In Study 1, ego-depleted participants reported higher levels of sensation seeking than non-depleted participants. In Study 2, ego-depleted participants showed higher levels of risk-tolerance in critical road traffic situations than non-depleted participants. In Study 3, we ruled out two alternative explanations for these results: neither cognitive load nor feelings of anger mediated the effect of ego-depletion on risk-taking. Finally, Study 4 clarified the underlying psychological process: ego-depleted participants feel more cognitively exhausted than non-depleted participants and thus are more willing to take risks. Discussion focuses on the theoretical and practical implications of these findings.
Taking Blame for Other People's Misconduct.
Willard, Jennifer; Madon, Stephanie; Curran, Timothy
2015-01-01
Taking blame for another person's misconduct may occur at relatively high rates for less serious crimes. The authors examined individual differences and situational factors related to this phenomenon by surveying college students (n = 213) and men enrolled in substance abuse treatment programs (n = 42). Among college students, conscientiousness and delinquency predicted their likelihood of being in a situation in which it was possible to take the blame for another person's misconduct. Situational factors, including the relationship with the perpetrator, the seriousness of the offense, feelings of responsibility for the offense, and differential consequences between the offender and the blame taker, were associated with college students' decisions to take the blame. Among substance abuse treatment participants, individuals who took the blame for another person's misconduct were more extraverted, reported feeling more loyalty toward the true perpetrator, and indicated more incentives to take the blame than individuals who did not take the blame. Links between theories of helping behavior and situational factors that predict blame taking are discussed.
Psychopathy and Risk Taking among Jailed Inmates
Swogger, Marc T.; Walsh, Zach; Lejuez, C. W.; Kosson, David S.
2010-01-01
Several clinical descriptions of psychopathy suggest a link to risk taking; however, the empirical basis for this association is not well established. Moreover, it is not clear whether any association between psychopathy and risk taking is specific to psychopathy or reflects shared variance with other externalizing disorders, such as antisocial personality disorder, alcohol use disorders, and drug use disorders. In the present study we aimed to clarify relationships between psychopathy and risky behavior among male county jail inmates using both self-reports of real-world risky behaviors and performance on the Balloon Analogue Risk Task (BART), a behavioral measure of risk taking. Findings suggest that associations between externalizing disorders and self-reported risk taking largely reflect shared mechanisms. However, psychopathy appears to account for unique variance in self-reported irresponsible and criminal risk taking beyond that associated with other externalizing disorders. By contrast, none of the disorders were associated with risk-taking behavior on the BART, potentially indicating limited clinical utility for the BART in differentiating members of adult offender populations. PMID:20419073
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter.
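The sub-pixel idea underlying Steger's detector can be illustrated in its simplest form: fit a parabola through three response samples around a maximum and take the vertex. This is only an illustration of sub-pixel localization, not Steger's full second-derivative method:

```python
def subpixel_peak(y_left, y_center, y_right):
    """Given three equally spaced response samples with the maximum at the
    middle, return the sub-pixel offset of the parabola vertex (in [-0.5, 0.5])."""
    denom = y_left - 2.0 * y_center + y_right
    if denom == 0.0:
        return 0.0          # flat response: no refinement possible
    return 0.5 * (y_left - y_right) / denom
```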
Sequence comparisons via algorithmic mutual information
Milosavijevic, A.
1994-12-31
One of the main problems in DNA and protein sequence comparisons is to decide whether observed similarity of two sequences should be explained by their relatedness or by mere presence of some shared internal structure, e.g., shared internal tandem repeats. The standard methods that are based on statistics or classical information theory can be used to discover either internal structure or mutual sequence similarity, but cannot take into account both. Consequently, currently used methods for sequence comparison employ "masking" techniques that simply eliminate sequences that exhibit internal repetitive structure prior to sequence comparisons. The "masking" approach precludes discovery of homologous sequences of moderate or low complexity, which abound at both DNA and protein levels. As a solution to this problem, we propose a general method that is based on algorithmic information theory and minimal length encoding. We show that algorithmic mutual information factors out the sequence similarity that is due to shared internal structure and thus enables discovery of truly related sequences. We extend the recently developed algorithmic significance method to show that significance depends exponentially on algorithmic mutual information.
Optical flow optimization using parallel genetic algorithm
NASA Astrophysics Data System (ADS)
Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe
2011-06-01
A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and its robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many others. The GA is used to find a set of parameters that improves the accuracy of the optical flow on inputs for which ground-truth data are available. This set of parameters helps in understanding which of them are better suited to each type of input, and can be used to estimate the parameters of the optical flow algorithm for videos that share similar characteristics. The proposed implementation takes advantage of the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the estimation of an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.
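The embarrassingly parallel part is the independent fitness evaluation of each individual. A minimal sketch of that structure, with a thread pool standing in for the paper's OpenMP loop and a synthetic quadratic error standing in for the optical-flow accuracy measure (all names and constants here are illustrative):

```python
import random
from concurrent.futures import ThreadPoolExecutor

TARGET = [4.0, -2.0, 0.5]            # stand-in for the "best" parameter vector

def fitness(params):
    """Synthetic error against ground truth; lower is better."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=40, gens=60, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    with ThreadPoolExecutor() as pool:
        for _ in range(gens):
            scores = list(pool.map(fitness, pop))   # the parallel section
            ranked = [p for _, p in sorted(zip(scores, pop))]
            elite = ranked[: pop_size // 4]
            # offspring: midpoint crossover of two elites plus Gaussian mutation
            pop = elite + [
                [0.5 * (a + b) + rng.gauss(0, sigma)
                 for a, b in zip(rng.choice(elite), rng.choice(elite))]
                for _ in range(pop_size - len(elite))
            ]
    return min(pop, key=fitness)
```

The same shape carries over to the real use case by replacing `fitness` with an evaluation of the optical-flow error against ground-truth flow fields.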
Brain-Machine Interface Control Algorithms.
Shanechi, Maryam M
2016-12-14
Motor brain-machine interfaces (BMI) allow subjects to control external devices by modulating their neural activity. BMIs record the neural activity, use a mathematical algorithm to estimate the subject's intended movement, actuate an external device, and provide visual feedback of the generated movement to the subject. A critical component of a BMI system is the control algorithm, termed decoder. Significant progress has been made in the design of BMI decoders in recent years resulting in proficient control in non-human primates and humans. In this review article, we discuss the decoding algorithms developed in the BMI field, with particular focus on recent designs that are informed by closed-loop control ideas. A motor BMI can be modeled as a closed-loop control system, where the controller is the brain, the plant is the prosthetic, the feedback is the biofeedback, and the control command is the neural activity. Additionally, compared to other closed-loop systems, BMIs have various unique properties. Neural activity is noisy and stochastic, and often consists of a sequence of spike trains. Neural representations of movement could be non-stationary and change over time, for example as a result of learning. We review recent decoder designs that take these unique properties into account. We also discuss the opportunities that exist at the interface of control theory, statistical inference, and neuroscience to devise a control-theoretic framework for BMI design and help develop the next-generation BMI control algorithms.
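A canonical decoder of this kind is a Kalman filter with a random-walk model of intended velocity and a linear neural observation model. A minimal sketch (the matrices and the simulated "neurons" are illustrative, not a specific published decoder):

```python
import numpy as np

def kalman_decoder(Y, A, W, H, Q):
    """Decode a latent kinematic state from firing-rate observations Y (T x n)."""
    d = A.shape[0]
    x, P = np.zeros(d), np.eye(d)
    xs = []
    for y in Y:
        x, P = A @ x, A @ P @ A.T + W              # predict intended movement
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (y - H @ x)                    # update with neural data
        P = (np.eye(d) - K @ H) @ P
        xs.append(x)
    return np.array(xs)

# simulate a 2-D velocity random walk observed through 6 "neurons"
rng = np.random.default_rng(0)
A, W = np.eye(2), 0.01 * np.eye(2)
H = rng.normal(size=(6, 2))                        # tuning of rate to velocity
Q = 0.05 * np.eye(6)
v = np.cumsum(rng.normal(scale=0.1, size=(200, 2)), axis=0)
Y = v @ H.T + rng.normal(scale=0.2, size=(200, 6))
v_hat = kalman_decoder(Y, A, W, H, Q)
```

In closed loop, the decoded `v_hat` would drive the prosthetic and be fed back visually; the review's point is that this loop, not the open-loop fit, is what decoder design should optimize.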
Saleh, Marwan D; Eswaran, C; Mueen, Ahmed
2011-08-01
This paper focuses on the detection of retinal blood vessels, which play a vital role in reducing proliferative diabetic retinopathy and in preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is well suited for fast-processing applications.
Saleh, Marwan D; Eswaran, C
2012-01-01
Retinal blood vessel detection and analysis play vital roles in the early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtration and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
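The enhance-then-threshold pipeline can be caricatured in a few lines: estimate the local background, then keep pixels sufficiently darker than it (vessels appear darker than the retinal background). A toy sketch with an assumed window size and offset, not the authors' exact operators:

```python
import numpy as np

def segment_vessels(img, win=7, offset=5.0):
    """Mark pixels darker than their local background by at least `offset`."""
    h, w = img.shape
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            background = p[i:i + win, j:j + win].mean()  # local background estimate
            mask[i, j] = img[i, j] < background - offset
    return mask
```

Real pipelines vectorize the box filter and add matched filtering or morphology, but the background-relative threshold is the core contrast-enhancement idea.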
Parallel Algorithms and Patterns
Robey, Robert W.
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
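Of the patterns listed, the prefix scan is the least obvious: it looks sequential but has a classic two-phase parallel formulation (Blelloch's up-sweep/down-sweep). A serial sketch of the data movement; on a parallel machine each inner loop runs concurrently:

```python
def exclusive_scan(a, op=lambda x, y: x + y, identity=0):
    """Work-efficient (Blelloch) exclusive prefix scan; power-of-two length assumed."""
    n = len(a)
    t = list(a)
    # up-sweep: build partial sums in a binary-tree pattern
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):   # independent -> parallelizable
            t[i] = op(t[i - step], t[i])
        step *= 2
    t[n - 1] = identity
    # down-sweep: push prefixes back down the tree
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):   # independent -> parallelizable
            left = t[i - step]
            t[i - step] = t[i]
            t[i] = op(left, t[i])
        step //= 2
    return t
```

Both sweeps do O(n) total work across O(log n) levels, which is why the scan counts as a parallel pattern rather than a sequential loop.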
Improved Chaff Solution Algorithm
2009-03-01
As part of the Technology Demonstration Program (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^(-15) for periods of 30-100 days.
Development of sensor-based nitrogen recommendation algorithms for cereal crops
NASA Astrophysics Data System (ADS)
Asebedo, Antonio Ray
through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.
Global Precipitation Measurement (GPM) Microwave Imager Falling Snow Retrieval Algorithm Performance
NASA Astrophysics Data System (ADS)
Skofronick Jackson, Gail; Munchak, Stephen J.; Johnson, Benjamin T.
2015-04-01
Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges and uncertainties remaining. This work reports on the development and post-launch testing of retrieval algorithms for the NASA Global Precipitation Measurement (GPM) mission Core Observatory satellite launched in February 2014. In particular, we will report on GPM Microwave Imager (GMI) radiometer instrument algorithm performance with respect to falling snow detection and estimation. Since GPM's launch, the at-launch GMI precipitation algorithms, based on a Bayesian framework, have been used with the new GPM data. The at-launch database is generated using proxy satellite data merged with surface measurements (instead of models). One year after launch, the Bayesian database will begin to be replaced with the more realistic observational data from the GPM spacecraft radar retrievals and GMI data. It is expected that the observational database will be much more accurate for falling snow retrievals because that database will take full advantage of the 166 and 183 GHz snow-sensitive channels. Furthermore, much retrieval algorithm work has been done to improve GPM retrievals over land. The Bayesian framework for GMI retrievals is dependent on the a priori database used in the algorithm and how profiles are selected from that database. Thus, a land classification sorts land surfaces into ~15 different categories for surface-specific databases (radiometer brightness temperatures are quite dependent on surface characteristics). In addition, our work has shown that knowing if the land surface is snow-covered, or not, can improve the performance of the algorithm. Improvements were made to the algorithm that allow for daily inputs of ancillary snow cover
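The Bayesian step itself is compact: the retrieved snow rate is a database average weighted by the likelihood that each stored profile produced the observed brightness temperatures. A minimal sketch with an assumed Gaussian observation error (the operational algorithm additionally handles surface classes and channel covariances):

```python
import numpy as np

def bayesian_retrieve(tb_obs, db_tb, db_snow, sigma=2.0):
    """Expected snow rate given observed brightness temperatures (a priori database)."""
    # Gaussian likelihood of the observation for each database profile
    w = np.exp(-0.5 * np.sum((db_tb - tb_obs) ** 2, axis=1) / sigma ** 2)
    w /= w.sum()
    return float(w @ db_snow)
```

Replacing the proxy-data database with observed radar/radiometer profiles, as described above, changes only the contents of `db_tb` and `db_snow`, not this retrieval step.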
CAST: Contraction Algorithm for Symmetric Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-09-22
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
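The core bookkeeping, using each stored unique element for both of its symmetric contributions, already appears in a symmetric matrix-vector product over packed storage. A minimal serial sketch (the paper's contribution is the distributed-memory generalization of this idea):

```python
import numpy as np

def sym_matvec_packed(ap, x):
    """y = A @ x using only the packed upper triangle ap of a symmetric matrix A."""
    n = len(x)
    y = np.zeros(n)
    k = 0
    for i in range(n):
        for j in range(i, n):
            y[i] += ap[k] * x[j]
            if i != j:
                y[j] += ap[k] * x[i]   # mirror contribution, never stored
            k += 1
    return y
```

Packed storage halves memory and per-element traffic; the hard part at scale, which the paper addresses, is doing this without redistributing data or unbalancing the load.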
Evolutionary algorithm for metabolic pathways synthesis.
Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H
2016-06-01
Metabolic pathway building is an active field of research, necessary to understand and manipulate the metabolism of organisms. There are different approaches, mainly based on classical search methods, to find linear sequences of reactions linking two compounds. However, an important limitation of these methods is the exponential increase of search trees when a large number of compounds and reactions is considered. Besides, such models do not take into account all substrates for each reaction during the search, leading to solutions that lack biological feasibility in many cases. This work proposes a new evolutionary algorithm that allows searching not only linear, but also branched metabolic pathways, formed by feasible reactions that relate multiple compounds simultaneously. Tests performed using several sets of reactions show that this algorithm is able to find feasible linear and branched metabolic pathways.
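The feasibility constraint highlighted above, that a reaction can fire only when all of its substrates are present, is easy to state in code. A minimal breadth-first sketch with that constraint over a toy reaction set (the paper's evolutionary search additionally produces branched pathways and scales to large networks):

```python
from collections import deque

def find_pathway(reactions, available, target):
    """BFS for a reaction sequence; a reaction fires only if ALL substrates are present."""
    start = frozenset(available)
    seen = {start}
    q = deque([(start, [])])
    while q:
        pool, path = q.popleft()
        if target in pool:
            return path
        for name, (subs, prods) in reactions.items():
            if subs <= pool:                      # full-substrate feasibility check
                nxt = frozenset(pool | prods)
                if nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, path + [name]))
    return None
```

The exponential growth of `seen` with network size is exactly the limitation of exhaustive search that motivates the evolutionary approach.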
Landau-Zener type surface hopping algorithms.
Belyaev, Andrey K; Lasser, Caroline; Trigila, Giulio
2014-06-14
A class of surface hopping algorithms is studied comparing two recent Landau-Zener (LZ) formulas for the probability of nonadiabatic transitions. One of the formulas requires a diabatic representation of the potential matrix, while the other depends only on the adiabatic potential energy surfaces. For each classical trajectory, nonadiabatic transitions take place only when the surface gap attains a local minimum. Numerical experiments are performed with deterministically branching trajectories and with probabilistic surface hopping. Both the deterministic and the probabilistic approach confirm the close agreement of the two LZ probabilities, as well as the good approximation of the reference solution computed by solving the Schrödinger equation via a grid-based pseudo-spectral method. Visualizations of position expectations, and of surface hopping trajectories superimposed on reference position densities, illustrate the effective dynamics of the investigated algorithms.
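The adiabatic-surface variant needs only the gap Z(t) along a trajectory: at a local minimum of the gap one takes P = exp(-(π/2ħ)·sqrt(Z³/Z″)). A sketch with a finite-difference second derivative on a uniform time grid (assumed here to be the adiabatic formula in question; for a linear two-state model it should reproduce the classic diabatic LZ result):

```python
import math

def lz_probability(gap, dt, hbar=1.0):
    """Adiabatic-gap LZ transition probability at the local minimum of the gap."""
    i = min(range(1, len(gap) - 1), key=gap.__getitem__)    # interior gap minimum
    z = gap[i]
    zdd = (gap[i - 1] - 2 * gap[i] + gap[i + 1]) / dt**2    # finite-difference Z''
    return math.exp(-math.pi / (2 * hbar) * math.sqrt(z**3 / zdd))

# linear two-state model: diabatic slopes +/-a, coupling b -> gap 2*sqrt((a t)^2 + b^2)
a, b, dt = 1.0, 0.5, 1e-3
gap = [2.0 * math.sqrt((a * (-5.0 + k * dt)) ** 2 + b ** 2) for k in range(10001)]
p = lz_probability(gap, dt)
```

For this model the diabatic LZ result is exp(-πb²/a), so the two routes to the probability can be checked against each other numerically.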
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
A bioinspired collision detection algorithm for VLSI implementation
NASA Astrophysics Data System (ADS)
Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.
2005-06-01
In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle upon Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before collision, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case this algorithm would be very helpful in arming the airbag system more efficiently, or even in taking some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car, which helps in taking more adequate countermeasures and in filtering false alarms. The latter concentrates the processing power on the most active zones of the input frame, thus saving memory and processing time.
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image and its computational cost is low, image correlation matching is a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance; based on this consideration, we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation. By comparison, the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
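In one dimension the paraboloidal fit reduces to fitting a parabola through the SAD scores at the best integer offset and its two neighbours; the vertex gives the sub-pixel correction. A sketch on a synthetic Gaussian profile (the 2-D version applies the same formula along each axis):

```python
import numpy as np

def parabola_offset(s_m, s_0, s_p):
    """Vertex of the parabola through SAD scores at offsets -1, 0, +1."""
    denom = s_m - 2.0 * s_0 + s_p
    return 0.0 if denom == 0 else 0.5 * (s_m - s_p) / denom

def estimate_shift(f, true_shift, shifts=range(-3, 4)):
    """Integer SAD search followed by parabolic sub-pixel refinement."""
    x = np.arange(31.0)
    target = f(x - true_shift)
    sad = {s: float(np.abs(f(x - s) - target).sum()) for s in shifts}
    best = min(sad, key=sad.get)
    return best + parabola_offset(sad[best - 1], sad[best], sad[best + 1])
```

Because SAD costs are closer to V-shaped than parabolic near the minimum, the refinement carries a small systematic bias, which is consistent with the sub-0.01-pixel differences between fitting variants reported above being the interesting quantity.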
A New Approximate Chimera Donor Cell Search Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Nixon, David (Technical Monitor)
1998-01-01
The objectives of this study were to develop a chimera-based full potential methodology which is compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver, and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work include a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple, producing donor cells at rates as high as 60,000 per second.
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-01
One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications: given a complex network and a positive integer k, select the k nodes that trigger the largest expected number of the remaining nodes. Mature algorithms fall mainly into propagation-based and topology-based algorithms. Propagation-based algorithms optimize the influence spread process, so their influence spread significantly outperforms that of topology-based algorithms, but they can still take days to complete on large networks. Topology-based algorithms, by contrast, rely on intuitive parameter statistics and static topological properties; their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). Its influence spread is close to that of propagation-based algorithms and sometimes exceeds them, while its running time is millions of times shorter. Our experimental results show that our algorithm gives good, stable performance under the IC and LT models.
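One plausible reading of the local-index idea: a node is a promising seed when its degree is a local maximum among its neighbours, so the selected seeds spread out across the network rather than crowding around one hub. A toy sketch in that spirit; this is an assumption-laden paraphrase, not the paper's exact LIR index:

```python
def lir_seeds(adj, k):
    """Pick k seeds among nodes whose degree is >= that of all their neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    local_max = [v for v, ns in adj.items()
                 if all(deg[v] >= deg[u] for u in ns)]   # local degree maxima
    local_max.sort(key=lambda v: -deg[v])                # rank the survivors by degree
    return local_max[:k]
```

The appeal of any such purely topological rule is the one stressed in the abstract: it needs only one pass over the adjacency structure, with no simulation of the spread process.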
Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers
NASA Technical Reports Server (NTRS)
Lind, Rick; Balas, Gary J.
1995-01-01
This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the algorithm impractical on standard workstations for large order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
Two-wavelength full-field heterodyne interferometric profilometry
NASA Astrophysics Data System (ADS)
Hsieh, Hung-Chih; Chen, Yen-Liang; Jian, Zhi-Chen; Wu, Wang-Tsung; Su, Der-Chin
2009-02-01
An alternative full-field interferometric profilometry is proposed by combining two-wavelength interferometry and heterodyne interferometry. A collimated heterodyne light beam is introduced into a modified Twyman-Green interferometer, and the full-field interference signals are recorded by a fast CMOS camera. The intensities sampled at each pixel are fitted with a least-squares sine-wave fitting algorithm to derive a sinusoidal signal, from which the phase can be obtained. By comparison with the phase at a reference point, the relative phase of each pixel can be calculated. Next, the same measurement is repeated at a different wavelength. The relative phase with respect to the effective wavelength can then be calculated, and the profile of the tested sample derived with the two-wavelength interferometric technique. The validity of the method is demonstrated; it combines the merits of two-wavelength interferometry and heterodyne interferometry.
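The per-pixel phase extraction is a linear least-squares problem: fit I(t) = C + A cos(2πft) + B sin(2πft) and take φ = atan2(-B, A); the two per-wavelength phases are then combined through the effective wavelength Λ = λ₁λ₂/|λ₁-λ₂|. A sketch of the fitting step on a noise-free synthetic signal (the beat frequency, sampling rate, and wavelengths below are illustrative):

```python
import numpy as np

def fit_phase(t, intensity, f):
    """Least-squares sine fit; returns phi for I = C + V*cos(2*pi*f*t + phi)."""
    M = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    A, B, _ = np.linalg.lstsq(M, intensity, rcond=None)[0]
    return float(np.arctan2(-B, A))

t = np.arange(200) / 2000.0                      # 0.1 s of samples at 2 kHz
f = 50.0                                         # heterodyne beat frequency, Hz
signal = 3.0 + 1.5 * np.cos(2 * np.pi * f * t + 1.0)
phi = fit_phase(t, signal, f)

lam1, lam2 = 632.8e-9, 635.0e-9                  # illustrative wavelengths
lam_eff = lam1 * lam2 / abs(lam1 - lam2)         # effective wavelength for height
```

The enlarged `lam_eff` is what extends the unambiguous height range beyond a single optical wavelength.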
Assessing allowable take of migratory birds
Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.
2009-01-01
Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520-212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7-14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 x rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was < 3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be < rmax. Further, the ratio of desired harvest rate to 0.5 x rmax may be a useful metric for ascertaining the applicability of specific requirements of the National Environmental Protection Act.
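The potential-biological-removal arithmetic is a one-liner: allowable take = F_r x (rmax/2) x N_min, with N_min a conservative population estimate (here, the lower 60% credible bound) and F_r a recovery factor. A sketch (parameter names follow the standard PBR formulation, not necessarily this study's notation):

```python
def allowable_take(n_min, r_max, recovery_factor=1.0):
    """Potential biological removal: F_r * (r_max / 2) * conservative population size."""
    return recovery_factor * 0.5 * r_max * n_min
```

Back-solving the abstract's figures, 3,533 is approximately 0.5 x 0.106 x 66,700, so the lower 60% credible bound used was evidently on the order of 66,700 birds (inferred from the arithmetic, not stated in the abstract).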
Patch Based Multiple Instance Learning Algorithm for Object Tracking.
Wang, Zhenjie; Wang, Lijia; Zhang, Hua
2017-01-01
To deal with illumination changes, pose variations, and serious partial occlusion, a patch-based multiple instance learning (P-MIL) algorithm is proposed. The algorithm divides an object into many blocks, and the online MIL algorithm is then applied to each block to obtain a strong classifier. The algorithm takes into account both the average classification score and the classification scores of all the blocks when detecting the object. In particular, compared with the whole-object-based MIL algorithm, the P-MIL algorithm detects the object from the unoccluded patches when partial occlusion occurs. After detecting the object, the learning rates for updating the weak classifiers' parameters are adaptively tuned; this updating strategy avoids over- and under-updating the parameters. Finally, the proposed method is compared with other state-of-the-art algorithms on several classical videos. The experimental results illustrate that the proposed method performs well, especially in cases of illumination change, pose variation, and partial occlusion, and that it achieves real-time object tracking. PMID:28321248
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye disease. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Quantum gate decomposition algorithms.
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequences of coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Algorithm for reaction classification.
Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz
2013-11-25
Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Academic Journal Embargoes and Full Text Databases.
ERIC Educational Resources Information Center
Brooks, Sam
2003-01-01
Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…
Equilibrium stellar systems with genetic algorithms
NASA Astrophysics Data System (ADS)
Gularte, E.; Carpintero, D. D.
In 1979, M. Schwarzschild showed that it is possible to build an equilibrium triaxial stellar system. However, the linear programming used to that end could not determine whether the solution was unique, or even whether it was the optimal one. Genetic algorithms are ideal tools for this kind of problem. In this work, we use a genetic algorithm to reproduce an equilibrium spherical stellar system from a suitable set of predefined orbits, obtaining the best solution attainable with the provided set. (Full text in Spanish.)
A low-power VLSI implementation for fast full-search variable block size motion estimation
NASA Astrophysics Data System (ADS)
Li, Peng; Tang, Hua
2013-09-01
Variable block size motion estimation (VBSME) is becoming the new coding technique in H.264/AVC. This article presents a low-power VLSI implementation for VBSME, which employs a fast full-search block-matching algorithm to reduce power consumption, while preserving the optimal motion vectors (MVs). The fast full-search algorithm is based on the comparison of the current minimum sum of absolute difference (SAD) to a conservative lower bound so that unnecessary SAD calculations can be eliminated. We first experimentally determine the specific conservative lower bound of SAD and then implement the fast full-search algorithm in FPGA and 0.18 µm CMOS technology. To the best of our knowledge, this is the first time that a fast full-search block-matching algorithm is explored to reduce power consumption in the context of VBSME and implemented in hardware. Experiment results show that the proposed design can save power consumption by 45% compared to conventional VBSME designs that give optimal MV based on the full-search algorithms.
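The skip test can be sketched with a classical sum-based lower bound (an assumption here; the paper does not specify this particular bound): since |sum(A) − sum(B)| ≤ SAD(A, B) by the triangle inequality, a candidate whose sum-difference already exceeds the best SAD so far cannot win and its full SAD need not be computed.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def full_search(cur, ref):
    """Exhaustive block matching with a conservative lower-bound skip.

    Sketch of the fast full-search idea: before computing the full SAD
    of a candidate, compare a cheap lower bound (|sum difference|, which
    never exceeds the true SAD) against the current minimum; candidates
    that cannot beat it are skipped, yet the optimal MV is preserved.
    """
    h, w = cur.shape
    H, W = ref.shape
    cur_sum = int(cur.astype(int).sum())
    best_sad, best_pos, skipped = None, None, 0
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            cand = ref[y:y + h, x:x + w]
            bound = abs(cur_sum - int(cand.astype(int).sum()))
            if best_sad is not None and bound >= best_sad:
                skipped += 1          # cannot strictly improve: skip SAD
                continue
            s = sad(cur, cand)
            if best_sad is None or s < best_sad:
                best_sad, best_pos = s, (y, x)
    return best_sad, best_pos, skipped
```

In hardware, skipping the SAD datapath for pruned candidates is what yields the power saving; the search result is identical to plain full search.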
Fast autodidactic adaptive equalization algorithms
NASA Astrophysics Data System (ADS)
Hilal, Katia
Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is used to derive two low-computational-cost algorithms: one equivalent to the initial algorithm, and one with improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using normalization and block-normalization procedures, their performance is improved and their common features are identified. These common features are used to propose an algorithm retaining the advantages of both initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio context under severe propagation-channel conditions, showed a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. These performances bring autodidactic equalization close to practical use in mobile radio systems.
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
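The first metric listed, the centered root mean square error, can be written out directly. This is a sketch of the standard definition (means removed before differencing, so constant offsets between a homogenized series and the truth are not penalized); the exact averaging scales used in HOME are not reproduced here.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMSE between two series after subtracting each
    series' own mean, so a constant bias contributes nothing."""
    a = homogenized - np.mean(homogenized)
    b = truth - np.mean(truth)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

A homogenized series that differs from the truth only by a constant shift scores a centered RMSE of zero.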
An Elegant Algorithm for the Construction of Suffix Arrays.
Rajasekaran, Sanguthevar; Nicolae, Marius
2014-07-01
The suffix array is a data structure that finds numerous applications in string processing problems for both linguistic texts and biological data. It has been introduced as a memory efficient alternative for suffix trees. The suffix array consists of the sorted suffixes of a string. There are several linear time suffix array construction algorithms (SACAs) known in the literature. However, one of the fastest algorithms in practice has a worst case run time of O(n²). The problem of designing practically and theoretically efficient techniques remains open. In this paper we present an elegant algorithm for suffix array construction which takes linear time with high probability; the probability is on the space of all possible inputs. Our algorithm is one of the simplest of the known SACAs and it opens up a new dimension of suffix array construction that has not been explored until now. Our algorithm is easily parallelizable. We offer parallel implementations on various parallel models of computing. We prove a lemma on the ℓ-mers of a random string which might find independent applications. We also present another algorithm that utilizes the above algorithm. This algorithm is called RadixSA and has a worst case run time of O(n log n). RadixSA introduces an idea that may find independent applications as a speedup technique for other SACAs. An empirical comparison of RadixSA with other algorithms on various datasets reveals that our algorithm is one of the fastest algorithms to date. The C++ source code is freely available at http://www.engr.uconn.edu/~man09004/radixSA.zip.
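For readers unfamiliar with the data structure, the definition itself fits in one line. This baseline builds the suffix array by directly sorting suffix start positions; it is only the naive construction (worst case around O(n² log n) with slice comparisons), whereas the paper's algorithm runs in linear time with high probability and RadixSA in O(n log n).

```python
def suffix_array(s):
    """Suffix array of s: the start indices of all suffixes of s,
    ordered so the suffixes appear in lexicographic order. Naive
    construction for illustration only."""
    return sorted(range(len(s)), key=lambda i: s[i:])

# For "banana" the sorted suffixes are:
#   "a" (5), "ana" (3), "anana" (1), "banana" (0), "na" (4), "nana" (2)
sa = suffix_array("banana")  # -> [5, 3, 1, 0, 4, 2]
```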
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
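The reorganization at the heart of the method can be illustrated on its inner solve step. In active-set NNLS with many right-hand sides, columns that share the same passive (unconstrained) variable set can be solved with a single factorization instead of one per column. The sketch below shows only that grouping; the surrounding active-set iteration of the full algorithm is omitted, and the function name is hypothetical.

```python
import numpy as np

def solve_groups(A, B, passive_sets):
    """Solve the unconstrained subproblems of a multi-RHS constrained
    least-squares iteration, grouping columns of B that share the same
    passive set so each distinct set costs one least-squares solve.
    passive_sets[j] lists the unconstrained variable indices of column j.
    """
    X = np.zeros((A.shape[1], B.shape[1]))
    groups = {}
    for j, p in enumerate(passive_sets):
        groups.setdefault(tuple(sorted(p)), []).append(j)
    for p, cols in groups.items():
        idx = list(p)
        if not idx:
            continue  # every variable constrained: column stays zero
        sol, *_ = np.linalg.lstsq(A[:, idx], B[:, cols], rcond=None)
        X[np.ix_(idx, cols)] = sol
    return X
```

With thousands of observation vectors but only a handful of distinct passive sets, this grouping is where the large speedup comes from.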
A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles
2009-01-01
In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions of a candidate algorithm that satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process occurring in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernels and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. To speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
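The finite-volume stochastic picture the master equation describes can also be sampled directly. The sketch below is the textbook Gillespie-style Monte Carlo simulation of coalescence for a constant kernel (a standard cross-check for master-equation results, not the authors' solver): with n particles the total event rate is K·n(n−1)/2, and each event merges a uniformly chosen pair.

```python
import random

def gillespie_coalescence(n0, kernel_const, t_end, seed=0):
    """Monte Carlo coalescence with a constant kernel K, starting from
    n0 monodisperse particles of unit mass. Returns the particle masses
    at time t_end. Total mass is conserved by every merge event."""
    rng = random.Random(seed)
    masses = [1.0] * n0
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        rate = kernel_const * n * (n - 1) / 2.0   # total coalescence rate
        t += rng.expovariate(rate)                # time to next event
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)            # constant kernel: uniform pair
        masses[i] += masses[j]
        masses.pop(j)
    return masses
```

Averaging many such runs gives the particle mass spectrum that the master-equation solution (and the analytical constant-kernel result) can be compared against.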
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
NASA Astrophysics Data System (ADS)
Allner, S.; Koehler, T.; Fehringer, A.; Birnbacher, L.; Willner, M.; Pfeiffer, F.; Noël, P. B.
2016-05-01
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
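The classical scalar bilateral filter that this work generalizes can be sketched in one dimension. This is only the baseline: the paper's generalization replaces the scalar range weight with one built from the full inter- and intra-image noise covariance of two or more aligned images, which is not reproduced here.

```python
import numpy as np

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Edge-preserving bilateral filter on a 1-D signal: each output
    sample is a weighted average of its neighbors, with weights that
    fall off both with spatial distance (sigma_s) and with intensity
    difference (sigma_r), so sharp edges are not averaged across."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2.0 * sigma_s ** 2))
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2.0 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

With a small sigma_r, samples on opposite sides of a step receive near-zero weight for each other, which is why noise is smoothed while the edge survives.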
Equilibrium points in the full three-body problem
NASA Astrophysics Data System (ADS)
Woo, Pamela; Misra, Arun K.
2014-06-01
The orbital motion of a spacecraft in the vicinity of a binary asteroid system can be modelled as the full three-body problem. The circular restricted case is considered here. Taking into account the shape, size, and mass distribution of arbitrarily shaped primary bodies, the locations of the equilibrium points are computed and are found to be offset from those of the classical CR3BP with point-masses. Through numerical computations, it was found that in cases with highly aspherical primaries, additional collinear and noncollinear equilibrium points exist. Examples include systems with pear-shaped and peanut-shaped bodies.
NASA Astrophysics Data System (ADS)
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is
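The equilibrium-finding procedure described above (iteratively re-solving each participant's welfare maximization until no one changes its bid) has the same shape as best-response dynamics on a game. A toy sketch on a bimatrix game, with the caveat the dissertation itself raises: the iteration may cycle, since pure-strategy equilibria need not exist.

```python
def best_response_dynamics(payoffs_a, payoffs_b, start=(0, 0), max_rounds=50):
    """Iterated best response: each player in turn switches to its best
    pure strategy against the other's current choice. A fixed point is
    a pure Nash equilibrium; None is returned if the play cycles.
    payoffs_a[i][j] / payoffs_b[i][j] are the players' payoffs when
    player A plays row i and player B plays column j."""
    a, b = start
    for _ in range(max_rounds):
        new_a = max(range(len(payoffs_a)), key=lambda i: payoffs_a[i][b])
        new_b = max(range(len(payoffs_b[0])), key=lambda j: payoffs_b[new_a][j])
        if (new_a, new_b) == (a, b):
            return (a, b)           # nobody wants to change: equilibrium
        a, b = new_a, new_b
    return None                      # no convergence within max_rounds
```

In the dissertation the "best response" step is itself an OPF-based welfare maximization rather than a table lookup, but the fixed-point logic is the same.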
Caregiver Leave-Taking in Spain: Rate, Motivations, and Barriers.
Rogero-García, Jesús; García-Sainz, Cristina
2016-01-01
This paper aims to (1) determine the rate of (full- and part-time) caregiver leave-taking in Spain, (2) identify the reasons conducive to a more intense use of this resource, and (3) ascertain the main obstacles to its use, as perceived by caregivers. All 896 people covered by the sample were engaging in paid work and had cared for dependent adults in the last 12 years. This resource, in particular the full-time alternative, was found to be a minority option. The data showed that legal, work-related, and family and gender norm issues are the four types of factors that determine the decision to take such leaves. The most significant obstacles to their use are the forfeiture of income and the risk of losing one's job. Our results suggest that income replacement during a leave would increase the take-up of these resources. Moreover, enlargement of public care services would promote the use of leave as a free choice of caregivers.
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
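The inference step can be sketched as a lookup-and-rank over the semantic links. The link table below is purely illustrative (the real links are manually curated by medical librarians, and real MeSH indexing also involves subheadings, which are omitted here).

```python
from collections import Counter

# Hypothetical MeSH term -> metaterm links, for illustration only.
TERM_TO_METATERMS = {
    "Neoplasms": ["oncology"],
    "Radiography": ["radiology"],
    "Medical Informatics": ["medical informatics", "information science"],
}

def categorize(articles):
    """Rank metaterms (medical specialties) for a set of MeSH-indexed
    articles: infer the metaterms of each article from its MeSH terms,
    then order metaterms by how many articles they cover."""
    counts = Counter()
    for mesh_terms in articles:
        inferred = set()
        for term in mesh_terms:
            inferred.update(TERM_TO_METATERMS.get(term, []))
        counts.update(inferred)
    return [metaterm for metaterm, _ in counts.most_common()]
```

Run over a whole MEDLINE file, this yields the ranked list of relevant specialties the abstract describes.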
The Time Domain Spectroscopic Survey: Taking Spectra of 250,000 Optical Variables
NASA Astrophysics Data System (ADS)
Morganson, Eric; Green, Paul J.; Anderson, Scott F.; Ruan, John J.; TDSS Team, SDSS Collaboration, PS1 Consortium
2015-01-01
The Time Domain Spectroscopic Survey (TDSS) is an SDSS-IV subproject that will take spectra of 250,000 optical variables including 185,000 quasars and 65,000 variable stars. TDSS began taking data in August, 2014 and will continue for 4-6 years. TDSS uses a unique, variability-only selection algorithm that does not focus on targeting any specific type of variable. TDSS will find unusual quasars that could not be found by conventional color selection and will allow us to see how quasar variability is related to other properties of the AGN. TDSS will also produce the largest sample of spectroscopic stellar variable classifications and will show how the concentrations of different types of stellar variables vary across the sky. Most excitingly, TDSS's unprecedented scale and broad selection algorithm promise to identify new classes of astrophysical variables.
Quantum search algorithms on a regular lattice
Hein, Birgit; Tanner, Gregor
2010-07-15
Quantum algorithms for searching for one or more marked items on a d-dimensional lattice provide an extension of Grover's search algorithm including a spatial component. We demonstrate that these lattice search algorithms can be viewed in terms of the level dynamics near an avoided crossing of a one-parameter family of quantum random walks. We give approximations for both the level splitting at the avoided crossing and the effectively two-dimensional subspace of the full Hilbert space spanning the level crossing. This makes it possible to give the leading order behavior for the search time and the localization probability in the limit of large lattice size including the leading order coefficients. For d=2 and d=3, these coefficients are calculated explicitly. Closed form expressions are given for higher dimensions.
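The effectively two-dimensional picture can be illustrated numerically. The sketch below uses an arbitrary coupling g, not the lattice-derived coefficient: for a generic two-level Hamiltonian the level splitting is minimized at the crossing, where it equals twice the coupling, and the search time scales inversely with that minimal splitting.

```python
import numpy as np

def splitting(delta, g):
    """Gap between the two eigenvalues of the 2x2 Hamiltonian [[0, g], [g, delta]]."""
    evals = np.linalg.eigvalsh(np.array([[0.0, g], [g, delta]]))
    return evals[1] - evals[0]

g = 0.01                                    # illustrative coupling strength
deltas = np.linspace(-0.1, 0.1, 201)        # sweep across the crossing
gaps = [splitting(d, g) for d in deltas]

# Minimal gap occurs at delta = 0 and equals 2g (the avoided-crossing splitting).
print(round(min(gaps), 6), 2 * g)           # both ≈ 0.02
```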
Scatter correction for full-fan volumetric CT using a stationary beam blocker in a single full scan
Niu, Tianye; Zhu, Lei
2011-01-01
Purpose: Applications of volumetric CT (VCT) are hampered by shading and streaking artifacts in the reconstructed images. These artifacts are mainly due to strong x-ray scatter signals accompanying the large illumination area within one projection, which lead to CT number inaccuracy, image contrast loss, and spatial nonuniformity. Although different scatter correction algorithms have been proposed in the literature, a standard solution remains unclear. Measurement-based methods use a beam blocker to acquire scatter samples. These techniques have unrivaled advantages over other existing algorithms in that they are simple and efficient, and achieve high scatter estimation accuracy without prior knowledge of the imaged object. Nevertheless, primary signal loss is inevitable in the scatter measurement, and multiple scans or moving the beam blocker during data acquisition are typically employed to compensate for the missing primary data. In this paper, we propose a new measurement-based scatter correction algorithm without primary compensation for full-fan VCT. An accurate reconstruction is obtained with a single scan and a stationary x-ray beam blocker, two seemingly incompatible features that enable simple and efficient scatter correction without an increase in scan time or patient dose. Methods: Based on CT reconstruction theory, we distribute the blocked data over the projection area where primary signals are considered approximately redundant in a full scan, such that the CT image quality is not degraded even with primary loss. Scatter is then accurately estimated by interpolation, and scatter-corrected CT images are obtained using an FDK-based reconstruction algorithm. Results: The proposed method is evaluated using two phantom studies on a tabletop CBCT system. On the Catphan©600 phantom, our approach reduces the reconstruction error from 207 Hounsfield units (HU) to 9 HU in the selected region of interest, and improves the image contrast by a factor of 2
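The interpolation step of a measurement-based correction can be sketched in one dimension. Everything below (the smooth scatter profile, blocker-strip positions, and phantom-like primary signal) is a synthetic stand-in, and the paper's actual contribution, redistributing blocked data over redundant primary rays in a full scan, is not reproduced here.

```python
import numpy as np

u = np.arange(256.0)                                    # detector coordinate
primary = np.where((u > 60) & (u < 200), 1000.0, 50.0)  # idealized primary signal
scatter = 200.0 + 100.0 * np.sin(u / 256.0 * np.pi)     # slowly varying scatter
total = primary + scatter                               # what the detector sees

# Behind a blocker strip, only scatter reaches the detector; sample it there
# and interpolate across the open region (scatter is low-frequency).
blocked = np.append(np.arange(0, 256, 32), 255)         # blocker-strip pixels
samples = scatter[blocked]
scatter_est = np.interp(u, blocked, samples)
corrected = total - scatter_est

err = np.max(np.abs(corrected - primary))
print(err)   # residual error is small versus the ~300-unit scatter amplitude
```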
Measurement of vehicles speed with full waveform lidar
NASA Astrophysics Data System (ADS)
Muzal, Michał; Mierczyk, Zygmunt; Zygmunt, Marek; Wojtanowski, Jacek; Piotrowski, Wiesław
2016-12-01
Measurement of vehicle speed by means of displacement measurement with a "time of flight" lidar requires gathering accurate information about the distance to the vehicle over a set time interval. As with any pulsed laser lidar, its maximum range is limited by the available signal-to-noise ratio of the incoming signal. That ratio determines not only the maximum range but also the accuracy of measurement. For fast and precise measurement of vehicle speed, displacement should be measured with centimeter accuracy. However, that demand is hard to meet at long distances, where the quality of the echo signal is poor. Improving accuracy beyond that given by a single probing pulse requires the emission of several probing pulses; the total displacement error then falls with the square root of the number of executed measurements. Yet this method will not extend the available distance beyond the limit set by threshold detection systems. Acquisition of the full waveform of received signals is a method that allows extension of the maximum range through synchronous addition of subsequent waveforms, which improves the SNR by the well-known factor of the square root of the number of additions. The disadvantage of this method is that it requires fast analog-to-digital converters for data acquisition, and simple distance-calculation algorithms may not give adequate accuracy due to the relatively long sampling period of reasonably priced ADCs. In this article, more advanced distance-calculation algorithms based on raw ADC data are presented and analyzed. A practical implementation of the algorithm in a prototype laser speed gun is shown, along with real-life test results.
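The square-root gain from synchronous waveform addition can be checked numerically. The pulse shape, noise level, and number of records below are illustrative values, not the prototype's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
echo = np.exp(-((t - 250) / 10.0) ** 2)   # clean return pulse (arbitrary shape)
sigma = 1.0                               # per-record noise standard deviation
N = 100                                   # number of synchronously added records

single = echo + rng.normal(0, sigma, t.size)
averaged = np.mean(echo + rng.normal(0, sigma, (N, t.size)), axis=0)

noise_single = np.std(single - echo)
noise_avg = np.std(averaged - echo)
print(noise_single / noise_avg)           # ≈ sqrt(N) = 10
```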
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
... Sanctuary (MBNMS) to incidentally take, by Level B harassment only, California sea lions (Zalophus californianus) and Pacific harbor seals (Phoca vitulina) incidental to professional fireworks displays...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... Sanctuary (MBNMS) to incidentally take, by Level B harassment only, California sea lions (Zalophus californianus) and harbor seals (Phoca vitulina) incidental to professional fireworks displays within the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-21
... Mammals; Taking Marine Mammals Incidental to Operation and Maintenance of the Neptune Liquefied Natural Gas Facility of Massachusetts; Correction AGENCY: National Marine Fisheries Service (NMFS),...
New packet scheduling algorithm in wireless CDMA data networks
NASA Astrophysics Data System (ADS)
Wang, Yu; Gao, Zhuo; Li, Shaoqian; Li, Lemin
2002-08-01
The future 3G/4G wireless communication systems will provide Internet access for mobile users. Packet scheduling algorithms are essential for the QoS of diversified data traffic and efficient utilization of the radio spectrum. This paper first presents a new packet scheduling algorithm, DSTTF, for CDMA data networks under the assumption of continuous transmission rates and scheduling intervals. Then, considering the constraints of discrete transmission rates and fixed scheduling intervals imposed by practical systems, P-DSTTF, a modified version of DSTTF, is put forward. Both scheduling algorithms take into consideration channel condition, packet size, and traffic delay bounds. Extensive simulation results demonstrate that the proposed scheduling algorithms are superior to some typical ones in current research. In addition, both static and dynamic wireless channel models of multi-level link capacity are established. These channel models better characterize the wireless channel than the two-state Markov model widely adopted in the current literature.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung . E-mail: tyha@math.snu.ac.kr; Shin, Changsoo . E-mail: css@model.snu.ac.kr
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
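The steepest-descent iteration the abstract builds on can be sketched generically. The linear forward map below is a toy stand-in for the MT finite-element modelling operator, and the gradient is formed directly rather than via the backpropagation/reverse-time-migration trick that makes the real algorithm efficient.

```python
import numpy as np

# Minimize the misfit phi(m) = 0.5 * ||A m - d||^2 by stepping along
# -grad phi = -A^T (A m - d).  A and d are arbitrary illustrative values.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
d = np.array([9.0, 8.0])              # data generated by the model m* = [2, 3]

m = np.zeros(2)                       # starting model
for _ in range(300):
    m -= 0.1 * A.T @ (A @ m - d)      # fixed step length for simplicity

print(np.round(m, 6))                 # → [2. 3.]
```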
Optimization of circuits using a constructive learning algorithm
Beiu, V.
1997-05-01
The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F{sub n,m} class of functions (n inputs and m groups of ones in their truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and the delay of VLSI implementations). Conclusions and further directions of research end the paper.
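A small helper makes the F{sub n,m} classification concrete: m is the number of groups (maximal runs) of ones in the function's truth table, listed in the usual input ordering. The function name and example table are ours, not the paper's.

```python
def groups_of_ones(truth_table):
    """Count maximal runs of consecutive ones in a 0/1 truth-table listing."""
    return sum(
        1 for i, b in enumerate(truth_table)
        if b == 1 and (i == 0 or truth_table[i - 1] == 0)
    )

# Example 3-input table 0,1,1,0,1,0,0,1: the ones form m = 3 groups,
# so this function belongs to F{sub 3,3}.
print(groups_of_ones([0, 1, 1, 0, 1, 0, 0, 1]))   # → 3
```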
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
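A minimal multiplicative weights update sketch of the kind the paper identifies with selection under weak selection: each "expert" (allele) has its probability multiplied by (1 + eps * payoff) and the distribution is renormalized. The payoffs here are an arbitrary example, not population-genetics data.

```python
import numpy as np

def mwua_step(p, payoffs, eps=0.1):
    """One multiplicative weights update over a probability vector p."""
    p = p * (1.0 + eps * payoffs)
    return p / p.sum()

p = np.full(3, 1.0 / 3.0)                 # uniform start over 3 alleles
payoffs = np.array([1.0, 0.5, -1.0])      # per-round fitness differentials
for _ in range(200):
    p = mwua_step(p, payoffs)

print(np.round(p, 3))                     # → [1. 0. 0.]
```

Mass concentrates on the highest-payoff allele, while the entropy term implicit in the update keeps low-payoff alleles alive far longer than naive truncation selection would.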
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
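The choice between I&Q and Amplitude & Phase regulation rests on the conversion between the two coordinate systems for the same cavity field sample, which can be sketched as below; the sample values are illustrative.

```python
import math

def iq_to_amp_phase(i, q):
    """Cartesian (I, Q) field sample -> polar (amplitude, phase in radians)."""
    return math.hypot(i, q), math.atan2(q, i)

def amp_phase_to_iq(a, phi):
    """Polar (amplitude, phase) -> Cartesian (I, Q)."""
    return a * math.cos(phi), a * math.sin(phi)

a, phi = iq_to_amp_phase(3.0, 4.0)
print(a, math.degrees(phi))   # ≈ 5.0 and ≈ 53.13 degrees
```

An I&Q loop regulates the two Cartesian components directly, while an Amplitude & Phase loop regulates the polar pair; the conversion above is why the two schemes act on the same measurement with different loop dynamics.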
NASA Technical Reports Server (NTRS)
Schrenk, Martin
1933-01-01
As a result of previous reports, it was endeavored to obtain, along with the truest possible comprehension of the course of thrust, a complete, simple and clear formula for the whole take-off distance up to a certain altitude, which shall give the correct relative weight to all the factors.
What Predicts Skill in Lecture Note Taking?
ERIC Educational Resources Information Center
Peverly, Stephen T.; Ramaswamy, Vivek; Brown, Cindy; Sumowski, James; Alidoost, Moona; Garner, Joanna
2007-01-01
Despite the importance of good lecture notes to test performance, very little is known about the cognitive processes that underlie effective lecture note taking. The primary purpose of the 2 studies reported (a pilot study and Study 1) was to investigate 3 processes hypothesized to be significantly related to quality of notes: transcription…
Taking Religion Seriously across the Curriculum.
ERIC Educational Resources Information Center
Nord, Warren A.; Haynes, Charles C.
This book presents an overview of the interplay of religion and public education. The book states that schools must take religion seriously, and it outlines the civic, constitutional, and educational frameworks that should shape the treatment of religion in the curriculum and classroom. It examines religion's absence from the classroom and the…
Dental management of patients taking antiplatelet medications.
Henry, Robert G
2009-07-01
Antiplatelet medications are drugs which decrease platelet aggregation and inhibit thrombus (clot) formation. They are widely used in primary and secondary prevention of thrombotic cerebrovascular or cardiovascular disease. The most common antiplatelet medications are the cyclooxygenase inhibitors (aspirin) and the adenosine diphosphate (ADP) receptor inhibitors clopidogrel (Plavix) and ticlopidine (Ticlid). The dental management of patients taking these drugs is reviewed here.
Renew! Take a Break in Kindergarten
ERIC Educational Resources Information Center
Charlesworth, Rosalind
2005-01-01
A university child development/early childhood education professor renews her relationship with young children and with current public school teaching by spending 5 weeks in kindergarten. This article describes some highlights of her experience: the children's daily journal writing, an in-class and take-home math activity, and teaching the…
Promoting Knowledge Transfer with Electronic Note Taking
ERIC Educational Resources Information Center
Katayama, Andrew D.; Shambaugh, R. Neal; Doctor, Tasneem
2005-01-01
We investigated the differences between (a) copying and pasting text versus typed note-taking methods of constructing study notes simultaneously with (b) vertically scaffolded versus horizontally scaffold notes on knowledge transfer. Forty-seven undergraduate educational psychology students participated. Materials included 2 electronic…
ERIC Educational Resources Information Center
Fowler, Betty
1997-01-01
Describes a unit for the first grade that takes advantage of the fall seasonal changes to explore, be creative, and learn. Ties all the subject areas together through a look at leaves and trees. A list of resources is included. (JRH)
Empirical Considerations of Episodic Perspective Taking.
ERIC Educational Resources Information Center
Roen, Duane H.
To study the effects of writers' attending to the informational needs of their readers (episodic perspective taking), each of 65 college freshmen was randomly assigned to one of three treatment conditions: (1) no attention to audience, (2) attention to audience during prewriting, and (3) attention to audience during revising. All three groups…
Literacy Technologies: What Stance Should We Take?
ERIC Educational Resources Information Center
Bruce, Bertram C.
Technology is a word which seems unavoidable now in discussions of literacy theory and practice. The question of what form literacies will take in a century likely to be defined by a new technological environment has become a present issue for nearly everyone involved with literacy today. This paper contends that at the core of both the excitement…
Disentangling Adolescent Pathways of Sexual Risk Taking
ERIC Educational Resources Information Center
Brookmeyer, Kathryn A.; Henrich, Christopher C.
2009-01-01
Using data from the National Longitudinal Survey of Youth, the authors aimed to describe the pathways of risk within sexual risk taking, alcohol use, and delinquency, and then identify how the trajectory of sexual risk is linked to alcohol use and delinquency. Risk trajectories were measured with adolescents aged 15-24 years (N = 1,778). Using…
Note Taking in Multi-Media Settings
ERIC Educational Resources Information Center
Black, Kelly; Yao, Guangming
2014-01-01
We provide a preliminary exploration into the use of note taking when combined with video examples. Student volunteers were divided into three groups and asked to perform two problems. The first problem was explored in a classroom setting and the other problem was a novel problem. The students were asked to complete the two questions. Furthermore,…
Kenojuak Ashevak: "Young Owl Takes a Ride."
ERIC Educational Resources Information Center
Schwartz, Bernard
1988-01-01
Describes a lesson plan used to introduce K-3 students to a Canadian Inuit artist, to the personal and cultural context of the artwork, and to a simple printmaking technique. Includes background information on the artist, instructional strategies, and a print of the artist's "Young Owl Takes a Ride." (GEA)
ERIC Educational Resources Information Center
Fornaciari, James
2016-01-01
As legendary Cubs manager Joe Maddon did with his players, seeing students as people first works for teachers who hope to build cohesive classes that achieve. Maddon's strength was his emphasis on cultivating positive relationships among his players. Taking a tip from Maddon's strategy, Fornaciari, an Advanced Placement history teacher, shares…
[Risk taking and the insular cortex].
Ishii, Hironori; Tsutsui, Ken-Ichiro; Iijima, Toshio
2013-08-01
Risk taking can lead to ruin, but sometimes, it can also provide great success. How does our brain make a decision on whether to take a risk or to play it safe? Recent studies have revealed the neural basis of risky decision making. In this review, we focus on the role of the anterior insular cortex (AIC) in risky decision making. Although human imaging studies have shown activations of the AIC in various gambling tasks, the causal involvement of the AIC in risky decision making was still unclear. Recently, we demonstrated a causality of the AIC in risky decision making by using a pharmacological approach in behaving rats: temporary inactivation of the AIC decreased the risk preference in gambling tasks, whereas temporary inactivation of the adjacent orbitofrontal cortex (OFC) increased the risk preference. The latter finding is consistent with a previous finding that patients with damage to the OFC take abnormally risky decisions in the Iowa gambling task. On the basis of these observations, we hypothesize that the intact AIC promotes risk-seeking behavior, and that the AIC and OFC are crucial for balancing the opposing motives of whether to take a risk or avoid it. However, the functional relationship between the AIC and OFC remains unclear. Future combinations of inactivation and electrophysiological studies may promote further understanding of risky decision making.
Teachable Moment: Google Earth Takes Us There
ERIC Educational Resources Information Center
Williams, Ann; Davinroy, Thomas C.
2015-01-01
In the current educational climate, where clearly articulated learning objectives are required, it is clear that the spontaneous teachable moment still has its place. Authors Ann Williams and Thomas Davinroy think that instructors from almost any discipline can employ Google Earth as a tool to take advantage of teachable moments through the…
Picture THIS: Taking Human Impact Seriously
ERIC Educational Resources Information Center
Patrick, Patricia; Patrick, Tammy
2010-01-01
Unfortunately, middle school students often view human impact as an abstract idea over which they have no control and do not see themselves as contributing to the Earth's environmental decline. How better to uncover students' ideas concerning human impact in their local community than to have them take photographs. With this objective in mind, the…
GROUP RESPONSIBILITY, AFFILIATION, AND ETHICAL RISK TAKING.
ERIC Educational Resources Information Center
RETTIG, SALOMON; AND OTHERS
The combined effect of affiliation and group responsibility on ethical risk taking is examined. Subjects were 150 male college students randomly assigned to three levels of affiliation. The task consisted of tracing a line between two concentric circles without touching either circle. Subjects reported their own "successes" on the task,…
Reconceptualizing Environmental Education: Taking Account of Reality.
ERIC Educational Resources Information Center
Dillon, Justin; Teamey, Kelly
2002-01-01
Investigates the pros and cons of integrating environmental education into the school curriculum. Focusing solely on environmental education's role in the school curriculum ignores a range of factors that affect its efficacy in the majority of the world. Suggests a conceptualization of environmental education that takes into account a range of…
String theorist takes over as Lucasian Professor
NASA Astrophysics Data System (ADS)
Banks, Michael
2009-11-01
String theorist Michael Green will be the next Lucasian Professor of Mathematics at Cambridge University. Green, 63, will succeed Stephen Hawking, who held the chair from 1980 before retiring last month at the age of 67 and taking up a distinguished research chair at the Perimeter Institute for Theoretical Physics in Canada (see above).
Take the Search out of Research.
ERIC Educational Resources Information Center
Giese, Ronald N.; And Others
1992-01-01
Provides a model that maps out five stages of relating library and scientific research: (1) establish an interest; (2) narrow a topic; (3) clarify the variables; (4) refine the procedures; and (5) interpret the unexpected. Provides a student questionnaire for selecting a topic and a format for general note taking. (MDH)
Teen Risk-Taking: A Statistical Portrait.
ERIC Educational Resources Information Center
Lindberg, Laura Duberstein; Boggess, Scott; Porter, Laura; Williams, Sean
This report provides a statistical portrait of teen participation in 10 of the most prevalent risk behaviors. It focuses on the overall participation in each behavior and in multiple risk taking. The booklet presents the overall incidence and patterns of teen involvement in the following risk behaviors: (1) regular alcohol use; (2) regular tobacco…
Take Pride in America Educational Leader's Guide.
ERIC Educational Resources Information Center
Sledge, Janet H., Comp.
The Take Pride in America (TPIA) school program encourages volunteer stewardship programs to help protect, enhance, and manage public lands such as school sites, forests, parks, water reservoirs, historical sites, fish and wildlife areas, public nature preserves, and wilderness areas in the United States. From this program an educational guide and…
NASA Technical Reports Server (NTRS)
2008-01-01
Hundreds of children participated in the annual Take Our Children to Work Day at Stennis Space Center on July 29. During the day, children of Stennis employees received a tour of facilities and took part in various activities, including demonstrations in cryogenics and robotics.
Taking a strategic approach to campus parking.
Burr, Dave
2006-01-01
Building a new parking facility in a campus setting - such as a hospital or medical center - is not an easy assignment. By taking a strategic planning approach, according to the author, campus planners can meet the needs of most of their constituents for convenient, easily accessible and safe parking.
Distance Education: Taking Classes to the Students.
ERIC Educational Resources Information Center
Collins, Timothy; Dewees, Sarah
2001-01-01
Technological advances have equipped educational institutions with the capability to take classes to the student. Higher education institutions throughout the South are upgrading existing wide-area networks connecting buildings and campuses to create statewide "backbones" that will serve primary and secondary schools, libraries, offices,…
Irregular Applications: Architectures & Algorithms
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists, and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Basic cluster compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.
1980-01-01
Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Andreas, Afshin
2015-04-01
The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.
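For contrast with SPA's +/-0.0003 degree accuracy, a textbook first-order calculation fits in a few lines: Cooper's declination formula combined with the standard hour-angle relation cos(theta_z) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(h). This is emphatically not the SPA; it is a simplified approximation good to a fraction of a degree.

```python
import math

def solar_zenith_deg(day_of_year, solar_hour, lat_deg):
    """Approximate solar zenith angle (degrees) from day, solar time, latitude."""
    # Cooper's approximation for the solar declination.
    dec = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365.0)
    h = math.radians(15.0 * (solar_hour - 12.0))     # hour angle, 15 deg/hour
    lat = math.radians(lat_deg)
    cos_z = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(h))
    return math.degrees(math.acos(cos_z))

# Solar noon at the equator near an equinox (day ~81): sun essentially at zenith.
print(solar_zenith_deg(81, 12.0, 0.0))   # ≈ 0 degrees
```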
Algorithmic Complexity. Volume II.
1982-06-01
works, give an example, and discuss the inherent weaknesses and their causes. Electrical Network Analysis: Knuth mentions the applicability of... of these 3 products of 2-coefficient polynomials can be found by a repeated application of the 3-multiplication scheme, only 3 scalar... see another application of this paradigm later. We now investigate the efficiency of the divide-and-conquer polynomial multiplication algorithm. Let M(n
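The 3-multiplication scheme for products of 2-coefficient polynomials referred to in the excerpt is Karatsuba's identity; applied recursively it gives the divide-and-conquer polynomial multiplication the fragment analyzes. The implementation below is our own sketch, not the volume's.

```python
def poly_mul(a, b):
    """Karatsuba product of coefficient lists (lowest-order term first)."""
    n = max(len(a), len(b))
    a = list(a) + [0] * (n - len(a))
    b = list(b) + [0] * (n - len(b))
    if n == 1:
        return [a[0] * b[0]]
    k = (n + 1) // 2
    a0, a1 = a[:k], a[k:] + [0] * (2 * k - n)     # pad halves to length k
    b0, b1 = b[:k], b[k:] + [0] * (2 * k - n)
    p0 = poly_mul(a0, b0)                          # low * low
    p2 = poly_mul(a1, b1)                          # high * high
    mid = poly_mul([x + y for x, y in zip(a0, a1)],
                   [x + y for x, y in zip(b0, b1)])
    # a*b = p0 + x^k * (mid - p0 - p2) + x^(2k) * p2 : only 3 recursive products.
    res = [0] * (4 * k - 1)
    for i in range(2 * k - 1):
        res[i] += p0[i]
        res[i + k] += mid[i] - p0[i] - p2[i]
        res[i + 2 * k] += p2[i]
    return res[: 2 * n - 1]

print(poly_mul([1, 2], [3, 4]))   # (1+2x)(3+4x) → [3, 10, 8]
```

Three recursive products instead of four gives the familiar O(n^log2(3)) scalar-multiplication count, which is the quantity M(n) the excerpt goes on to bound.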
ARPANET Routing Algorithm Improvements
1978-10-01
J. M. McQuillan; E. C. Rosen (Report No. 3940). ...this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of... algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show
1983-10-13
determining the solution using the Moore-Penrose inverse. An expression for the mean square error is derived [8,9]. The expression indicates that... Proc. 10. "An Iterative Algorithm for Finding the Minimum Eigenvalue of a Class of Symmetric Matrices," D. Fuhrmann and B. Liu, submitted to 1984 IEEE... Int. Conf. Acous. Sp. Sig. Proc. 11. "Approximating the Eigenvectors of a Symmetric Toeplitz Matrix," by D. Fuhrmann and B. Liu, 1983 Allerton Conf. an
2016-06-07
XBT’s sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than...be entered at the terminal in metric or English temperatures or sound speeds. The algorithm automatically determines which form each data point was... sound speeds. Leroy’s equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months
Adaptive continuous twisting algorithm
NASA Astrophysics Data System (ADS)
Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid
2016-09-01
In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite-time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties even in the case where an upper bound of the derivative of the perturbation exists but is unknown.
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using information on the character of the system, we consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
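A deliberately small GA makes the tunable parameters concrete: population size, number of generations, crossover probability pc, mutation probability pm, and the search-space bounds are exactly the knobs such a preprocessor would set. The sphere objective and every setting below are arbitrary example choices, not the preprocessor's output.

```python
import random

def ga_minimize(f, dim, bounds, pop=40, gens=120, pc=0.9, pm=0.1, seed=1):
    """Minimize f over [lo, hi]^dim with elitism, uniform crossover,
    truncation selection, and clamped gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)                                # best (lowest f) first
        nxt = P[:2]                                  # elitism: keep the 2 best
        while len(nxt) < pop:
            p1, p2 = rng.sample(P[: pop // 2], 2)    # truncation selection
            if rng.random() < pc:                    # uniform crossover
                child = [a if rng.random() < 0.5 else b
                         for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for i in range(dim):                     # gaussian mutation
                if rng.random() < pm:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))
            nxt.append(child)
        P = nxt
    return min(P, key=f)

best = ga_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(best)   # near the global minimum at the origin
```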
Stubbs, Allston Julius; Atilla, Halis Atil
2016-01-01
Summary Background Despite the rapid advancement of imaging and arthroscopic techniques for the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of symptoms is more difficult than for the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We classify intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally, we summarize the surgical treatment approach in an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain are varied. An algorithmic approach to hip restoration, from diagnosis to rehabilitation, is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has opened new opportunities for performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system may be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and is thus not used on a daily basis. This paper presents an algorithm for the use of a navigation system in basic ESS for the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system: 1) the nasal vestibule unit, 2) the OMC unit, 3) the anterior ethmoid unit, 4) the posterior ethmoid unit, and 5) the sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining each of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.
Large scale tracking algorithms
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
The Weaknesses of Full-Text Searching
ERIC Educational Resources Information Center
Beall, Jeffrey
2008-01-01
This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves…
Education, Wechsler's Full Scale IQ and "g."
ERIC Educational Resources Information Center
Colom, Roberto; Abad, Francisco J.; Garcia, Luis F.; Juan-Espinosa, Manuel
2002-01-01
Investigated whether average Full Scale IQ (FSIQ) differences can be attributed to "g," using the Spanish standardization sample of the Wechsler Adult Intelligence Scale III (WAIS III) (n=703 women and 666 men). Results support the conclusion that WAIS III FSIQ does not directly or exclusively measure "g" across the full range…
About Reformulation in Full-Text IRS.
ERIC Educational Resources Information Center
Debili, Fathi; And Others
1989-01-01
Analyzes different kinds of reformulations used in information retrieval systems where full text databases are accessed through natural language queries. Tests of these reformulations on large full text databases managed by the Syntactic and Probabilistic Indexing and Retrieval of Information in Texts (SPIRIT) system are described, and an expert…
Full-Day Kindergarten Programs. ERIC Digest.
ERIC Educational Resources Information Center
Rothenberg, Dianne
Changes in American society and education over the last 20 years have contributed to the popularity of all-day, every-day kindergarten programs. Full-day kindergarten is popular for a number of reasons. Full-day programs eliminate the need to provide buses and crossing guards at mid-day. In high-poverty schools, state and federal funding for…
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Full hearing. 42.213 Section 42.213... Justice System Improvement Act of 1979 § 42.213 Full hearing. (a) At any time after notification of..., a State government or unit of general local government may request a hearing on the record...
Inspiring a Life Full of Learning
ERIC Educational Resources Information Center
Nasse, Saul
2010-01-01
After being appointed as Controller of BBC Learning, this author reflected on how the BBC had inspired his own love of learning. He realised that unlocking the learning potential of the full range of BBC outputs would be the key to inspiring a "life full of learning" for all its audiences. In this article, the author describes four new…
ERIC Educational Resources Information Center
Cotton, P. L.
1987-01-01
Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)
Automated Simplification of Full Chemical Mechanisms
NASA Technical Reports Server (NTRS)
Norris, A. T.
1997-01-01
A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope, a dynamical-systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables, in effect generating reduced chemical mechanisms by an automatic procedure. The resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms and to show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
Algorithm development for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Rosario, Dalton S.
2008-10-01
process, one can achieve a desirably low cumulative probability of taking target samples by chance and using them as background samples. This probability is modeled by the binomial distribution family, where the only target related parameter---the proportion of target pixels potentially covering the imagery---is shown to be robust. PRS requires a suitable scoring algorithm to compare samples, although applying PRS with the new two-step univariate detectors is shown to outperform existing multivariate detectors.
Algorithm for Constructing Contour Plots
NASA Technical Reports Server (NTRS)
Johnson, W.; Silva, F.
1984-01-01
A general computer algorithm was developed for the construction of contour plots. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. It is based on an interpolation scheme in which the points in the plane are connected by straight-line segments to form a set of triangles. The program is written in FORTRAN IV.
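The per-triangle interpolation step this brief describes (a contour level crosses a triangle edge wherever the level lies between the two vertex values, with the crossing point found by linear interpolation) can be sketched as follows. This is an illustrative reconstruction in Python, not the FORTRAN IV program itself.

```python
def contour_segment(tri, level):
    """Return the straight-line segment where a contour level crosses one
    triangle, or None if it does not cross.

    tri is [((x, y), value), ...] for the 3 vertices.  The level crosses an
    edge iff it lies strictly between the two vertex values; the crossing
    point follows by linear interpolation along the edge.
    """
    crossings = []
    for i in range(3):
        (p1, v1), (p2, v2) = tri[i], tri[(i + 1) % 3]
        if (v1 - level) * (v2 - level) < 0:      # level strictly between
            t = (level - v1) / (v2 - v1)         # interpolation parameter
            crossings.append((p1[0] + t * (p2[0] - p1[0]),
                              p1[1] + t * (p2[1] - p1[1])))
    return tuple(crossings) if len(crossings) == 2 else None

# Right triangle with values 0, 1, 0 at its corners: the 0.5 contour
# crosses the two edges adjacent to the value-1 vertex.
seg = contour_segment([((0, 0), 0.0), ((1, 0), 1.0), ((0, 1), 0.0)], 0.5)
```

A full contour plot repeats this over every triangle of the triangulation and chains the resulting segments into polylines.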
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
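Horner's method, the first of the two topics mentioned, can be stated in a few lines; this is a generic sketch, not a listing from the article.

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.

    coeffs are given highest degree first: [a_n, ..., a_1, a_0].
    Each step folds one coefficient in: result = result * x + a.
    """
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
value = horner([2, -6, 2, -1], 3)
```

The loop performs n multiplications and n additions for a degree-n polynomial, versus roughly n²/2 multiplications for naive term-by-term evaluation.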
Greedy algorithms in disordered systems
NASA Astrophysics Data System (ADS)
Duxbury, P. M.; Dobrin, R.
1999-08-01
We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
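As a reminder of the greedy structure underlying the extremal dynamics described above, here is a minimal sketch of Dijkstra's minimal-cost-path algorithm (illustrative only; the graph and weights are toy data, and the physics applications are not modeled).

```python
import heapq

def dijkstra(graph, source):
    """Greedy shortest path: repeatedly settle the unsettled node with the
    smallest tentative distance.

    graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    Returns {node: distance} for all reachable nodes.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)],
     "b": [("c", 1), ("d", 4)],
     "c": [("d", 1)],
     "d": []}
shortest = dijkstra(g, "a")
```

Prim's minimal spanning tree algorithm has the same greedy skeleton, differing only in the quantity kept in the priority queue (edge weight rather than accumulated path cost).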
Grammar Rules as Computer Algorithms.
ERIC Educational Resources Information Center
Rieber, Lloyd
1992-01-01
One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
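The half-interval search algorithm referred to above is bisection; the article's own program listing is not reproduced here, so the following is a generic sketch.

```python
def half_interval_search(f, lo, hi, tol=1e-10):
    """Half-interval (bisection) root search.

    f(lo) and f(hi) must differ in sign; each step keeps whichever
    half-interval still brackets a root, halving the uncertainty.
    """
    assert f(lo) * f(hi) <= 0, "interval must bracket a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The root of x^2 - 2 in [1, 2] is sqrt(2).
root = half_interval_search(lambda x: x * x - 2, 1.0, 2.0)
```

Verification in the article's sense amounts to the loop invariant: the root stays bracketed between lo and hi, and the interval width halves each iteration, so the loop terminates with the root located to within tol.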
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-20
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration RIN 0648-XC564 Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to Marine Seismic Survey in the Beaufort Sea,...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-10
... Specified Activities; Taking Marine Mammals Incidental to Polar Bear Captures AGENCY: National Marine..., incidental to a capture-recapture program of polar bears in the U.S. Chukchi Sea. DATES: Effective March 14... taking, by harassment, of marine mammals incidental to a capture-recapture program of polar bears in...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... Specified Activities; Taking Marine Mammals Incidental to Polar Bear Captures AGENCY: National Marine...) to take marine mammals, by harassment, incidental to a capture- recapture program of polar bears in...-recapture program of polar bears in the U.S. Chukchi Sea. NMFS reviewed the USFWS' application...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-07
...NMFS received an application from Shell Offshore Inc. (Shell) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Beaufort Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to Shell to take, by......
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
...NMFS received an application from ConocoPhillips Company (COP) for an Incidental Harassment Authorization (IHA) to take marine mammals, by harassment, incidental to offshore exploration drilling on Outer Continental Shelf (OCS) leases in the Chukchi Sea, Alaska. Pursuant to the Marine Mammal Protection Act (MMPA), NMFS is requesting comments on its proposal to issue an IHA to COP to take, by......
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
NASA Astrophysics Data System (ADS)
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are algorithms inspired by nature, and within little more than a decade hundreds of papers have reported successful applications of them. In this paper we review the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins published in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Implementing a self-structuring data learning algorithm
NASA Astrophysics Data System (ADS)
Graham, James; Carson, Daniel; Ternovskiy, Igor
2016-05-01
In this paper, we elaborate on our implementation of our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. To this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create it. We elaborate on our initial setup of the algorithm and the scenarios we used to test it at this early stage. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion is geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm in a more general setting, but to do so within a reasonable time, we needed to pick a place to start.
NASA Astrophysics Data System (ADS)
Abdelazim, S.; Santoro, D.; Arend, M.; Moshary, F.; Ahmed, S.
2015-05-01
In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra over 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelation over 10k pulses. Efficient implementation of each of these algorithms on an FPGA is challenging because it requires tradeoffs among retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. A description of the approach used to manage these tradeoffs for each of the two signal processing algorithms is presented and explained in this article. Results of atmospheric measurements obtained with these two embedded programming techniques are also presented.
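The first algorithm (gate, FFT, square modulus, accumulate) can be sketched in host-side form. A naive DFT stands in for the FPGA's FFT core, and all names, gate sizes, and the single-tone test signal are illustrative, not from the paper.

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, a dependency-free stand-in for the FPGA FFT core."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def accumulate_power_spectra(pulses, gate_start, gate_len):
    """Gate each return, transform the gate, take the square modulus to get
    a power spectrum, and accumulate over all pulses."""
    acc = [0.0] * gate_len
    for pulse in pulses:
        gate = pulse[gate_start:gate_start + gate_len]   # time gating
        spectrum = dft(gate)
        for k, c in enumerate(spectrum):
            acc[k] += abs(c) ** 2                        # square modulus
    return acc

# Two identical returns carrying a single tone in bin 2 of an 8-sample
# gate: the accumulated power concentrates in that bin (and its mirror).
tone = [cmath.exp(2j * cmath.pi * 2 * t / 8).real for t in range(8)]
power = accumulate_power_spectra([tone, tone], 0, 8)
```

On the FPGA, the accumulation across 10k pulses is what forces the word-width/memory tradeoffs the abstract describes, since accumulated power values grow well beyond the input data width.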
Full employment maintenance in the private sector
NASA Technical Reports Server (NTRS)
Young, G. A.
1976-01-01
Operationally, full employment can be accomplished by applying modern computer capabilities, game and decision concepts, and communication feedback possibilities, rather than accepted economic tools, to the problem of assuring invariant full employment. The government must provide positive direction to individual firms concerning the net number of employees that each firm must hire or refrain from hiring to assure national full employment. To preserve free enterprise and the decision making power of the individual manager, this direction must be based on each private firm's own numerical employment projections.
Full-matrix capture with a customizable phased array instrument
NASA Astrophysics Data System (ADS)
Dao, Gavin; Braconnier, Dominique; Gruber, Matt
2015-03-01
In recent years, a technique known as Full-Matrix Capture (FMC) has gained some headway in the NDE community for phased array applications. It is important to understand that FMC is the method by which the instrumentation acquires the ultrasonic signals; further post-processing in software is required to create a meaningful image for a particular application. A flexible software interface, a small form factor, and excellent signal-to-noise ratio per acquisition channel on a 64/64 or 128/128 phased array module with FMC capability prove beneficial both in industrial implementation and in further investigation of post-processing techniques. This paper provides an example of imaging with a 5 MHz, 128-element linear phased array transducer using FMC and a popular post-processing algorithm known as the Total Focusing Method (TFM).
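The TFM post-processing step can be sketched as a delay-and-sum over the full matrix: for each image pixel, sum the sample at the transmit-to-pixel-to-receive delay over every transmitter/receiver pair. This is a generic textbook-style sketch with toy units and an idealized point-scatterer dataset, not the instrument's implementation.

```python
import math

def tfm_pixel(fmc, elem_x, px, pz, c, fs):
    """Total Focusing Method intensity at one pixel.

    fmc[i][j] is the sampled A-scan for transmitter i / receiver j (the
    full matrix); elem_x are element x-positions on the z = 0 surface;
    c is the sound speed and fs the sample rate.
    """
    total = 0.0
    for i, xi in enumerate(elem_x):
        d_tx = math.hypot(px - xi, pz)            # transmitter -> pixel
        for j, xj in enumerate(elem_x):
            d_rx = math.hypot(px - xj, pz)        # pixel -> receiver
            k = int(round((d_tx + d_rx) / c * fs))
            if 0 <= k < len(fmc[i][j]):
                total += fmc[i][j][k]             # delay-and-sum
    return total

# Synthetic FMC data: a single point scatterer produces a unit spike at
# the round-trip delay for every tx/rx pair (toy units throughout).
elem_x = [0.0, 1.0, 2.0]          # hypothetical 3-element array
c, fs = 1.0, 10.0                  # toy sound speed and sample rate
sx, sz = 1.0, 2.0                  # scatterer position
n = 64
fmc = [[[0.0] * n for _ in elem_x] for _ in elem_x]
for i, xi in enumerate(elem_x):
    for j, xj in enumerate(elem_x):
        k = int(round((math.hypot(sx - xi, sz) + math.hypot(sx - xj, sz)) / c * fs))
        fmc[i][j][k] = 1.0
focus = tfm_pixel(fmc, elem_x, sx, sz, c, fs)
```

At the true scatterer position all 9 tx/rx contributions add coherently; away from it the delays fall on empty samples, which is what gives TFM its focusing everywhere in the image.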
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 of the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the number of linearly independent vectors necessary to form the solution space. While doing so, it ensures that all possible combinations of solutions are checked when needed, and it handles complications that arise in particular cases. In addition, two particular cases that cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in terms of how they are recognized and how they are handled by the new code. Finally, the new code was tested using both isolated movements and complex trajectories on various mobile manipulators.
Planck 2015 results. XII. Full focal plane simulations
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18 144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10⁴ mission realizations reduced to about 10⁶ maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects; the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.
Minimising biases in full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2015-03-14
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is, in its present form, a Markov chain. We construct the Markov matrix of FCIQMC for a two-determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups that, in general, minimise the bias. We show that a reweighting scheme commonly used to remove the bias caused by population control in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] is effective, and we recommend its use as a post-processing step.
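The kind of stationary-distribution analysis described can be illustrated numerically: given a column-stochastic Markov matrix, power iteration converges to the stationary distribution. The two-state matrix below is a toy stand-in, not the FCIQMC transition matrix the paper constructs analytically.

```python
def stationary(P, iters=2000):
    """Power-iterate a column-stochastic Markov matrix to its stationary
    distribution.  P[i][j] is the probability of moving from state j to
    state i, so each column of P sums to 1.
    """
    n = len(P)
    v = [1.0 / n] * n                     # start from the uniform distribution
    for _ in range(iters):
        v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

# Toy two-state chain with stay-probabilities 0.9 and 0.8:
# balance gives pi = (2/3, 1/3).
P = [[0.9, 0.2],
     [0.1, 0.8]]
pi = stationary(P)
```

For a chain this small the stationary distribution can equally be read off from the detailed-balance condition 0.1·π₀ = 0.2·π₁; the iteration generalizes to larger matrices.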
Traffic sharing algorithms for hybrid mobile networks
NASA Technical Reports Server (NTRS)
Arcand, S.; Murthy, K. M. S.; Hafez, R.
1995-01-01
In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that users have either a terrestrial-only single-mode terminal (SMT) or a terrestrial/satellite dual-mode terminal (DMT), and the ratio of DMTs to total terminals is defined as gamma. It is assumed that call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms that manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma), given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well over a wide range of gamma.
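Blocking probabilities of the kind this evaluation trades against traffic load (in erl/cell) are classically computed with the Erlang-B recurrence. The sketch below is that generic formula, not the paper's G & S algorithms; the traffic and channel figures are illustrative.

```python
def erlang_b(traffic_erl, channels):
    """Erlang-B blocking probability via the standard recurrence:

        B(0) = 1,   B(n) = a * B(n-1) / (n + a * B(n-1))

    where a is the offered traffic in erlangs and n the channel count.
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erl * b / (n + traffic_erl * b)
    return b

# Example: 10 erl offered to a 15-channel cell blocks about 3.6% of calls.
blocking = erlang_b(10.0, 15)
```

Inverting this relation (largest offered traffic whose blocking stays under a target, e.g. 2%) is how a load-handling capability in erl/cell is derived for a given channel allocation.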
Full-charge indicator for battery chargers
NASA Technical Reports Server (NTRS)
Cole, Steven W. (Inventor)
1983-01-01
A full-charge indicator for battery chargers, includes a transistor which is in a conductive state as long as charging current to the battery is not less than a level which indicates that the battery did not reach full charge. When the battery reaches full charge, a voltage drop in a resistor in the charging current path is not sufficient to maintain the transistor in a conducting state, and therefore it is switched off. When this occurs an LED is turned on, to indicate a full charge state of the battery. A photocoupler together with a photocoupler transistor are included. When the transistor is off, the photocoupler activates the photocoupler transistor to shunt out a resistor, thereby reducing the charging current to the battery to a float charging current and prevent the battery from being overcharged and damaged.
28 CFR 40.15 - Full certification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... effective, the Attorney General shall grant full certification. Such certification shall remain in effect unless and until the Attorney General finds reasonable cause to believe that the grievance procedure...
3D elastic full waveform inversion: case study from a land seismic survey
NASA Astrophysics Data System (ADS)
Kormann, Jean; Marti, David; Rodriguez, Juan-Esteban; Marzan, Ignacio; Ferrer, Miguel; Gutierrez, Natalia; Farres, Albert; Hanzich, Mauricio; de la Puente, Josep; Carbonell, Ramon
2016-04-01
Full Waveform Inversion (FWI) is one of the most advanced processing methods, and it is now reaching a mature state after years of work on theoretical and technical issues such as the non-uniqueness of the solution and harnessing the huge computational power required by realistic scenarios. BSIT (Barcelona Subsurface Imaging Tools, www.bsc.es/bsit) includes an FWI algorithm that can tackle very complex problems involving large datasets. We present here the application of this system to a 3D dataset acquired to constrain the shallow subsurface. This is where the wavefield is most complicated, both because most of the wavefield conversions take place in the shallow region and because the medium is much more laterally heterogeneous there. With this in mind, at least an isotropic elastic approximation is needed as the kernel engine for FWI. The current study explores the possibility of applying elastic isotropic FWI using only the vertical component of the recorded seismograms. The survey covers an area of 500×500 m² and consists of a receiver grid of 10 m×20 m combined with a 250 kg accelerated weight-drop source on a displaced grid of 20 m×20 m. One of the main challenges in this case study is the costly 3D modeling, which includes topography and substantial free-surface effects. FWI is applied to a data subset (shooting lines 4 to 12) and is performed for 3 frequencies ranging from 15 to 25 Hz. The starting models are obtained from travel-time tomography, and the whole computation runs on 75 nodes of the Mare Nostrum supercomputer for 3 days. The resulting models provide a higher resolution of the subsurface structures and show a good correlation with the available borehole measurements. FWI thus allows this 1D (borehole) knowledge to be extended reliably to 3D.
An Investigation into Orthodontic Clinical Record Taking.
Lee, Kennth; Torkfar, Ghazal; Fraser, Cary
2015-01-01
Dental examination, including record taking, refers to a systematic process in which dentists investigate various facets of patients' oral and general health in order to identify underlying pathologies or concerns. Often, a unique customized treatment plan is developed on the basis of this process in order to maximize patients' oral health while meeting their goals and expectations. This study reviews orthodontic clinical record taking, with a particular emphasis on the steps undertaken in a private clinic in Australia.
Astronaut Jack Lousma taking hot bath
NASA Technical Reports Server (NTRS)
1973-01-01
A closeup view of Astronaut Jack R. Lousma, Skylab 3 pilot, taking a hot bath in the crew quarters of the Orbital Workshop (OWS) of the Skylab space station cluster in Earth orbit. In deploying the shower facility, the shower curtain is pulled up from the floor and attached to the ceiling. The water comes through a push-button shower head attached to a flexible hose. Water is drawn off by a vacuum system.
Effects of Full Spectrum Lighting in Submarines
1987-04-09
The subjects rated their health as being better under full-spectrum light, but this was not accompanied by higher ratings of mood or quality of sleep ...the elderly in the northern United States (Neer, 1985) and must be due to insufficient exposure to sunlight (Holick, 1985). Light also indirectly
Empathy and visual perspective-taking performance.
Mattan, Bradley D; Rotshtein, Pia; Quinn, Kimberly A
2016-01-01
This study examined the extent to which visual perspective-taking performance is modulated by trait-level empathy. Participants completed a third-person visual perspective-taking task in which they judged the perspectives of two simultaneously presented avatars, designated "Self" and "Other." Depending on the trial, these avatars either held the same view (i.e., congruent) or a different view (i.e., incongruent). Analyses focused on the relationship between empathy and two perspective-taking phenomena: Selection between competing perspectives (i.e., perspective-congruence effects) and prioritization of the Self avatar's perspective. Empathy was related to improved overall performance on this task and a reduced cost of selecting between conflicting perspectives (i.e., smaller perspective-congruence effects). This effect was asymmetric, with empathy (i.e., empathic concern) levels predicting reduced interference from a conflicting perspective, especially when adopting the Self (vs. Other) avatar's perspective. Taken together, these results highlight the importance of the self-other distinction and mental flexibility components of empathy.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation (BP) algorithm. We start with the bounded-inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that has allowed connections with approximate algorithms from statistical physics. It is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes. PMID:20740057
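As a point of reference for the message-passing schemes discussed above, here is a minimal sum-product sketch (not IJGP itself) on a hypothetical three-node chain, where belief propagation is exact and can be checked against brute-force enumeration:

```python
import itertools

import numpy as np

# Pairwise potentials on a 3-node chain x0 - x1 - x2 (binary variables).
psi01 = np.array([[2.0, 1.0], [1.0, 4.0]])
psi12 = np.array([[1.0, 3.0], [2.0, 1.0]])

# Exact marginal of x1 by brute-force enumeration over all assignments.
p = np.zeros(2)
for x0, x1, x2 in itertools.product([0, 1], repeat=3):
    p[x1] += psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()

# Sum-product messages; on a tree one inward sweep already gives exact beliefs.
m01 = psi01.sum(axis=0)   # message x0 -> x1: sum out x0
m21 = psi12.sum(axis=1)   # message x2 -> x1: sum out x2
b1 = m01 * m21            # belief at x1 = product of incoming messages
b1 /= b1.sum()
```

On loopy graphs the same message updates are iterated to (approximate) convergence, which is the regime where IJGP's combination of iteration and bounded inference matters.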
TopicPanorama: A Full Picture of Relevant Topics.
Wang, Xiting; Liu, Shixia; Liu, Junlin; Chen, Jianfei; Zhu, Jun; Guo, Baining
2016-12-01
This paper presents a visual analytics approach to analyzing a full picture of relevant topics discussed in multiple sources, such as news, blogs, or micro-blogs. The full picture consists of a number of common topics covered by multiple sources, as well as distinctive topics from each source. Our approach models each textual corpus as a topic graph. These graphs are then matched using a consistent graph matching method. Next, we develop a level-of-detail (LOD) visualization that balances both readability and stability. Accordingly, the resulting visualization enhances the ability of users to understand and analyze the matched graph from multiple perspectives. By incorporating metric learning and feature selection into the graph matching algorithm, we allow users to interactively modify the graph matching result based on their information needs. We have applied our approach to various types of data, including news articles, tweets, and blog data. Quantitative evaluation and real-world case studies demonstrate the promise of our approach, especially in support of examining a topic-graph-based full picture at different levels of detail.
Categorisation of full waveform data provided by laser scanning devices
NASA Astrophysics Data System (ADS)
Ullrich, Andreas; Pfennigbauer, Martin
2011-11-01
In 2004, a laser scanner device for commercial airborne laser scanning applications, the RIEGL LMS-Q560, was introduced to the market. It made use of a radical alternative to the traditional analogue signal detection and processing schemes found in LIDAR instruments up to that point: digitizing the echo signals received by the instrument for every laser pulse and analysing these echo signals off-line in a so-called full waveform analysis, in order to retrieve almost all information contained in the echo signal using transparent algorithms adaptable to specific applications. In the field of laser scanning, the somewhat unspecific term "full waveform data" has since become established. We attempt a categorisation of the different types of full waveform data found in the market. We discuss the challenges in echo digitization and waveform analysis from an instrument designer's point of view, and we address the benefits to be gained by using this technique, especially with respect to the so-called multi-target capability of pulsed time-of-flight LIDAR instruments.
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
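The projector at the heart of FCIQMC can be illustrated deterministically: repeatedly applying 1 − τ(H − S) to a coefficient vector filters out everything but the FCI ground state. The sketch below uses a hypothetical 4-determinant Hamiltonian; the actual algorithm instead samples this matrix-vector product stochastically with signed walkers, which is what gives it its memory and parallelism advantages:

```python
import numpy as np

# Hypothetical toy Hamiltonian over 4 determinants (real FCI spaces are
# astronomically larger, which is why stochastic sampling is needed).
H = np.array([[-1.0, 0.1, 0.2, 0.0],
              [ 0.1, 0.5, 0.1, 0.3],
              [ 0.2, 0.1, 1.0, 0.1],
              [ 0.0, 0.3, 0.1, 2.0]])

tau, shift = 0.05, -1.2                 # time step and energy shift S
c = np.array([1.0, 0.0, 0.0, 0.0])      # start from the reference determinant
for _ in range(2000):
    # FCIQMC projector applied deterministically: c <- [1 - tau*(H - S)] c.
    c = c - tau * (H - shift * np.eye(4)) @ c
    c /= np.abs(c).sum()                # fix the total "walker" population

E = (c @ H @ c) / (c @ c)               # Rayleigh-quotient energy estimate
E_exact = np.linalg.eigvalsh(H)[0]      # exact FCI ground-state energy
```

In the stochastic version, each entry of c is represented by a signed integer walker count, spawning and death events sample the projector, and annihilation of opposite-sign walkers on the same determinant controls the sign problem.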
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
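Strategy (1), explicit message passing built into the source code, can be sketched with threads standing in for distributed processes (a hypothetical illustration, not MPI): each worker receives its chunk of work as a message and replies with a partial result.

```python
import queue
import threading

def worker(inbox, outbox):
    """Receive a chunk as a message, reply with its partial sum."""
    chunk = inbox.get()
    outbox.put(sum(chunk))

outbox = queue.Queue()
threads = []
for chunk in (list(range(50)), list(range(50, 100))):
    inbox = queue.Queue()
    th = threading.Thread(target=worker, args=(inbox, outbox))
    th.start()
    inbox.put(chunk)        # explicit "send" of the work message
    threads.append(th)

# Explicit "receive" and reduction of the two partial sums.
total = outbox.get() + outbox.get()
for th in threads:
    th.join()
```

Strategy (2) would hide the send/receive calls behind a communications library, leaving the source code free of explicit parallelism.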
Algorithm performance evaluation
NASA Astrophysics Data System (ADS)
Smith, Richard N.; Greci, Anthony M.; Bradley, Philip A.
1995-03-01
Traditionally, the performance of adaptive antenna systems is measured using automated antenna array pattern measuring equipment. This measurement equipment produces a plot of the receive gain of the antenna array as a function of angle. However, communications system users more readily accept and understand bit error rate (BER) as a performance measure. The work reported on here was conducted to characterize adaptive antenna receiver performance in terms of overall communications system performance using BER as a performance measure. The adaptive antenna system selected for this work featured a linear array, least mean square (LMS) adaptive algorithm and a high speed phase shift keyed (PSK) communications modem.
Take a breath and take the turn: how breathing meets turns in spontaneous dialogue.
Rochet-Capellan, Amélie; Fuchs, Susanne
2014-12-19
Physiological rhythms are sensitive to social interactions and could contribute to defining social rhythms. Nevertheless, our knowledge of the implications of breathing in conversational turn exchanges remains limited. In this paper, we addressed the idea that breathing may contribute to timing and coordination between dialogue partners. The relationships between turns and breathing were analysed in unconstrained face-to-face conversations involving female speakers. No overall relationship between breathing and turn-taking rates was observed, as breathing rate was specific to the subjects' activity in dialogue (listening versus taking the turn versus holding the turn). A general inter-personal coordination of breathing over the whole conversation was not evident. However, specific coordinative patterns were observed in shorter time-windows when participants engaged in taking turns. The type of turn-taking had an effect on the respective coordination in breathing. Most of the smooth and interrupted turns were taken just after an inhalation, with specific profiles of alignment to partner breathing. Unsuccessful attempts to take the turn were initiated late in the exhalation phase and with no clear inter-personal coordination. Finally, breathing profiles at turn-taking were different than those at turn-holding. The results support the idea that breathing is actively involved in turn-taking and turn-holding.
Take a breath and take the turn: how breathing meets turns in spontaneous dialogue
Rochet-Capellan, Amélie; Fuchs, Susanne
2014-01-01
Physiological rhythms are sensitive to social interactions and could contribute to defining social rhythms. Nevertheless, our knowledge of the implications of breathing in conversational turn exchanges remains limited. In this paper, we addressed the idea that breathing may contribute to timing and coordination between dialogue partners. The relationships between turns and breathing were analysed in unconstrained face-to-face conversations involving female speakers. No overall relationship between breathing and turn-taking rates was observed, as breathing rate was specific to the subjects' activity in dialogue (listening versus taking the turn versus holding the turn). A general inter-personal coordination of breathing over the whole conversation was not evident. However, specific coordinative patterns were observed in shorter time-windows when participants engaged in taking turns. The type of turn-taking had an effect on the respective coordination in breathing. Most of the smooth and interrupted turns were taken just after an inhalation, with specific profiles of alignment to partner breathing. Unsuccessful attempts to take the turn were initiated late in the exhalation phase and with no clear inter-personal coordination. Finally, breathing profiles at turn-taking were different than those at turn-holding. The results support the idea that breathing is actively involved in turn-taking and turn-holding. PMID:25385777
50 CFR 216.71 - Allowable take of fur seals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 10 2014-10-01 2014-10-01 false Allowable take of fur seals. 216.71... MAMMALS Pribilof Islands, Taking for Subsistence Purposes § 216.71 Allowable take of fur seals. Pribilovians may take fur seals on the Pribilof Islands if such taking is (a) For subsistence uses, and (b)...
50 CFR 216.71 - Allowable take of fur seals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Allowable take of fur seals. 216.71... MAMMALS Pribilof Islands, Taking for Subsistence Purposes § 216.71 Allowable take of fur seals. Pribilovians may take fur seals on the Pribilof Islands if such taking is (a) For subsistence uses, and (b)...
50 CFR 216.71 - Allowable take of fur seals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Allowable take of fur seals. 216.71... MAMMALS Pribilof Islands, Taking for Subsistence Purposes § 216.71 Allowable take of fur seals. Pribilovians may take fur seals on the Pribilof Islands if such taking is (a) For subsistence uses, and (b)...
50 CFR 216.71 - Allowable take of fur seals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Allowable take of fur seals. 216.71... MAMMALS Pribilof Islands, Taking for Subsistence Purposes § 216.71 Allowable take of fur seals. Pribilovians may take fur seals on the Pribilof Islands if such taking is (a) For subsistence uses, and (b)...
50 CFR 216.71 - Allowable take of fur seals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Allowable take of fur seals. 216.71... MAMMALS Pribilof Islands, Taking for Subsistence Purposes § 216.71 Allowable take of fur seals. Pribilovians may take fur seals on the Pribilof Islands if such taking is (a) For subsistence uses, and (b)...
NASA Technical Reports Server (NTRS)
Shull, Forrest; Godfrey, Sally; Bechtel, Andre; Feldmann, Raimund L.; Regardie, Myrna; Seaman, Carolyn
2008-01-01
A viewgraph presentation describing the NASA Software Assurance Research Program (SARP) project, with a focus on full life-cycle defect management, is provided. The topics include: defect classification, data set and algorithm mapping, inspection guidelines, and tool support.
Naturally selecting solutions: the use of genetic algorithms in bioinformatics.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2013-01-01
For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. Among the most common biologically inspired techniques are genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics-based problems.
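A minimal GA sketch of the selection/crossover/mutation cycle, using the toy OneMax objective as a hypothetical stand-in for a real bioinformatics fitness function (e.g. an alignment or docking score):

```python
import random

random.seed(0)

def fitness(bits):
    """OneMax: count of 1-bits; a stand-in for a real objective."""
    return sum(bits)

def evolve(pop_size=30, n_bits=20, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: binary tournament (the fitter of two random individuals).
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n_bits)            # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                # Mutation: flip each bit with 1% probability.
                child = [bit ^ (random.random() < 0.01) for bit in child]
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Selection applies the "survival of the fittest" pressure, while crossover and mutation keep exploring the search space, mirroring the Darwinian framing in the abstract.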
An Adaptive Inpainting Algorithm Based on DCT Induced Wavelet Regularization
2013-01-01
differentiable and its gradient is Lipschitz continuous. This property is particularly important in developing a fast and efficient numerical algorithm for... with Lipschitz continuous gradient L(ψ), i.e., ∥∇ψ(f1) − ∇ψ(f2)∥2 ≤ L(ψ)∥f1 − f2∥2 for every f1, f2 ∈ Rn. The corresponding APG algorithm proposed in... entries are uniformly distributed on the interval [0, 255]; 2) Take u1 = f0 and L = L(ψ) as a Lipschitz constant of ∇ψ; 3) For k = 1, 2, ..., compute a
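The quoted Lipschitz condition is what justifies a fixed step size of 1/L(ψ) in an accelerated proximal gradient (APG) iteration. Below is a hypothetical sketch on a smooth least-squares objective with a trivial proximal step (the paper's wavelet regularization term is omitted), using the standard Nesterov momentum schedule:

```python
import numpy as np

# Hypothetical smooth objective psi(f) = 0.5*||A f - b||^2, whose gradient
# has Lipschitz constant L(psi) = largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

def grad(f):
    return A.T @ (A @ f - b)

L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient

f = u = np.zeros(10)
tk = 1.0
for _ in range(3000):
    f_new = u - grad(u) / L             # gradient step with step size 1/L
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * tk * tk)) / 2.0
    u = f_new + ((tk - 1.0) / t_new) * (f_new - f)   # Nesterov extrapolation
    f, tk = f_new, t_new

f_star, *_ = np.linalg.lstsq(A, b, rcond=None)       # exact minimizer
```

With a nontrivial regularizer, the gradient step would be followed by the corresponding proximal (e.g. wavelet shrinkage) operator, which is the structure of the APG algorithm the abstract refers to.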
Unquenched Studies Using the Truncated Determinant Algorithm
A. Duncan, E. Eichten and H. Thacker
2001-11-29
A truncated determinant algorithm is used to study the physical effects of the quark eigenmodes associated with eigenvalues below 420 MeV. This initial high-statistics study focuses on coarse (6⁴) lattices (with O(a²)-improved gauge action), light internal quark masses and large physical volumes. Three features of full QCD are examined: topological charge distributions, string breaking as observed in the static energy, and the eta prime mass.
An image super-resolution algorithm for different error levels per frame.
He, Hu; Kondi, Lisimachos P
2006-03-01
In this paper, we propose an image super-resolution (resolution enhancement) algorithm that takes into account inaccurate estimates of the registration parameters and the point spread function. These inaccurate estimates, along with the additive Gaussian noise in the low-resolution (LR) image sequence, result in a different noise level for each frame. In the proposed algorithm, the LR frames are adaptively weighted according to their reliability, and the regularization parameter is simultaneously estimated. A translational motion model is assumed. The convergence property of the proposed algorithm is analyzed in detail. Our experimental results using both real and synthetic data show the effectiveness of the proposed algorithm.
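The idea of weighting frames by reliability can be illustrated on a toy example (a hypothetical sketch, not the paper's full super-resolution algorithm): frames with known, differing noise levels are combined with inverse-variance weights, which beats equal weighting.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.full((8, 8), 10.0)                    # "true" image patch
noise_sd = np.array([0.5, 1.0, 3.0])             # different error level per frame
frames = [truth + rng.normal(0.0, s, truth.shape) for s in noise_sd]

# Inverse-variance weights: the maximum-likelihood combination for
# independent Gaussian noise; unreliable frames contribute less.
w = 1.0 / noise_sd**2
w /= w.sum()

fused = sum(wi * f for wi, f in zip(w, frames))
err_weighted = np.abs(fused - truth).mean()
err_equal = np.abs(sum(frames) / 3.0 - truth).mean()
```

In the paper the per-frame noise levels are not known in advance; they arise from registration and PSF errors, and the weights are estimated adaptively alongside the regularization parameter.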
[Algorithm for assessment of exposure to asbestos].
Martines, V; Fioravanti, M; Anselmi, A; Attili, F; Battaglia, D; Cerratti, D; Ciarrocca, M; D'Amelio, R; De Lorenzo, G; Ferrante, E; Gaudioso, F; Mascia, E; Rauccio, A; Siena, S; Palitti, T; Tucci, L; Vacca, D; Vigliano, R; Zelano, V; Tomei, F; Sancini, A
2010-01-01
There is no universally approved method in the scientific literature to identify subjects exposed to asbestos and divide them into classes according to intensity of exposure. The aim of our work is to study and develop an algorithm based on occupational anamnestic information provided by a large group of workers. The algorithm discriminates, in a probabilistic way, the risk of exposure by attributing a code to each worker (ELSA code: work-estimated exposure to asbestos). The ELSA code has been obtained through a synthesis of the information that the international scientific literature identifies as most predictive for the onset of asbestos-related abnormalities. Four dimensions are analyzed and described: 1) present and/or past occupation; 2) type of materials and equipment used in performing the working activity; 3) environment where these activities are carried out; 4) period of time when the activities are performed. Although the information is gathered in a subjective manner, the decision procedure is objective and is based on a systematic evaluation of asbestos exposure. From the combination of the four identified dimensions it is possible to obtain 108 ELSA codes, divided into three typological profiles of estimated risk of exposure. The application of the algorithm offers some advantages compared to other methods used for identifying individuals exposed to asbestos: 1) it can be computed in cases of both present and past exposure to asbestos; 2) the classification of workers exposed to asbestos using the ELSA code is more detailed than the one obtained with a Job Exposure Matrix (JEM), because the ELSA code takes into account other indicators of risk besides those considered in the JEM. This algorithm was developed for a project sponsored by the Italian Armed Forces and is also adaptable to other work conditions in which it could be necessary to assess the risk of asbestos exposure.
Preparing for full-risk capitation.
Fine, A
1998-03-01
Full-risk capitation arrangements involve shared financial risk among all participants and place providers at risk not only for their own financial performance, but also for the performance of other providers in the network. Providers that wish to assume full risk must understand the types of risks they need to manage to ensure financial success for all network participants. They also must choose a method of paying network participants. The five principal physician payment models currently used in conjunction with full-risk capitation contracts are fee-for-service, salary, entrepreneurial, subcapitation, and hospital reimbursement. No matter which model is used, measurement and feedback systems should be established to increase the effectiveness of the payment systems. Such measurement and feedback systems should facilitate risk management, cost management, process management, revenue distribution, and contract renegotiation and follow-up monitoring.
Full Discharges in Fermilab's Electron Cooler
NASA Astrophysics Data System (ADS)
Prost, L. R.; Shemyakin, A.
2006-03-01
Fermilab's 4.3 MeV electron cooler is based on an electrostatic accelerator, which generates a DC electron beam in an energy recovery mode. Effective cooling of the antiprotons in the Recycler requires that the beam remains stable for hours. While short beam interruptions do not deteriorate the performance of the Recycler ring, the beam may provoke full discharges in the accelerator, which significantly affect the duty factor of the machine as well as the reliability of various components. Although cooling of 8 GeV antiprotons has been successfully achieved, full discharges still occur in the current setup. The paper describes factors leading to full discharges and ways to prevent them.
NASA Astrophysics Data System (ADS)
Droujinine, Alexander; Vasilevsky, Alexander; Evans, Russ
2007-06-01
Three-Dimensional Full Tensor Gradiometry (3-D FTG) acquires ultrasensitive measurements of the Earth's (vector) gravity gradient field. Departures from simple weakening of the field in the vertical direction are due to subsurface variations in density. We have undertaken a numerical examination of the feasibility of using this system for detecting time variations in local lateral density contrasts in subsurface layers during reservoir production monitoring. Our gravity modelling focuses on the value added by taking account of the horizontal components of gravity gradient in imaging local targets. We have studied the sensitivity of these components to model and acquisition parameters. Iterative regularized inversion algorithms that can explain the behaviour of a hydrocarbon reservoir in terms of time-lapse density changes have been described. These algorithms are based on the homotopy method that utilizes the solution space topology. To stabilize the inversion, we have introduced the maximum compactness of anomalous sources along one or more directions. This allows the use of a priori information natural to time-lapse monitoring about the arbitrary shape and spatial locations of the sources. The time-evolving density contrast within the local target zone is recovered by means of an iterative procedure. The treatment of either correlated or random noise fits naturally into this procedure. The final scheme has been illustrated by several examples using synthetic FTG data generated from various density distributions of practical interest. We have considered the effect of model and data uncertainties on the inversion solution. It appears that the iterative process is effective, numerically stable and rapidly convergent in the presence of both random and structural noise. Results show that it is relatively insensitive to the choice of starting model. Appropriate applications include direct monitoring of gas-oil contact and temperature front expansion during steam
Advanced methods in global gyrokinetic full f particle simulation of tokamak transport
Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.
2006-11-30
A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas, including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise; the effect of this noise on relevant quantities is analyzed. Turbulence spectra of the FT-2 plasma have been calculated with ELMFIRE, yielding results consistent with experimental data.
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
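The strict-priority selection rule described above can be sketched as a greedy pass over goals in decreasing priority order against a single hypothetical shared resource (the flight software of course handles richer constraints and timelines; the goal names and costs below are invented for illustration):

```python
def select_goals(goals, capacity):
    """goals: list of (name, priority, resource_use); higher priority wins.

    A goal is never pre-empted by a lower-priority goal: goals are admitted
    in strictly decreasing priority order while resources remain.
    """
    selected, used = [], 0
    for name, prio, cost in sorted(goals, key=lambda g: -g[1]):
        if used + cost <= capacity:
            selected.append(name)
            used += cost
    return selected

# Hypothetical oversubscribed goal set: total demand 13 against capacity 10.
goals = [("downlink", 3, 5), ("image_A", 2, 4), ("image_B", 2, 3), ("cal", 1, 1)]
chosen = select_goals(goals, capacity=10)
```

Because the pass is cheap, it can be re-run incrementally whenever a goal is added, removed, or updated, which is what enables the "just-in-time" selection the abstract describes.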
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
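The binarization step can be sketched as follows. The onshore/offshore convention used here (directions in (0°, 180°) count as onshore, for a hypothetical north-south coastline with the sea to the east) is an assumption for illustration, not the convention of the CEM software itself:

```python
import numpy as np

def binarize_wind(direction_deg):
    """Map gridded wind direction to CEM input: 1 = onshore, 0 = offshore.

    Hypothetical convention: coastline runs north-south with the sea to the
    east, so directions in (0 deg, 180 deg) count as onshore.
    """
    return ((direction_deg > 0) & (direction_deg < 180)).astype(int)

# One 5-minute time slice n on a small grid of 1.25-km cells:
# d(i,j;n) from (gridded) observations, D(i,j;n) from the forecast.
d = binarize_wind(np.array([[45.0, 190.0], [135.0, 350.0]]))
D = binarize_wind(np.array([[60.0, 200.0], [120.0, 10.0]]))

agreement = (D == d).mean()   # simple gridwise agreement between the fields
```

CEM itself goes further than pointwise agreement, extracting and comparing sea-breeze boundary contours from these binary fields, but the D and d inputs have exactly this form.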
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013 ; Wang, Khardon, Pechyony, & Jones, 2012 ), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.
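A hypothetical linear-kernel caricature of the online pairwise iteration (OPERA itself operates in an unconstrained RKHS): at each step a pair of examples arrives and the model is nudged so that the predicted score difference matches the label difference, with a polynomially decaying step size as in the convergence analysis above.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical "true" scoring function

w = np.zeros(3)
for t in range(1, 5001):
    x, xp = rng.standard_normal(3), rng.standard_normal(3)
    y_diff = (w_true @ x) - (w_true @ xp)   # noiseless pairwise label
    pred = w @ (x - xp)
    # SGD on the least-squares pairwise loss (pred - y_diff)^2 / 2,
    # with a polynomially decaying step size ~ t^(-1/2).
    w -= (0.2 / np.sqrt(t)) * (pred - y_diff) * (x - xp)
```

Replacing the inner product with a kernel evaluation turns this into a functional update in an RKHS; the letter's contribution is proving almost-sure convergence of exactly this kind of iterate without boundedness or strong-convexity assumptions.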
A Full-Bayesian Approach to the Estimation of Transmissivity Fields From Thermal Data
NASA Astrophysics Data System (ADS)
Jiang, Y.; Woodbury, A. D.
2002-12-01
Woodbury and Ulrych (WRR 36(8), 2000) proposed a full-Bayesian approach to the estimation of transmissivity from hydraulic head and transmissivity measurements for two-dimensional steady-state groundwater flow. Specifically, Bayesian updating (see Woodbury, 1989) was used to condition prior estimates of the log-transmissivity [denoted ln(T)] field with ln(T) measurements. They then incorporated hydraulic head measurements into the updating procedure by adopting a linearized aquifer equation. Prior probability density functions (pdfs) of the ln(T) field and any hyperparameters associated with its two central moments were determined from maximum entropy considerations. Any uncertainties in the basic geostatistical hyperparameters were removed by marginalization. Woodbury and Ulrych (2000) showed that the resolution of the ln(T) field gradually improved through the updating procedure. In their work the problem was discretized very finely, so that many more unknowns than data points are sought. In this way the heterogeneous nature of the porous media is more reasonably approximated. The Bayesian methodology reformulates this ill-posed inverse problem of deduction into a well-posed problem of inference. With this approach one can take advantage of data from different sources, and any knowledge of the identified system can help constrain the inverse problem. In this work the full-Bayesian approach is extended to estimate the transmissivity field from thermal data. Linearized treatment of the advection-conduction heat transport leads to a linear formulation between temperature and ln(T) perturbations. An updating procedure similar to that of Woodbury and Ulrych (2000) can be performed. This new algorithm is examined against a generic example. The use of temperature data is shown to improve the ln(T) estimates, in comparison to the updated ln(T) field conditioned on sparse ln(T) and head data; also the addition of temperature data without head data to the
Redefining Full-Time in College: Evidence on 15-Credit Strategies
ERIC Educational Resources Information Center
Klempin, Serena
2014-01-01
Because federal financial aid guidelines stipulate that students must be enrolled in a minimum of 12 credits per semester in order to receive the full amount of aid, many colleges and universities define full-time enrollment as 12 credits per semester. Yet, if a student takes only 12 credits each fall and spring term, it is impossible to complete…
Toward full mental health parity and beyond.
Gitterman, D P; Sturm, R; Scheffler, R M
2001-01-01
The 1996 Mental Health Parity Act (MHPA), which became effective in January 1998, is scheduled to expire in September 2001. This paper examines what the MHPA accomplished and steps toward more comprehensive parity. We explain the strategic and self-reinforcing link of parity with managed behavioral health care and conclude that the current path will be difficult to reverse. The paper ends with a discussion of what might be behind the claims that full parity in mental health benefits is insufficient to achieve true equity and whether additional steps beyond full parity appear realistic or even desirable.
Full-duplex optical communication system
NASA Technical Reports Server (NTRS)
Shay, Thomas M. (Inventor); Hazzard, David A. (Inventor); Horan, Stephen (Inventor); Payne, Jason A. (Inventor)
2004-01-01
A method of full-duplex electromagnetic communication wherein a pair of data modulation formats are selected for the forward and return data links respectively such that the forward data electro-magnetic beam serves as a carrier for the return data. A method of encoding optical information is used wherein right-hand and left-hand circular polarizations are assigned to optical information to represent binary states. An application for an earth to low earth orbit optical communications system is presented which implements the full-duplex communication and circular polarization keying modulation format.
Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko
2006-01-01
While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined and quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy
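The core of such a probabilistic retrieval can be sketched with a toy Bayesian lookup: each entry of an a priori database is weighted by the likelihood of the observed brightness temperature, and the retrieved rain rate is the weighted average. Everything below (the database values, the Gaussian observation error, the `sigma` parameter) is a hypothetical illustration of the idea, not the paper's actual TRMM-based implementation.

```python
import math

# Hypothetical a priori database of (rain_rate_mm_per_h, brightness_temp_K)
# pairs, standing in for the TRMM-radar-plus-radiative-transfer database.
DATABASE = [(0.0, 270.0), (1.0, 262.0), (2.0, 255.0),
            (5.0, 240.0), (10.0, 225.0), (20.0, 210.0)]

def retrieve_rain(observed_tb, sigma=5.0, database=DATABASE):
    """Bayesian expected rain rate: weight each database entry by a Gaussian
    likelihood of the observed brightness temperature, then average."""
    weights = [math.exp(-0.5 * ((observed_tb - tb) / sigma) ** 2)
               for _, tb in database]
    total = sum(weights)
    return sum(w * r for w, (r, _) in zip(weights, database)) / total
```

With such a construction, the sensitivity of the retrieval to an incorrect or incomplete database can be probed simply by perturbing or truncating `DATABASE` and re-running the retrieval.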
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-29
... of the Otariid family (eared seals). The species, Zalophus californianus, includes three subspecies... would authorize small numbers of Level B harassment takes of California sea lions (Zalophus californianus), harbor seals (Phoca vitulina), and northern elephant seals (Mirounga angustirostris)...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
... June 12, 2013, Part IV. Department of Commerce, National Oceanic and Atmospheric Administration. Takes of..., 2013 / Notices. RIN 0648...), National Oceanic and Atmospheric Administration (NOAA), Commerce. ACTION: Notice; proposed...
Now, It's Your Turn: How You Can Take Medicine Safely
Feature: Taking Medicines Safely ... medicine. The pharmacist has filled the prescription. Now it's up to you to take the medicine safely. ...
Talk with Your Doctor about Taking Aspirin Every Day
... sure why this works. Can taking aspirin every day cause any side effects? Taking aspirin daily isn' ...
Optimum take-off angle in the long jump.
Linthorne, Nicholas P; Guzman, Maurice S; Bridgett, Lisa A
2005-07-01
In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.
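The prediction method described above can be sketched numerically: substitute athlete-specific relations for take-off speed and take-off height into the standard free-flight range equation, then search over take-off angles. The linear speed and height relations and their coefficients below are purely illustrative assumptions (loosely mirroring the measured trends of slower, higher take-offs at steeper angles), not the measured values from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flight_range(v, theta_deg, h):
    """Range of a projectile launched at speed v (m/s), angle theta_deg,
    from height h (m) above the landing level (standard free-flight result)."""
    th = math.radians(theta_deg)
    vx, vy = v * math.cos(th), v * math.sin(th)
    return vx * (vy + math.sqrt(vy * vy + 2.0 * G * h)) / G

def jump_distance(theta_deg):
    # Hypothetical athlete-specific relations: take-off speed falls and
    # take-off height rises as the take-off angle steepens (illustrative).
    v = 9.5 - 0.1 * theta_deg    # m/s
    h = 0.4 + 0.005 * theta_deg  # m
    return flight_range(v, theta_deg, h)

def optimum_angle(lo=10, hi=45):
    """Whole-degree take-off angle maximizing the predicted distance."""
    return max(range(lo, hi + 1), key=jump_distance)
```

Because the take-off speed drops sharply with angle, the optimum lands well below the 45 degrees predicted for a constant-speed projectile, consistent with the qualitative finding of the study.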
Improving the medical ‘take sheet’
Reed, Oliver
2014-01-01
The GMC states that "Trainees in hospital posts must have well organised handover arrangements, ensuring continuity of patient care[1]". In the Belfast City Hospital there can be multiple new medical admissions throughout the day, arriving via the GP Unit, as transfers for tertiary care, or as transfers due to bed shortages in other hospitals. Over the course of 24 hours, up to four medical SHOs and three registrars may fill in the take sheet. Because of the variety of admission routes and the number of doctors looking after the medical take, information can be lost during handover between SHOs. In the current format there is little room to write key and relevant information on the medical take sheet about new and transferring patients, so I felt that this handover sheet could be improved. An initial questionnaire demonstrated that 47% of respondents found the old proforma easy to use and 28.2% felt that it allowed them to identify sick patients; 100% of the SHOs and registrars surveyed felt that it could be improved from its current form. Drawing on feedback from my colleagues, I created a new template and trialled it in the hospital. A repeat questionnaire demonstrated that 92.3% of responders felt the new format had improved medical handover and 92.6% felt that it allowed safe handover most of the time or always. The success of this new proforma resulted in it being implemented on a permanent basis for new medical admissions and transfers to the hospital. PMID:26734303
Classical spin glass system in external field with taking into account relaxation effects
Gevorkyan, A. S. Abajyan, H. G.
2013-08-15
We study the statistical properties of disordered spin systems under the influence of an external field, taking relaxation effects into account. The system is described by a spatial 1D Heisenberg spin-glass Hamiltonian, with interactions assumed to occur only between nearest-neighbor spins and to be random. Exact solutions defining the angular configuration of the spin at each node were obtained from the stationary-point equations of the Hamiltonian together with the corresponding conditions for a local energy minimum. On the basis of these recurrent solutions, an effective parallel algorithm is developed for simulating stable spin chains of arbitrary length. It is shown that, by performing on the order of N^2 independent numerical simulations (where N is the number of spins in each chain), it is possible to generate an ensemble of spin chains that is completely ergodic, which is equivalent to full self-averaging of the chains' vector polarization. Distributions of different parameters of the unperturbed system (energy, average polarization along each coordinate, and spin-spin interaction constant) are calculated. In particular, it is proved analytically and shown numerically that, for the nearest-neighbor Heisenberg Hamiltonian model, the distribution of spin-spin interaction constants, as opposed to the widely used Gauss-Edwards-Anderson distribution, satisfies a Levy alpha-stable distribution law. This distribution is a nonanalytic function and does not have a variance. We also study in detail the critical properties of the ensemble as a function of the external field parameters (amplitude and frequency) and show that even at weak external fields the spin-glass system is strongly frustrated. The frustrations exhibit fractal behavior: they are self-similar and do not disappear as the area is rescaled to smaller scales. Numerical computation shows that the average polarization of the spin glass along different coordinates can take values which can lead to
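As an illustration of the kind of chain-by-chain simulation the abstract describes, the sketch below builds a classical spin chain node by node, aligning each spin with its local field. This is a deliberately simplified stand-in for the paper's recurrent stationary-point solutions, with nearest-neighbor couplings drawn from a heavy-tailed Cauchy law (the simplest Levy alpha-stable distribution, which likewise has no variance). The field strength, chain lengths, and ensemble sizes are all assumptions for illustration only.

```python
import math
import random

def _unit(v):
    """Normalize a 3-vector (returns v unchanged if it is the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / n for x in v)

def relax_chain(n, field=(0.0, 0.0, 0.1), rng=random):
    """Build one spin chain node by node: each classical spin aligns with its
    local field J_i * s_{i-1} + h, a sequential stand-in for the recurrent
    stationary-point solutions of the Hamiltonian."""
    chain = [_unit(tuple(rng.gauss(0.0, 1.0) for _ in range(3)))]
    for _ in range(n - 1):
        # Standard Cauchy draw: heavy-tailed, alpha-stable coupling constant.
        J = math.tan(math.pi * (rng.random() - 0.5))
        local = tuple(J * a + b for a, b in zip(chain[-1], field))
        chain.append(_unit(local))
    return chain

def ensemble_polarization(n, m, field=(0.0, 0.0, 0.1), seed=0):
    """Average z-polarization over m independent chains (self-averaging)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        chain = relax_chain(n, field, rng)
        total += sum(s[2] for s in chain) / n
    return total / m
```

Because the chains are generated independently, the ensemble loop parallelizes trivially, which is the property the paper's parallel algorithm exploits.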
Aircraft Engineering Conference 1934 - Full Scale Tunnel
NASA Technical Reports Server (NTRS)
1934-01-01
Gathered together in the only facility big enough to hold them, attendees at Langley's 1934 Aircraft Engineering Conference pose in the Full Scale Wind Tunnel underneath a Boeing P-26A Peashooter. Present, among other notables, were Orville Wright, Charles Lindbergh, and Howard Hughes.
Keeping Rural Schools up to Full Speed
ERIC Educational Resources Information Center
Beesley, Andrea
2011-01-01
Rural schools are long accustomed to meeting challenges in innovative ways. For them, the challenge is not so much a lack of technology as a lack of adequate internet access, which affects both teachers and students. In this article, the author discusses how to keep rural schools up to full speed. The author suggests that the best approach when…
Adaptive, full-spectrum solar energy system
Muhs, Jeffrey D.; Earl, Dennis D.
2003-08-05
An adaptive full spectrum solar energy system having at least one hybrid solar concentrator, at least one hybrid luminaire, at least one hybrid photobioreactor, and a light distribution system operably connected to each hybrid solar concentrator, each hybrid luminaire, and each hybrid photobioreactor. A lighting control system operates each component.