Science.gov

Sample records for algorithmic procedure including

  1. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  2. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the
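
    The abstract does not spell out how "non-informative" frames are detected, so the following is only a hedged sketch of one common test: flag frames whose variance of the Laplacian (a simple sharpness measure) falls below a threshold. The metric choice and the threshold value are illustrative assumptions, not the criteria used in the cited prototype system.

```python
# Hedged sketch of a "non-informative" (blurry) frame test for video
# summarization. The Laplacian-variance sharpness metric and the threshold are
# illustrative assumptions, not the criteria of the cited prototype system.
import numpy as np
from scipy.ndimage import laplace

def is_non_informative(gray_frame: np.ndarray, threshold: float = 50.0) -> bool:
    """Return True if the frame looks too blurry to be diagnostically useful."""
    sharpness = laplace(gray_frame.astype(float)).var()
    return sharpness < threshold

def summarize(frames):
    """Keep only frames that pass the sharpness test."""
    return [f for f in frames if not is_non_informative(f)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.integers(0, 256, size=(240, 320)).astype(float)  # high-frequency content
    blurry = np.full((240, 320), 128.0)                          # flat, featureless frame
    print([is_non_informative(f) for f in (sharp, blurry)])      # [False, True]
```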

  3. Using an admittance algorithm for bone drilling procedures.

    PubMed

    Accini, Fernando; Díaz, Iñaki; Gil, Jorge Juan

    2016-01-01

    Bone drilling is a common procedure in many types of surgeries, including orthopedic, neurological and otologic surgeries. Several technologies and control algorithms have been developed to help the surgeon automatically stop the drill before it goes through the boundary of the tissue being drilled. However, most of them rely on thrust force and cutting torque to detect bone layer transitions, an approach that has many drawbacks affecting the reliability of the process. This paper describes in detail a bone-drilling algorithm based only on the position control of the drill bit that overcomes such problems and presents additional advantages. The implication of each component of the algorithm in the drilling procedure is analyzed and the efficacy of the algorithm is experimentally validated with two types of bones. PMID:26516110

  4. A dynamic programming algorithm for RNA structure prediction including pseudoknots.

    PubMed

    Rivas, E; Eddy, S R

    1999-02-01

    We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst case complexity of O(N^6) in time and O(N^4) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum energy) pseudoknotted RNAs with the accepted RNA thermodynamic model. PMID:9925784
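
    The Rivas and Eddy O(N^6) pseudoknot algorithm is too involved for a short snippet; the sketch below is only a minimal Nussinov-style base-pair-maximization dynamic program (no pseudoknots, no thermodynamics) that illustrates the nested-structure recursion which their gap-matrix construction extends. The sequence and minimum loop size are illustrative.

```python
# Minimal Nussinov-style dynamic program: maximize base pairs, no pseudoknots,
# no thermodynamics. This only illustrates the nested recursion that Rivas &
# Eddy extend with "gap" matrices; it is not their O(N^6) algorithm.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq: str, min_loop: int = 3) -> int:
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                     # i left unpaired
            if (seq[i], seq[j]) in PAIRS:           # i pairs with j
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):               # bifurcation into two sub-structures
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

if __name__ == "__main__":
    print(max_pairs("GGGAAAUCC"))  # small hairpin-forming example
```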

  5. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  6. [Algorithm of nursing procedure in debridement protocol].

    PubMed

    Fumić, Nera; Marinović, Marin; Brajan, Dolores

    2014-10-01

    Debridement is an essential act in the treatment of various wounds, which removes devitalized and colonized necrotic tissue, also poorly healing tissue and all foreign bodies from the wound, in order to enhance the formation of healthy granulation tissue and accelerate the process of wound healing. Nowadays, debridement is the basic procedure in the management of acute and chronic wounds, where the question remains which way to do it, how extensively, how often and who should perform it. Many parameters affect the decision on what method to use on debridement. It is important to consider the patient's age, environment, choice, presence of pain, quality of life, skills and resources for wound and patient care providers, and also a variety of regulations and guidelines. Irrespective of the level and setting where the care is provided (hospital patients, ambulatory or stationary, home care), care for patients suffering from some form of acute or chronic wound and requiring different interventions and a large number of frequent bandaging and wound care is most frequently provided by nurses/technicians. With timely and systematic interventions in these patients, the current and potential problems in health functioning could be minimized or eliminated in accordance with the resources. Along with daily wound toilette and bandaging, it is important to timely recognize changes in the wound status and the need of tissue debridement. Nurse/technician interventions are focused on preparation of the patient (physical, psychological, education), preparation of materials, personnel and space, assisting or performing procedures of wound care, and documenting the procedures performed. The assumption that having an experienced and competent person for wound care and a variety of methods and approaches in wound treatment is in the patient's best interest poses the need of defining common terms and developing comprehensive guidelines that will lead to universal algorithms in the field

  7. Simulation of Accident Sequences Including Emergency Operating Procedures

    SciTech Connect

    Queral, Cesar; Exposito, Antonio; Hortal, Javier

    2004-07-01

    Operator actions play an important role in accident sequences. However, design analysis (Safety Analysis Report, SAR) seldom includes consideration of operator actions, although they are required by compulsory Emergency Operating Procedures (EOP) to perform some checks and actions from the very beginning of the accident. The basic aim of the project is to develop a procedure validation system which consists of the combination of three elements: a plant transient simulation code TRETA (a C-based modular program) developed by the CSN, a computerized procedure system COPMA-III (a Java technology-based program) developed by the OECD-Halden Reactor Project and adapted for simulation with the contribution of our group, and a software interface that provides the communication between COPMA-III and TRETA. The new combined system is going to be applied in a pilot study in order to analyze sequences initiated by secondary side breaks in a Pressurized Water Reactor (PWR) plant. (authors)

  8. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes, corresponding to, e.g., different tissues, yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640

  9. 78 FR 57639 - Request for Comments on Pediatric Planned Procedure Algorithm

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... Procedure Algorithm AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of request for comments on pediatric planned procedure algorithm from the members of the public. SUMMARY... from the public on an algorithm for identifying pediatric planned procedures as part of the...

  10. Dipole splitting algorithm: A practical algorithm to use the dipole subtraction procedure

    NASA Astrophysics Data System (ADS)

    Hasegawa, K.

    2015-11-01

    The Catani-Seymour dipole subtraction is a general and powerful procedure to calculate the QCD next-to-leading order corrections for collider observables. We clearly define a practical algorithm to use the dipole subtraction. The algorithm is called the dipole splitting algorithm (DSA). The DSA is applied to an arbitrary process by following well defined steps. The subtraction terms created by the DSA can be summarized in a compact form by tables. We present a template for the summary tables. One advantage of the DSA is to allow a straightforward algorithm to prove the consistency relation of all the subtraction terms. The proof algorithm is presented in the following paper [K. Hasegawa, arXiv:1409.4174]. We demonstrate the DSA in two collider processes, pp → μ⁻μ⁺ and 2 jets. Further, as a confirmation of the DSA, it is shown that the analytical results obtained by the DSA in the Drell-Yan process exactly agree with the well known results obtained by the traditional method.

  11. 75 FR 3925 - Proposed Information Collection Request for Administrative Procedures-20 CFR 601 Including Form...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-25

    ...--20 CFR 601 Including Form MA 8-7; Comment Request on Extension Without Change AGENCY: Employment and..., Employment and Training Administration regulations, 20 CFR 601, Administrative Procedures,...

  12. Substructure procedure for including tile flexibility in stress analysis of shuttle thermal protection system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.

    1980-01-01

    A substructure procedure to include the flexibility of the tile in the stress analysis of the shuttle thermal protection system (TPS) is described. In this procedure, the TPS is divided into substructures of (1) the tile which is modeled by linear finite elements and (2) the SIP which is modeled as a nonlinear continuum. This procedure was applied for loading cases of uniform pressure, uniform moment, and an aerodynamic shock on various tile thicknesses. The ratios of through-the-thickness stresses in the SIP which were calculated using a flexible tile compared to using a rigid tile were found to be less than 1.05 for the cases considered.

  13. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's estimated time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  14. Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad

    2015-05-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  15. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
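
    The CLASSY, AMOEBA, and ISOCLS clusterers named above are not reproduced here; the sketch below uses k-means as a stand-in unsupervised clusterer to illustrate the two uses of the clusters described in the abstract: labeling whole clusters from a small set of labeled pixels, and forming a stratified estimate with the clusters as strata. The simulated two-class data and sample sizes are illustrative assumptions.

```python
# Sketch of the cluster-then-label idea: cluster pixels without labels, then use
# a small labeled sample either to label whole clusters (majority vote) or to
# form a stratified estimate with clusters as strata. k-means stands in for
# CLASSY/AMOEBA/ISOCLS, which are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(4, 1, (500, 4))])  # 2 spectral classes
true_labels = np.r_[np.zeros(500, int), np.ones(500, int)]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# A small labeled sample (e.g., analyst-labeled pixels).
idx = rng.choice(len(pixels), size=40, replace=False)

# (a) Label each cluster by majority vote of its labeled pixels.
cluster_label = {c: np.bincount(true_labels[idx][clusters[idx] == c]).argmax()
                 for c in np.unique(clusters)}

# (b) Stratified estimate of the proportion of class 1, using clusters as strata.
estimate = sum((clusters == c).mean() * true_labels[idx][clusters[idx] == c].mean()
               for c in np.unique(clusters))
print(cluster_label, round(float(estimate), 3))
```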

  16. Procedures for Including Secondary Electron Emission in Numerical Simulations of Plasma-Insulator Interactions

    NASA Technical Reports Server (NTRS)

    Beyst, Brian; Rezvani, Ali; Young, Bin; Friauf, Robert J.

    1991-01-01

    Previous Monte Carlo simulations provide a data base for properties of secondary electron emission (SEE) from insulators and metals. Incident primary electrons are considered at energies up to 1200 eV. The behavior of secondary electrons is characterized by (1) yield vs. primary energy E(sub p), (2) distribution vs. secondary energy E(sub s), and (3) distribution vs. angle of emission theta. Special attention is paid to the low energy range E(sub p) up to 50 eV, where the number and energy of secondary electrons is limited by the finite band gap of the insulator. For primary energies above 50 eV the SEE yield curve can be conveniently parameterized by a Haffner formula. The energy distribution of secondary electrons is described by an empirical formula with average energy about 8.0 eV. The angular distribution of secondaries is slightly more peaked in the forward direction than the customary cos theta distribution. Empirical formulas and parameters are given for all yield and distribution curves. Procedures and algorithms are described for using these results to find the SEE yield, and then to choose the energy and angle of emergence of each secondary electron. These procedures can readily be incorporated into numerical simulations of plasma-solid surface interactions in low earth orbit.

  17. Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure?

    SciTech Connect

    Dutton, Spencer M.; Mendell, Mark J.; Chan, Wanyu R.

    2013-05-13

    Minimum outdoor air ventilation rates (VRs) for buildings are specified in standards, including California's Title 24 standards. The ASHRAE ventilation standard includes two options for mechanically-ventilated buildings: a prescriptive ventilation rate procedure (VRP) that specifies minimum VRs that vary among occupancy classes, and a performance-based indoor air quality procedure (IAQP) that may result in lower VRs than the VRP, with associated energy savings, if IAQ meeting specified criteria can be demonstrated. The California Energy Commission has been considering the addition of an IAQP to the Title 24 standards. This paper, based on a review of prior data and new analyses of the IAQP, evaluates four future options for Title 24: no IAQP, adding an alternate VRP, adding an equivalent indoor air quality procedure (EIAQP), and adding an improved ASHRAE-like IAQP. Criteria were established for selecting among options, and feedback was obtained in a workshop of stakeholders. Based on this review, the addition of an alternate VRP is recommended. This procedure would allow lower minimum VRs if a specified set of actions were taken to maintain acceptable IAQ. An alternate VRP could also be a valuable supplement to ASHRAE's ventilation standard.

  18. The Relationship between the Bock-Aitkin Procedure and the EM Algorithm for IRT Model Estimation.

    ERIC Educational Resources Information Center

    Hsu, Yaowen; Ackerman, Terry A.; Fan, Meichu

    It has previously been shown that the Bock-Aitkin procedure (R. Bock and M. Aitkin, 1981) is an instance of the EM algorithm when trying to find the marginal maximum likelihood estimate for a discrete latent ability variable (latent trait). In this paper, it is shown that the Bock-Aitkin procedure is a numerical implementation of the EM algorithm…

  19. Best Estimate Radiation Flux Value-Added Procedure. Algorithm Operational Details and Explanations

    SciTech Connect

    Shi, Y.; Long, C. N.

    2002-10-01

    This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).
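
    A hedged sketch of the "closest two of three" rule described above: among three co-located irradiance measurements, average the pair that agrees best, provided it passes a simple quality check, and otherwise fall back to a single valid value. The threshold and range values are placeholders, not the operational BEFLUX criteria.

```python
# Sketch of the "average of the closest two" best-estimate rule: among up to
# three co-located irradiance measurements, average the pair that agrees best
# if it passes a simple QC check. Thresholds are placeholders, not the
# operational BEFLUX criteria.
import numpy as np
from itertools import combinations

def best_estimate(values, max_diff=10.0, valid_range=(-10.0, 1500.0)):
    """values: up to three simultaneous measurements (W/m^2); NaN = missing."""
    good = [v for v in values
            if np.isfinite(v) and valid_range[0] <= v <= valid_range[1]]
    if len(good) >= 2:
        a, b = min(combinations(good, 2), key=lambda p: abs(p[0] - p[1]))
        if abs(a - b) <= max_diff:
            return 0.5 * (a + b)          # average of the closest pair
    return good[0] if good else np.nan    # fall back to a single measurement

print(best_estimate([412.0, 418.0, 455.0]))  # -> 415.0
```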

  20. 1989 Walker Branch Watershed Surveying and Mapping Including a Guide to Coordinate Transformation Procedures

    SciTech Connect

    Timmins, S.

    1991-01-01

    Walker Branch Watershed is a forested, research watershed marked throughout by a 264 ft grid that was surveyed in 1967 using the Oak Ridge National Laboratory (X-10) coordinate system. The Tennessee Valley Authority (TVA) prepared a contour map of the watershed in 1987, and an ARC/INFO™ version of the TVA topographic map with the X-10 grid superimposed has since been used as the primary geographic information system (GIS) data base for the watershed. However, because of inaccuracies observed in mapped locations of some grid markers and permanent research plots, portions of the watershed were resurveyed in 1989 and an extensive investigation of the coordinates used in creating both the TVA map and ARC/INFO data base and of coordinate transformation procedures currently in use on the Oak Ridge Reservation was conducted. This investigation determined that the positional errors resulted from the field orientation of the blazed grid rather than problems in mapmaking. In resurveying the watershed, previously surveyed control points were located or noted as missing, and 25 new control points along the perimeter roads were surveyed. In addition, 67 of 156 grid line intersections (pegs) were physically located and their positions relative to mapped landmarks were recorded. As a result, coordinates for the Walker Branch Watershed grid lines and permanent research plots were revised, and a revised map of the watershed was produced. In conjunction with this work, existing procedures for converting between the local grid systems, Tennessee state plane, and the 1927 and 1983 North American Datums were updated and compiled along with illustrative examples and relevant historical information. Alternative algorithms were developed for several coordinate conversions commonly used on the Oak Ridge Reservation.

  1. Viscous microstructural dampers with aligned holes: design procedure including the edge correction.

    PubMed

    Homentcovschi, Dorel; Miles, Ronald N

    2007-09-01

    The paper is a continuation of the works "Modelling of viscous damping of perforated planar micromechanical structures. Applications in acoustics" [Homentcovschi and Miles, J. Acoust. Soc. Am. 116, 2939-2947 (2004)] and "Viscous Damping of Perforated Planar Micromechanical Structures" [Homentcovschi and Miles, Sensors Actuators, A119, 544-552 (2005)], where design formulas for the case of an offset (staggered) system of holes were provided. The present work contains design formulas for perforated planar microstructures used in MEMS devices (such as proof-masses in accelerometers, backplates in microphones, micromechanical switches, resonators, tunable microoptical interferometers, etc.) in the case of aligned (nonstaggered) holes of circular and square section. The given formulas assure a minimum total damping coefficient (including the squeeze film damping and the direct and indirect resistance of the holes) for an assigned open area. The paper also gives a simple edge correction, making it possible to consider real (finite) perforated planar microstructures. The proposed edge correction is validated by comparison with the results obtained by FEM simulations: the relative error is found to be smaller than 0.04%. By putting together the design formulas with the edge correction a simple integrated design procedure for obtaining viscous perforated dampers with assigned properties is obtained. PMID:17927414

  2. An enhanced bacterial foraging algorithm approach for optimal power flow problem including FACTS devices considering system loadability.

    PubMed

    Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R

    2013-09-01

    Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for IEEE 30 bus is considered and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost along with increased voltage stability limits. It is also observed that the proposed algorithm requires less computational time than earlier proposed algorithms. PMID:23759251
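
    The EBFA itself is not reproduced here; the sketch below only illustrates the general hybrid pattern the abstract describes, refining the best candidate from a coarse population-based search with a Nelder-Mead simplex step via SciPy. The quadratic test function stands in for the actual OPF cost model.

```python
# Sketch of the hybrid idea only: take the best member found by a coarse
# population-based search (standing in for the bacterial foraging step) and
# refine it with a Nelder-Mead simplex step. The quadratic test function is a
# stand-in for the actual OPF cost; this is not the EBFA of the paper.
import numpy as np
from scipy.optimize import minimize

def cost(x):                       # toy stand-in for the generation-cost objective
    return np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2) + 3.0

rng = np.random.default_rng(0)
population = rng.uniform(-5, 5, size=(50, 3))          # coarse global search
best = population[np.argmin([cost(p) for p in population])]

refined = minimize(cost, best, method="Nelder-Mead")   # local simplex refinement
print(best, refined.x, refined.fun)
```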

  3. 34 CFR 222.94 - What provisions must be included in a local educational agency's Indian policies and procedures?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...

  4. 34 CFR 222.94 - What provisions must be included in a local educational agency's Indian policies and procedures?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...

  5. Why McNemar's Procedure Needs to Be Included in the Business Statistics Curriculum

    ERIC Educational Resources Information Center

    Berenson, Mark L.; Koppel, Nicole B.

    2005-01-01

    In business research situations it is often of interest to examine the differences in the responses in repeated measurements of the same subjects or from among matched or paired subjects. A simple and useful procedure for comparing differences between proportions in two related samples was devised by McNemar (1947) nearly 60 years ago. Although…
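
    For readers unfamiliar with the procedure, McNemar's statistic depends only on the two discordant cell counts b and c of the paired 2x2 table. The sketch below uses the common continuity-corrected chi-square form with made-up counts; it is an illustration, not an analysis from the article.

```python
# McNemar's test for paired proportions: only the discordant cells b and c of
# the 2x2 table matter. Shown with the continuity-corrected chi-square
# statistic; the counts are made-up illustrative data.
from scipy.stats import chi2

def mcnemar(b: int, c: int):
    """b, c: discordant pair counts (yes/no and no/yes)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected statistic
    p = chi2.sf(stat, df=1)
    return stat, p

stat, p = mcnemar(b=25, c=10)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```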

  6. A Boundary Condition Relaxation Algorithm for Strongly Coupled, Ablating Flows Including Shape Change

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Johnston, Christopher O.

    2011-01-01

    Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.

  7. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition the correlation coefficient between two subimages, taken at different times, of the same scene is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.
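
    A hedged sketch of the second method as described above: apply a candidate correction for each parameter value and score the result by image contrast and by the correlation of a temporally stable subimage with the corresponding subimage from another date. The toy haze-subtraction "correction", the relative-contrast measure, and the synthetic scene are illustrative assumptions, not the procedure of the report.

```python
# Illustrative parameter-selection loop: for each candidate correction
# parameter, compute an image-contrast indicator and the correlation of a
# stable subimage with the same subimage from a second date. The toy
# haze-subtraction correction and the synthetic scene are assumptions.
import numpy as np

def corrected(image, haze):
    return np.clip(image - haze, 0.0, None)          # toy path-radiance subtraction

def quality(image_t1, image_t2, haze, window=np.s_[50:150, 50:150]):
    img = corrected(image_t1, haze)
    contrast = img.std() / img.mean()                             # relative contrast
    correlation = np.corrcoef(img[window].ravel(),
                              image_t2[window].ravel())[0, 1]     # unchanged region
    return contrast, correlation

rng = np.random.default_rng(0)
scene = rng.uniform(20.0, 200.0, (200, 200))
t1, t2 = scene + 30.0, scene          # hazy acquisition vs. reference acquisition
for h in range(0, 60, 10):
    c, r = quality(t1, t2, h)
    print(f"haze={h:2d}  contrast={c:.3f}  corr={r:.4f}")
```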

  8. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    NASA Astrophysics Data System (ADS)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from the knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging the performances on observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose obvious disadvantages are that it splits the whole procedure into two parts and makes it difficult to grasp the overall behavior of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and more readily revealing the intrinsic behavior of the observed data as a whole. Comparison of results with the two-step procedure shows that the current methodology gives similar results to the previous method, is also feasible and robust, but simpler and easier to apply in practice.
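
    The GA component is standard; the sketch below shows only the TOPSIS ranking step, which scores candidate parameter sets by closeness to the ideal solution across several calibration criteria. The criteria, weights, and numbers are illustrative, not those of the cited study.

```python
# TOPSIS ranking step only (the GA that generates candidate parameter sets is
# omitted). Rows = candidate parameter sets, columns = calibration criteria,
# here all treated as "smaller is better" error measures. Weights are
# illustrative, not those of the cited study.
import numpy as np

def topsis(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness to the ideal solution (higher = better rank)."""
    norm = scores / np.linalg.norm(scores, axis=0)          # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Three candidates evaluated on RMSE, peak-flow error, volume error (lower = better).
scores = np.array([[0.8, 0.12, 0.05],
                   [0.6, 0.20, 0.04],
                   [0.7, 0.10, 0.09]])
weights = np.array([0.5, 0.3, 0.2])
closeness = topsis(scores, weights, benefit=np.array([False, False, False]))
print(closeness.argsort()[::-1])   # candidate indices, best first
```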

  9. A simple procedure to include a free-form measurement capability to standard coordinate measurement machines

    NASA Astrophysics Data System (ADS)

    Schneider, Florian; Rascher, Rolf; Stamp, Richard; Smith, Gordon

    2013-09-01

    The modern optical industry requires objects with complex topographical structures. Free-form shaped objects are of large interest in many branches, especially for size reduced, modern lifestyle products like digital cameras. State-of-the-art multi-axis coordinate measurement machines (CMMs), like the topographical measurement machine TII-3D, are in principle suitable to measure free-form shaped objects. The only limitation is the software package. This paper illustrates a simple way to enhance coordinate measurement machines in order to add a free-form function. Next to a coordinate measurement machine, only a state-of-the-art CAD system and a simple piece of software are necessary. For this paper, the CAD software CREO was used. CREO enables the user to develop a 3D object in two different ways. With the first method, the user might design the shape by drawing one or more 2D sketches and put an envelope around. Using the second method, the user could define one or more formulas in the editor to describe the favoured surface. Both procedures lead to the required three-dimensional shape. However, further features of CREO enable the user to export the XYZ-coordinates of the created surface. A specially designed software tool, developed with Matlab, converts the XYZ-file into a measurement matrix which can be used as a reference file. Finally the result of the free-form measurement, carried out with a CMM, has to be loaded into the software tool and both files are then processed together. The result is an error profile which provides the deviation between the measurement and the target-geometry.
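
    The Matlab conversion tool itself is not available from the abstract; the sketch below only illustrates the final comparison step, computing a deviation for each measured point as its distance to the nearest point of the CAD-exported reference cloud with a KD-tree. Nearest-point distance is a simplification of a true surface-normal deviation, and the toy surface and noise level are assumptions.

```python
# Sketch of the final comparison step: deviation of each measured point from the
# CAD-exported reference point cloud, taken here as the distance to the nearest
# reference point (a simplification of a surface-normal deviation). The toy
# surface, noise level, and use of a KD-tree are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def error_profile(reference_xyz: np.ndarray, measured_xyz: np.ndarray) -> np.ndarray:
    tree = cKDTree(reference_xyz)
    distances, _ = tree.query(measured_xyz)   # nearest reference point per measurement
    return distances

if __name__ == "__main__":
    # Toy free-form surface z = 0.1*x^2 - 0.05*y^2 as the CAD reference.
    x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
    reference = np.column_stack([x.ravel(), y.ravel(),
                                 0.1 * x.ravel()**2 - 0.05 * y.ravel()**2])
    measured = reference[::400] + np.random.default_rng(0).normal(0, 0.02, reference[::400].shape)
    dev = error_profile(reference, measured)
    print(dev.mean(), dev.max())
```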

  10. 45 CFR 309.75 - What administrative and management procedures must a Tribe or Tribal organization include in a...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false What administrative and management procedures must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.75 Section 309.75 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN...

  11. Cassini VIMS observations of the Galilean satellites including the VIMS calibration procedure

    USGS Publications Warehouse

    McCord, T.B.; Coradini, A.; Hibbitts, C.A.; Capaccioni, F.; Hansen, G.B.; Filacchione, G.; Clark, R.N.; Cerroni, P.; Brown, R.H.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Buratti, B.J.; Bussoletti, E.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Jaumann, R.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.

    2004-01-01

    The Visual and Infrared Mapping Spectrometer (VIMS) observed the Galilean satellites during the Cassini spacecraft's 2000/2001 flyby of Jupiter, providing compositional and thermal information about their surfaces. The Cassini spacecraft approached the jovian system no closer than about 126 Jupiter radii, about 9 million kilometers, at a phase angle of < 90°, resulting in only sub-pixel observations by VIMS of the Galilean satellites. Nevertheless, most of the spectral features discovered by the Near Infrared Mapping Spectrometer (NIMS) aboard the Galileo spacecraft during more than four years of observations have been identified in the VIMS data analyzed so far, including a possible ¹³C absorption. In addition, VIMS made observations in the visible part of the spectrum and at several new phase angles for all the Galilean satellites and the calculated phase functions are presented. In the process of analyzing these data, the VIMS radiometric and spectral calibrations were better determined in preparation for entry into the Saturn system. Treatment of these data is presented as an example of the VIMS data reduction, calibration and analysis process and a detailed explanation is given of the calibration process applied to the Jupiter data. © 2004 Elsevier Inc. All rights reserved.

  12. A Novel Spectral Data Processing Procedure on Multi-Object Fiber Spectral Data Based on 2-D Algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Ye, Z. F.; Xu, X.

    2016-01-01

    The data processing procedures currently used on most multi-object fiber spectroscopic telescopes, such as the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), the Sloan Digital Sky Survey (SDSS), the Anglo-Australian Telescope (AAT), etc., are based on one-dimensional (1-D) algorithms. In this paper, LAMOST is taken as an example to present the proposed multi-object fiber spectral data processing procedure. In the processing procedure currently used on LAMOST, after the pretreatment process, the two-dimensional (2-D) observed raw data are extracted into 1-D intermediate data based simply on a 1-D model. The subsequent key steps are then all done by 1-D algorithms. However, this processing procedure is not in accord with the formation mechanism of the observed spectra. Therefore, it introduces a considerable error at each step. To solve the problem, we propose a novel processing procedure that has not been used on LAMOST or other telescopes. The modules of the procedure are reordered, and the main steps are all based on 2-D algorithms. The principles of the core algorithms are explained in detail. In addition, some partial experimental results are shown to prove the effectiveness and superiority of the 2-D algorithms.

  13. BROMOCEA Code: An Improved Grand Canonical Monte Carlo/Brownian Dynamics Algorithm Including Explicit Atoms.

    PubMed

    Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich

    2016-05-10

    All-atom molecular dynamics simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scales and ability to simulate physiological membrane potentials or asymmetric salt solutions and require substantial computational power. While several approaches to circumvent all of these limitations were developed, Brownian dynamics simulations remain an attractive option to the field. The main limitation, however, is an apparent lack of protein flexibility important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties which will become even more important in future applications to substrate translocation. PMID:27088446

  14. An efficient algorithm for solving coupled Schroedinger type ODEs, whose potentials include δ-functions

    SciTech Connect

    Gousheh, S.S.

    1996-01-01

    I have used the shooting method to find the eigenvalues (bound state energies) of a set of strongly coupled Schroedinger type equations. I have discussed the advantages of the shooting method when the potentials include δ-functions. I have also discussed some points which are universal in these kinds of problems and whose use makes the algorithm much more efficient. These points include mapping the domain of the ODE into a finite one, using the asymptotic form of the solutions, making the best use of the normalization freedom, and converting the δ-functions into boundary conditions.
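
    The coupled system of the report is not reproduced here; the sketch below shows, for a single channel, the two ingredients the abstract highlights: the δ-function enters as a jump condition on the derivative of the wave function, and the decaying asymptotic form supplies the matching condition at the end of the finite domain. With V(x) = -α δ(x) and ħ = m = 1 the bound state is analytically E = -α²/2, which the root search should recover; the single-channel setup and the bracketing interval are illustrative assumptions.

```python
# Single-channel sketch of the shooting method with a delta-function potential
# V(x) = -alpha*delta(x) (hbar = m = 1). The delta function is imposed as a jump
# condition psi'(0+) = psi'(0-) - 2*alpha*psi(0), and the decaying asymptotic
# form supplies the matching condition at large x. The coupled equations of the
# report are not reproduced; analytically E = -alpha**2/2 here.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ALPHA, X_MAX = 1.0, 12.0

def mismatch(kappa: float) -> float:
    # Left of the delta: psi = exp(kappa*x), so psi(0-) = 1, psi'(0-) = kappa.
    psi0, dpsi0 = 1.0, kappa - 2.0 * ALPHA * 1.0           # jump condition at x = 0
    sol = solve_ivp(lambda x, y: [y[1], kappa**2 * y[0]],   # psi'' = -2*E*psi, E = -kappa^2/2
                    (0.0, X_MAX), [psi0, dpsi0], rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y[0, -1], sol.y[1, -1]
    return dpsi + kappa * psi      # zero when only the decaying solution survives

kappa = brentq(mismatch, 0.2, 3.0)
print("E =", -0.5 * kappa**2)      # analytic bound state: -ALPHA**2/2 = -0.5
```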

  15. Evaluation and Optimization of an ELISA Procedure to Quantify Antibodies Against Pneumococcal Polysaccharides Included in the 13-Valent Conjugate Vaccine.

    PubMed

    Belmonti, Simone; Lombardi, Francesca; Morandi, Matteo; Fabbiani, Massimiliano; Tordini, Giacinta; Cauda, Roberto; De Luca, Andrea; Di Giambenedetto, Simona; Montagnani, Francesca

    2016-01-01

    The 13-valent pneumococcal conjugate vaccine (PCV-13) is recommended for HIV-infected people, although its effectiveness in this population remains under evaluation. In this study, we describe the development, optimization, and analytical validation of an ELISA procedure to measure specific antibodies for the pneumococcal polysaccharide serotypes included in the PCV13 vaccine, testing sera obtained from HIV-infected outpatients (n = 30) who received the vaccine. The protocol followed the latest version of the WHO guidelines, based on the new standard 007sp, with the modification of employing Statens Serum Institut (SSI) antigens. We report the assay performance validation in terms of sensitivity, reproducibility, precision, and accuracy. In addition, we detail optimal antigen-coating concentrations and ELISA conditions common to all 13 serotypes, suitable for laboratories performing these assays in order to standardize the method. Our procedure showed reproducibility and reliability, making it a valid alternative for evaluating the response to pneumococcal serotypes included in the PCV13 vaccine. PMID:26506438

  16. Enhanced 3-D-reconstruction algorithm for C-arm systems suitable for interventional procedures.

    PubMed

    Wiesent, K; Barth, K; Navab, N; Durlak, P; Brunner, T; Schuetz, O; Seissler, W

    2000-05-01

    Increasingly, three-dimensional (3-D) imaging technologies are used in medical diagnosis, for therapy planning, and during interventional procedures. We describe the possibilities of fast 3-D-reconstruction of high-contrast objects with high spatial resolution from only a small series of two-dimensional (2-D) planar radiographs. The special problems arising from the intended use of an open, mechanically unstable C-arm system are discussed. For the description of the irregular sampling geometry, homogeneous coordinates are used thoroughly. The well-known Feldkamp algorithm is modified to incorporate corresponding projection matrices without any decomposition into intrinsic and extrinsic parameters. Some approximations to speed up the whole reconstruction procedure and the tradeoff between image quality and computation time are also considered. Using standard hardware the reconstruction of a 256³ cube is now possible within a few minutes, a time that is acceptable during interventions. Examples for cranial vessel imaging from some clinical test installations will be shown as well as promising results for bone imaging with a laboratory C-arm system. PMID:11021683
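
    A hedged sketch of the homogeneous-coordinate bookkeeping mentioned above: a single 3x4 projection matrix maps a voxel (x, y, z, 1) directly to detector coordinates, and a Feldkamp-type backprojection would accumulate the filtered projection value at that position for every view. On a real C-arm the matrices come from calibration; the ideal circular-orbit construction below is only for the demonstration.

```python
# Sketch of homogeneous-coordinate projection: one 3x4 matrix P maps a voxel
# (x, y, z, 1) straight to detector coordinates (u, v) with no decomposition
# into intrinsic/extrinsic parameters at reconstruction time. The ideal
# circular-orbit geometry below is only for the demo; on a real C-arm, P comes
# from calibration.
import numpy as np

def projection_matrix(source_dist, detector_dist, angle, du=1.0, dv=1.0):
    """Idealized cone-beam geometry on a circular orbit (illustrative only)."""
    c, s = np.cos(angle), np.sin(angle)
    extrinsic = np.array([[ s, -c, 0.0, 0.0],           # detector u-axis direction
                          [0.0, 0.0, 1.0, 0.0],         # detector v-axis direction
                          [-c, -s, 0.0, source_dist]])  # depth along the ray
    f = source_dist + detector_dist                     # source-to-detector distance
    intrinsic = np.diag([f / du, f / dv, 1.0])
    return intrinsic @ extrinsic                        # fused 3x4 matrix

def project(P, voxel_xyz):
    u, v, w = P @ np.append(voxel_xyz, 1.0)             # homogeneous projection
    return u / w, v / w                                 # perspective divide

P = projection_matrix(source_dist=800.0, detector_dist=400.0, angle=np.deg2rad(30))
print(project(P, np.array([10.0, -5.0, 20.0])))
```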

  17. A Spatio-Temporal Algorithmic Procedure for Environmental Policymaking in the Municipality of Arkalochori in the Greek Island of Crete

    NASA Astrophysics Data System (ADS)

    Batzias, F. A.; Sidiras, D. K.; Giannopoulos, Ch.; Spetsidis, I.

    2009-08-01

    This work deals with a methodological framework designed/developed under the form of a spatio-temporal algorithmic procedure for environmental policymaking at local level. The procedure includes 25 activity stages and 9 decision nodes, putting emphasis on (i) mapping on GIS layers water supply/demand and modeling of aquatic pollution coming from point and non-point sources, (ii) environmental monitoring by periodically measuring the main pollutants in situ and in the laboratory, (iii) design of environmental projects, decomposition of them into sub-projects and combination of the latter to form attainable alternatives, (iv) multicriteria ranking of alternatives, according to a modified Delphi method, by using as criteria the expected environmental benefit, the attitude of inhabitants, the priority within the programme of regional development, the capital required for the investment and the operating cost, and (v) knowledge Base (KB) operation/enrichment, functioning in combination with a data mining mechanism to extract knowledge/information/data from external Bases. An implementation is presented referring to the Municipality of Arkalochori in the Greek island of Crete.

  18. A Procedure to Determine the Coordinated Chromium and Calcium Isotopic Composition of Astromaterials Including the Chelyabinsk Meteorite

    NASA Technical Reports Server (NTRS)

    Tappa, M. J.; Mills, R. D.; Ware, B.; Simon, J. I.

    2014-01-01

    The isotopic compositions of elements are often used to characterize nucleosynthetic contributions in early Solar System objects. Coordinated measurements of multiple middle-mass elements with differing volatilities may provide information regarding the location of condensation of early Solar System solids. Here we detail new procedures that we have developed to make high-precision multi-isotope measurements of chromium and calcium using thermal ionization mass spectrometry, and characterize a suite of chondritic and terrestrial material including two fragments of the Chelyabinsk LL-chondrite.

  19. Detecting protein complexes in protein interaction networks using a ranking algorithm with a refined merging procedure

    PubMed Central

    2014-01-01

    Background Developing suitable methods for the identification of protein complexes remains an active research area. It is important since it allows better understanding of cellular functions as well as malfunctions and it consequently leads to producing more effective cures for diseases. In this context, various computational approaches were introduced to complement high-throughput experimental methods which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm which sorts proteins according to their importance in the interaction network, and a merging procedure which refines the detected complexes in terms of their protein members. ProRank+ was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions The experimental results achieved by ProRank+ show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously-undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073

  20. Development of a computer algorithm for the analysis of variable-frequency AC drives: Case studies included

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Benjamin, Owen

    1991-01-01

    The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
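
    The simulation code itself is not shown in the abstract; the d-q models it refers to rest on the Park transformation of three-phase abc quantities into a rotating reference frame, sketched below in its amplitude-invariant form. The balanced-current check is only an illustration.

```python
# The d-q induction-machine models referred to above rest on the Park
# transformation of three-phase abc quantities into a rotating reference frame.
# Amplitude-invariant (2/3) form shown; theta is the reference-frame angle.
import numpy as np

def park(abc: np.ndarray, theta: float) -> np.ndarray:
    """abc -> dq0, amplitude-invariant Park transformation."""
    k = 2.0 / 3.0
    T = k * np.array([
        [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
        [0.5, 0.5, 0.5],
    ])
    return T @ abc

# Balanced three-phase currents at angle wt line up with the d-axis when theta = wt.
wt = 0.7
i_abc = np.array([np.cos(wt), np.cos(wt - 2*np.pi/3), np.cos(wt + 2*np.pi/3)])
print(park(i_abc, theta=wt))   # approximately [1, 0, 0]
```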

  1. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.

  2. The ATAMM procedure model for concurrent processing of large grained control and signal processing algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    An overview is presented of a model for describing data and control flow associated with the execution of large-grained, decision-free algorithms in a special distributed computer environment. The ATAMM (Algorithm-To-Architecture Mapping Model) model provides a basis for relating an algorithm to its execution in a dataflow multicomputer environment. The ATAMM model features a marked graph Petri net description of the algorithm behavior with regard to both data and control flow. The model provides an analytical basis for calculating performance bounds on throughput characteristics which are demonstrated here.

  3. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  4. Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms.

    PubMed

    Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter

    2007-01-01

    A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support in up to nine amount levels. As an efficient tool for high-throughput screening and perfectly matched to the requirements of heterogeneous gas phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is implemented to evaluate the catalyst performances. Out of a multi-component feed-gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions to preparation and pre-treatment a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals. PMID:17266517

  5. Parameter Trending, Geolocation Quality Control and the Procedures to Support Preparation of Next Versions of the TRMM Reprocessing Algorithm

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2004-01-01

    TRMM has been an eminently successful mission from an engineering standpoint but even more so from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper will present the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper will detail parameter trending, geolocation quality control, and the procedures to support the preparation of next versions of the algorithm used for reprocessing.

  6. Effective detection of toxigenic Clostridium difficile by a two-step algorithm including tests for antigen and cytotoxin.

    PubMed

    Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C

    2006-03-01

    We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective. Over 6 months, our laboratories' expenses were US $143,000 less than if CCNA alone had been performed on all 5,887 specimens. PMID:16517916
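
    The reporting logic of the two-step algorithm is simple enough to state directly in code; the sketch below is only a restatement of the decision flow described above, not the laboratory's operating procedure.

```python
# Restatement of the two-step reporting logic in code form (not a laboratory
# SOP): antigen-negative specimens are reported without cell culture; only
# antigen-positive specimens go on to the cytotoxicity assay.
def toxigenic_cdiff_result(antigen_positive: bool, ccna_positive=None) -> str:
    if not antigen_positive:
        return "negative (no CCNA performed)"
    if ccna_positive is None:
        return "antigen positive - CCNA pending"
    return "positive" if ccna_positive else "antigen positive, cytotoxin negative"

print(toxigenic_cdiff_result(False))
print(toxigenic_cdiff_result(True, ccna_positive=True))
```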

  7. An Algorithm for Real-Time Optimal Photocurrent Estimation Including Transient Detection for Resource-Constrained Imaging Applications

    NASA Astrophysics Data System (ADS)

    Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak

    2016-06-01

    Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera-3 and Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote
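    A toy version of the ramp-fitting idea, assuming equally spaced reads and using a robust jump test on successive differences to flag possible cosmic-ray transients; the flight routine described above uses optimal weighting and detector-specific noise models rather than an unweighted polynomial fit.

```python
import numpy as np

def fit_ramp(reads, dt=1.0, jump_sigma=5.0):
    """Slope of charge vs. time from sequential non-destructive reads, with a
    crude cosmic-ray flag based on outlier jumps between consecutive reads."""
    reads = np.asarray(reads, dtype=float)
    t = np.arange(reads.size) * dt
    diffs = np.diff(reads)
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) + 1e-12          # robust spread estimate
    transient = np.abs(diffs - med) > jump_sigma * 1.4826 * mad
    slope, _ = np.polyfit(t, reads, 1)                    # photocurrent estimate
    return slope, bool(transient.any())

reads = [10, 20, 30, 41, 300, 310, 320]   # the ~260-count jump mimics a cosmic-ray hit
print(fit_ramp(reads))
```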

  8. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    NASA Astrophysics Data System (ADS)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SP) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires information on the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4-Ar30) in which argon atoms interact via a Lennard-Jones potential, and (iii) ArmX (m = 12) clusters where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES with reference to conformational changes in Arn clusters.

  9. Risk-stratified cardiovascular screening including angiographic and procedural outcomes of percutaneous coronary interventions in renal transplant candidates.

    PubMed

    König, Julian; Möckel, Martin; Mueller, Eda; Bocksch, Wolfgang; Baid-Agrawal, Seema; Babel, Nina; Schindler, Ralf; Reinke, Petra; Nickel, Peter

    2014-01-01

    Background. Benefits of cardiac screening in kidney transplant candidates (KTC) will depend on the availability of effective interventions. We retrospectively evaluated the characteristics and outcomes of percutaneous coronary interventions (PCI) in KTC selected for revascularization by a cardiac screening approach. Methods. In 267 patients evaluated from 2003 to 2006, the screening tests performed were reviewed and PCI characteristics were correlated with major adverse cardiovascular events (MACE) during a follow-up of 55 months. Results. Stress tests in 154 patients showed ischemia in 28 patients (89% high risk). Of 58 patients with coronary angiography, 38 had significant stenoses and 18 underwent cardiac interventions (6.7% of all patients). Twenty-nine coronary lesions in 17 of 18 patients were treated by PCI. The angiographic success rate was 93.1%, but the procedural success rate was only 86.2%. Long lesions (P = 0.029) and diffuse disease (P = 0.043) were associated with MACE. In high-risk patients, cardiac screening did not improve outcome, as 21.7% of patients with versus 15.5% of patients without properly performed cardiac screening had MACE (P = 0.319). Conclusion. The moderate procedural success of PCI and poor outcome in long and diffuse coronary lesions underscore the need to define appropriate revascularization strategies in KTC, which will be a prerequisite for cardiac screening to improve outcome in these high-risk patients. PMID:25045528

  10. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    PubMed Central

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  11. Surgical accuracy of three-dimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation.

    PubMed

    Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation was analyzed in both superior and inferior maxillary repositioning. In addition, transverse surgical expansion was compared with the postsurgical expansion obtained. An overall, high degree of linear accuracy between planned and postsurgical outcomes was found, but with a large standard deviation. Rotational difference showed an increase in pitch, mainly affecting the maxilla. Segmentation had no significant influence on maxillary placement. However, a posterior movement was observed in inferior maxillary repositioning. A lack of transverse expansion was observed in the segmented maxilla independent of the degree of expansion. PMID:26250603

  12. Including health in transport policy agendas: the role of health impact assessment analyses and procedures in the European experience.

    PubMed Central

    Dora, Carlos; Racioppi, Francesca

    2003-01-01

    From the mid-1990s, research began to highlight the importance of a wide range of health impacts of transport policy decisions. The Third Ministerial Conference on Environment and Health adopted a Charter on Transport, Environment and Health based on four main components: bringing awareness of the nature, magnitude and costs of the health impacts of transport into intergovernmental processes; strengthening the arguments for integration of health into transport policies by developing in-depth analysis of the evidence; developing national case studies; and engaging ministries of environment, health and transport as well as intergovernmental and nongovernmental organizations. Negotiation of the Charter was based on two converging processes: the political process involved the interaction of stakeholders in transport, health and environment in Europe, which helped to frame the issues and the approaches to respond to them; the scientific process involved an international group of experts who produced state-of-the-art reviews of the health impacts resulting from transportation activities, identifying gaps in existing knowledge and methodological tools, specifying the policy implications of their findings, and suggesting possible targets for health improvements. Health arguments were used to strengthen environmental ones, clarify costs and benefits, and raise issues of health equity. The European experience shows that HIA can fulfil the need for simple procedures to be systematically applied to decisions regarding transport strategies at national, regional and local levels. Gaps were identified concerning models for quantifying health impacts and capacity building on how to use such tools. PMID:12894322

  13. A Fuzzy Goal Programming Procedure for Solving Multiobjective Load Flow Problems via Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Biswas, Papun; Chakraborti, Debjani

    2010-10-01

    This paper describes how genetic algorithms (GAs) can be efficiently applied to a fuzzy goal programming (FGP) formulation of optimal power flow problems having multiple objectives. In the proposed approach, the different constraints and the various relationships of the optimal power flow calculation are described fuzzily. In the model formulation of the problem, the membership functions of the defined fuzzy goals are first characterized for measuring the degree of achievement of the aspiration levels of the goals specified in the decision-making context. Then, the achievement function for minimizing the regret for under-deviations from the highest membership value (unity) of the defined membership goals, to the extent possible on the basis of priorities, is constructed for the optimal power flow problem. In the solution process, the GA method is applied to the FGP formulation of the problem to achieve the highest membership value (unity) of the defined membership functions to the extent possible in the decision-making environment. In the GA-based solution search process, the conventional roulette wheel selection scheme, arithmetic crossover, and random mutation are used to reach a satisfactory decision. The developed method has been tested on the IEEE 6-generator, 30-bus system. Numerical results show that this method is promising for handling uncertain constraints in practical power systems.
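    A small sketch of the fuzzy-goal bookkeeping described above: a linear membership function for a goal to be minimized and a priority-weighted regret (achievement) function that a GA would then minimize. The aspiration and tolerance values below are illustrative assumptions, not values from the paper.

```python
def membership(value, aspiration, tolerance):
    """Linear membership for a goal to be minimized: 1 at or below the aspiration
    level, falling linearly to 0 at the tolerance limit."""
    if value <= aspiration:
        return 1.0
    if value >= tolerance:
        return 0.0
    return (tolerance - value) / (tolerance - aspiration)

def regret(goal_values, goals, priorities):
    """Priority-weighted under-deviation from full membership (unity); this is the
    quantity the GA would be asked to minimize."""
    return sum(w * (1.0 - membership(v, a, t))
               for v, (a, t), w in zip(goal_values, goals, priorities))

# illustrative numbers only: a fuel-cost goal and a transmission-loss goal
print(regret(goal_values=[812.0, 9.8],
             goals=[(800.0, 900.0), (9.0, 12.0)],
             priorities=[2.0, 1.0]))
```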

  14. Driver Performance Measurement Research. Volume 2: Guide for Training Observer/Raters in the Driver Performance Measurements Procedure. (Including Course and Content).

    ERIC Educational Resources Information Center

    Nolan, R. O.; And Others

    The Final Report, Volume 1, covers research results of the Michigan State University Driver Performance Measurement Project. This volume (Volume 2) constitutes a guide for training observers/raters in the driver performance measurement procedures developed in this research by MSU. The guide includes a training course plan and content materials…

  15. A weighted reverse Cuthill-McKee procedure for finite element method algorithms to solve strongly anisotropic electrodynamic problems

    SciTech Connect

    Cristofolini, Andrea; Latini, Chiara; Borghi, Carlo A.

    2011-02-01

    This paper presents a technique for improving the convergence rate of a generalized minimum residual (GMRES) algorithm applied to the solution of an algebraic system produced by the discretization of an electrodynamic problem with a tensorial electrical conductivity. The electrodynamic solver considered in this work is part of a magnetohydrodynamic (MHD) code in the low magnetic Reynolds number approximation. The code has been developed for the analysis of MHD interaction during the re-entry phase of a space vehicle. This application is a promising technique intensively investigated for shock mitigation and vehicle control in the higher layers of a planetary atmosphere. The medium in the considered application is a low-density plasma, characterized by a tensorial conductivity. This is a result of the behavior of the free electric charges, which tend to drift in a direction perpendicular both to the electric field and to the magnetic field. In the given approximation, the electrodynamics is described by an elliptic partial differential equation, which is solved by means of a finite element approach. The linear system obtained by discretizing the problem is solved by means of a GMRES iterative method with an incomplete LU factorization threshold preconditioning. The convergence of the solver appears to be strongly affected by the tensorial character of the conductivity. In order to deal with this feature, bandwidth reduction in the coefficient matrix is considered and a novel technique is proposed and discussed. First, the standard reverse Cuthill-McKee (RCM) procedure has been applied to the problem. Then a modification of the RCM procedure (the weighted RCM procedure, WRCM) has been developed. In the latter approach, the reordering is performed taking into account the relation between the mesh geometry and the magnetic field direction. In order to investigate the effectiveness of the methods, two cases are considered. The RCM and WRCM procedures
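    For reference, a plain reverse Cuthill-McKee ordering can be sketched in a few lines; the weighted variant (WRCM) proposed in the paper additionally biases the neighbor ordering using the magnetic-field direction, which is not reproduced here.

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee on an adjacency dict {node: set(neighbors)}.
    Neighbors are visited in order of increasing degree; the final order is
    reversed, which tends to reduce the matrix bandwidth before factorization."""
    visited, order = set(), []
    for start in sorted(adj, key=lambda n: len(adj[n])):   # lowest-degree start nodes
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in sorted(adj[u] - visited, key=lambda n: len(adj[n])):
                visited.add(v)
                queue.append(v)
    return order[::-1]

adj = {0: {3}, 1: {2, 3}, 2: {1}, 3: {0, 1}}
print(rcm_order(adj))
```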

  16. Algorithms for fast axisymmetric drop shape analysis measurements by a charge coupled device video camera and simulation procedure for test and evaluation

    NASA Astrophysics Data System (ADS)

    Busoni, Lorenzo; Carlà, Marcello; Lanzi, Leonardo

    2001-06-01

    A set of fast algorithms for axisymmetric drop shape analysis measurements is described. Speed has been improved by more than 1 order of magnitude over previously available procedures. Frame analysis is performed and drop characteristics and interfacial tension γ are computed in less than 40 ms on a Pentium III 450 MHz PC, while preserving an overall accuracy in Δγ/γ close to 1×10^-4. A new procedure is described to evaluate both the algorithms' performance and the contribution of each source of experimental error to the overall measurement accuracy.

  17. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling them. In this study, an integrated procedure using data envelopment analysis (DEA), the artificial bee colony (ABC) algorithm and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits, in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  18. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  19. A New Lidar Data Processing Algorithm Including Full Uncertainty Budget and Standardized Vertical Resolution for use Within the NDACC and GRUAN Networks

    NASA Astrophysics Data System (ADS)

    Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.

    2014-12-01

    A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written with the objective that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, leading to a maximum flexibility, and to the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.

  20. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  1. Reducing the need for central dual-energy X-ray absorptiometry in postmenopausal women: efficacy of a clinical algorithm including peripheral densitometry.

    PubMed

    Jiménez-Núñez, Francisco Gabriel; Manrique-Arija, Sara; Ureña-Garnica, Inmaculada; Romero-Barco, Carmen María; Panero-Lamothe, Blanca; Descalzo, Miguel Angel; Carmona, Loreto; Rodríguez-Pérez, Manuel; Fernández-Nebro, Antonio

    2013-07-01

    We evaluated the efficacy of a triage approach based on a combination of osteoporosis risk-assessment tools plus peripheral densitometry to identify low bone density accurately enough to be useful for clinical decision making in postmenopausal women. We conducted a cross-sectional diagnostic study in postmenopausal Caucasian women from primary and tertiary care. All women underwent dual-energy X-ray absorptiometric (DXA) measurement at the hip and lumbar spine and were categorized as osteoporotic or not. Additionally, patients had a nondominant heel densitometry performed with a PIXI densitometer. Four osteoporosis risk scores were tested: SCORE, ORAI, OST, and OSIRIS. All measurements were cross-blinded. We estimated the area under the curve (AUC) to predict the DXA results of 16 combinations of PIXI plus risk scores. A formula including the best combination was derived from a regression model and its predictability estimated. We included 505 women, in whom the prevalence of osteoporosis was 20 %, similar in both settings. The best algorithm was a combination of PIXI + OST + SCORE with an AUC of 0.826 (95 % CI 0.782-0.869). The proposed formula is Risk = (-12) × [PIXI + (-5)] × [OST + (-2)] × SCORE and showed little bias in the estimation (0.0016). If the formula had been implemented and the intermediate risk cutoff set at -5 to 20, the system would have saved 4,606.34 in the study year. The formula proposed, derived from previously validated risk scores plus a peripheral bone density measurement, can be used reliably in primary care to avoid unnecessary central DXA measurements in postmenopausal women. PMID:23608922

  2. Survey analysis and chemical characterization of solid inhomogeneous samples using a general homogenization procedure including acid digestion, drying, grinding and briquetting together with X-ray fluorescence.

    PubMed

    Sahlin, Eskil; Magnusson, Bertil

    2012-08-15

    A survey analysis and chemical characterization methodology for inhomogeneous solid waste samples of relatively large size (typically up to 100 g), using X-ray fluorescence following a general homogenization procedure, is presented. By using a combination of acid digestion and grinding, various materials can be homogenized, e.g. pure metals, alloys, salts, ores, plastics and organics. In the homogenization step, solid material is fully or partly digested in a mixture of nitric acid and hydrochloric acid in an open vessel. The resulting mixture is then dried, ground, and finally pressed into a wax briquette. The briquette is analyzed using wavelength-dispersive X-ray fluorescence with fundamental-parameters evaluation. The recovery of 55 elements was tested by preparing samples with known compositions using different alloys, pure metals or elements, oxides, salts and solutions of dissolved compounds. It was found that the methodology was applicable to 49 elements, including Na, Mg, Al, Si, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Se, Rb, Sr, Y, Zr, Nb, Mo, Ru, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, Cs, Ba, La, Ce, Ta, W, Re, Ir, Pt, Au, Tl, Pb, Bi, and Th, all of which had recoveries >0.8. Six elements were lost by volatilization: Br, I, Os, and Hg were completely lost, and S and Ge were partly lost. Since all lanthanides are chemically similar to La and Ce, all actinides are chemically similar to Th, and Hf is chemically similar to Zr, it is likely that the method is applicable to 77 elements. By using an internal standard such as strontium, added as strontium nitrate, samples containing relatively high concentrations of elements not measured by XRF (hydrogen to fluorine), e.g. samples containing plastics, can be analyzed. PMID:22841048

  3. A Physics-based Algorithm for Real-time Simulation of Electrosurgery Procedures in Minimally Invasive Surgery

    PubMed Central

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F.; De, Suvranu

    2014-01-01

    Background. High-frequency electricity is used in a majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. Methods. We present a real-time and physically realistic simulation of electrosurgery, by modeling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. Results. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Conclusions. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. PMID:24357156

  4. Improved methodology for surface and atmospheric soundings, error estimates, and quality control procedures: the atmospheric infrared sounder science team version-6 retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena

    2014-01-01

    The atmospheric infrared sounder (AIRS) science team version-6 AIRS/advanced microwave sounding unit (AMSU) retrieval algorithm is now operational at the Goddard Data and Information Services Center (DISC). AIRS version-6 level-2 products are generated near real time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. Some of the significant improvements in retrieval methodology contained in the version-6 retrieval algorithm compared to that previously used in version-5 are described. In particular, the AIRS science team made major improvements with regard to the algorithms used to (1) derive surface skin temperature and surface spectral emissivity; (2) generate the initial state used to start the cloud clearing and retrieval procedures; and (3) derive error estimates and use them for quality control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, version-6 also operates in an AIRS only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. The improvements of some AIRS version-6 and version-6 AO products compared to those obtained using version-5 are also demonstrated.

  5. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  6. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, or the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
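    A compact sketch of one such stable variant, the pocket algorithm, which keeps the best weight vector seen during perceptron training; labels are assumed to be ±1 and the stopping rule is simplified.

```python
import numpy as np

def pocket_perceptron(X, y, epochs=200, lr=1.0, seed=0):
    """Pocket variant of perceptron learning: after each update, remember the
    weights with the best training accuracy so far (useful when the data are
    not linearly separable)."""
    rng = np.random.default_rng(seed)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])        # append a bias column
    w = np.zeros(X1.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        i = rng.integers(len(y))                         # pick a random training example
        pred = 1 if X1[i] @ w >= 0 else -1
        if pred != y[i]:
            w = w + lr * y[i] * X1[i]                    # standard perceptron update
        acc = np.mean(np.sign(X1 @ w + 1e-12) == y)
        if acc > best_acc:
            best_acc, best_w = acc, w.copy()             # keep the best weights "in the pocket"
    return best_w

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])                            # AND-like labels in {-1, +1}
print(pocket_perceptron(X, y))
```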

  7. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best results for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads with marginal differences in their shape. Among the six methods being studied, Diffeomorphic Demons produces the largest spreads and centroids that are the furthest apart from the others in general. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.

  8. Presentation of a general algorithm to include effect assessment on secondary poisoning in the derivation of environmental quality criteria. Part 1. Aquatic food chains.

    PubMed

    Romijn, C A; Luttik, R; van de Meent, D; Slooff, W; Canton, J H

    1993-08-01

    Effect assessment on secondary poisoning can be an asset to effect assessments on direct poisoning in setting quality criteria for the environment. This study presents an algorithm for effect assessment on secondary poisoning. The water-fish-fish-eating bird or mammal pathway was analyzed as an example of a secondary poisoning pathway. Parameters used in this algorithm are the bioconcentration factor for fish (BCF) and the no-observed-effect concentration for the group of fish-eating birds and mammals (NOECfish-eater). For the derivation of reliable BCFs, preference is given to the use of experimentally derived BCFs over QSAR estimates. NOECs for fish eaters are derived by extrapolating toxicity data on single species. Because data on fish-eating species are seldom available, toxicity data on all bird and mammalian species were used. The proposed algorithm (MAR = NOECfish-eater/BCF) was used to calculate MARs (maximum acceptable risk levels) for the compounds lindane, dieldrin, cadmium, mercury, PCB153, and PCB118. By subsequently comparing these MARs to MARs derived by effect assessment for aquatic organisms, it was concluded that for methyl mercury and PCB153 secondary poisoning of fish-eating birds and mammals could be a critical pathway. For these compounds, effects on populations of fish-eating birds and mammals can occur at levels in surface water below the MAR calculated for aquatic ecosystems. Secondary poisoning of fish-eating birds and mammals is not likely to occur for cadmium at levels in water below the MAR calculated for aquatic ecosystems. PMID:7691536
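    The algorithm itself is a single division; a minimal numerical illustration (with made-up values) is given below.

```python
def max_acceptable_risk(noec_fish_eater, bcf):
    """Water concentration below which secondary poisoning of fish-eating birds
    and mammals is not expected: predator NOEC divided by the fish BCF."""
    return noec_fish_eater / bcf

# purely illustrative numbers (e.g., NOEC in mg/kg food, BCF in L/kg)
print(max_acceptable_risk(noec_fish_eater=0.5, bcf=10000))   # -> 5e-05 (mg/L)
```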

  9. Yield of stool culture with isolate toxin testing versus a two-step algorithm including stool toxin testing for detection of toxigenic Clostridium difficile.

    PubMed

    Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C

    2007-11-01

    We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing). PMID:17804652

  10. Behavior of an inversion-based precipitation retrieval algorithm with high-resolution AMPR measurements including a low-frequency 10.7-GHz channel

    NASA Technical Reports Server (NTRS)

    Smith, E. A.; Xiang, X.; Mugnai, A.; Hood, R. E.; Spencer, R. W.

    1994-01-01

    A microwave-based, profile-type precipitation retrieval algorithm has been used to analyze high-resolution passive microwave measurements over an ocean background, obtained by the Advanced Microwave Precipitation Radiometer (AMPR) flown on a NASA ER-2 aircraft. The analysis is designed to first determine the improvements that can be gained by adding brightness temperature information from the AMPR low-frequency channel (10.7 GHz) to a multispectral retrieval algorithm nominally run with satellite information at 19, 37, and 85 GHz. The impact of spatial resolution degradation of the high-resolution brightness temperature information on the retrieved rain/cloud liquid water contents and ice water contents is then quantified in order to assess the possible biases inherent to satellite-based retrieval. Careful inspection of the high-resolution aircraft dataset reveals five distinctive brightness temperature features associated with cloud structure and scattering effects that are not generally detectable in current passive microwave satellite measurements. Results suggest that the inclusion of 10.7-GHz information overcomes two basic problems associated with three-channel retrieval. Intercomparisons of retrievals carried out at high resolution and then averaged to a characteristic satellite scale with the corresponding retrievals in which the brightness temperatures are first convolved down to the satellite scale suggest that, with the addition of the 10.7-GHz channel, the rain liquid water contents will not be negatively impacted by spatial resolution degradation. That is not the case with the ice water contents, as they appear to be quite sensitive to the imposed scale, the implication being that as spatial resolution is reduced, ice water contents will become increasingly underestimated.

  11. A Runge-Kutta Nystrom algorithm.

    NASA Technical Reports Server (NTRS)

    Bettis, D. G.

    1973-01-01

    A Runge-Kutta algorithm of order five is presented for the solution of the initial value problem where the system of ordinary differential equations is of second order and does not contain the first derivative. The algorithm includes the Fehlberg step control procedure.
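    The report's method is an order-five pair with Fehlberg step control; as a simpler illustration of the Runge-Kutta-Nystrom idea for second-order equations y'' = f(t, y) with no first derivative, the sketch below implements a classical fixed-step fourth-order scheme.

```python
def rkn4_step(f, t, y, v, h):
    """One step of a classical 4th-order Runge-Kutta-Nystrom method for y'' = f(t, y).
    Fixed-step illustration only; it is not the order-five Fehlberg pair of the report."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * v + h * h / 8 * k1)
    k3 = f(t + h,     y + h * v     + h * h / 2 * k2)
    y_new = y + h * v + h * h / 6 * (k1 + 2 * k2)
    v_new = v + h / 6 * (k1 + 4 * k2 + k3)
    return y_new, v_new

# usage: simple harmonic oscillator y'' = -y, y(0) = 1, y'(0) = 0
y, v, t, h = 1.0, 0.0, 0.0, 0.1
for _ in range(10):
    y, v = rkn4_step(lambda t, y: -y, t, y, v, h)
    t += h
print(y, v)   # close to cos(1.0), -sin(1.0)
```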

  12. Development of an HL7 interface engine, based on tree structure and streaming algorithm, for large-size messages which include image data.

    PubMed

    Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon

    2005-11-01

    A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use a string-array method in which the HL7 message is parsed entirely in main memory. Specifically, messages with image and multimedia data create a long string array and thus cause critical and fatal failures in the computer system. Consequently, HL7 messages cannot handle the image and multimedia data necessary in modern medical records. This study aims to solve this problem with a 'streaming algorithm' method. This new method for HL7 message parsing applies a character-stream object which processes the message character by character between main memory and the hard disk device, with the consequence that the processing load on main memory is alleviated. The main functions of this new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. The engine can also parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals. PMID:16181703
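    The streaming idea can be illustrated with a character-by-character field splitter that never holds the whole message in memory; the sketch below handles only segment and field separators, whereas a real engine must also honor the component, repetition and escape characters declared in MSH-2.

```python
import io

def iter_hl7_fields(stream, segment_sep="\r", field_sep="|"):
    """Read an HL7 v2 message one character at a time, yielding
    (segment_index, field_index, value) without buffering the whole message."""
    seg, fld, buf = 0, 0, []
    while True:
        ch = stream.read(1)
        if not ch:                                   # end of stream
            break
        if ch == segment_sep:
            yield seg, fld, "".join(buf)
            seg, fld, buf = seg + 1, 0, []
        elif ch == field_sep:
            yield seg, fld, "".join(buf)
            fld, buf = fld + 1, []
        else:
            buf.append(ch)
    if buf:
        yield seg, fld, "".join(buf)                 # trailing field, if any

msg = "MSH|^~\\&|LAB|HOSP\rPID|1||12345\r"
for seg, fld, value in iter_hl7_fields(io.StringIO(msg)):
    print(seg, fld, value)
```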

  13. Including adaptation and mitigation responses to climate change in a multiobjective evolutionary algorithm framework for urban water supply systems incorporating GHG emissions

    NASA Astrophysics Data System (ADS)

    Paton, F. L.; Maier, H. R.; Dandy, G. C.

    2014-08-01

    Cities around the world are increasingly involved in climate action and mitigating greenhouse gas (GHG) emissions. However, in the context of responding to climate pressures in the water sector, very few studies have investigated the impacts of changing water use on GHG emissions, even though water resource adaptation often requires greater energy use. Consequently, reducing GHG emissions, and thus focusing on both mitigation and adaptation responses to climate change in planning and managing urban water supply systems, is necessary. Furthermore, the minimization of GHG emissions is likely to conflict with other objectives. Thus, applying a multiobjective evolutionary algorithm (MOEA), which can evolve an approximation of entire trade-off (Pareto) fronts of multiple objectives in a single run, would be beneficial. Consequently, the main aim of this paper is to incorporate GHG emissions into a MOEA framework to take into consideration both adaptation and mitigation responses to climate change for a city's water supply system. The approach is applied to a case study based on Adelaide's southern water supply system to demonstrate the framework's practical management implications. Results indicate that trade-offs exist between GHG emissions and risk-based performance, as well as GHG emissions and economic cost. Solutions containing rainwater tanks are expensive, while GHG emissions greatly increase with increased desalinated water supply. Consequently, while desalination plants may be good adaptation options to climate change due to their climate-independence, rainwater may be a better mitigation response, albeit more expensive.

  14. Human organ/tissue growth algorithms that include obese individuals and black/white population organ weight similarities from autopsy data.

    PubMed

    Young, John F; Luecke, Richard H; Pearce, Bruce A; Lee, Taewon; Ahn, Hongshik; Baek, Songjoon; Moon, Hojin; Dye, Daniel W; Davis, Thomas M; Taylor, Susan J

    2009-01-01

    Physiologically based pharmacokinetic (PBPK) models need the correct organ/tissue weights to match various total body weights in order to be applied to children and the obese individual. Baseline data from Reference Man for the growth of human organs (adrenals, brain, heart, kidneys, liver, lungs, pancreas, spleen, thymus, and thyroid) were augmented with autopsy data to extend the describing polynomials to include the morbidly obese individual (up to 250 kg). Additional literature data similarly extends the growth curves for blood volume, muscle, skin, and adipose tissue. Collectively these polynomials were used to calculate blood/organ/tissue weights for males and females from birth to 250 kg, which can be directly used to help parameterize PBPK models. In contrast to other black/white anthropomorphic measurements, the data demonstrated no observable or statistical difference in weights for any organ/tissue between individuals identified as black or white in the autopsy reports. PMID:19267313

  15. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  16. A procedure for extending input selection algorithms to low quality data in modelling problems with application to the automatic grading of uploaded assignments.

    PubMed

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis; Couso, Inés; Sánchez, Luciano

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  17. Using Independent NCDC Rain Gauges to Analyze Precipitation Values from the OneRain Corporation Algorithm and the National Weather Service Procedure

    NASA Astrophysics Data System (ADS)

    Martinaitis, S. M.; Fuelberg, H. E.; Sullivan, J. L.; Pathak, C.

    2007-12-01

    Individual gauge sites will also be evaluated. Intervals of precipitation are analyzed to see how each scheme handles light, moderate, and heavy rainfall events. Finally, case studies describe how each scheme estimates particular rainfall events, including land-falling tropical cyclones. In summary, this paper will describe which procedure compares best with the NCDC independent gauges, and whether the OneRain and MPE products can be used interchangeably.

  18. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  19. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  20. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
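    A toy version of the shift-and-mask search described above: it looks for a (shift, mask) pair under which every key hash maps to a distinct value, so a later membership test needs no collision handling. Offsets and rotating masks are omitted, and the key list is hypothetical.

```python
def find_shift_mask(keys, max_shift=32, max_width=16):
    """Search for a (shift, mask) pair that maps every key to a distinct value.
    Returns the first pair found, or None if none exists within the limits."""
    for shift in range(max_shift):
        for width in range(1, max_width + 1):
            mask = (1 << width) - 1
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):     # injective on this key set: collision-free
                return shift, mask
    return None

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]      # hypothetical key hashes
print(find_shift_mask(keys))
```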

  1. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  2. Quality control by HyperSpectral Imaging (HSI) in solid waste recycling: logics, algorithms and procedures

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Serranti, Silvia

    2014-03-01

    In the secondary raw materials and recycling sectors, product quality increasingly represents the key issue to pursue in order to be competitive in an ever more demanding market, where quality standards and product certification play a preeminent role. These goals assume particular importance when recycling actions are applied. Recovered products, resulting from the processing of waste materials and/or dismissed products, are in fact always viewed with a certain suspicion. An adequate response of the industry to the market can only be given through the utilization of equipment and procedures ensuring pure, high-quality production, and efficient work and cost. All these goals can be reached by adopting not only more efficient equipment and layouts, but also new processing logics able to realize full control of the handled material flow streams while fulfilling, at the same time, i) easy management of the procedures, ii) efficient use of energy, iii) the definition and set-up of reliable and robust procedures, iv) the possibility to implement network connectivity capabilities aimed at remote monitoring and control of the processes, and v) full data storage, analysis and retrieval. Furthermore, ongoing legislation and regulation require the implementation of recycling infrastructure characterised by high resource efficiency and low environmental impact, both aspects being strongly linked to the original characteristics of the waste materials and/or dismissed products. For these reasons, an optimal recycling infrastructure design primarily requires full knowledge of the characteristics of the input waste. What has been outlined above requires the introduction of a new important concept to apply in solid waste recycling, the recycling-oriented characterization, that is, the set of actions addressed to strategically determine selected attributes in order to get goal-oriented data on waste for the development, implementation or improvement of recycling

  3. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for the use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  4. Evaluation of feedback-reduction algorithms for hearing aids.

    PubMed

    Greenberg, J E; Zurek, P M; Brantley, M

    2000-11-01

    Three adaptive feedback-reduction algorithms were implemented in a laboratory-based digital hearing aid system and evaluated with dynamic feedback paths and hearing-impaired subjects. The evaluation included measurements of maximum stable gain and subjective quality ratings. The continuously adapting CNN algorithm (Closed-loop processing with No probe Noise) provided the best performance: 8.5 dB of added stable gain (ASG) relative to a reference algorithm averaged over all subjects, ears, and vent conditions. Two intermittently adapting algorithms, ONO (Open-loop with Noise when Oscillation detected) and ONQ (Open-loop with Noise when Quiet detected), provided an average of 5 dB of ASG. Subjects with more severe hearing losses received greater benefits: 13 dB average ASG for the CNN algorithm and 7-8 dB average ASG for the ONO and ONQ algorithms. These values are conservative estimates of ASG because the fitting procedure produced a frequency-gain characteristic that already included precautions against feedback. Speech quality ratings showed no substantial algorithm effect on pleasantness or intelligibility, although subjects informally expressed strong objections to the probe noise used by the ONO and ONQ algorithms. This objection was not reflected in the speech quality ratings because of limitations of the experimental procedure. The results clearly indicate that the CNN algorithm is the most promising choice for adaptive feedback reduction in hearing aids. PMID:11108377

  5. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
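
    As a rough illustration of the masking-array idea described above, the sketch below freezes selected genes at baseline values so they drop out of the decision space during mutation. The population layout, mutation operator, and all names are assumptions made for illustration, not the implementation used in the report.

```python
import numpy as np

# Hypothetical sketch of a masking array in a real-coded GA: genes whose mask
# entry is False are pinned to baseline values and so are removed from the search.
rng = np.random.default_rng(0)

def masked_mutation(population, mask, baseline, rate=0.1, scale=0.05):
    """Perturb only the genes enabled by `mask`; masked-out genes stay fixed."""
    pop = population.copy()
    perturb = (rng.random(pop.shape) < rate) & mask
    pop = np.where(perturb, pop + rng.normal(0.0, scale, pop.shape), pop)
    pop[:, ~mask] = baseline[~mask]          # frozen genes keep their baseline values
    return pop

pop = rng.random((8, 5))                                  # 8 candidate designs, 5 genes
mask = np.array([True, True, False, True, False])         # genes 2 and 4 excluded
pop = masked_mutation(pop, mask, baseline=np.full(5, 0.5))
```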

  6. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  7. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467
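
    A memetic algorithm of this kind couples a population-based global search with local refinement of individuals. The sketch below shows only the generic synergy (one-point crossover followed by hill climbing on each child) for an arbitrary fitness function; it does not implement modularity density or the Meme-Net operators, and all parameter choices are illustrative assumptions.

```python
import random

def hill_climb(x, fitness, step=0.1, iters=20):
    """Local search: keep random perturbations of x that improve fitness."""
    best, best_f = x, fitness(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best

def memetic_search(fitness, dim, pop_size=20, generations=50):
    """Genetic algorithm with hill climbing applied to every offspring."""
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, dim)                 # one-point crossover
            children.append(hill_climb(a[:cut] + b[cut:], fitness))
        pop = elite + children
    return max(pop, key=fitness)

best = memetic_search(lambda x: -sum(v * v for v in x), dim=4)
```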

  8. An optimized procedure for determining incremental heat rate characteristics

    SciTech Connect

    Noyola, A.H.; Grady, W.M. ); Viviani, G.L. )

    1990-05-01

    This paper describes an optimized procedure for producing generator incremental heat rate curves from continually sampled unit performance data. A generalized reduced gradient algorithm is applied to optimally locate break points in incremental heat rate curves. The advantages include the ability to automatically take into consideration slow time-varying effects such as unit aging and temperature variations in combustion air and cooling water. The procedure is tested using actual fuel rate data for four generators.
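
    The break-point idea can be pictured with a much simpler device than the generalized reduced gradient method used in the paper: the sketch below exhaustively searches a single break point that minimizes the combined least-squares error of two linear segments. The data layout and all names are assumptions for illustration only.

```python
import numpy as np

def segment_sse(x, y):
    """Sum of squared residuals of a straight-line least-squares fit."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def locate_break_point(load, ihr, min_pts=3):
    """Pick the break point giving the best two-segment piecewise-linear fit."""
    best_sse, best_load = np.inf, None
    for k in range(min_pts, len(load) - min_pts):
        sse = segment_sse(load[:k], ihr[:k]) + segment_sse(load[k:], ihr[k:])
        if sse < best_sse:
            best_sse, best_load = sse, load[k]
    return best_load
```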

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  10. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  11. Science to practice: what do molecular biologic studies in rodent models add to our understanding of interventional oncologic procedures including percutaneous ablation by using glyceraldehyde-3-phosphate dehydrogenase antagonists?

    PubMed

    Goldberg, S Nahum

    2012-03-01

    In this basic research study, Ganapathy-Kanniappan et al advance our understanding of how to block the glycolytic pathway to inhibit tumor progression by using image guided procedures (1). This was accomplished by demonstrating their ability to perform molecular targeting of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in human hepatocellular carcinoma (HCC) by using percutaneous injection of either inhibitor--3-bromopyruvate (3-BrPA) or short hairpin RNA (shRNA). They take the critical step of providing further rationale for potentially advancing this therapy into clinical trials by demonstrating that GAPDH expression strongly correlates with c-jun, a proto-oncogene involved in liver tumorigenesis in human HCC (2). PMID:22357877

  12. Algorithmic Procedure for Finding Semantically Related Journals.

    ERIC Educational Resources Information Center

    Pudovkin, Alexander I.; Garfield, Eugene

    2002-01-01

    Using citations, papers and references as parameters a relatedness factor (RF) is computed for a series of journals. Sorting these journals by the RF produces a list of journals most closely related to a specified starting journal. The method appears to select a set of journals that are semantically most similar to the target journal. The…

  13. Adamantyl-group containing mixed-mode acrylamide-based continuous beds for capillary electrochromatography. Part I: study of a synthesis procedure including solubilization of N-adamantyl-acrylamide via complex formation with a water-soluble cyclodextrin.

    PubMed

    Al-Massaedh, Ayat Allah; Pyell, Ute

    2013-04-19

    A new synthesis procedure for highly crosslinked macroporous amphiphilic N-adamantyl-functionalized mixed-mode acrylamide-based monolithic stationary phases for capillary electrochromatography (CEC) is investigated employing solubilization of the hydrophobic monomer by complexation with a cyclodextrin. N-(1-adamantyl)acrylamide is synthesized and characterized as a hydrophobic monomer forming a water-soluble inclusion complex with statistically methylated-β-cyclodextrin. The stoichiometry, the complex formation constant and the spatial arrangement of the formed complex are determined. Mixed-mode monolithic stationary phases are synthesized by in situ free radical copolymerization of cyclodextrin-solubilized N-adamantyl acrylamide, a water-soluble crosslinker (piperazinediacrylamide), a hydrophilic monomer (methacrylamide), and a negatively charged monomer (vinylsulfonic acid) in aqueous medium in bind-silane-pretreated fused silica capillaries. The synthesized monolithic stationary phases are amphiphilic and can be employed in the reversed- and in the normal-phase mode (depending on the composition of the mobile phase), which is demonstrated with polar and non-polar analytes. Observations made with polar analytes and polar mobile phase can only be explained by a mixed-mode retention mechanism. The influence of the total monomer concentration (%T) on the chromatographic properties, the electroosmotic mobility, and on the specific permeability is investigated. With a homologous series of alkylphenones it is confirmed that the hydrophobicity (methylene selectivity) of the stationary phase increases with increasing mass fraction of N-(1-adamantyl)acrylamide in the synthesis mixture. PMID:23489493

  14. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  15. Simultaneous bilateral hip replacement reveals superior outcome and fewer complications than two-stage procedures: a prospective study including 1819 patients and 5801 follow-ups from a total joint replacement registry

    PubMed Central

    2010-01-01

    Background Total joint replacements represent a considerable part of day-to-day orthopaedic routine and a substantial proportion of patients undergoing unilateral total hip arthroplasty require a contralateral treatment after the first operation. This report compares complications and functional outcome of simultaneous versus early and delayed two-stage bilateral THA over a five-year follow-up period. Methods The study is a post hoc analysis of prospectively collected data in the framework of the European IDES hip registry. The database query resulted in 1819 patients with 5801 follow-ups treated with bilateral THA between 1965 and 2002. According to the timing of the two operations the sample was divided into three groups: I) 247 patients with simultaneous bilateral THA, II) 737 patients with two-stage bilateral THA within six months, III) 835 patients with two-stage bilateral THA between six months and five years. Results Whereas postoperative hip pain and flexion did not differ between the groups, the best walking capacity was observed in group I and the worst in group III. The rate of intraoperative complications in the first group was comparable to that of the second. The frequency of postoperative local and systemic complications in group I was the lowest of the three groups. The highest rate of complications was observed in group III. Conclusions From the point of view of possible intra- and postoperative complications, one-stage bilateral THA is equally safe or safer than two-stage interventions. Additionally, from an outcome perspective the one-stage procedure can be considered to be advantageous. PMID:20973941

  16. Public Sector Impasse Procedures.

    ERIC Educational Resources Information Center

    Vadakin, James C.

    The subject of collective bargaining negotiation impasse procedures in the public sector, which includes public school systems, is a broad one. In this speech, the author introduces the various procedures, explains how they are used, and lists their advantages and disadvantages. Procedures discussed are mediation, fact-finding, arbitration,…

  17. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
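
    The basic single-channel veto algorithm underlying these variants can be sketched in a few lines: trial scales are generated from an invertible overestimate g(t) >= f(t) and accepted with probability f(t)/g(t), which reproduces the Sudakov-type no-emission probability. The constant overestimate and all names below are assumptions made for illustration, not any particular parton-shower implementation.

```python
import math
import random

def next_emission(f, c, t_start, t_cutoff):
    """
    Veto algorithm with a constant overestimate g(t) = c >= f(t): returns the
    next emission scale below t_start, or None if the evolution reaches
    t_cutoff without an accepted emission.
    """
    t = t_start
    while True:
        t = t + math.log(random.random()) / c   # invert the overestimate's integral
        if t <= t_cutoff:
            return None                         # no emission above the cutoff
        if random.random() < f(t) / c:          # accept with probability f/g
            return t

# e.g. a toy kernel bounded by c = 2.0 on the evolution range of interest
scale = next_emission(lambda t: 1.0 / (1.0 + t), c=2.0, t_start=10.0, t_cutoff=1.0)
```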

  18. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  19. Pyroshock prediction procedures

    NASA Astrophysics Data System (ADS)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  20. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  1. A direct element resequencing procedure

    NASA Technical Reports Server (NTRS)

    Akin, J. E.; Fulford, R. E.

    1978-01-01

    Element-by-element frontal solution algorithms are utilized in many of the existing finite element codes. The overall computational efficiency of this type of procedure is directly related to the element data input sequence. Thus, it is important to have a pre-processor which will resequence these data so as to reduce the element wavefronts to be encountered in the solution algorithm. A direct element resequencing algorithm is detailed for reducing element wavefronts. It also generates computational by-products that can be utilized in pre-front calculations and in various post-processors. Sample problems are presented and compared with other algorithms.

  2. 34 CFR 303.15 - Include; including.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 34 (Education), Part 303, Section 303.15 — Include; including. Regulations of the Offices of the Department of Education (Continued), Office of Special Education and Rehabilitative Services, Department of Education, Early Intervention Program for Infants and Toddlers…

  3. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms constitute a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  4. FOHI-D: An iterative Hirshfeld procedure including atomic dipoles

    SciTech Connect

    Geldof, D.; Blockhuys, F.; Van Alsenoy, C.; Krishtal, A.

    2014-04-14

    In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed.

  5. FOHI-D: an iterative Hirshfeld procedure including atomic dipoles.

    PubMed

    Geldof, D; Krishtal, A; Blockhuys, F; Van Alsenoy, C

    2014-04-14

    In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed. PMID:24735285

  6. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  7. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It will be demonstrated how the mathematics of this updated OSC algorithm were derived from the previous version and why some OSC versions may not be as appropriate to use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746

  8. Procedural Quantum Programming

    NASA Astrophysics Data System (ADS)

    Ömer, Bernhard

    2002-09-01

    While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no difference between a hydrogen atom and a quantum computer. This paper investigates, how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adopted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.

  9. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
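
    For reference, the half-interval (bisection) search mentioned above can be sketched as follows; this is not the program listing referenced in the article, and the example function, interval, and tolerance are illustrative assumptions.

```python
def half_interval_search(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve [a, b], keeping the half containing a sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm                     # root lies in [a, m]
        else:
            a, fa = m, fm                     # root lies in [m, b]
    return 0.5 * (a + b)

root = half_interval_search(lambda x: x**3 - 2*x - 5, 2.0, 3.0)   # ~2.0946
```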

  10. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodics in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
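
    The signal model above is easy to make concrete; the sketch below generates a synthetic locally periodic signal of exactly that form, with an illustrative (assumed) choice of amplitude envelope A, warping h, periodic waveform F, and noise N.

```python
import numpy as np

# Synthetic example of s(t) = A(t) * F(h(t)) + N(t) on the interval I = [0, 10]
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 4000)
A = 1.0 + 0.3 * np.sin(0.2 * np.pi * t)            # non-negative, slowly varying
h = t + 0.05 * t**2                                 # strictly increasing warp
F = lambda u: np.sign(np.sin(2.0 * np.pi * u))      # 1-periodic square waveform
N = 0.2 * rng.standard_normal(t.size)               # background activity
s = A * F(h) + N
```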

  11. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
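
    A minimal sketch of the two conversions described above is given below, assuming a simple linear mixing of ice and open-water emissivities by ice concentration and a Rayleigh-Jeans-style ratio between brightness temperature and physical temperature. The function names and numerical values are assumptions for illustration, not the operational algorithm.

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    """Mix ice and open-water emissivities linearly by ice concentration (0..1)."""
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def emissivity_from_tb(tb, t_surface):
    """Convert a brightness temperature to emissivity given a surface temperature."""
    return tb / t_surface

# illustrative values: 80% ice cover at 6 GHz, then an 18 GHz channel converted
e_eff = effective_emissivity(0.8, e_ice=0.92, e_water=0.55)
e_18 = emissivity_from_tb(tb=235.0, t_surface=255.0)
```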

  12. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  13. [Examination procedures].

    PubMed

    Vassault, A; Arnaud, J; Szymanovicz, A

    2010-12-01

    Examination procedures have to be written for each examination according to the standard requirements. When CE-marked devices are used, the manufacturers' technical inserts can serve this purpose, but because of their lack of homogeneity it may be easier to document their use as a standard procedure. The document control policy applies to those procedures, the content of which could be as proposed in this document. Electronic manuals can be used as well. PMID:21613016

  14. Enhanced decomposition algorithm for multistage stochastic hydroelectric scheduling. Technical report

    SciTech Connect

    Morton, D.P.

    1994-01-01

    Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Keywords: stochastic programming, hydroelectric scheduling, large-scale systems.

  15. Science Safety Procedure Handbook.

    ERIC Educational Resources Information Center

    Lynch, Mervyn A.; Offet, Lorna

    This booklet outlines general safety procedures in the areas of: (1) student supervision; (2) storage safety regulations, including lists of incompatible chemicals, techniques of disposal and storage; (3) fire; and (4) first aid. Specific sections exist for elementary, junior high school, senior high school, in which special procedures are…

  16. Basic Planning Procedures.

    ERIC Educational Resources Information Center

    Nevada State Dept. of Education, Carson City.

    The procedure described herein entails the use of an educational planning consultant, statements of educational and service problems to be solved by proposed construction, a site plan, and architect selection. Also included in the outline of procedures is a tentative statement of specifications, tentative cost estimates and matrices for conducting…

  17. The E-MS Algorithm: Model Selection with Incomplete Data

    PubMed Central

    Jiang, Jiming; Nguyen, Thuan; Rao, J. Sunil

    2014-01-01

    We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains. PMID:26783375
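
    A schematic and heavily simplified reading of the E-MS idea is sketched below: the missing entries are completed under the current model and parameters (E-step), and the model is then re-selected by minimizing a GIC-type criterion on the completed data (MS-step). The callbacks fit, impute, and gic are hypothetical placeholders, and this is not the authors' implementation.

```python
import numpy as np

def e_ms(y, miss, models, fit, impute, gic, max_iter=50):
    """
    Schematic E-MS loop.  `y` holds the data (missing entries arbitrary), `miss`
    is a boolean mask, `models` is a finite candidate list, and `fit`, `impute`,
    and `gic` are user-supplied placeholder callbacks.
    """
    model = models[0]
    theta = fit(model, np.where(miss, np.nanmean(y[~miss]), y))       # crude start
    for _ in range(max_iter):
        y_full = np.where(miss, impute(model, theta, y), y)           # E-step
        scored = [(gic(m, fit(m, y_full), y_full), i) for i, m in enumerate(models)]
        new_model = models[min(scored)[1]]                            # MS-step
        new_theta = fit(new_model, y_full)
        if new_model == model and np.allclose(new_theta, theta):
            break
        model, theta = new_model, new_theta
    return model, theta
```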

  18. A Monotonically Convergent Algorithm for FACTALS.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1993-01-01

    A new procedure is proposed for handling nominal variables in the analysis of variables of mixed measurement levels, and a procedure is developed for handling ordinal variables. Using these procedures, a monotonically convergent algorithm is constructed for the FACTALS method for any mixture of variables. (SLD)

  19. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors than the existing algorithms.
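
    For context, a single standard affine projection update (without the grouping and selection steps proposed in the paper) looks as follows; the step size, regularization constant, and variable names are assumptions.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, eps=1e-6):
    """
    One affine projection step.  X is (L, K) with the K most recent length-L
    input vectors as columns, d is the corresponding K-vector of desired
    outputs, and eps regularizes the K x K correlation matrix.
    """
    e = d - X.T @ w                                        # a-priori error vector
    w_new = w + mu * X @ np.linalg.solve(X.T @ X + eps * np.eye(X.shape[1]), e)
    return w_new
```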

  20. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.

  1. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
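
    Of the figures of merit mentioned, the Matthews Correlation Coefficient has a simple closed form for a 2-category confusion matrix, sketched below; the convention of returning 0 for a degenerate denominator is an assumption.

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

# e.g. 80 true detections, 90 true rejections, 10 false alarms, 20 misses
score = matthews_corrcoef(tp=80, tn=90, fp=10, fn=20)
```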

  2. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  3. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  4. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  5. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
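
    The nonlinear gain idea can be pictured as a per-degree-of-freedom polynomial scaling of the input; since the exact form of the polynomial used in the program is not given here, the sketch below simply assumes a general third-order polynomial with coefficients to be tuned for each degree of freedom.

```python
def nonlinear_gain(x, c0=0.0, c1=1.0, c2=0.0, c3=0.2):
    """Assumed third-order polynomial scaling of one degree-of-freedom input x."""
    return c0 + c1 * x + c2 * x**2 + c3 * x**3

# one coefficient set per degree of freedom, tuned for the desired piloted response
surge_cmd = nonlinear_gain(0.3, c1=0.8, c3=0.15)
```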

  6. A parallel algorithm for the non-symmetric eigenvalue problem

    SciTech Connect

    Dongarra, J.; Sidani, M. . Dept. of Computer Science Oak Ridge National Lab., TN )

    1991-12-01

    This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.

  7. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  8. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, G K

    2000-05-01

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively

  9. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  10. Pump apparatus including deconsolidator

    DOEpatents

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  11. Dental Procedures.

    PubMed

    Ramponi, Denise R

    2016-01-01

    Dental problems are a common complaint in emergency departments in the United States. There are a wide variety of dental issues addressed in emergency department visits such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner to perform these dental procedures. PMID:27482994

  12. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  13. Optical modulator including graphene

    DOEpatents

    Liu, Ming; Yin, Xiaobo; Zhang, Xiang

    2016-06-07

    The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

  14. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  15. Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.

    PubMed

    Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J

    2015-06-01

    Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. PMID:25782189
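
    Of the four algorithms compared, phase correlation is the simplest to sketch: the normalized cross-power spectrum of two images has a phase that encodes their relative translation, and its inverse FFT peaks at that shift. The sketch below recovers an integer 2D translation only and is not the registration pipeline evaluated in the study; all names are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (row, col) translation of `moved` relative to `ref`."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = F_mov * np.conj(F_ref)
    cross /= np.maximum(np.abs(cross), 1e-12)          # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond the midpoint around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation_shift(img, shifted))           # approximately (5, -3)
```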

  16. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.

  17. Traffic Noise Ground Attenuation Algorithm Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, Lloyd Allen

    The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee in order to both study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strengths and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers which resulted in an evaluation of this standard. The entire 7.2 mile length of I-440 was modeled using STAMINA 2.0. A multiple run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm was incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas the STAMINA with ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers

  18. Nursing Procedures. NAVMED P-5066.

    ERIC Educational Resources Information Center

    Bureau of Medicine and Surgery (Navy), Washington, DC.

    The revised manual of nursing procedures covers fundamental nursing care, admission and discharge of the patient, assisting with therapeutic measures, pre- and postoperative care, diagnostic tests and procedures, and isolation technique. Each of the over 300 topics includes the purpose, equipment, and procedure to be used and, where relevant, such…

  19. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    NASA Astrophysics Data System (ADS)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  20. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
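
    As a concrete point of reference for the O(n^2) "classical" algorithms surveyed here, the sketch below implements a textbook Levinson recursion for a symmetric positive definite Toeplitz system T x = b, where t holds the first row of T. It is the standard serial formulation, not one of the parallel or inner-product-free variants discussed in the paper.

      # Textbook Levinson recursion for a symmetric positive definite Toeplitz
      # system T x = b, where t = [t0, t1, ..., t_{n-1}] is the first row of T.
      # O(n^2) operations instead of O(n^3) for a general dense solve.
      import numpy as np

      def levinson_spd_toeplitz(t, b):
          t = np.asarray(t, dtype=float)
          b = np.asarray(b, dtype=float) / t[0]
          n = len(t)
          r = t[1:] / t[0]                      # normalized off-diagonal entries
          x = np.zeros(n)
          x[0] = b[0]
          if n == 1:
              return x
          y = np.zeros(n - 1)                   # Durbin "backward" vector
          y[0] = -r[0]
          beta, alpha = 1.0, -r[0]
          for k in range(1, n):
              beta *= (1.0 - alpha * alpha)
              mu = (b[k] - np.dot(r[:k], x[k - 1::-1])) / beta
              x[:k] += mu * y[k - 1::-1]
              x[k] = mu
              if k < n - 1:
                  alpha = -(r[k] + np.dot(r[:k], y[k - 1::-1])) / beta
                  y[:k] += alpha * y[k - 1::-1]
                  y[k] = alpha
          return x

      # Example: T = [[4,2,1],[2,4,2],[1,2,4]], b = [1,2,3]  ->  [0, 1/6, 2/3]
      print(levinson_spd_toeplitz([4.0, 2.0, 1.0], [1.0, 2.0, 3.0]))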

  1. On-line learning algorithms for locally recurrent neural networks.

    PubMed

    Campolucci, P; Uncini, A; Piazza, F; Rao, B D

    1999-01-01

    This paper focuses on on-line learning procedures for locally recurrent neural networks with emphasis on multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations which include generalized output and activation feedback multilayer networks (MLN's). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), Back-Tsoi algorithm among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of the CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less that a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performances and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time while RTRL is not local in space. PMID:18252525

  2. A Generalization of Takane's Algorithm for DEDICOM.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1990-01-01

    An algorithm is described for fitting the DEDICOM model (proposed by R. A. Harshman in 1978) for the analysis of asymmetric data matrices. The method modifies a procedure proposed by Y. Takane (1985) to provide guaranteed monotonic convergence. The algorithm is based on a technique known as majorization. (SLD)

  3. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  4. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date includes initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  5. Procedural simulation.

    PubMed

    Patel, Aalpen A; Glaiberman, Craig; Gould, Derek A

    2007-06-01

    In the past few decades, medicine has started to look at the potential use of simulators in medical education. Procedural medicine lends itself well to the use of simulators. Efforts are under way to establish national agendas to change the way medical education is approached and thereby improve patient safety. Universities, credentialing organizations, and hospitals are investing large sums of money to build and use simulation centers for undergraduate and graduate medical education. PMID:17574195

  6. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed and a specific algorithm applicable to typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
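
    The record does not reproduce the algorithm itself. As a purely illustrative sketch, published double-dosimetry algorithms typically estimate effective dose as a weighted sum of the under-apron and over-apron dosimeter readings; the weights below (0.5 and 0.025, in the style of one widely cited formulation) and the example readings are assumptions for illustration, not the coefficients obtained in this work.

      # Hypothetical illustration of the double-dosimetry idea: effective dose is
      # estimated as a weighted sum of the under-apron and over-apron readings.
      # The default weights follow one widely cited formulation
      # (E ~= 0.5*H_under + 0.025*H_over); they are NOT the coefficients
      # optimized in the work summarized above.
      def effective_dose(h_under_mSv, h_over_mSv, w_under=0.5, w_over=0.025):
          """Return an effective-dose estimate in mSv from two dosimeter readings."""
          return w_under * h_under_mSv + w_over * h_over_mSv

      print(effective_dose(1.2, 8.0))   # e.g. 1.2 mSv under apron, 8.0 mSv at collar -> 0.8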

  7. 34 CFR 303.170 - Procedural safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... process procedures in 34 CFR 300.506 through 300.512; or (2) The procedures that the State has developed... 34 Education 2 2011-07-01 2010-07-01 true Procedural safeguards. 303.170 Section 303.170 Education... Procedural safeguards. Each application must include procedural safeguards that— (a) Are consistent...

  8. 34 CFR 303.170 - Procedural safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...— (1) The due process procedures in 34 CFR 300.506 through 300.512; or (2) The procedures that the... 34 Education 2 2010-07-01 2010-07-01 false Procedural safeguards. 303.170 Section 303.170... Requirements § 303.170 Procedural safeguards. Each application must include procedural safeguards that— (a)...

  9. Minimally invasive procedures

    PubMed Central

    Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos

    2015-01-01

    Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) fewer days in the hospital; (III) less scarring; and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610

  10. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
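
    Activity selection, one of the examples mentioned above, has the classic greedy solution of repeatedly keeping the compatible activity that finishes earliest. The sketch below is the standard textbook greedy, shown only to make the example concrete; it is not the dominance-relation derivation developed in the paper.

      # Classic greedy activity selection: sort by finish time and keep each
      # activity whose start is not before the finish of the last one kept.
      def select_activities(activities):
          """activities: iterable of (start, finish) pairs; returns a maximum-size
          subset of pairwise-compatible activities."""
          chosen = []
          last_finish = float("-inf")
          for start, finish in sorted(activities, key=lambda a: a[1]):
              if start >= last_finish:
                  chosen.append((start, finish))
                  last_finish = finish
          return chosen

      print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
      # -> [(1, 4), (5, 7), (8, 11)]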

  11. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
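
    For context, the Midpoint Algorithm evaluated here belongs to the family of fault-tolerant midpoint corrections: each clock collects skew estimates for the other clocks, discards the f largest and f smallest, and adjusts by the midpoint of the surviving extremes. The sketch below is that generic textbook rule, not the experimental implementation used in the study.

      # Generic fault-tolerant midpoint correction: given skew readings of the other
      # clocks relative to the local clock, drop the f largest and f smallest values
      # and return the midpoint of the surviving extremes as the clock adjustment.
      def midpoint_correction(relative_skews, f):
          """relative_skews: list of (other_clock - local_clock) estimates.
          f: number of faulty clocks to tolerate."""
          s = sorted(relative_skews)
          trimmed = s[f:len(s) - f] if f > 0 else s
          if not trimmed:
              raise ValueError("not enough readings to tolerate f faults")
          return (trimmed[0] + trimmed[-1]) / 2.0

      print(midpoint_correction([-3.0, -1.0, 0.5, 2.0, 40.0], f=1))  # midpoint of [-1.0, 2.0] -> 0.5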

  12. A subzone reconstruction algorithm for efficient staggered compatible remapping

    SciTech Connect

    Starinshak, D.P. Owen, J.M.

    2015-09-01

    Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
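
    The reconstruction step described above amounts, per remapped quantity, to a small equality-constrained quadratic program. The sketch below illustrates that general form (minimize squared deviation from reference values subject to linear conservation constraints) with a dense KKT solve and hypothetical numbers; the paper's actual objective weighting and solver may differ.

      # Minimal sketch: minimize ||x - x_ref||^2 subject to A x = c by solving the
      # KKT system  [2I  A^T; A  0] [x; lam] = [2 x_ref; c].  Illustrative only.
      import numpy as np

      def constrained_reconstruction(x_ref, A, c):
          n, m = len(x_ref), len(c)
          K = np.zeros((n + m, n + m))
          K[:n, :n] = 2.0 * np.eye(n)
          K[:n, n:] = A.T
          K[n:, :n] = A
          rhs = np.concatenate([2.0 * x_ref, c])
          return np.linalg.solve(K, rhs)[:n]     # reconstructed subzonal values

      # Toy example: four subzonal values with reference targets, constrained so the
      # first two and the last two sum to given zonal totals (hypothetical numbers).
      x_ref = np.array([1.0, 1.2, 0.8, 1.1])
      A = np.array([[1.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 1.0]])
      c = np.array([2.0, 2.1])
      print(constrained_reconstruction(x_ref, A, c))   # -> [0.9, 1.1, 0.9, 1.2]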

  13. 47 CFR 65.820 - Included items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...

  14. 47 CFR 65.820 - Included items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...

  15. 47 CFR 65.820 - Included items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...

  16. 47 CFR 65.820 - Included items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Included items. 65.820 Section 65.820 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items....

  17. 47 CFR 65.820 - Included items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Included items. 65.820 Section 65.820 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items....

  18. Search properties of some sequential decoding algorithms.

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1973-01-01

    Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.
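
    The stack algorithm referred to above keeps an ordered list of partially explored paths and always extends the one with the best accumulated metric. The sketch below shows only that search skeleton with a placeholder branch metric; a real sequential decoder would use the Fano metric of the particular code and channel.

      # Skeleton of a stack (best-first) tree search: repeatedly pop the path with
      # the best cumulative metric and extend it by one branch.  branch_metric() is
      # a placeholder; in sequential decoding it would be the Fano metric.
      import heapq

      def stack_decode(branch_metric, branching, depth):
          """branch_metric(path, symbol) -> metric increment (higher is better).
          Returns the first full-depth path reached and its metric."""
          heap = [(0.0, ())]                       # (-metric, path); heapq is a min-heap
          while heap:
              neg_metric, path = heapq.heappop(heap)
              if len(path) == depth:
                  return path, -neg_metric
              for symbol in range(branching):
                  new_path = path + (symbol,)
                  inc = branch_metric(new_path, symbol)
                  heapq.heappush(heap, (neg_metric - inc, new_path))

      # Toy usage with a hypothetical metric that slightly prefers symbol 1.
      print(stack_decode(lambda path, s: 1.0 if s == 1 else 0.9, branching=2, depth=4))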

  19. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.

  20. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.

  1. Quarantine document system indexing procedure

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Quarantine Document System (QDS) is described including the indexing procedures and thesaurus of indexing terms. The QDS consists of these functional elements: acquisition, cataloging, indexing, storage, and retrieval. A complete listing of the collection, and the thesaurus are included.

  2. Iterative reconstruction algorithm for analyzer-based phase-contrast computed tomography of hard and soft tissue

    NASA Astrophysics Data System (ADS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Ando, Masami

    2013-09-01

    We propose a reconstruction algorithm for analyzer-based phase-contrast computed tomography (CT) applicable to biological samples including hard tissue that may generate conspicuous artifacts with the conventional reconstruction method. The algorithm is an iterative procedure that goes back and forth between a tomogram and its sinogram through the Radon transform and CT reconstruction, while imposing a priori information in individual regions. We demonstrate the efficacy of the algorithm using synthetic data generated by computer simulation reflecting actual experimental conditions and actual data acquired from a rat foot by a dark field imaging system.
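
    A generic sketch of the alternating structure described above is shown below: forward-project the current tomogram, correct it against the measured sinogram, and impose a priori bounds in the image domain. scikit-image's radon/iradon are used here as stand-ins for the paper's analyzer-based forward model, and the simple clipping bounds are a placeholder for the region-wise a priori information.

      # Generic iterative reconstruction skeleton: alternate between the sinogram
      # domain (Radon transform / filtered back-projection) and the image domain,
      # where a priori constraints (here simple value bounds) are imposed.
      import numpy as np
      from skimage.transform import radon, iradon

      def iterative_reconstruction(sinogram, theta, lower, upper, n_iter=20):
          """sinogram: measured projections, one column per angle in theta (degrees)."""
          image = iradon(sinogram, theta=theta)              # initial estimate
          for _ in range(n_iter):
              image = np.clip(image, lower, upper)           # a priori bounds
              residual = sinogram - radon(image, theta=theta)
              image = image + iradon(residual, theta=theta)  # correct toward measured data
          return image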

  3. Procedural knowledge

    NASA Technical Reports Server (NTRS)

    Georgeff, Michael P.; Lansky, Amy L.

    1986-01-01

    Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.

  4. Parliamentary Procedure Made Easy.

    ERIC Educational Resources Information Center

    Hayden, Ellen T.

    Based on the newly revised "Robert's Rules of Order," these self-contained learning activities will help students successfully and actively participate in school, social, civic, political, or professional organizations. There are 13 lessons. Topics studied include the what, why, and history of parliamentary procedure; characteristics of the ideal…

  5. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  6. Procedures and Policies Manual

    ERIC Educational Resources Information Center

    Davis, Jane M.

    2006-01-01

    This document was developed by the Middle Tennessee State University James E. Walker Library Collection Management Department to provide policies and procedural guidelines for the cataloging and processing of bibliographic materials. This document includes policies for cataloging monographs, serials, government documents, machine-readable data…

  7. Costing imaging procedures.

    PubMed

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241

  8. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  9. A Pressure Based Multi-Fluid Algorithm for Multiphase Flow

    NASA Astrophysics Data System (ADS)

    Ming, P. J.; Zhang, W. P.; Lei, G. D.; Zhu, M. G.

    A new finite volume-based numerical algorithm for predicting multiphase flow phenomena is presented. The method is formulated on an orthogonal coordinate system in collocated primitive variables. The SIMPLE-like algorithms are based on a prediction and correction procedure, and they are extended to all speed ranges. The object of the present work is to extend the single-phase SIMPLE algorithm to multiphase flow. An overview of the algorithm is given and relevant numerical issues are discussed extensively, including implicit treatment of the interphase momentum interaction with “partial elimination” of the drag term, introduction of under-relaxation factors, formulation of the momentum interpolation, and the pressure correction equation. The turbulence model is a k-ɛ model in which the turbulence is assumed to be dictated by the continuous phase. Thus only the transport equation for the continuous-phase turbulence energy kc needs to be solved, while an algebraic turbulence model is used for the dispersed phase. The present authors also implemented the new algorithm in a general FORTRAN 90 program based on the in-house code General Transport Equation Analyzer (GTEA). The performance of the new method is assessed by solving a 3D bubbly two-phase flow in a vertical pipe. Good agreement is achieved between the numerical results and experimental data in the literature.

  10. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  11. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  12. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  13. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  14. Efficient estimation algorithms for a satellite-aided search and rescue mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Garza-Robles, R.

    1977-01-01

    It has been suggested to establish a search and rescue orbiting satellite system as a means for locating distress signals from downed aircraft, small boats, and overland expeditions. Emissions from Emergency Locator Transmitters (ELTs), now available in most U.S. aircraft, are to be utilized in the positioning procedure. A description is presented of a set of Doppler navigation algorithms for extracting ELT position coordinates from Doppler data. The algorithms have been programmed for a small computing machine and the resulting system has successfully processed both real and simulated Doppler data. A software system for solving the Doppler navigation problem must include an orbit propagator, a first-guess algorithm, and an algorithm for estimating longitude and latitude from Doppler data. Each of these components is considered.

  15. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  16. Environmental Test Screening Procedure

    NASA Technical Reports Server (NTRS)

    Zeidler, Janet

    2000-01-01

    This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.

  17. Antialiasing procedural shaders with reduction maps.

    PubMed

    Van Horn, R Brooks; Turk, Greg

    2008-01-01

    Both image textures and procedural textures suffer from minification aliasing; however, unlike image textures, there is no good automatic method to anti-alias procedural textures. Given a procedural texture on a surface, we present a method that automatically creates an anti-aliased version of the procedural texture. The new procedural texture maintains the original texture's details, but reduces minification aliasing artifacts. This new algorithm creates a pyramid similar to MIP-Maps to represent the texture. Instead of storing per-texel color, our texture hierarchy stores weighted sums of reflectance functions, allowing a wider range of effects to be anti-aliased. The stored reflectance functions are automatically selected based on an analysis of the different reflectances found over the surface. When the texture is viewed at close range, the original texture is used, but as the texture footprint grows, the algorithm gradually replaces the texture's result with an anti-aliased one. PMID:18369263

  18. Laboratory test interpretations and algorithms in utilization management.

    PubMed

    Van Cott, Elizabeth M

    2014-01-01

    Appropriate assimilation of laboratory test results into patient care is enhanced when pathologist interpretations of the laboratory tests are provided for clinicians, and when reflex algorithm testing is utilized. Benefits of algorithms and interpretations include avoidance of misdiagnoses, reducing the number of laboratory tests needed, reducing the number of procedures, transfusions and admissions, shortening the amount of time needed to reach a diagnosis, reducing errors in test ordering, and providing additional information about how the laboratory results might affect other aspects of a patient's care. Providing interpretations can be challenging for pathologists, therefore mechanisms to facilitate the successful implementation of an interpretation service are described. These include algorithm-based testing and interpretation, optimizing laboratory requisitions and/or order-entry systems, proficiency testing programs that assess interpretations and provide constructive feedback, utilization of a collection of interpretive sentences or paragraphs that can be building blocks ("coded comments") for constructing preliminary interpretations, middleware, and pathology resident participation and education. In conclusion, the combination of algorithms and interpretations for laboratory testing has multiple benefits for the medical care for the patient. PMID:24080245

  19. Fast algorithms for combustion kinetics calculations: A comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes EPISODE and LSODE and three special-purpose (for chemical kinetic calculations) codes CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. Algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release and equilibration. Variations of the temperature and species mole fraction are given with time for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.
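
    The kind of stiff initial-value problem these codes integrate can be reproduced today with an off-the-shelf stiff solver. The sketch below integrates the classic Robertson kinetics problem over 1 ms with SciPy's LSODA method (a descendant of the LSODE lineage compared in the report); the rate constants are the standard textbook values, not data from the report.

      # Robertson's classic stiff kinetics problem integrated with a stiff solver
      # (LSODA, a descendant of the LSODE lineage compared in the report).
      from scipy.integrate import solve_ivp

      def robertson(t, y):
          y1, y2, y3 = y
          return [-0.04 * y1 + 1.0e4 * y2 * y3,
                  0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 * y2,
                  3.0e7 * y2 * y2]

      sol = solve_ivp(robertson, (0.0, 1.0e-3), [1.0, 0.0, 0.0],
                      method="LSODA", rtol=1e-8, atol=1e-10)
      print(sol.y[:, -1])   # species fractions at t = 1 ms, cf. the 1 ms test interval above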

  20. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to 47 CFR 90.187(b)(2)(v)); (z) The 218-219 MHz band (part 95 of this chapter); (aa) The Local... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...

  1. 47 CFR 1.9005 - Included services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to 47 CFR 90.187(b)(2)(v)); (z) The 218-219 MHz band (part 95 of this chapter); (aa) The Local... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...

  2. Proposed first-generation WSQ bit allocation procedure

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-09-08

    The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula is provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.

  3. [Neural basis of procedural memory].

    PubMed

    Mochizuki-Kawai, Hiroko

    2008-07-01

    Procedural memory is acquired by trial and error. Our daily life is supported by a number of procedural memories such as those for riding bicycle, typing, reading words, etc. Procedural memory is divided into 3 types; motor, perceptual, and cognitive. Here, the author reviews the cognitive and neural basis of procedural memory according to these 3 types. It is reported that the basal ganglia or cerebellum dysfunction causes deficits in procedural memory. Compared with age-matched healthy participants, patients with Parkinson disease (PD), Huntington disease (HD) or spinocerebellar degeneration (SCD) show deterioration in improvements in motor-type procedural memory tasks. Previous neuroimaging studies have reported that motor-type procedural memory may be supported by multiple brain regions, including the frontal and parietal regions as well as the basal ganglia (cerebellum); this was found with a serial reaction time task (SRT task). Although 2 other types of procedural memory are also maintained by multiple brain regions, the related cerebral areas depend on the type of memory. For example, it was suggested that acquisition of the perceptual type of procedural memory (e.g., ability to read mirror images of words) might be maintained by the bilateral fusiform region, while the acquisition of cognitive procedural memory might be supported by the frontal, parietal, or cerebellar regions as well as the basal ganglia. In the future, we need to clearly understand the neural "network" related to procedural memory. PMID:18646622

  4. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward "conceptual" algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting "conceptual" algorithms falling into the class defined by the abstract models into "implementable" iterative procedures is presented.

  5. An ROLAP Aggregation Algorithm with the Rules Being Specified

    NASA Astrophysics Data System (ADS)

    Zhengqiu, Weng; Tai, Kuang; Lina, Zhang

    This paper introduces the basic theory of data warehousing and ROLAP, and presents a new kind of ROLAP aggregation algorithm that incorporates calculation rules. It addresses the low accuracy of the traditional aggregation algorithm, which aggregates only by addition. The ROLAP aggregation with a calculation algorithm, which can aggregate according to business rules, improves accuracy. Key designs and procedures are presented. Compared with the traditional method, its efficiency is demonstrated in an experiment.

  6. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
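
    For reference, the "usual predictive stochastic simulation approach" that the retrodictive algorithm complements is Gillespie's direct method. The sketch below runs that forward algorithm for a single irreversible reaction A -> B with a hypothetical rate constant and molecule count; it is background context, not the retrodictive algorithm itself.

      # Minimal Gillespie (direct-method) stochastic simulation of A -> B with rate
      # constant k: sample the waiting time from an exponential with the total
      # propensity, then fire the reaction.  Forward/predictive SSA only.
      import random

      def ssa_a_to_b(n_a, k, t_end):
          t, n_b = 0.0, 0
          trajectory = [(t, n_a, n_b)]
          while n_a > 0:
              propensity = k * n_a
              t += random.expovariate(propensity)
              if t > t_end:
                  break
              n_a, n_b = n_a - 1, n_b + 1
              trajectory.append((t, n_a, n_b))
          return trajectory

      print(ssa_a_to_b(n_a=100, k=0.5, t_end=5.0)[-1])   # final (time, A, B) state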

  7. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
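
    The Pareto-optimal front that these runs converge to can be extracted from a set of candidate objective vectors with a simple dominance check. The sketch below is that generic filter (assuming every objective is minimized); it is independent of the binning selection and gene-space transformation features specific to this work.

      # Generic Pareto filter for a multi-objective GA population (minimization):
      # a point is kept if no other point is at least as good in every objective
      # and strictly better in at least one.
      def pareto_front(points):
          def dominates(a, b):
              return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
          return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

      print(pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))
      # -> [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]; (3.0, 3.0) is dominated by (2.0, 2.0)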

  8. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.

  9. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least square regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
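
    A minimal sketch of the fusion step under the setup described above uses scikit-learn's PLSRegression on a matrix of per-pair similarity scores (one column per algorithm or human rater), with a 0/1 same-person label as the target. The data below are synthetic placeholders, not the Face Recognition Grand Challenge scores, and the jackknife validation step is omitted.

      # Sketch of score-level fusion with partial least squares regression: each row
      # holds the similarity scores several scorers assigned to one face pair; the
      # target is 1 for "same person", 0 otherwise.  Synthetic stand-in data.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n_pairs, n_scorers = 200, 7
      labels = rng.integers(0, 2, n_pairs)                      # 1 = same person
      scores = labels[:, None] + rng.normal(0.0, 0.8, (n_pairs, n_scorers))

      pls = PLSRegression(n_components=3)
      pls.fit(scores, labels.astype(float))
      fused = pls.predict(scores).ravel()                       # fused similarity score
      print("fusion accuracy:", np.mean((fused > 0.5) == labels))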

  10. A Procedure for Morphological Analysis.

    ERIC Educational Resources Information Center

    Chapin, Paul G.; Norton, Lewis M.

    A procedure, designated "MORPH," has been developed for the automatic morphological analysis of complex English words. Each word is reduced to a stem in canonical or dictionary form, plus affixes, inflectional and derivational, represented as morphemes or as syntactic features of the stem. The procedure includes the task of analyzing as many…

  11. Ultrasound-Guided Hip Procedures.

    PubMed

    Payne, Jeffrey M

    2016-08-01

    This article describes the techniques for performing ultrasound-guided procedures in the hip region, including intra-articular hip injection, iliopsoas bursa injection, greater trochanter bursa injection, ischial bursa injection, and piriformis muscle injection. The common indications, pitfalls, accuracy, and efficacy of these procedures are also addressed. PMID:27468669

  12. Evaluation of Mechanical Losses in Piezoelectric Plates using Genetic algorithm

    NASA Astrophysics Data System (ADS)

    Arnold, F. J.; Gonçalves, M. S.; Massaro, F. R.; Martins, P. S.

    Numerical methods are used for the characterization of piezoelectric ceramics. A procedure based on a genetic algorithm is applied to find the physical coefficients and mechanical losses. The coefficients are estimated by minimizing a cost function. Electric impedances are calculated from Mason's model with mechanical losses that are either constant or frequency-dependent (varying linearly with frequency). The results show that the percentage error in the electric impedance over the investigated frequency interval decreases when frequency-dependent mechanical losses are included in the model. For a more accurate characterization of piezoelectric ceramics, the mechanical losses should therefore be treated as frequency dependent.

  13. Optimal Design of Geodetic Network Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Bagheri, Hosein

    2010-05-01

    A geodetic network is a network that is measured precisely by terrestrial surveying techniques based on angle and distance measurements; it can be used to control the stability of dams, towers and the surrounding land, and to monitor surface deformation. The main goals of an optimal geodetic network design process include finding proper locations for the control stations (first-order design) as well as proper weights for the observations (second-order design), in a way that satisfies all the criteria considered for network quality, which is evaluated by the network's accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem can be dealt with as a numerical optimization problem. In this design, finding the unknown coordinates of the network stations is an important issue. To find these unknown values, the geodetic observations of the network, i.e., angle and distance measurements, must be entered into an adjustment method. In this regard, inverse-problem algorithms are needed. Inverse-problem algorithms are methods for finding optimal solutions to given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful for finding the optimum of a continuous and differentiable function. The least squares (LS) method is one of the classical techniques that derive estimates for stochastic variables and their distribution parameters from observed samples. Evolutionary algorithms are adaptive optimization and search procedures that find solutions to problems, inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points, statistically moving toward more optimal regions of the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. This algorithm starts with the definition of an initial population, and then the operators of selection, replication and variation are applied.

  14. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  15. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  16. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places. New version program summary. Program title: CHANDRAS_v2 Catalogue identifier: AEMC_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 976 No. of bytes in distributed program, including test data, etc.: 11416 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a Fortran 90 compiler Operating system: Windows 7, Windows XP, Unix/Linux RAM: 0.7 MB Classification: 2.4, 7.2 Catalogue identifier of previous version: AEMC_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773 Does the new version supersede the old program: Yes Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is edited by mixing these two…
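
    For orientation only: for isotropic scattering the Chandrasekhar H-function satisfies the classical nonlinear integral equation H(x) = 1 + (omega/2) x H(x) Integral_0^1 H(u)/(x+u) du, and a naive fixed-point iteration on a quadrature grid already yields usable values. The sketch below illustrates that computation; the published Fortran 90 code uses different integral representations that are far faster and reach much higher accuracy.

      # Naive fixed-point iteration for Chandrasekhar's H-function with isotropic
      # scattering, H(x) = 1 + (omega/2)*x*H(x)*Int_0^1 H(u)/(x+u) du, evaluated on
      # a Gauss-Legendre grid.  Purely illustrative, not the published algorithm.
      import numpy as np

      def chandrasekhar_h(x, omega, n_quad=64, n_iter=200):
          u, w = np.polynomial.legendre.leggauss(n_quad)
          u = 0.5 * (u + 1.0)          # map nodes from [-1, 1] to [0, 1]
          w = 0.5 * w
          h = np.ones(n_quad)          # H on the quadrature grid
          for _ in range(n_iter):
              integral = np.array([np.sum(w * h / (ui + u)) for ui in u])
              # Fixed point rearranged as H = 1 / (1 - (omega/2) x * integral)
              h = 1.0 / (1.0 - 0.5 * omega * u * integral)
          integral_x = np.sum(w * h / (x + u))
          return 1.0 / (1.0 - 0.5 * omega * x * integral_x)

      print(chandrasekhar_h(0.5, 0.9))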

  17. Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms

    SciTech Connect

    Roychowdhury, V.P.; Kailath, T.

    1989-01-01

    The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs; non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree, we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.

  18. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
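
    The projection-based solver itself is not shown in the abstract. As a minimal illustration of projection-type optimization of a fluence map, the sketch below minimizes a quadratic dose objective over nonnegative beamlet weights with projected gradient descent; the dose-influence matrix A and prescription d are synthetic stand-ins, not clinical data.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((40, 20))     # synthetic dose-influence matrix (voxels x beamlets)
        d = rng.random(40)           # synthetic prescribed dose per voxel

        def projected_gradient(A, d, n_iter=2000):
            """Minimize ||A x - d||^2 subject to x >= 0 by projected gradient descent."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm of A
            for _ in range(n_iter):
                grad = A.T @ (A @ x - d)             # gradient of the (halved) quadratic objective
                x = np.maximum(x - step * grad, 0.0) # gradient step, then projection onto x >= 0
            return x

        x = projected_gradient(A, d)
        print("objective:", np.sum((A @ x - d) ** 2), " min weight:", x.min())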

  19. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    PubMed Central

    Chen, Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK’s interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization. PMID:20964213

  20. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  1. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    SciTech Connect

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fun- damental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for ac- celerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
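
    As a reference point for the discussion above, here is a minimal sketch of the exact SSA (Gillespie's direct method) for an illustrative two-reaction system; the species, rate constants, and stoichiometry are invented for the example and are not from the project.

        import numpy as np

        def gillespie_ssa(x0, stoich, propensity, t_end, rng=None):
            """Gillespie direct method: simulate every reaction event exactly."""
            rng = rng or np.random.default_rng()
            t, x = 0.0, np.array(x0, dtype=float)
            history = [(t, x.copy())]
            while t < t_end:
                a = propensity(x)                  # propensity of each reaction channel
                a0 = a.sum()
                if a0 <= 0:
                    break                          # no further reactions possible
                t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
                j = rng.choice(len(a), p=a / a0)   # which reaction fires
                x += stoich[j]
                history.append((t, x.copy()))
            return history

        # Illustrative system: S1 -> S2 with rate c1*S1, S2 -> 0 with rate c2*S2.
        c1, c2 = 1.0, 0.5
        stoich = np.array([[-1, 1], [0, -1]])
        propensity = lambda x: np.array([c1 * x[0], c2 * x[1]])
        trajectory = gillespie_ssa([100, 0], stoich, propensity, t_end=10.0)
        print("reaction events simulated:", len(trajectory) - 1)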

  2. Solar Eclipse Monitoring for Solar Energy Applications Using the Solar and Moon Position Algorithms

    SciTech Connect

    Reda, I.

    2010-03-01

    This report includes a procedure for implementing an algorithm (described by Jean Meeus) to calculate the Moon's zenith angle with uncertainty of +/-0.001 degrees and azimuth angle with uncertainty of +/-0.003 degrees. The step-by-step format presented here simplifies the complicated steps Meeus describes to calculate the Moon's position, and focuses on the Moon instead of the planets and stars. It also introduces some changes to accommodate solar radiation applications.
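
    The Meeus series themselves are too long to reproduce here. The sketch below shows only the final step such a procedure ends with, stated as an assumption rather than the report's algorithm: converting a body's declination and local hour angle, together with the observer's latitude, into zenith and azimuth angles (no refraction, parallax, or uncertainty analysis).

        import math

        def zenith_azimuth(decl_deg, hour_angle_deg, latitude_deg):
            """Topocentric zenith and azimuth (degrees, azimuth measured east of north)
            from declination, local hour angle, and observer latitude."""
            d, h, lat = map(math.radians, (decl_deg, hour_angle_deg, latitude_deg))
            sin_alt = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
            zenith = 90.0 - math.degrees(math.asin(sin_alt))
            az = math.atan2(-math.sin(h),
                            math.tan(d) * math.cos(lat) - math.sin(lat) * math.cos(h))
            return zenith, math.degrees(az) % 360.0

        print(zenith_azimuth(decl_deg=10.0, hour_angle_deg=30.0, latitude_deg=40.0))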

  3. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  4. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
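
    The combination rules named above are easy to state concretely. The sketch below gives the conjunction of two assertions with probabilities p and q under four of the listed assumptions: statistical independence, mutual exclusivity, maximum overlap within the state space (the fuzzy-logic minimum), and minimum overlap (the pessimistic, worst-case bound); the function names are illustrative, not the paper's.

        def and_independent(p, q):
            """P(A and B) when A and B are statistically independent."""
            return p * q

        def and_mutually_exclusive(p, q):
            """P(A and B) when A and B are mutually exclusive (cannot both hold)."""
            return 0.0

        def and_max_overlap(p, q):
            """Maximum possible overlap within the state space (fuzzy-logic AND)."""
            return min(p, q)

        def and_min_overlap(p, q):
            """Minimum possible overlap (pessimistic, worst-case analysis)."""
            return max(0.0, p + q - 1.0)

        p, q = 0.6, 0.3
        for rule in (and_independent, and_mutually_exclusive, and_max_overlap, and_min_overlap):
            print(rule.__name__, rule(p, q))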

  5. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.
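
    The substructure formulation with interface degrees of freedom and constraint modes is beyond a short example, but the core idea of fitting a viscous-damped model to measured frequency-response data can be sketched for a single-degree-of-freedom system: since 1/H(w) = k - m w^2 + i c w is linear in (m, c, k), the parameters follow from one complex least-squares solve. The data below are synthetic.

        import numpy as np

        # Synthetic "measured" FRF of a single-degree-of-freedom system plus noise.
        m_true, c_true, k_true = 2.0, 0.3, 50.0
        w = np.linspace(0.5, 10.0, 200)
        rng = np.random.default_rng(1)
        noise = rng.normal(scale=1e-4, size=w.size) + 1j * rng.normal(scale=1e-4, size=w.size)
        H = 1.0 / (k_true - m_true * w**2 + 1j * c_true * w) + noise

        # 1/H(w) = k - m*w^2 + i*c*w is linear in (m, c, k): one complex least-squares solve.
        A = np.column_stack([-w**2, 1j * w, np.ones_like(w)])
        coef, *_ = np.linalg.lstsq(A, 1.0 / H, rcond=None)
        m_est, c_est, k_est = coef.real
        print("identified m, c, k:", m_est, c_est, k_est)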

  6. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.

  7. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
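
    LOCA itself targets large-scale systems; the sketch below is only a scalar illustration of the basic idea behind parameter continuation: step the parameter, then re-converge the nonlinear equation with Newton's method starting from the previous solution.

        def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
            """Newton's method for a scalar nonlinear equation f(x) = 0."""
            x = x0
            for _ in range(max_iter):
                step = f(x) / dfdx(x)
                x -= step
                if abs(step) < tol:
                    break
            return x

        # Track a solution branch of f(x, lam) = x**3 - x - lam = 0 as lam increases.
        f = lambda x, lam: x**3 - x - lam
        dfdx = lambda x, lam: 3 * x**2 - 1

        x = 1.0                                  # known solution at lam = 0 on the upper branch
        for lam in [0.1 * k for k in range(1, 11)]:
            # warm start each Newton solve from the previous converged solution
            x = newton(lambda v: f(v, lam), lambda v: dfdx(v, lam), x)
            print("lambda = %.1f  x = %.6f" % (lam, x))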

  8. Non-intrusive parameter identification procedure user's guide

    NASA Technical Reports Server (NTRS)

    Hanson, G. D.; Jewell, W. F.

    1983-01-01

    Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
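
    The NAS software itself is not reproduced; the sketch below only illustrates the sliding-window idea with made-up regressors: within each window, the unknown coefficients of an estimation equation that is linear in those coefficients are found by least squares, giving a moving estimate over the time history.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 500)
        u = np.sin(t)                                              # input time history
        y = 2.0 * u + 0.5 * u**2 + 0.02 * rng.normal(size=t.size)  # output, linear in the unknowns

        def sliding_window_fit(u, y, window):
            """Least-squares estimates of [a, b] in y ~ a*u + b*u**2 over a sliding window."""
            estimates = []
            for start in range(0, len(u) - window + 1):
                sl = slice(start, start + window)
                A = np.column_stack([u[sl], u[sl]**2])
                coef, *_ = np.linalg.lstsq(A, y[sl], rcond=None)
                estimates.append(coef)
            return np.array(estimates)

        est = sliding_window_fit(u, y, window=100)
        print("first window:", est[0], " last window:", est[-1])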

  9. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  10. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
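
    The trust-score and linkfarm-detection steps are not reproduced here; for reference, the sketch below is only the underlying HITS iteration on a small directed graph given as an adjacency matrix.

        import numpy as np

        def hits(adj, n_iter=100):
            """Basic HITS: hub and authority scores by power iteration on the adjacency matrix."""
            n = adj.shape[0]
            hubs, auths = np.ones(n), np.ones(n)
            for _ in range(n_iter):
                auths = adj.T @ hubs            # good authorities are pointed to by good hubs
                auths /= np.linalg.norm(auths)
                hubs = adj @ auths              # good hubs point to good authorities
                hubs /= np.linalg.norm(hubs)
            return hubs, auths

        # Tiny web graph: adj[i, j] = 1 if page i links to page j.
        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        hubs, auths = hits(adj)
        print("hub scores:      ", hubs.round(3))
        print("authority scores:", auths.round(3))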

  11. A Short Survey of Document Structure Similarity Algorithms

    SciTech Connect

    Buttler, D

    2004-02-27

    This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
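
    As a small illustration of the shingle technique applied to document structure (without the weighting or hashing details of the paper), the sketch below reduces each page to its start-tag sequence, forms k-length shingles, and scores similarity as the Jaccard coefficient of the shingle sets.

        from html.parser import HTMLParser

        class TagSequence(HTMLParser):
            """Collect the sequence of start tags, ignoring text content."""
            def __init__(self):
                super().__init__()
                self.tags = []
            def handle_starttag(self, tag, attrs):
                self.tags.append(tag)

        def structural_shingles(html, k=3):
            parser = TagSequence()
            parser.feed(html)
            tags = parser.tags
            return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

        def shingle_similarity(html_a, html_b, k=3):
            a, b = structural_shingles(html_a, k), structural_shingles(html_b, k)
            if not a or not b:
                return 0.0
            return len(a & b) / len(a | b)      # Jaccard coefficient of the shingle sets

        page1 = "<html><body><div><p>x</p><p>y</p></div></body></html>"
        page2 = "<html><body><div><p>x</p><span>y</span></div></body></html>"
        print(shingle_similarity(page1, page2))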

  12. An algorithmic approach for clinical management of chronic spinal pain.

    PubMed

    Manchikanti, Laxmaiah; Helm, Standiford; Singh, Vijay; Benyamin, Ramsin M; Datta, Sukdeb; Hayek, Salim M; Fellows, Bert; Boswell, Mark V

    2009-01-01

    Interventional pain management, and the interventional techniques which are an integral part of that specialty, are subject to widely varying definitions and practices. How interventional techniques are applied by various specialties is highly variable, even for the most common procedures and conditions. At the same time, many payors, publications, and guidelines are showing increasing interest in the performance and costs of interventional techniques. There is a lack of consensus among interventional pain management specialists with regards to how to diagnose and manage spinal pain and the type and frequency of spinal interventional techniques which should be utilized to treat spinal pain. Therefore, an algorithmic approach is proposed, providing a step-by-step procedure for managing chronic spinal pain patients based upon evidence-based guidelines. The algorithmic approach is developed based on the best available evidence regarding the epidemiology of various identifiable sources of chronic spinal pain. Such an approach to spinal pain includes an appropriate history, examination, and medical decision making in the management of low back pain, neck pain and thoracic pain. This algorithm also provides diagnostic and therapeutic approaches to clinical management utilizing case examples of cervical, lumbar, and thoracic spinal pain. An algorithm for investigating chronic low back pain without disc herniation commences with a clinical question, examination and imaging findings. If there is evidence of radiculitis, spinal stenosis, or other demonstrable causes resulting in radiculitis, one may proceed with diagnostic or therapeutic epidural injections. In the algorithmic approach, facet joints are entertained first in the algorithm because of their commonality as a source of chronic low back pain followed by sacroiliac joint blocks if indicated and provocation discography as the last step. Based on the literature, in the United States, in patients without disc

  13. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    PubMed

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes. PMID:26799412
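
    A sketch of how the two algorithms could be expressed against an admissions table, together with the PPV calculation against a chart-review gold standard, is given below; the column names and records are hypothetical, not the study's data.

        import pandas as pd

        # Hypothetical admission-level data: one row per first-time admission.
        admissions = pd.DataFrame({
            "dx_codes":   [["348.4", "741.0"], ["348.4"], ["723.1"], ["348.4"]],
            "primary_dx": ["348.4", "348.4", "723.1", "741.0"],
            "proc_codes": [["01.24"], ["03.09"], ["01.24"], ["01.24"]],
            "true_cm1_decompression": [True, True, False, False],  # chart-review gold standard
        })

        DECOMPRESSION = {"01.24", "03.09"}

        alg1 = admissions[
            admissions["dx_codes"].apply(lambda codes: "348.4" in codes)
            & admissions["proc_codes"].apply(lambda codes: bool(DECOMPRESSION & set(codes)))
        ]
        alg2 = alg1[alg1["primary_dx"] == "348.4"]   # restrict to a primary diagnosis of CM-I

        for name, hits in (("Algorithm 1", alg1), ("Algorithm 2", alg2)):
            ppv = hits["true_cm1_decompression"].mean()
            print(name, "identified", len(hits), "admissions, PPV =", round(ppv, 2))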

  14. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  15. Detecting multiple periodicities in observational data with the multifrequency periodogram—II. Frequency Decomposer, a parallelized time-series analysis algorithm

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-11-01

    This is a parallelized algorithm performing a decomposition of a noisy time series into a number of sinusoidal components. The algorithm analyses all suspicious periodicities that can be revealed, including the ones that look like an alias or noise at a glance, but later may prove to be a real variation. After the selection of the initial candidates, the algorithm performs a complete pass through all their possible combinations and computes the rigorous multifrequency statistical significance for each such frequency tuple. The largest combinations that still survived this thresholding procedure represent the outcome of the analysis.
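
    The rigorous multifrequency significance computation is not reproduced; the sketch below shows only the basic decomposition loop assumed for unevenly sampled data: locate the strongest periodogram peak, fit a sinusoid at that frequency by linear least squares, subtract it, and repeat.

        import numpy as np
        from scipy.signal import lombscargle

        def decompose(t, y, freqs, n_components=2):
            """Iteratively extract sinusoidal components from an unevenly sampled series."""
            residual = y - y.mean()
            components = []
            for _ in range(n_components):
                power = lombscargle(t, residual, freqs)
                w = freqs[np.argmax(power)]                 # strongest remaining periodicity (rad/s)
                A = np.column_stack([np.sin(w * t), np.cos(w * t)])
                coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
                components.append((w, coef))
                residual = residual - A @ coef              # prewhiten and continue
            return components, residual

        rng = np.random.default_rng(3)
        t = np.sort(rng.uniform(0, 50, 300))                # uneven sampling times
        y = 1.0 * np.sin(1.7 * t) + 0.6 * np.sin(0.9 * t + 0.4) + 0.1 * rng.normal(size=t.size)
        freqs = np.linspace(0.1, 3.0, 2000)
        for w, coef in decompose(t, y, freqs)[0]:
            print("angular frequency %.3f, amplitude %.3f" % (w, np.hypot(*coef)))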

  16. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
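
    The preprocessor itself is problem-specific and not reproduced; the sketch below is only a minimal real-coded GA for unconstrained minimization, with the population size, crossover and mutation probabilities exposed as exactly the kind of parameters such a preprocessor would have to choose.

        import numpy as np

        def genetic_algorithm(fitness, bounds, pop_size=40, p_cross=0.9, p_mut=0.1,
                              n_gen=200, rng=None):
            """Minimize `fitness` over a box `bounds` with a simple real-coded GA."""
            rng = rng or np.random.default_rng(0)
            low, high = np.array(bounds, dtype=float).T
            pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
            for _ in range(n_gen):
                scores = np.apply_along_axis(fitness, 1, pop)
                # tournament selection of parents
                idx = rng.integers(pop_size, size=(pop_size, 2))
                parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
                # arithmetic crossover with a random mate
                mates = parents[rng.permutation(pop_size)]
                alpha = rng.uniform(size=(pop_size, 1))
                children = np.where(rng.uniform(size=(pop_size, 1)) < p_cross,
                                    alpha * parents + (1 - alpha) * mates, parents)
                # Gaussian mutation, clipped to the search space
                mutate = rng.uniform(size=children.shape) < p_mut
                children = np.clip(children + mutate * rng.normal(scale=0.1, size=children.shape),
                                   low, high)
                children[0] = pop[np.argmin(scores)]     # elitism: keep the current best
                pop = children
            scores = np.apply_along_axis(fitness, 1, pop)
            return pop[np.argmin(scores)], scores.min()

        sphere = lambda x: float(np.sum(x ** 2))
        best, value = genetic_algorithm(sphere, bounds=[(-5, 5)] * 3)
        print(best, value)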

  17. Promoting Understanding of Linear Equations with the Median-Slope Algorithm

    ERIC Educational Resources Information Center

    Edwards, Michael Todd

    2005-01-01

    The preliminary findings obtained when an invented algorithm is used with entry-level students to introduce linear equations are described. Because the calculations are accessible, the algorithm is preferable to more rigorous statistical procedures in entry-level classrooms.
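
    The abstract does not spell the algorithm out; one common median-slope construction, used here purely as an assumed stand-in, takes the slope as the median of all pairwise slopes (the Theil-Sen line) and the intercept as the median residual.

        from itertools import combinations
        from statistics import median

        def median_slope_line(xs, ys):
            """Fit y = m*x + b with m the median of all pairwise slopes."""
            slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
                      for i, j in combinations(range(len(xs)), 2)
                      if xs[j] != xs[i]]
            m = median(slopes)
            b = median(y - m * x for x, y in zip(xs, ys))
            return m, b

        xs = [1, 2, 3, 4, 5, 6]
        ys = [2.1, 4.0, 5.9, 8.2, 9.8, 30.0]       # last point is an outlier
        print(median_slope_line(xs, ys))            # slope stays near 2 despite the outlier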

  18. Jet-calculus approach including coherence effects

    SciTech Connect

    Jones, L.M.; Migneron, R.; Narayanan, K.S.S.

    1987-01-01

    We show how integrodifferential equations typical of jet calculus can be combined with an averaging procedure to obtain jet-calculus-based results including the Mueller interference graphs. Results in longitudinal-momentum fraction x for physical quantities are higher at intermediate x and lower at large x than with the conventional ''incoherent'' jet calculus. These results resemble those of Marchesini and Webber, who used a Monte Carlo approach based on the same dynamics.

  19. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
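
    The recursive-branching structure is not reproduced here; for comparison, the sketch below is a minimal conventional SA loop of the kind the abstract describes, with a shrinking proposal radius and a geometric cooling schedule (both schedules are illustrative choices, not those of the innovation).

        import math
        import random

        def simulated_annealing(objective, x0, radius=1.0, temp=1.0,
                                cooling=0.995, n_iter=5000, rng=None):
            rng = rng or random.Random(0)
            x, fx = list(x0), objective(x0)
            best, fbest = list(x), fx
            for _ in range(n_iter):
                # propose a neighbour inside the current search radius
                cand = [xi + rng.uniform(-radius, radius) for xi in x]
                fc = objective(cand)
                # accept improvements always, worse moves with Boltzmann probability
                if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
                temp *= cooling       # lower the annealing temperature
                radius *= cooling     # shrink the region from which configurations are drawn
            return best, fbest

        rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
        print(simulated_annealing(rosenbrock, [-1.5, 2.0]))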

  20. The value of care algorithms.

    PubMed

    Myers, Timothy

    2006-09-01

    The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065

  1. Filtering algorithm for dotted interferences

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.

    2011-09-01

    An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a major challenge particularly encountered with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc. all hitting the detector CCD directly in spite of a sophisticated shielding. This makes such images rather useless for further direct evaluations. One approach to resolving this problem of random effects would be to collect a vast number of single images, to combine them appropriately and to process them with common image filtering procedures. However, it has been shown that, e.g. median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp lined structures. This inevitably makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be by far too tedious to treat each single projection in this way. Alternatively, it would be not only more convenient but also in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It consists of an iterative filtering algorithm, parameter-free within a batch procedure, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
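
    The NECTAR filter is iterative and parameter-free; the sketch below shows only the simpler idea assumed to underlie such a filter: pixels that deviate strongly from their local median are treated as dotted interference and replaced by that median, while every other pixel is left untouched, so legitimate sharp structures are not blurred the way a plain median filter would blur them.

        import numpy as np
        from scipy.ndimage import median_filter

        def remove_dotted_interference(image, size=3, threshold=5.0):
            """Replace only outlier pixels by the local median; keep all other pixels unchanged."""
            med = median_filter(image, size=size)
            deviation = image - med
            sigma = 1.4826 * np.median(np.abs(deviation))   # robust scale estimate (MAD)
            outliers = np.abs(deviation) > threshold * sigma
            cleaned = image.copy()
            cleaned[outliers] = med[outliers]
            return cleaned, outliers

        # Synthetic radiograph: smooth gradient plus noise plus random bright "snow" spots.
        rng = np.random.default_rng(4)
        img = np.tile(np.linspace(0, 1, 200), (200, 1)) + 0.01 * rng.normal(size=(200, 200))
        spots = rng.uniform(size=img.shape) < 0.01
        img[spots] += rng.uniform(2, 5, size=spots.sum())
        cleaned, outliers = remove_dotted_interference(img)
        print("flagged pixels:", int(outliers.sum()), "injected spots:", int(spots.sum()))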

  2. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate equipped with a long focus objective and an 850 nm infrared light emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.

  3. Does videothoracoscopy improve clinical outcomes when implemented as part of a pleural empyema treatment algorithm?

    PubMed Central

    Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli

    2012-01-01

    OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This study performed quality-improvement research. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation even though more severely ill patients were subjected to videothoracoscopic surgery. PMID:22760892

  4. A spreadsheet algorithm for stagewise solvent extraction

    SciTech Connect

    Leonard, R.A.; Regalbuto, M.C.

    1993-01-01

    Part of the novelty is the way in which the problem is organized in the spreadsheet. In addition, to facilitate spreadsheet setup, a new calculational procedure has been developed. The resulting Spreadsheet Algorithm for Stagewise Solvent Extraction (SASSE) can be used with either IBM or Macintosh personal computers as a simple yet powerful tool for analyzing solvent extraction flowsheets.

  5. Interventional radiology neck procedures.

    PubMed

    Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L

    2016-05-01

    Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation. PMID:27138033

  6. Practical pearls for oral procedures.

    PubMed

    Davari, Parastoo; Fazel, Nasim

    2016-01-01

    We provide an overview of clinically relevant principles of oral surgical procedures required in the workup and management of oral mucosal diseases. An understanding of the fundamental concepts of how to perform safely and effectively minor oral procedures is important to the practicing dermatologist and can minimize the need for patient referrals. This chapter reviews the principles of minor oral procedures, including incisional, excisional, and punch biopsies, as well as minor salivary gland excision. Pre- and postoperative patient care is also discussed. PMID:27343958

  7. A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas

    SciTech Connect

    Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q

    2007-04-18

    A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.

  8. Genetic algorithms and MCML program for recovery of optical properties of homogeneous turbid media

    PubMed Central

    Morales Cruzado, Beatriz; y Montiel, Sergio Vázquez; Atencio, José Alberto Delgado

    2013-01-01

    In this paper, we present and validate a new method for recovering the optical properties of turbid media with slab geometry. This method is an iterative method that compares diffuse reflectance and transmittance, measured using integrating spheres, with those obtained using the known algorithm MCML. The search procedure is based on the evolution of a population through selection of the best individuals, i.e., using a genetic algorithm. This new method includes several corrections such as non-linear effects in integrating-sphere measurements and loss of light due to the finite size of the sample. As a potential application and proof-of-principle experiment of this new method, we use this new algorithm in the recovery of optical properties of blood samples at different degrees of coagulation. PMID:23504404

  9. A comparative study of algorithms for radar imaging from gapped data

    NASA Astrophysics Data System (ADS)

    Xu, Xiaojian; Luan, Ruixue; Jia, Li; Huang, Ying

    2007-09-01

    In ultra wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted due to various reasons, either periodically or randomly. Such interruption produces phase history data gaps, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. The higher level artifacts severely degrade the radar images. In this work, several novel techniques for artifacts suppression in gapped data imaging were discussed. These include: (1) A maximum entropy based gap filling technique using a modified Burg algorithm (MEBGFT); (2) An alternative iteration deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) A windowed coherent CLEAN algorithm; and (4) Two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. Performance of various techniques is comparatively studied.

  10. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performances. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in details, which include Householder reflector, Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for the RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new method depends
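
    The QR-decomposition-based systolic formulations discussed in the thesis are not reproduced; as a baseline for what those architectures compute, the sketch below is a standard exponentially weighted RLS update applied to identifying a short FIR system.

        import numpy as np

        class RLSFilter:
            """Exponentially weighted recursive least squares."""
            def __init__(self, order, lam=0.99, delta=100.0):
                self.lam = lam
                self.w = np.zeros(order)
                self.P = delta * np.eye(order)      # inverse correlation matrix estimate

            def update(self, a, d):
                a = np.asarray(a, dtype=float)
                Pa = self.P @ a
                k = Pa / (self.lam + a @ Pa)        # gain vector
                err = d - self.w @ a                # a priori error
                self.w = self.w + k * err
                self.P = (self.P - np.outer(k, Pa)) / self.lam
                return err

        # Identify the FIR system d_n = 0.5 x_n - 0.3 x_{n-1} + 0.1 x_{n-2} from noisy data.
        rng = np.random.default_rng(5)
        x = rng.normal(size=2000)
        true_w = np.array([0.5, -0.3, 0.1])
        rls = RLSFilter(order=3)
        for n in range(2, len(x)):
            a = x[[n, n - 1, n - 2]]
            d = true_w @ a + 0.01 * rng.normal()
            rls.update(a, d)
        print("estimated taps:", rls.w.round(3))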

  11. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  12. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  13. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  14. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
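
    The Ren et al. estimator with edge-wise p-values and confidence intervals is not reproduced here. As a rough stand-in for the same task of estimating a conditional dependence graph, the sketch below fits a sparse GGM with the graphical lasso from scikit-learn and reads edges off the precision matrix; this stand-in provides no statistical inference, and the data are synthetic.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        # Synthetic data whose true precision matrix has a simple chain structure 0-1-2-3.
        rng = np.random.default_rng(6)
        prec = np.eye(4) + np.diag([0.4] * 3, 1) + np.diag([0.4] * 3, -1)
        X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=500)

        model = GraphicalLassoCV().fit(X)
        precision = model.precision_

        edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
                 if abs(precision[i, j]) > 1e-3]
        print("estimated edges:", edges)    # expect the chain (0, 1), (1, 2), (2, 3)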

  15. SamACO: variable sampling ant colony optimization algorithm for continuous optimization.

    PubMed

    Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou

    2010-12-01

    An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to solving continuous optimization problems by focusing on continuous variable sampling as a key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409

  16. Reference Policies and Procedures Manual.

    ERIC Educational Resources Information Center

    George Mason Univ., Fairfax, VA.

    This guide to services of the reference department of Fenwick Library, George Mason University, is intended for use by staff in the department, as well as the general public. Areas covered include (1) reference desk services to users; (2) reference desk support procedures; (3) off desk services; (4) collection development, including staff…

  17. A Parallel Algorithm for the Vehicle Routing Problem

    SciTech Connect

    Groer, Christopher S; Golden, Bruce; Edward, Wasil

    2011-01-01

    The vehicle routing problem (VRP) is a difficult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly find high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of different scenarios.
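
    The parallel integer-programming framework is not shown in the abstract; as an example of the kind of local-search improvement step such heuristics use, the sketch below is a plain 2-opt pass on a single route, repeatedly reversing segments while the route length decreases. The coordinates are invented.

        import math

        def route_length(route, coords):
            return sum(math.dist(coords[route[i]], coords[route[(i + 1) % len(route)]])
                       for i in range(len(route)))

        def two_opt(route, coords):
            """Repeatedly reverse route segments while doing so shortens the route."""
            best = list(route)
            improved = True
            while improved:
                improved = False
                for i in range(1, len(best) - 1):
                    for j in range(i + 1, len(best)):
                        cand = best[:i] + best[i:j][::-1] + best[j:]
                        if route_length(cand, coords) < route_length(best, coords):
                            best, improved = cand, True
            return best

        coords = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 5)]    # depot plus four customers
        route = list(range(len(coords)))
        print("before:", round(route_length(route, coords), 2))
        better = two_opt(route, coords)
        print("after: ", round(route_length(better, coords), 2), better)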

  18. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  19. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean Transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the whole performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm was more efficient and accurate compared with existing algorithms. PMID:17281406

  20. 36 CFR 908.32 - Review procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...

  1. 36 CFR 908.32 - Review procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...

  2. 46 CFR 148.5 - Alternative procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...

  3. 46 CFR 148.5 - Alternative procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-5223) may authorize the use of an alternative procedure, including exemptions to the IMSBC...

  4. 46 CFR 148.5 - Alternative procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...

  5. 46 CFR 148.5 - Alternative procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...

  6. Medical Service Clinical Laboratory Procedures--Bacteriology.

    ERIC Educational Resources Information Center

    Department of the Army, Washington, DC.

    This manual presents laboratory procedures for the differentiation and identification of disease agents from clinical materials. Included are procedures for the collection of specimens, preparation of culture media, pure culture methods, cultivation of the microorganisms in natural and simulated natural environments, and procedures in…

  7. 7 CFR 15b.25 - Procedural safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 1 2011-01-01 2011-01-01 false Procedural safeguards. 15b.25 Section 15b.25... Education § 15b.25 Procedural safeguards. A recipient that provides a public elementary or secondary... related services, a system of procedural safeguards that includes notice, an opportunity for the...

  8. 45 CFR 84.36 - Procedural safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Procedural safeguards. 84.36 Section 84.36 Public... Secondary Education § 84.36 Procedural safeguards. A recipient that operates a public elementary or... need special instruction or related services, a system of procedural safeguards that includes...

  9. 34 CFR 104.36 - Procedural safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 1 2011-07-01 2011-07-01 false Procedural safeguards. 104.36 Section 104.36 Education... Preschool, Elementary, and Secondary Education § 104.36 Procedural safeguards. A recipient that operates a... procedural safeguards that includes notice, an opportunity for the parents or guardian of the person...

  10. Procedures for Peer Review of Grant Applications

    ERIC Educational Resources Information Center

    US Department of Education, 2006

    2006-01-01

    This guide presents information on the procedures for peer review of grant applications. It begins with an overview of the review process for grant application submission and review. The review process includes: (1) pre-submission procedures that enable the Institute to plan for specific review sessions; (2) application processing procedures; (3)…

  11. Squint mode SAR processing algorithms

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Jin, M.; Curlander, J. C.

    1989-01-01

    The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.

  12. Backtracking search algorithm for effective and efficient surface wave analysis

    NASA Astrophysics Data System (ADS)

    Song, Xianhai; Zhang, Xueqiang; Zhao, Sutao; Li, Lei

    2015-03-01

    Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods because of its high nonlinearity and multimodality. In this work, we propose and implement a new Rayleigh wave dispersion curve inversion scheme based on the backtracking search algorithm (BSA), a novel and powerful evolutionary algorithm (EA). BSA was developed to possess features that are desirable across different optimization problems: the ability to reach a problem's global minimum quickly and reliably, a small number of control parameters, low computational cost, robustness, and ease of application to different problem models. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the efficiency and effectiveness of BSA, four noise-free and four noisy synthetic data sets are first inverted. The performance of BSA is then compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on real surface wave data, and the performance of BSA is again compared against that of GA on the real data to further evaluate its merits. Results from both synthetic and actual data demonstrate that BSA performs well in nonlinear inversion of surface wave data, in terms of both accuracy and convergence speed. The great advantages of BSA are that the algorithm is simple, robust, and easy to implement, with few control parameters to tune.
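
    As a rough Python sketch of a BSA-style generation loop (simplified from published descriptions of the algorithm, not the authors' inversion code), the fragment below shows the historical population, mutation, crossover, and greedy selection steps; the objective function, bounds, crossover rule, and parameter values are illustrative assumptions.

        # Minimal sketch of a BSA-style evolutionary search (simplified; illustrative only).
        import numpy as np

        def bsa_minimize(f, lower, upper, pop_size=30, iterations=200, mixrate=1.0, rng=None):
            rng = rng or np.random.default_rng(0)
            dim = len(lower)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            P = rng.uniform(lower, upper, (pop_size, dim))       # current population
            oldP = rng.uniform(lower, upper, (pop_size, dim))    # historical population
            fit = np.array([f(x) for x in P])
            for _ in range(iterations):
                if rng.random() < rng.random():                  # Selection-I: maybe refresh history
                    oldP = P.copy()
                oldP = oldP[rng.permutation(pop_size)]
                F = 3.0 * rng.standard_normal()                  # scale of the search direction
                mutant = P + F * (oldP - P)                      # mutation toward/away from history
                cross = rng.random((pop_size, dim)) < (mixrate * rng.random())
                cross[np.arange(pop_size), rng.integers(dim, size=pop_size)] = True
                trial = np.where(cross, mutant, P)
                trial = np.clip(trial, lower, upper)             # boundary control
                trial_fit = np.array([f(x) for x in trial])
                better = trial_fit < fit                         # Selection-II: greedy replacement
                P[better], fit[better] = trial[better], trial_fit[better]
            best = int(np.argmin(fit))
            return P[best], fit[best]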

  13. 48 CFR 2805.503-70 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Acquisition Planning PUBLICIZING CONTRACT ACTIONS Paid Advertisements 2805.503-70 Procedures. (a) Agency... includes the names of newspapers or journals concerned, frequency and dates of proposed...

  14. An eigenvalue/eigenvector assignment algorithm using output feedback

    NASA Technical Reports Server (NTRS)

    Mielke, R. R.; Liberty, S. R.

    1983-01-01

    An eigenvalue/eigenvector assignment algorithm using stationary output feedback is presented. The algorithm permits assignment of min (n, m + r - 1) eigenvalues and max (m-1, r-1) eigenvectors, where n, m, r refer to the system state, input and output dimensions, respectively. An example is given to illustrate the design procedures.
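
    As a quick arithmetic illustration of the counting formulas above (the dimensions chosen here are arbitrary, not taken from the paper's example):

        # Worked example of the assignment counts for arbitrary dimensions n, m, r.
        n, m, r = 6, 2, 3                      # state, input, and output dimensions (illustrative)
        print(min(n, m + r - 1))               # 4 assignable eigenvalues
        print(max(m - 1, r - 1))               # 2 assignable eigenvectors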

  15. SSME structural computer program development: BOPACE theoretical manual, addendum. [algorithms

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An algorithm developed and incorporated into BOPACE for improving the convergence and accuracy of the inelastic stress-strain calculations is discussed. The implementation of separation of strains in the residual-force iterative procedure is defined. The elastic-plastic quantities used in the strain-space algorithm are defined and compared with previous quantities.

  16. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.

  17. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  18. Analysis and Evaluation of GPM Pre-launch Algorithms

    NASA Astrophysics Data System (ADS)

    Chandrasekar, Venkatachalam; Le, Minda

    2014-05-01

    The Global Precipitation Measurement (GPM) mission is the next satellite mission to obtain global precipitation measurements following the success of TRMM (Tropical Rainfall Measuring Mission). GPM will be launched on February 28, 2014. The GPM mission architecture consists of satellite instruments flying within a constellation to provide accurate precipitation measurements around the globe every 2 to 4 hours, with orbits covering latitudes up to 65 degrees. The GPM core satellite will be equipped with a dual-frequency precipitation radar (DPR) operating at Ku- (13.6 GHz) and Ka- (35.5 GHz) band. DPR aboard the GPM core satellite is expected to improve our knowledge of precipitation processes relative to the single-frequency (Ku-band) radar used in TRMM by providing greater dynamic range, more detailed information on microphysics, and better accuracy in rainfall and liquid water content retrievals. The new Ka-band channel of DPR will help improve the detection thresholds for light rain and snow relative to the TRMM PR. The dual-frequency signals will allow us to distinguish regions of liquid, frozen, and mixed-phase precipitation. The GPM-DPR level 2 pre-launch algorithms include seven modules. The classification module plays a critical role in the DPR retrieval system: its outputs determine the nature of the microphysical models and algorithms to be used in the retrievals. The classification module involves two main aspects: 1) precipitation type classification, distinguishing stratiform, convective, and other rain types; and 2) hydrometeor profile characterization, or hydrometeor phase state detection. DPR offers dual-frequency observations along the vertical profile, which provide additional information for investigating the microphysical properties using the difference in measured radar reflectivities at the two frequencies, a quantity often called the measured dual-frequency ratio (DFRm). The vertical profile
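
    For context on the quantity introduced at the end of the abstract, the measured dual-frequency ratio is conventionally the difference of the measured reflectivities (in dB) at the two frequencies; a trivial Python illustration with made-up values:

        # Measured dual-frequency ratio (conventional definition, in dB); illustrative only.
        def dfr_m(z_ku_dbz: float, z_ka_dbz: float) -> float:
            """DFRm = Zm(Ku) - Zm(Ka), with both reflectivities in dBZ."""
            return z_ku_dbz - z_ka_dbz

        print(dfr_m(30.0, 24.0))  # e.g. 6 dB; larger values typically indicate larger particles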

  19. Algorithm for genome contig assembly. Final report

    SciTech Connect

    1995-09-01

    An algorithm was developed for genome contig assembly which extended the range of data types that could be included in assembly and which ran on the order of a hundred times faster than the algorithm it replaced. Maps of all existing cosmid clone and YAC data at the Human Genome Information Resource were assembled using ICA. The resulting maps are summarized.

  20. IUS guidance algorithm gamma guide assessment

    NASA Technical Reports Server (NTRS)

    Bray, R. E.; Dauro, V. A.

    1980-01-01

    The Gamma Guidance Algorithm which controls the inertial upper stage is described. The results of an independent assessment of the algorithm's performance in satisfying the NASA missions' targeting objectives are presented. The results of a launch window analysis for a Galileo mission, and suggested improvements are included.

  1. Excursion-Set-Mediated Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.

  2. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

    … consider the way to cancel sunlight reflection. In this study, the author uses a simple linear correction to estimate infrared emission while accounting for sunlight reflection. In addition to the brand-new core of the wildfire algorithm, bright-reflectance features such as cloud, desert, and sun glint need to be eliminated, as do the false alarms at coastal areas caused by the difference in surface temperature between land and ocean. The existing MOD14 algorithm has a similar procedure; however, some of these ancillary parts are newly introduced or improved here. A snow mask is newly introduced to reduce bright reflectance over snow- and ice-covered areas, and the improved ancillary parts include candidate selection of fire pixels, the cloud mask, and the water-body mask. With these improvements, wildfires with dense smoke or under thin cloud can be detected by this algorithm. This wildfire product has not yet been validated against ground observations; however, its distribution corresponds well with known wildfire locations in the same periods. Unfortunately, like the existing algorithm, it also produces false alarms in urban areas; this should be corrected by adopting other bands. The current algorithm will run on the JASMES website.

  3. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  4. An algorithm for the detection of the white-tide ('mucilage') phenomenon in the Adriatic Sea using AVHRR data

    SciTech Connect

    Tassan, S.

    1993-06-01

    An algorithm using AVHRR data has been set up for the detection of a white tide consisting of algae secretion ('mucilage'), an event occurring in the Adriatic Sea under particular meteorological conditions. The algorithm, which includes an ad hoc procedure for cloud masking, has been tested with reference to the mucilage map obtained from the analysis of contemporary Thematic Mapper data, as well as by comparing consecutive AVHRR scenes. The main features of the exceptional mucilage phenomenon that took place in the northern basin of the Adriatic Sea in summer 1989 are shown by a time series of maps.

  5. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization; they have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found quickly and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates that genetic algorithms are well suited to the vehicle routing problem.
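
    A minimal genetic-algorithm sketch for a capacitated VRP is shown below in Python (not the scheme evaluated in the paper); it uses a permutation chromosome, order crossover, swap mutation, and a greedy capacity-based split of the permutation into routes, with all names and parameter values chosen for illustration.

        # Minimal GA sketch for a capacitated VRP (illustrative; not the paper's implementation).
        import random

        def route_cost(perm, demand, capacity, dist, depot=0):
            """Split a customer permutation into capacity-feasible routes and sum travel cost."""
            total, load, prev = 0.0, 0.0, depot
            for c in perm:
                if load + demand[c] > capacity:        # start a new route at the depot
                    total += dist[prev][depot]
                    load, prev = 0.0, depot
                total += dist[prev][c]
                load += demand[c]
                prev = c
            return total + dist[prev][depot]

        def order_crossover(p1, p2, rng):
            a, b = sorted(rng.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]
            rest = [c for c in p2 if c not in child]
            for i in range(len(p1)):
                if child[i] is None:
                    child[i] = rest.pop(0)
            return child

        def ga_vrp(customers, demand, capacity, dist, pop=50, gens=300, pmut=0.2, seed=0):
            rng = random.Random(seed)
            popn = [rng.sample(customers, len(customers)) for _ in range(pop)]
            for _ in range(gens):
                popn.sort(key=lambda p: route_cost(p, demand, capacity, dist))
                parents = popn[: pop // 2]                      # simple truncation selection
                children = []
                while len(children) < pop - len(parents):
                    c = order_crossover(rng.choice(parents), rng.choice(parents), rng)
                    if rng.random() < pmut:                     # swap mutation
                        i, j = rng.sample(range(len(c)), 2)
                        c[i], c[j] = c[j], c[i]
                    children.append(c)
                popn = parents + children
            best = min(popn, key=lambda p: route_cost(p, demand, capacity, dist))
            return best, route_cost(best, demand, capacity, dist)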

  6. A new reconstruction algorithm for Radon data

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Tischenko, O.; Hoeschen, C.

    2006-03-01

    A new reconstruction algorithm for Radon data is introduced. We call the new algorithm OPED as it is based on Orthogonal Polynomial Expansion on the Disk. OPED is fundamentally different from the filtered back projection (FBP) method. It allows one to use fan beam geometry directly without any additional procedures such as interpolation or rebinning. It reconstructs high degree polynomials exactly and works for smooth functions without the assumption that functions are band-limited. Our initial tests indicate that the algorithm is stable, provides high resolution images, and has a small global error. Working with the geometry specified by the algorithm and a new mask, OPED could also lead to a reconstruction method that works with reduced x-ray dose (see the paper by Tischenko et al in these proceedings).

  7. New journal: Algorithms for Molecular Biology.

    PubMed

    Morgenstern, Burkhard; Stadler, Peter F

    2006-01-01

    This editorial announces Algorithms for Molecular Biology, a new online open access journal published by BioMed Central. By launching the first open access journal on algorithmic bioinformatics, we provide a forum for fast publication of high-quality research articles in this rapidly evolving field. Our journal will publish thoroughly peer-reviewed papers without length limitations covering all aspects of algorithmic data analysis in computational biology. Publications in Algorithms for Molecular Biology are easy to find, highly visible and tracked by organisations such as PubMed. An established online submission system makes a fast reviewing procedure possible and enables us to publish accepted papers without delay. All articles published in our journal are permanently archived by PubMed Central and other scientific archives. We are looking forward to receiving your contributions. PMID:16722576

  8. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
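
    A minimal Python sketch of the first estimator type described (a straight-line fit of input power against AGC reading and temperature) is given below, assuming a calibration data set is available; the least-squares form, variable names, and synthetic numbers are illustrative assumptions, not the SCAN Testbed ground software.

        # Illustrative least-squares "straight line" input-power estimator from AGC and temperature.
        import numpy as np

        def fit_power_estimator(agc, temp, power_dbm):
            """Fit power ~ a*AGC + b*T + c over a calibration data set (all 1-D arrays)."""
            A = np.column_stack([agc, temp, np.ones_like(agc)])
            coeffs, *_ = np.linalg.lstsq(A, power_dbm, rcond=None)
            return coeffs                                  # (a, b, c)

        def estimate_power(coeffs, agc, temp):
            a, b, c = coeffs
            return a * agc + b * temp + c

        # Example with synthetic calibration data (placeholder values, not flight data):
        rng = np.random.default_rng(1)
        agc = rng.uniform(0, 100, 200)
        temp = rng.uniform(10, 40, 200)
        power = -110 + 0.5 * agc - 0.1 * temp + rng.normal(0, 0.2, 200)
        coeffs = fit_power_estimator(agc, temp, power)
        print(estimate_power(coeffs, 50.0, 25.0))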

  9. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053

  10. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  11. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  12. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed, starting from a common specification of the algorithm.
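
    As a hedged illustration of the full-replication technique named in the abstract (not the authors' runtime system or interface), the Python sketch below has each worker accumulate into its own private reduction object, with the copies merged at the end; the reduction shown (a frequency count) is a placeholder.

        # Sketch of "full replication": per-thread private reduction objects merged at the end.
        from concurrent.futures import ThreadPoolExecutor
        from collections import Counter

        def local_reduce(chunk):
            """Each worker builds its own reduction object (here, a frequency table)."""
            counts = Counter()
            for record in chunk:
                counts[record] += 1           # e.g., counting itemset or cluster assignments
            return counts

        def parallel_count(data, workers=4):
            chunks = [data[i::workers] for i in range(workers)]
            with ThreadPoolExecutor(max_workers=workers) as ex:
                partials = list(ex.map(local_reduce, chunks))
            total = Counter()
            for p in partials:                # merge phase: no locking needed during accumulation
                total.update(p)
            return total

        print(parallel_count(["a", "b", "a", "c", "a", "b"]))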

  13. 42 CFR 493.1251 - Standard: Procedure manual.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Procedure manual. 493.1251 Section 493... Systems § 493.1251 Standard: Procedure manual. (a) A written procedure manual for all tests, assays, and.... (b) The procedure manual must include the following when applicable to the test procedure:...

  14. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  15. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filter. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881

  16. Shape optimization including finite element grid adaptation

    NASA Technical Reports Server (NTRS)

    Kikuchi, N.; Taylor, J. E.

    1984-01-01

    The prediction of optimal shape design for structures depends on having a sufficient level of precision in the computation of structural response. These requirements become critical in situations where the region to be designed includes stress concentrations or unilateral contact surfaces, for example. In the approach to shape optimization discussed here, a means to obtain grid adaptation is incorporated into the finite element procedures. This facility makes it possible to maintain a level of quality in the computational estimate of response that is surely adequate for the shape design problem.

  17. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices, whose entries come from a ring, Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  18. The Treatment Results of a Standard Algorithm for Choosing the Best Entry Vessel for Intravenous Port Implantation

    PubMed Central

    Wei, Wen-Cheng; Wu, Ching-Yang; Wu, Ching-Feng; Fu, Jui-Ying; Su, Ta-Wei; Yu, Sheng-Yueh; Kao, Tsung-Chi; Ko, Po-Jen

    2015-01-01

    Abstract Vascular cutdown and echo-guided puncture methods have their own limitations under certain conditions, and no algorithm was available for choosing the entry vessel. A standard algorithm was introduced to help choose the entry vessel location, based on our clinical experience and a review of the literature. The goal of this study is to analyze the treatment results of the standard algorithm used to choose the entry vessel for intravenous port implantation. Between March 2012 and March 2013, 507 patients who received intravenous port implantation for advanced chemotherapy were included in this study. The entry vessel was chosen according to the standard algorithm. All clinical characteristics were collected, and complication rates and incidence were analyzed. Compared with our clinical experience in 2006, the procedure-related complication rate declined from 1.09% to 0.4%, whereas the late complication rate decreased from 19.97% to 3.55%. No pneumothorax, hematoma, catheter kinking, fracture, or pocket erosion was identified after adoption of the standard algorithm. In surviving oncology patients, 98% of implanted ports provided functional vascular access that met therapeutic needs. This standard algorithm for choosing the best entry vessel is a simple guideline that is easy to follow; it is highly efficient and can minimize complication rates and incidence. PMID:26287429

  19. Pipe Cleaning Operating Procedures

    SciTech Connect

    Clark, D.; Wu, J.; /Fermilab

    1991-01-24

    This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.

  20. 17 CFR 38.3 - Procedures for designation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... description of the trading system, algorithm, security and access limitation procedures with a timeline for an order from input through settlement, and a copy of any system test procedures, tests conducted, test....3 Section 38.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION...

  1. 17 CFR 38.3 - Procedures for designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... description of the trading system, algorithm, security and access limitation procedures with a timeline for an order from input through settlement, and a copy of any system test procedures, tests conducted, test....3 Section 38.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION...

  2. An algorithm for a generalization of the Richardson extrapolation process

    NASA Technical Reports Server (NTRS)

    Ford, William F.; Sidi, Avram

    1987-01-01

    The paper presents a recursive method, designated the W^(m)-algorithm, for implementing a generalization of the Richardson extrapolation process. Compared to the direct solution of the linear systems of equations defining the extrapolation procedure, this method requires a small number of arithmetic operations and very little storage. The technique is also applied to recursively solve the coefficient problem associated with the rational approximations obtained by applying a d-transformation to power series. In the course of development, a new recursive algorithm for implementing a very general extrapolation procedure is introduced for solving the same problem. A FORTRAN program for the W^(m)-algorithm is also appended.
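
    For readers unfamiliar with the underlying process, a classical (unspecialized) Richardson extrapolation table can be built recursively as in the Python sketch below; this only illustrates the kind of computation the W^(m)-algorithm carries out far more economically, and is not the algorithm of the paper.

        # Classical Richardson extrapolation table (power-of-h^2 error model); illustrative only.
        import math

        def richardson(A, h0, levels=5):
            """A(h) is the approximation at step h; halve h and extrapolate recursively."""
            T = [[A(h0 / 2**i)] for i in range(levels)]          # column 0: raw approximations
            for i in range(1, levels):
                for j in range(1, i + 1):
                    factor = 4.0 ** j
                    T[i].append((factor * T[i][j - 1] - T[i - 1][j - 1]) / (factor - 1.0))
            return T[-1][-1]

        # Example: extrapolated central-difference derivative of sin at x = 1.
        deriv = richardson(lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h), 0.5)
        print(deriv, math.cos(1.0))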

  3. A general Bayesian image reconstruction algorithm with entropy prior: Preliminary application to HST data

    NASA Astrophysics Data System (ADS)

    Nunez, Jorge; Llacer, Jorge

    1993-10-01

    This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to successfully reconstruct images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
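
    Since the paper benchmarks against it, a bare-bones Richardson-Lucy iteration (pure Poisson case, no entropy prior) is sketched below in Python for reference; this is the standard textbook update, not the Bayesian algorithm described above.

        # Bare-bones Richardson-Lucy deconvolution (pure Poisson model); illustrative only.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(data, psf, iterations=30):
            psf = psf / psf.sum()
            psf_flip = psf[::-1, ::-1]
            estimate = np.full_like(data, data.mean(), dtype=float)
            for _ in range(iterations):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = data / np.maximum(blurred, 1e-12)              # avoid division by zero
                estimate *= fftconvolve(ratio, psf_flip, mode="same")  # multiplicative update keeps positivity
            return estimate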

  4. A comparison of heuristic search algorithms for molecular docking.

    PubMed

    Westhead, D R; Clark, D E; Murray, C W

    1997-05-01

    This paper describes the implementation and comparison of four heuristic search algorithms (genetic algorithm, evolutionary programming, simulated annealing and tabu search) and a random search procedure for flexible molecular docking. To our knowledge, this is the first application of the tabu search algorithm in this area. The algorithms are compared using a recently described fast molecular recognition potential function and a diverse set of five protein-ligand systems. Statistical analysis of the results indicates that overall the genetic algorithm performs best in terms of the median energy of the solutions located. However, tabu search shows a better performance in terms of locating solutions close to the crystallographic ligand conformation. These results suggest that a hybrid search algorithm may give superior results to any of the algorithms alone. PMID:9263849

  5. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  6. Atmospheric Correction Algorithm for Hyperspectral Imagery

    SciTech Connect

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  7. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.

  8. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  9. Algorithmic enhancements and experience with a large scale SQP code for general nonlinear programming problems

    SciTech Connect

    Boggs, P.; Tolle, J.; Kearsley, A.

    1994-12-31

    We have developed a large scale sequential quadratic programming (SQP) code based on an interior-point method for solving general (convex or nonconvex) quadratic programs (QP). We often halt this QP solver prematurely by employing a trust-region strategy. This procedure typically reduces the overall cost of the code. In this talk we briefly review the algorithm and some of its theoretical justification and then discuss recent enhancements including automatic procedures for both increasing and decreasing the parameter in the merit function, a regularization procedure for dealing with linearly dependent active constraint gradients, and a method for modifying the linearized equality constraints. Some numerical results on a significant set of "real-world" problems will be presented.

  10. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background Position-specific priors (PSP) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  11. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different than software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is vastly similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.

  12. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs. PMID:10199993

  13. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  14. Cloud Screening and Quality Control Algorithm for Star Photometer Data: Assessment with Lidar Measurements and with All-sky Images

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.

    2012-01-01

    This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, delta Ae(lambda), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of delta Ae(lambda) and W over different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable delta Ae(lambda) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; South-East of Spain) for measurements acquired between March 2007 and September 2009. The algorithm is evaluated against correlative measurements registered by a lidar system and against all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusion, biomass burning, and pollution events.
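
    The idea of comparing each point with a moving average and rejecting outliers can be sketched in Python as below; the window length and threshold are placeholder assumptions, not the values tuned for the Granada star photometer.

        # Sketch of moving-average-based outlier screening for a time series (e.g., AOD values).
        import numpy as np

        def screen_outliers(values, window=9, n_sigma=3.0):
            """Return a boolean mask of accepted points (True = keep)."""
            values = np.asarray(values, float)
            keep = np.ones(values.size, dtype=bool)
            half = window // 2
            for i in range(values.size):
                lo, hi = max(0, i - half), min(values.size, i + half + 1)
                neighbours = np.delete(values[lo:hi], i - lo)   # local window excluding the point itself
                if neighbours.size and abs(values[i] - neighbours.mean()) > n_sigma * neighbours.std():
                    keep[i] = False                             # flag as cloud-affected / outlier
            return keep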

  15. Parallel automated adaptive procedures for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Flaherty, J. E.; Decougny, H. L.; Ozturan, C.; Bottasso, C. L.; Beall, M. W.

    1995-01-01

    Consideration is given to the techniques required to support adaptive analysis of automatically generated unstructured meshes on distributed memory MIMD parallel computers. The key areas of new development are focused on the support of effective parallel computations when the structure of the numerical discretization, the mesh, is evolving, and in fact constructed, during the computation. All the procedures presented operate in parallel on already distributed mesh information. Starting from a mesh definition in terms of a topological hierarchy, techniques to support the distribution, redistribution and communication among the mesh entities over the processors is given, and algorithms to dynamically balance processor workload based on the migration of mesh entities are given. A procedure to automatically generate meshes in parallel, starting from CAD geometric models, is given. Parallel procedures to enrich the mesh through local mesh modifications are also given. Finally, the combination of these techniques to produce a parallel automated finite element analysis procedure for rotorcraft aerodynamics calculations is discussed and demonstrated.

  16. Effect of qubit losses on Grover's quantum search algorithm

    NASA Astrophysics Data System (ADS)

    Rao, D. D. Bhaktavatsala; Mølmer, Klaus

    2012-10-01

    We investigate the performance of Grover's quantum search algorithm on a register that is subject to a loss of particles that carry qubit information. Under the assumption that the basic steps of the algorithm are applied correctly on the correspondingly shrinking register, we show that the algorithm converges to mixed states with 50% overlap with the target state in the bit positions still present. As an alternative to error correction, we present a procedure that combines the outcome of different trials of the algorithm to determine the solution to the full search problem. The procedure may be relevant for experiments where the algorithm is adapted as the loss of particles is registered and for experiments with Rydberg blockade interactions among neutral atoms, where monitoring of atom losses is not even necessary.
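
    For reference, the ideal loss-free Grover iteration on a small register can be simulated with a few lines of Python; this is the textbook algorithm only, not the loss model analyzed in the paper.

        # Textbook Grover search on an n-qubit statevector (no losses); illustrative only.
        import numpy as np

        def grover(n_qubits, target, iterations=None):
            N = 2 ** n_qubits
            if iterations is None:
                iterations = int(round(np.pi / 4 * np.sqrt(N)))  # near-optimal iteration count
            state = np.full(N, 1.0 / np.sqrt(N))                 # uniform superposition
            for _ in range(iterations):
                state[target] *= -1.0                            # oracle: phase flip on the target
                state = 2.0 * state.mean() - state               # diffusion: inversion about the mean
            return np.abs(state) ** 2                            # measurement probabilities

        probs = grover(4, target=5)
        print(probs[5])   # close to 1 for the marked item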

  17. Collected radiochemical and geochemical procedures

    SciTech Connect

    Kleinberg, J

    1990-05-01

    This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.

  18. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.
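
    The M-association-out-of-N-scan rule mentioned above can be expressed compactly; the Python sketch below is a generic form of such a confirmation rule, not TSC's implementation.

        # Generic M-out-of-N scan confirmation rule; illustrative only.
        from collections import deque

        def m_of_n_detector(m, n):
            """Return a function that declares detection once m of the last n scans had associations."""
            history = deque(maxlen=n)
            def update(associated_this_scan: bool) -> bool:
                history.append(bool(associated_this_scan))
                return sum(history) >= m
            return update

        detect = m_of_n_detector(m=3, n=5)
        for hit in [True, False, True, True, False]:
            print(detect(hit))   # becomes True on the scan where 3 of the last 5 are hits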

  19. Program Planning Procedures Wall Chart.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento. Right to Read Unit.

    These ten program planning procedure wall charts include: "Right to Read Center Data," for identifying school, grade, enrollment by grade, size of community, ethnic balance, percentage on aid for Dependent Children, and transiency rate; "Needs Assessment Summary," for information on student performance, reading program, teacher performance,…

  20. Teaching Assistant Policies and Procedures.

    ERIC Educational Resources Information Center

    Wisconsin Univ., Madison.

    Policies and procedures covering graduate teaching assistants (TAs) at the University of Wisconsin-Madison are presented. A TA's duties may include classroom teaching under the direction of a faculty member, assisting in teaching classes, discussion groups, problem-solving sessions or laboratories, assisting in planning courses and developing…

  1. Faster fourier transformation: The algorithm of S. Winograd

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1979-01-01

    The new DFT algorithm of S. Winograd is developed and presented in detail. This is an algorithm which uses about 1/5 of the number of multiplications used by the Cooley-Tukey algorithm and is applicable to any order which is a product of relatively prime factors from the following list: 2, 3, 4, 5, 7, 8, 9, 16. The algorithm is presented in terms of a series of tableaus which are convenient, compact, graphical representations of the sequence of arithmetic operations in the corresponding parts of the algorithm. Using these in conjunction with the included tables makes it relatively easy to apply the algorithm and evaluate its performance.
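
    As a small aside (not part of the original report), the order restriction quoted above can be checked mechanically: a transform length is admissible when it factors into mutually coprime factors drawn from Winograd's list. A sketch:

        from math import gcd

        ALLOWED = [16, 9, 8, 7, 5, 4, 3, 2]   # Winograd's admissible factors

        def winograd_factorization(n):
            """Return pairwise-coprime factors from ALLOWED whose product is n, else None."""
            def search(n, chosen):
                if n == 1:
                    return chosen
                for f in ALLOWED:
                    if n % f == 0 and all(gcd(f, c) == 1 for c in chosen):
                        found = search(n // f, chosen + [f])
                        if found is not None:
                            return found
                return None
            return search(n, [])

        print(winograd_factorization(1008))   # [16, 9, 7] -> admissible order
        print(winograd_factorization(32))     # None: 2 and 16 are not relatively prime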

  2. Algorithm Helps Monitor Engine Operation

    NASA Technical Reports Server (NTRS)

    Eckerling, Sherry J.; Panossian, Hagop V.; Kemp, Victoria R.; Taniguchi, Mike H.; Nelson, Richard L.

    1995-01-01

    Real-Time Failure Control (RTFC) algorithm part of automated monitoring-and-shutdown system being developed to ensure safety and prevent major damage to equipment during ground tests of main engine of space shuttle. Includes redundant sensors, controller voting logic circuits, automatic safe-limit logic circuits, and conditional-decision logic circuits, all monitored by human technicians. Basic principles of system also applicable to stationary powerplants and other complex machinery systems.

  3. Pollutant Assessments Group Procedures Manual: Volume 1, Administrative and support procedures

    SciTech Connect

    Not Available

    1992-03-01

    This manual describes procedures currently in use by the Pollutant Assessments Group. The manual is divided into two volumes: Volume 1 includes administrative and support procedures, and Volume 2 includes technical procedures. These procedures are revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy. Format inconsistencies will be corrected in subsequent revisions of individual procedures. The purpose of the Pollutant Assessments Group Procedures Manual is to provide a standardized set of procedures documenting in an auditable manner the activities performed by the Pollutant Assessments Group (PAG) of the Health and Safety Research Division (HASRD) of the Environmental Measurements and Applications Section (EMAS) at Oak Ridge National Laboratory (ORNL). The Procedures Manual ensures that the organizational, administrative, and technical activities of PAG conform properly to protocol outlined by funding organizations. This manual also ensures that the techniques and procedures used by PAG and other contractor personnel meet the requirements of applicable governmental, scientific, and industrial standards. The Procedures Manual is sufficiently comprehensive for use by PAG and contractor personnel in the planning, performance, and reporting of project activities and measurements. The Procedures Manual provides procedures for conducting field measurements and includes program planning, equipment operation, and quality assurance elements. Successive revisions of this manual will be archived in the PAG Document Control Department to facilitate tracking of the development of specific procedures.

  4. Atmospheric channel for bistatic optical communication: simulation algorithms

    NASA Astrophysics Data System (ADS)

    Belov, V. V.; Tarasenkov, M. V.

    2015-11-01

    Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered, including the local estimate and double local estimate algorithms and the algorithm suggested by us. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the suggested algorithm are more efficient than the local estimate algorithm. For small optical path lengths the proposed algorithm is more efficient, and for large optical path lengths the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrated that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.

  5. Multidisciplinary design optimization using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Unal, Resit

    1994-12-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared

  6. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
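
    The sketch below shows the selection, crossover, and mutation mechanics described above on a toy problem with discrete design variables; the two variables (number of engines and a material index) and the objective function are made-up placeholders, not the study's launch-vehicle model.

        import random

        ENGINES = [1, 2, 3, 4, 5]        # hypothetical discrete choices
        MATERIALS = [0, 1, 2]            # hypothetical material catalogue indices

        def fitness(design):
            engines, material = design
            return -(engines * 10 + material * 3)    # toy objective (higher is better)

        def evolve(pop_size=20, generations=50, p_mut=0.1):
            pop = [(random.choice(ENGINES), random.choice(MATERIALS)) for _ in range(pop_size)]
            for _ in range(generations):
                def select():                         # binary tournament selection
                    a, b = random.sample(pop, 2)
                    return a if fitness(a) > fitness(b) else b
                children = []
                while len(children) < pop_size:
                    p1, p2 = select(), select()
                    child = (p1[0], p2[1])            # one-point crossover
                    if random.random() < p_mut:       # mutation of the first gene
                        child = (random.choice(ENGINES), child[1])
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        print(evolve())    # drifts toward (1, 0) for this toy objective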

  7. An Improved Algorithm for Linear Inequalities in Pattern Recognition and Switching Theory.

    ERIC Educational Resources Information Center

    Geary, Leo C.

    This thesis presents a new iterative algorithm for finding an n by 1 solution vector w, if one exists, to a set of linear inequalities, Aw > 0, which arises in pattern recognition and switching theory. The algorithm is an extension of the Ho-Kashyap algorithm, utilizing the gradient descent procedure to minimize a criterion function…
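
    For context, the classic Ho-Kashyap iteration that the thesis extends can be sketched as follows; the step size, iteration limit, and stopping rule are illustrative choices, and the thesis's modified criterion function is not reproduced here.

        import numpy as np

        def ho_kashyap(A, rho=0.5, max_iter=1000):
            """Seek w with A w > 0 by jointly adjusting w and a positive margin vector b."""
            A = np.asarray(A, dtype=float)
            b = np.ones(A.shape[0])
            A_pinv = np.linalg.pinv(A)
            for _ in range(max_iter):
                w = A_pinv @ b                 # least-squares solution of A w = b
                if np.all(A @ w > 0):          # found a vector satisfying the inequalities
                    return w
                e = A @ w - b
                b = b + rho * (e + np.abs(e))  # grow the margin only where the error is positive
            return None                        # no solution found within max_iter

        A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -0.5]])
        w = ho_kashyap(A)
        print(w, A @ w)                        # every component of A @ w is positive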

  8. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Venkatapthy, E.

    1984-01-01

    A single-level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.

  9. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological, and regulatory recommendations. Stakeholders (or users), including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  10. An algorithm for the automatic synchronization of Omega receivers

    NASA Technical Reports Server (NTRS)

    Stonestreet, W. M.; Marzetta, T. L.

    1977-01-01

    The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions were examined and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values were surveyed. A Fortran implementation of the synchronization algorithm used in the simulation was also included.

  11. New algorithms for the symmetric tridiagonal eigenvalue computation

    SciTech Connect

    Pan, V.

    1994-12-31

    The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
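
    For reference, the unaccelerated baseline is sketched below: a Sturm-sequence count of eigenvalues below a trial value, combined with ordinary bisection inside a Gershgorin interval. The acceleration techniques contributed by the paper (Newton-type refinements) are not shown.

        def count_below(d, e, x):
            """Number of eigenvalues of the symmetric tridiagonal matrix (diagonal d,
            off-diagonal e) that are smaller than x, via the Sturm sequence."""
            count, q = 0, 1.0
            for i in range(len(d)):
                off = e[i - 1] ** 2 if i > 0 else 0.0
                q = d[i] - x - off / (q if q != 0.0 else 1e-300)
                if q < 0.0:
                    count += 1
            return count

        def kth_eigenvalue(d, e, k, tol=1e-10):
            """k-th smallest eigenvalue (k = 1..n) by bisection on a Gershgorin bound."""
            n = len(d)
            rad = [(abs(e[i - 1]) if i > 0 else 0.0) + (abs(e[i]) if i < n - 1 else 0.0)
                   for i in range(n)]
            lo = min(d[i] - rad[i] for i in range(n))
            hi = max(d[i] + rad[i] for i in range(n))
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if count_below(d, e, mid) >= k:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)

        print(kth_eigenvalue([2.0, 2.0, 2.0], [1.0, 1.0], k=1))   # ~0.5858 = 2 - sqrt(2)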

  12. Comparison of update algorithms for pure Gauge SU(3)

    SciTech Connect

    Gupta, R.; Kilcup, G.W.; Patel, A.; Sharpe, S.R.; Deforcrand, P.

    1988-10-01

    The authors show that the overrelaxed algorithm of Creutz and of Brown and Woch is the optimal local update algorithm for simulation of pure gauge SU(3). The authors' comparison criterion includes computer efficiency and decorrelation times. They also investigate the rate of decorrelation for the Hybrid Monte Carlo algorithm.

  13. Crew procedures development techniques

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.

    1975-01-01

    The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.

  14. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  15. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  16. Procedure for simulating divergent-light halos

    NASA Astrophysics Data System (ADS)

    Gislén, Lars

    2003-11-01

    Divergent-light halos are halos produced when light from nearby sources, such as street lamps, is scattered by small ice crystals floating in the air. The use of "brute-force" Monte Carlo methods to simulate such halos is extremely inefficient, as most scattered rays will not hit the eye of the observer. I present a new procedure for Monte Carlo simulations of divergent-light halos. This procedure uses rotational symmetries to make a selected sampling of events that greatly improves the computational efficiency of the algorithm. A simulated halo display can typically be generated in minutes on a personal computer, several orders of magnitude faster than with a simple brute-force method. The algorithm can also optionally generate three-dimensional pictures of divergent-light halo displays.

  17. High-resolution algorithms for the Navier-Stokes equations for generalized discretizations

    NASA Astrophysics Data System (ADS)

    Mitchell, Curtis Randall

    Accurate finite volume solution algorithms for the two-dimensional Navier-Stokes equations and the three-dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two-dimensional quadrilateral and triangular elements and three-dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell-averaged data are used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock-induced separation over a flat plate. Three-dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three-dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error

  18. Efficient algorithms for Hirshfeld-I charges

    SciTech Connect

    Finzel, Kati; Martín Pendás, Ángel; Francisco, Evelio

    2015-08-28

    A new viewpoint on iterative Hirshfeld charges is presented, whereby the atomic populations obtained from such a scheme are interpreted as populations that reproduce themselves. This viewpoint yields a self-consistent requirement for the Hirshfeld-I populations, rather than viewing them as the result of an iterative procedure. Based on this self-consistent requirement, much faster algorithms for Hirshfeld-I charges have been developed. In addition, new atomic reference densities for the Hirshfeld-I procedure are presented. The proposed reference densities are N-representable, display proper atomic shell structure and can be computed for any charged species.

  19. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in local minima during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
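
    The ADDC objective can be written down compactly; the sketch below assumes Euclidean distance on document vectors (e.g. TF-IDF), which is one common choice and may differ in detail from the paper's implementation.

        import numpy as np

        def addc(doc_vectors, labels, centroids):
            """Average Distance of Documents to the cluster Centroid, averaged over clusters."""
            k, total = len(centroids), 0.0
            for c in range(k):
                members = doc_vectors[labels == c]
                if len(members):
                    total += np.mean(np.linalg.norm(members - centroids[c], axis=1))
            return total / k       # smaller ADDC = tighter, more compact clusters

        X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
        labels = np.array([0, 0, 1, 1])
        cents = np.array([X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)])
        print(addc(X, labels, cents))    # small value for these two tight clusters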

  20. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. E-mail: algarcia@algarcia.org

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
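
    For reference, one commonly used form of the stochastically forced viscous Burgers' equation reads (the precise noise term used in the paper may differ in detail):

        \partial_t u + \partial_x\!\left(\tfrac{1}{2}u^2\right)
            = \nu\,\partial_x^2 u + \partial_x \xi(x,t),

    where \nu is the viscosity and \xi(x,t) is a spatio-temporal white-noise flux whose amplitude sets the size of the fluctuations.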

  1. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures require significant bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  2. A new algorithm for coding geological terminology

    NASA Astrophysics Data System (ADS)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
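
    A toy sketch of the three-step direct method follows; the word-combination table, the lithology codes, and the correction rule are invented placeholders, not the Survey's actual coding tables.

        WORD_CODES = {"coarse sand": "ZG", "fine sand": "ZF", "clay": "K"}   # step 1 table
        INVALID_PAIRS = [("ZG", "ZF")]                                       # step 3 table

        def code_log_entry(text):
            text = text.lower()
            codes = [code for phrase, code in WORD_CODES.items() if phrase in text]  # step 1
            codes = list(dict.fromkeys(codes))                    # step 2: drop duplicates
            for first, second in INVALID_PAIRS:                   # step 3: fix bad combinations
                if first in codes and second in codes:
                    codes.remove(second)                          # keep the dominant code
            return codes

        print(code_log_entry("Coarse sand with some clay, coarse sand at base"))   # ['ZG', 'K']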

  3. Randomised Trial Support for Orthopaedic Surgical Procedures

    PubMed Central

    Lim, Hyeung C.; Adie, Sam; Naylor, Justine M.; Harris, Ian A.

    2014-01-01

    We investigated the proportion of orthopaedic procedures supported by evidence from randomised controlled trials comparing operative procedures to a non-operative alternative. Orthopaedic procedures conducted in 2009, 2010 and 2011 across three metropolitan teaching hospitals were identified, grouped and ranked according to frequency. Searches of the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE) were performed to identify RCTs evaluating the most commonly performed orthopaedic procedures. Included studies were categorised as “supportive” or “not supportive” of operative treatment. A risk of bias analysis was conducted for included studies using the Cochrane Collaboration's Risk of Bias tool. A total of 9,392 orthopaedic procedures were performed across the index period. 94.6% (8886 procedures) of the total volume, representing the 32 most common operative procedure categories, were used for this analysis. Of the 83 included RCTs, 22.9% (19/83) were classified as supportive of operative intervention. 36.9% (3279/8886) of the total volume of procedures performed were supported by at least one RCT showing surgery to be superior to a non-operative alternative. 19.6% (1743/8886) of the total volume of procedures performed were supported by at least one low risk of bias RCT showing surgery to be superior to a non-operative alternative. The level of RCT support for common orthopaedic procedures compares unfavourably with other fields of medicine. PMID:24927114

  4. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  5. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD Without ID: A Multi-site Study.

    PubMed

    Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth

    2015-12-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  6. Universal charge algorithm for telecommunication batteries

    SciTech Connect

    Tsenter, B.; Schwartzmiller, F.

    1997-12-01

    Three chemistries are used extensively in today's portable telecommunication devices: nickel-cadmium, nickel-metal hydride, and lithium-ion. Nickel-cadmium and nickel-metal hydride batteries (also referred to as nickel-based batteries) are well known, while lithium-ion batteries are less known. A universal charging algorithm should satisfactorily charge all chemistries while providing recognition among them. Total Battery Management, Inc. (TBM) has developed individual charging algorithms for nickel-based and lithium-ion batteries and a procedure for recognition, if necessary, to incorporate in a universal algorithm. TBM's charging philosophy is to first understand the battery from the chemical point of view and then provide an electronic solution.

  7. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers that was becoming dated relative to its time-to-market requirements and as such was in need of performance improvements. To compound problems with the existing system, assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for Data Systems Research and Development (DSRD) to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high performance parallel processing; higher order computer languages such as 'C'; fuzzy matching algorithms applied to very large data files; a relational database management system; and advanced programming techniques.

  8. Writer's guide for technical procedures

    SciTech Connect

    1998-12-01

    A primary objective of operations conducted in the US Department of Energy (DOE) complex is safety. Procedures are a critical element of maintaining a safety envelope to ensure safe facility operation. This DOE Writer's Guide for Technical Procedures addresses the content, format, and style of technical procedures that prescribe production, operation of equipment and facilities, and maintenance activities. The DOE Writer's Guide for Management Control Procedures and DOE Writer's Guide for Emergency and Alarm Response Procedures are being developed to assist writers in developing nontechnical procedures. DOE is providing this guide to assist writers across the DOE complex in producing accurate, complete, and usable procedures that promote safe and efficient operations that comply with DOE orders, including DOE Order 5480.19, Conduct of Operations for DOE Facilities, and 5480.6, Safety of Department of Energy-Owned Nuclear Reactors.

  9. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter; however, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform was set up to gather clutter data reflected from the ground and trees. The logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high sample rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm and a constant fraction discrimination (CFD) algorithm. First, the laser echo pulse signal is processed by the matched filter; the CFD algorithm is then applied. Finally, clutter jamming from the ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. The simulation results demonstrate that the new algorithm achieves the best target imaging performance in mitigating clutter reflected from the ground and trees.
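
    The compound detection idea can be illustrated on synthetic data, as in the hedged sketch below: matched filtering against the known pulse template, then constant fraction discrimination (CFD) for an amplitude-independent timing estimate. The pulse shape, noise level, CFD delay and fraction are assumptions for illustration, not the authors' values.

        import numpy as np

        fs = 250e6                                        # 250 MSa/s sampling
        t = np.arange(0, 400e-9, 1 / fs)
        template = np.exp(-((t - 60e-9) / 24e-9) ** 2)    # ~40 ns FWHM Gaussian pulse

        rng = np.random.default_rng(1)
        echo = 0.6 * np.roll(template, 40) + 0.05 * rng.standard_normal(t.size)

        mf = np.correlate(echo, template, mode="same")    # matched filter output

        delay, frac = 5, 0.5                              # CFD delay (samples) and fraction
        cfd = frac * mf - np.roll(mf, delay)
        peak = int(np.argmax(mf))
        win = range(max(peak - 10, 1), min(peak + 10, mf.size))
        zc = [i for i in win if cfd[i - 1] > 0 >= cfd[i]] # positive-to-negative crossing
        print("matched-filter peak:", peak, "CFD crossing:", zc[0] if zc else None)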

  10. Molecular classification of pesticides including persistent organic pollutants, phenylurea and sulphonylurea herbicides.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2014-01-01

    Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters and the evolutionary indeterminacy-morphological determination-natural selection principles; its application to the design of a co-ordination index barely improves correlations. Fractal dimensions and partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity, and number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. Classification algorithms are based on information entropy. When applying the procedures to moderate-sized sets, excessive results appear compatible with the data, suffering a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds in agreement with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in the same period show maximum resemblance. The advantage of the classification is that it predicts the retentions of molecules not included in the categorization. The classification extends to phenyl/sulphonylureas and the application will be to predict their retentions. PMID:24905607

  11. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.; Cobb, Ernest D.; Kilpatrick, F.A.

    1986-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.

  12. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James E., Jr.; Cobb, E.D.; Kilpatrick, F.A.

    1984-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The outstanding characteristics of dye tracing are: (1) the low detection and measurement limits, and (2) the simplicity and accuracy of measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a general guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section is included on aerial photography because of its possible use to supplement ground-level fluorometry. (USGS)

  13. Fluorometric procedures for dye tracing

    USGS Publications Warehouse

    Wilson, James F.

    1968-01-01

    This manual describes the current fluorometric procedures used by the U.S. Geological Survey in dye tracer studies such as time of travel, dispersion, reaeration, and dilution-type discharge measurements. The advantages of dye tracing are (1) low detection and measurement limits and (2) simplicity and accuracy in measuring dye tracer concentrations using fluorometric techniques. The manual contains necessary background information about fluorescence, dyes, and fluorometers and a description of fluorometric operation and calibration procedures as a guide for laboratory and field use. The background information should be useful to anyone wishing to experiment with dyes, fluorometer components, or procedures different from those described. In addition, a brief section on aerial photography is included because of its possible use to supplement ground-level fluorometry.

  14. An improved harmony search algorithm for emergency inspection scheduling

    NASA Astrophysics Data System (ADS)

    Kallioras, Nikos A.; Lagaros, Nikos D.; Karlaftis, Matthew G.

    2014-11-01

    The ability of nature-inspired search algorithms to efficiently handle combinatorial problems, and their successful implementation in many fields of engineering and applied sciences, have led to the development of new, improved algorithms. In this work, an improved harmony search (IHS) algorithm is presented, while a holistic approach for solving the problem of post-disaster infrastructure management is also proposed. The efficiency of IHS is compared with that of the algorithms of particle swarm optimization, differential evolution, basic harmony search and the pure random search procedure, when solving the districting problem that is the first part of post-disaster infrastructure management. The ant colony optimization algorithm is employed for solving the associated routing problem that constitutes the second part. The comparison is based on the quality of the results obtained, the computational demands and the sensitivity to the algorithmic parameters.

  15. Comparative study of state-of-the-art algorithms for hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Rivera-Borrero, Carlos; Hunt, Shawn D.

    2007-04-01

    This work studies the end-to-end performance of hyperspectral classification and unmixing systems. Specifically, it compares widely used, state-of-the-art algorithms with those developed at the University of Puerto Rico. These include algorithms for image enhancement, band subset selection, feature extraction, supervised and unsupervised classification, and constrained and unconstrained abundance estimation. The end-to-end performance for different combinations of algorithms is evaluated. The classification algorithms are compared in terms of percent correct classification. This method, however, cannot be applied to abundance estimation, as the binary evaluation used for supervised and unsupervised classification is not directly applicable to unmixing performance analysis. A procedure to evaluate unmixing performance is described in this paper and tested using coregistered data acquired by various sensors at different spatial resolutions. Performance results are generally specific to the image used. In an effort to generalize the results, a formal description of the complexity of the images used for the evaluations is required. Techniques for image complexity analysis currently available for automatic target recognizers are included and adapted to quantify the performance of the classifiers for different image classes.

  16. Reporting Child Language Sampling Procedures

    PubMed Central

    Finestack, Lizbeth H.; Payesteh, Bita; Disher, Jill Rentmeester; Julien, Hannah M.

    2015-01-01

    Purpose: Despite the long history of language sampling use in the study of child language development and disorders, there are no set guidelines specifying the reporting of language sampling procedures. The authors propose reporting standards for use by investigators who employ language samples in their research. Method: The authors conducted a literature search of child-focused studies published in journals of the American Speech-Language-Hearing Association between January 2000 and December 2011 that included language sampling procedures to help characterize child participants or to derive measures to serve as dependent variables. Following this search, they reviewed each study and documented the language sampling procedures reported. Results: The authors' synthesis revealed that approximately 25% of all child-focused studies use language samples to help characterize participants and/or derive dependent variables. They found remarkable inconsistencies in the reporting of language sampling procedures. Conclusion: To maximize the conclusions drawn from research using language samples, the authors strongly encourage investigators of child language to consistently report language sampling procedures using the proposed reporting checklist. PMID:25399013

  17. The computational structural mechanics testbed procedures manual

    NASA Technical Reports Server (NTRS)

    Stewart, Caroline B. (Compiler)

    1991-01-01

    The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis, and is intended to be used in conjunction with the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

  18. Making policy and procedure systems work effectively.

    PubMed

    Virani, T

    1996-01-01

    Policy and procedure manuals can be cumbersome to keep current and updated. One approach to meet this challenge is by implementing a decentralized system to develop, review, revise and approve policies and procedures. Mechanisms to operationalize such a system involve sharing of responsibility and accountability of specified policies and procedures by various existing committees and development of coordinating systems and support mechanisms. Other key attributes of a decentralized system included collaboration and extensive communication strategies. PMID:8695607

  19. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
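
    A bare-bones sketch of a chaos-tuned firefly step is shown below; which parameter the chaotic map drives (here the randomization weight), the logistic-map constant, and the toy objective are illustrative assumptions rather than the exact settings examined in the paper.

        import numpy as np

        def sphere(x):                                  # toy objective to minimize
            return float(np.sum(x ** 2))

        def chaotic_firefly(n=15, dim=2, iters=100, beta0=1.0, gamma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5, 5, size=(n, dim))
            alpha = 0.7                                 # chaotic variable in (0, 1)
            for _ in range(iters):
                alpha = 4.0 * alpha * (1.0 - alpha)     # logistic map drives the randomness
                f = np.array([sphere(x) for x in X])    # brightness, recomputed once per sweep
                for i in range(n):
                    for j in range(n):
                        if f[j] < f[i]:                 # j is brighter, so i moves toward j
                            r2 = float(np.sum((X[i] - X[j]) ** 2))
                            beta = beta0 * np.exp(-gamma * r2)
                            X[i] = (X[i] + beta * (X[j] - X[i])
                                    + 0.1 * alpha * rng.uniform(-1, 1, dim))
            f = np.array([sphere(x) for x in X])
            return X[np.argmin(f)], float(f.min())

        print(chaotic_firefly())    # the best firefly should end up near the origin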

  20. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  1. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    The contents of this book are as follows: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  2. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  3. Procedure improvement enterprises

    SciTech Connect

    Davis, P.L.

    1992-01-01

    At Allied-Signal's Kansas City Division (KCD), we recognize the importance of clear, concise and timely procedures for sharing information, promoting consistency and documenting the way we do business. For these reasons, the KCD has gathered a team of employees to analyze the process we currently use to publish procedures, identify the procedure needs of KCD employees, and design a system that meets or exceeds the requirements and expectations of DOE. The name of our group is the Procedure Improvement Enterprise Critical Process Team, or PIE CPT. The mission statement of Procedure Improvement Enterprise is to develop and implement within the Kansas City Division an effective and flexible procedure system that will establish a model of excellence, will emphasize teamwork and open communication, and will ensure compliance with corporate/government requirements.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  6. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that both meets the demanding throughput requirements of a Raspberry Pi and maintains a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade study for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
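
    A minimal sketch of the max-pixel idea behind MTP compression follows; the frames here are synthetic, and real front ends typically also keep the frame index at which each maximum occurred, which is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        frames = rng.integers(0, 30, size=(64, 48, 48), dtype=np.uint8)  # 64-frame block
        for k in range(20, 40):                       # inject a moving bright "meteor"
            frames[k, k, 10 + k // 2] = 255
        mtp = frames.max(axis=0)                      # one max-pixel image per block
        print(mtp.shape, int((mtp == 255).sum()))     # (48, 48) and 20 streak pixels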

  7. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  8. Image change detection algorithms: a systematic survey.

    PubMed

    Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath

    2005-03-01

    Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326
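
    As a minimal, hedged example of the simplest decision rule covered by the survey, the sketch below thresholds a per-pixel difference image to produce a binary change mask; the synthetic images and the 3-sigma threshold are illustrative, and real detectors add the statistical tests and background models the paper reviews.

        import numpy as np

        def change_mask(img1, img2, k=3.0):
            diff = img1.astype(float) - img2.astype(float)
            return np.abs(diff) > k * diff.std()      # simple global threshold

        rng = np.random.default_rng(0)
        a = rng.normal(100, 2, size=(32, 32))
        b = a + rng.normal(0, 2, size=(32, 32))       # same scene plus sensor noise
        b[10:14, 10:14] += 40                         # one genuine 4x4 change region
        print(int(change_mask(a, b).sum()))           # ~16 changed pixels flagged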

  9. Universal lossless compression algorithm for textual images

    NASA Astrophysics Data System (ADS)

    al Zahir, Saif

    2012-03-01

    In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results including JBIG2.

  10. Candidate CDTI procedures study

    NASA Technical Reports Server (NTRS)

    Ace, R. E.

    1981-01-01

    A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are presented. Recommended sequencing of the operational testing of each of the candidate procedures is presented.

  11. Apollo experience report: Systems and flight procedures development

    NASA Technical Reports Server (NTRS)

    Kramer, P. C.

    1973-01-01

    This report describes the process of crew procedures development used in the Apollo Program. The two major categories, Systems Procedures and Flight Procedures, are defined, as are the forms of documentation required. A description is provided of the operation of the procedures change control process, which includes the roles of man-in-the-loop simulations and the Crew Procedures Change Board. Brief discussions of significant aspects of the attitude control, computer, electrical power, environmental control, and propulsion subsystems procedures development are presented. Flight procedures are subdivided by mission phase: launch and translunar injection, rendezvous, lunar descent and ascent, and entry. Procedures used for each mission phase are summarized.

  12. Algorithm Updates for the Fourth SeaWiFS Data Reprocessing

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford, B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.

    2003-01-01

    The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes

  13. Neoclassical Transport Including Collisional Nonlinearity

    SciTech Connect

    Candy, J.; Belli, E. A.

    2011-06-10

    In the standard {delta}f theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction {delta}f is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlueter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.

  14. Families classification including multiopposition asteroids

    NASA Astrophysics Data System (ADS)

    Milani, Andrea; Spoto, Federica; Knežević, Zoran; Novaković, Bojan; Tsirvoulis, Georgios

    2016-01-01

    In this paper we present the results of our new classification of asteroid families, upgraded by using a catalog with > 500,000 asteroids. We discuss the outcome of the most recent update of the family list and of their membership. We found enough evidence to perform 9 mergers of previously independent families. By introducing an improved method for estimating the expected family growth in the less populous regions (e.g., at high inclination), we were able to reliably reject one tiny group as a probable statistical fluke. Thus we reduced our current list to 115 families. We also present newly determined ages for 6 families, including the complex families 135 and 221, also improving our understanding of the relationship between dynamical and collisional families. We conclude with some recommendations for future work and for the family name problem.

  15. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus enabling the delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set that illustrates this effect.

  16. 7 CFR 18.5 - Formal complaint procedure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 1 2012-01-01 2012-01-01 false Formal complaint procedure. 18.5 Section 18.5... EXTENSION SERVICES § 18.5 Formal complaint procedure. A procedure shall be provided for the filing of a... origin, sex, or religion. (b) Time limits for processing. The procedure will include time limits for...

  17. 7 CFR 18.5 - Formal complaint procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 1 2011-01-01 2011-01-01 false Formal complaint procedure. 18.5 Section 18.5... EXTENSION SERVICES § 18.5 Formal complaint procedure. A procedure shall be provided for the filing of a... origin, sex, or religion. (b) Time limits for processing. The procedure will include time limits for...

  18. 7 CFR 18.5 - Formal complaint procedure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 1 2014-01-01 2014-01-01 false Formal complaint procedure. 18.5 Section 18.5... EXTENSION SERVICES § 18.5 Formal complaint procedure. A procedure shall be provided for the filing of a... origin, sex, or religion. (b) Time limits for processing. The procedure will include time limits for...

  19. 7 CFR 18.5 - Formal complaint procedure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Formal complaint procedure. 18.5 Section 18.5... EXTENSION SERVICES § 18.5 Formal complaint procedure. A procedure shall be provided for the filing of a... origin, sex, or religion. (b) Time limits for processing. The procedure will include time limits for...

  20. 7 CFR 18.5 - Formal complaint procedure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Formal complaint procedure. 18.5 Section 18.5... EXTENSION SERVICES § 18.5 Formal complaint procedure. A procedure shall be provided for the filing of a... origin, sex, or religion. (b) Time limits for processing. The procedure will include time limits for...

  1. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts, and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, giving examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. PMID:23978903

  2. Multiangle dynamic light scattering analysis using an improved recursion algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng

    2015-10-01

    Multiangle dynamic light scattering (MDLS) compensates for the low information content of a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of the particle size distribution (PSD) from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to noise in the correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) the calculation of the weighting coefficients by the recursion method, and 2) the PSD estimation through the RNNPT algorithm. We obtained suitable regularization parameters for the algorithm by using the MR-L-curve method, since its overall computational cost is considerably less than that of the L-curve for large problems. Furthermore, the convergence behavior of the MR-L-curve method is in general superior to that of the L-curve method, and its error is monotonically decreasing. First, the method was evaluated on simulated unimodal and multimodal lognormal PSDs. For comparison, reconstruction results obtained with a classical regularization method were included. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulated results proved that the RNNPT method yields more accurate PSD determinations from MDLS than the classical regularization method for both unimodal and multimodal PSDs.
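
    A minimal sketch of the general idea (not the authors' RNNPT or recursion scheme) is a nonnegative, Phillips-Twomey-style regularized inversion; the kernel matrix A and correlation data vector b below are hypothetical placeholders for the MDLS kernel and measurements, and the smoothing weight lam would have to be chosen, for example, by an L-curve-type criterion.

        # Nonnegative, Phillips-Twomey-style regularized inversion (illustrative only,
        # NOT the paper's RNNPT/recursion method): minimize ||A x - b||^2 + lam ||L x||^2
        # subject to x >= 0, by stacking the regularization rows and calling NNLS.
        import numpy as np
        from scipy.optimize import nnls

        def regularized_nnls(A, b, lam=1e-2):
            """A: hypothetical MDLS kernel matrix, b: measured correlation data."""
            n = A.shape[1]
            L = np.diff(np.eye(n), n=2, axis=0)       # second-difference smoothing operator
            A_aug = np.vstack([A, np.sqrt(lam) * L])
            b_aug = np.concatenate([b, np.zeros(L.shape[0])])
            x, _ = nnls(A_aug, b_aug)                 # nonnegative least squares
            return x                                  # recovered (discretized) PSD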

  3. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
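
    The abstract does not spell out the authors' exact scheme, so the following is only one classical square-root-only iteration for cube roots: the fixed point of x <- sqrt(sqrt(N*x)) satisfies x**4 = N*x, i.e. x**3 = N, and the iteration converges for any positive starting value.

        # Cube root of a positive N using only square roots (a classical iteration,
        # assumed here for illustration; the ERIC abstract does not give the method).
        import math

        def cube_root(N, iterations=30):
            x = 1.0
            for _ in range(iterations):
                x = math.sqrt(math.sqrt(N * x))   # fixed point satisfies x**3 == N
            return x

        print(cube_root(27.0))   # approaches 3.0 within a few iterations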

  4. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  5. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  6. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. The development of flux-split algorithms for flows with non-equilibrium thermodynamics and chemical reactions

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Cinella, P.

    1988-01-01

    A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.

  8. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  9. A computational procedure for large rotational motions in multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1987-01-01

    A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principal feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
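
    For illustration, the kinematic relation underlying such an Euler four-parameter update can be sketched as follows; the report's procedure is nonlinearly implicit, whereas this sketch uses a simple explicit step with renormalization, and all names are hypothetical.

        # Explicit sketch of the Euler four-parameter (unit quaternion) kinematic update
        # q_dot = 1/2 * G(q) * omega, followed by renormalization. Illustrative only;
        # the report's integrator is nonlinearly implicit.
        import numpy as np

        def quaternion_rate(q, omega):
            q0, q1, q2, q3 = q
            G = 0.5 * np.array([[-q1, -q2, -q3],
                                [ q0, -q3,  q2],
                                [ q3,  q0, -q1],
                                [-q2,  q1,  q0]])
            return G @ omega

        def step(q, omega, dt):
            q = q + dt * quaternion_rate(q, omega)   # explicit Euler step
            return q / np.linalg.norm(q)             # keep a unit quaternion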

  10. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    NASA Astrophysics Data System (ADS)

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process, and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N}, with t_n = (x_n, r_n, y_{n-1}), is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.

  11. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion with reference to research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as the core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm (information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system) are simulated and modeled. The algorithm fully develops the innovative thinking capability of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  12. 14 CFR 1212.202 - Identification procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS Requests for Access to Records § 1212.202 Identification procedures. (a) The system manager will... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Identification procedures. 1212.202 Section... identification which includes the individual's name, signature, and photograph or physical description. (b)...

  13. 48 CFR 2842.1503 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACT ADMINISTRATION Contractor Performance Information 2842.1503 Procedures. Past performance evaluation procedures and systems shall include, to the greatest practicable extent, the evaluation and performance rating factors set forth in the Office of Federal Procurement Policy best practices guide for...

  14. 36 CFR 906.3 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....3 Parks, Forests, and Public Property PENNSYLVANIA AVENUE DEVELOPMENT CORPORATION AFFIRMATIVE ACTION POLICY AND PROCEDURE Development Program § 906.3 Procedures. (a) Affirmative Action Plans must be... Corporation's solicitation for proposals, the response must include an Affirmative Action Plan; (2) If...

  15. SAPHIRE Change Design and Testing Procedure

    SciTech Connect

    Curtis Smith

    2010-02-01

    This document describes the procedure software developers of SAPHIRE follow when adding a new feature or revising an existing capability. This procedure first describes the general approach to changes, and then describes more specific processes. The process stages include design and development, testing, and documentation.

  16. 48 CFR 410.002 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Procedures. 410.002 Section 410.002 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE COMPETITION AND ACQUISITION PLANNING MARKET RESEARCH 410.002 Procedures. Market research must include obtaining information...

  17. 48 CFR 410.002 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Procedures. 410.002 Section 410.002 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE COMPETITION AND ACQUISITION PLANNING MARKET RESEARCH 410.002 Procedures. Market research must include obtaining information...

  18. 40 CFR 35.920-2 - Procedure.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Procedure. 35.920-2 Section 35.920-2 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE STATE AND LOCAL ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.920-2 Procedure. (a) Preapplication assistance, including,...

  19. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  20. 49 CFR 383.131 - Test procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Information on the requirements described in § 383.71, the implied consent to alcohol testing described in... refusal to comply with such alcohol testing, State procedures described in § 383.73, and other appropriate...; (4) Details of testing procedures, including the purpose of the tests, how to respond, any...

  1. 14 CFR 183.53 - Procedures manual.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Procedures manual. 183.53 Section 183.53... manual. No ODA Letter of Designation may be issued before the Administrator approves an applicant's procedures manual. The approved manual must: (a) Be available to each member of the ODA Unit; (b) Include...

  2. Spanish Basic Course: Radio Communications Procedures, USAF.

    ERIC Educational Resources Information Center

    Defense Language Inst., Washington, DC.

    This guide to radio communication procedures is offered in Spanish and English as a means of securing a closer working relationship among United States Air Force personnel and Latin American aviators and technicians. Eight dialogues concerning routine flight procedures and aerospace technology are included. It is suggested that two rated students…

  3. Procedure to Generate the MPACT Multigroup Library

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of light water reactors. This document focuses on reviewing the current procedure used to generate the MPACT multigroup library. Detailed methodologies and procedures are included for further discussion on improving the MPACT multigroup library.

  4. W-087 Acceptance test procedure. Revision 1

    SciTech Connect

    Joshi, A.W.

    1997-06-10

    This Acceptance Test Procedure/Operational Test Procedure (ATP/OTP) has been prepared to demonstrate that the Electrical/Instrumentation and Mechanical systems function as required by project criteria and to verify proper operation of the integrated system including the interlocks.

  5. 40 CFR 1507.3 - Agency procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...

  6. 40 CFR 1507.3 - Agency procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...

  7. 40 CFR 1507.3 - Agency procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...

  8. 40 CFR 1507.3 - Agency procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Agency procedures. 1507.3 Section 1507.3 Protection of Environment COUNCIL ON ENVIRONMENTAL QUALITY AGENCY COMPLIANCE § 1507.3 Agency... environmental impact statements. (c) Agency procedures may include specific criteria for providing...

  9. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850

  10. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.

  11. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850

  12. Cabling procedure for the colored HOMFLY polynomials

    NASA Astrophysics Data System (ADS)

    Anokhina, A. S.; Morozov, A. A.

    2014-02-01

    We discuss using the cabling procedure to calculate colored HOMFLY polynomials. We describe how it can be used and how the projectors and R-matrices needed for this procedure can be found. The constructed matrix expressions for the projectors and R-matrices in the fundamental representation allow calculating the HOMFLY polynomial in an arbitrary representation for an arbitrary knot. The computational algorithm can be used for the knots and links with |Q|m ≤ 12, where m is the number of strands in a braid representation of the knot and |Q| is the number of boxes in the Young diagram of the representation. We also discuss the justification of the cabling procedure from the group theory standpoint, deriving expressions for the fundamental R-matrices and clarifying some conjectures formulated in previous papers.

  13. The procedure safety system

    NASA Technical Reports Server (NTRS)

    Obrien, Maureen E.

    1990-01-01

    Telerobotic operations, whether under autonomous or teleoperated control, require a much more sophisticated safety system than that needed for most industrial applications. Industrial robots generally perform very repetitive tasks in a controlled, static environment. The safety system in that case can be as simple as shutting down the robot if a human enters the work area, or even simply building a cage around the work space. Telerobotic operations, however, will take place in a dynamic, sometimes unpredictable environment, and will involve complicated and perhaps unrehearsed manipulations. This creates a much greater potential for damage to the robot or objects in its vicinity. The Procedural Safety System (PSS) collects data from external sensors and the robot, then processes it through an expert system shell to determine whether an unsafe condition or potential unsafe condition exists. Unsafe conditions could include exceeding velocity, acceleration, torque, or joint limits, imminent collision, exceeding temperature limits, and robot or sensor component failure. If a threat to safety exists, the operator is warned. If the threat is serious enough, the robot is halted. The PSS, therefore, uses expert system technology to enhance safety, reducing operator workload and allowing the operator to focus on performing the task at hand without the distraction of worrying about violating safety criteria.

  14. Inflight IFR procedures simulator

    NASA Technical Reports Server (NTRS)

    Parker, L. C. (Inventor)

    1984-01-01

    An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which generates predetermined simulated signals corresponding to signals normally received from remote sources upon being activated. A computer is connected to the signal synthesizer and causes the signal synthesizer to produce simulated signals responsive to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and navigational instruments and communication devices for selectively connecting instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator for supplying electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer which, in turn, generates simulated navigational signals.

  15. Handbook of radiologic procedures

    SciTech Connect

    Hedgcock, M.

    1986-01-01

    This book is organized around radiologic procedures, with each discussed in terms of indications, contraindications, materials, method of procedure, and complications. Covered in this book are: emergency radiology, chest radiology, bone radiology, gastrointestinal radiology, GU radiology, pediatric radiology, computerized tomography, neuroradiology, visceral and peripheral angiography, cardiovascular radiology, nuclear medicine, lymphangiography, and mammography.

  16. Procedural Learning and Dyslexia

    ERIC Educational Resources Information Center

    Nicolson, R. I.; Fawcett, A. J.; Brookes, R. L.; Needle, J.

    2010-01-01

    Three major "neural systems", specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed ("Trends Neurosci.",30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief…

  17. Coombs' Type Response Procedures.

    ERIC Educational Resources Information Center

    Koehler, Roger A.

    This paper provides substantial evidence in favor of the continued use of conventional objective testing procedures in lieu of either the Coombs' cross-out technique or the Dressel and Schmid free-choice response procedure. From the studies presented in this paper, the tendency is for the cross-out and the free choice methods to yield a decrement…

  18. Enucleation Procedure Manual.

    ERIC Educational Resources Information Center

    Davis, Kevin; Poston, George

    This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…

  19. Policies and Procedures.

    ERIC Educational Resources Information Center

    Klein, William D.; McKenna, Bernard

    1997-01-01

    States that, although policies and procedure documents play an important role in developing and maintaining a consistent quality of interaction in organizations, research literature is weak in this area. Initiates further discussion by defining and describing policy/procedure documents. Identifies a third kind, work instructions. Uses a genre…

  20. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
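
    A minimal two-dimensional sketch of this row-then-column style of rebinning is shown below; it assumes the bin factors divide the image dimensions exactly, and averaging (rather than summing) within each bin is an assumption, not a detail taken from the original report.

        # Vectorized 2D rebinning: one pass over all rows, then one pass over all columns.
        import numpy as np

        def rebin2d(image, row_factor, col_factor):
            rows, cols = image.shape                  # must be divisible by the factors
            out = image.reshape(rows // row_factor, row_factor, cols).mean(axis=1)
            out = out.reshape(rows // row_factor, cols // col_factor, col_factor).mean(axis=2)
            return out

        img = np.arange(16.0).reshape(4, 4)
        print(rebin2d(img, 2, 2))                     # 2x2 down-sampled image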

  1. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  2. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
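
    A minimal sketch of the Bernoulli gating described above, together with the usual test-time approximation of the ensemble average by activation scaling, is given below; shapes and rates are hypothetical.

        # Dropout with Bernoulli gating variables: units are kept with probability p_keep
        # during training; at test time the ensemble average is approximated by scaling.
        import numpy as np

        rng = np.random.default_rng(0)

        def dropout_forward(x, p_keep=0.5, training=True):
            if training:
                gate = rng.binomial(1, p_keep, size=x.shape)   # Bernoulli gating variables
                return x * gate
            return x * p_keep                                   # approximate ensemble average

        h = np.array([1.0, 2.0, 3.0, 4.0])
        print(dropout_forward(h, training=True))
        print(dropout_forward(h, training=False))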

  3. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, and tooling and fixture (or, more generally, resource) requirements.

  4. Image processing and computer vision algorithm selection and refinement using an operator-assisted meta-algorithm

    NASA Astrophysics Data System (ADS)

    Shaaban, Khaled M.; Schalkoff, Robert J.

    1995-06-01

    Most image processing and feature extraction algorithms consist of a composite sequence of operations to achieve a specific task. Overall algorithm capability depends upon the individual performance of each of these operations. This performance, in turn, is usually controlled by a set of a priori known (or estimated) algorithm parameters. The overall design of an image processing algorithm involves both the selection of the sub-algorithm sequence and of the required operating parameters, and is done using the best available knowledge of the problem and the experience of the algorithm designer. This paper presents a dynamic and adaptive image processing algorithm development structure. The implementation of the dynamic algorithm structure requires solving a classification problem at the decision nodes of an algorithm graph, A. The number of required classifiers equals the number of decision nodes. There are several learning techniques that could be used to implement any of these classifiers. Each of these techniques, in turn, requires a training set. This training set could be generated using a modified form of the dynamic algorithm. In this modified form, a human operator interface replaces all of the decision nodes. An optimization procedure (Nelder-Mead) is employed to assist the operator in finding the best parameter values. Examples of the approach using real-world imagery are shown.
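
    The parameter-search step of such an operator-assisted loop can be sketched as below; the cost function is a hypothetical stand-in for how far the processed image is from what the operator wants, and only the Nelder-Mead optimization step itself is illustrated.

        # Nelder-Mead search over two hypothetical image-processing parameters.
        import numpy as np
        from scipy.optimize import minimize

        def processing_cost(params):
            threshold, smoothing = params
            # hypothetical surrogate for the operator's assessment of the processed image
            return (threshold - 0.4) ** 2 + (smoothing - 2.0) ** 2

        result = minimize(processing_cost, x0=np.array([0.5, 1.0]), method="Nelder-Mead")
        print(result.x)   # parameter values to suggest to the operator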

  5. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    SciTech Connect

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources; thus, operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.
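
    A minimal evolutionary-search sketch in this spirit is given below: children are Gaussian perturbations of their parents and fitness is an rms error against synthetic observations. The cost function is a hypothetical stand-in for running the mesoscale model, and the population size, perturbation scale, and generation count are placeholders.

        # Simple (mu + lambda)-style evolutionary search minimizing an rms-error cost.
        import numpy as np

        rng = np.random.default_rng(1)
        target = rng.normal(size=23)                  # synthetic "true" parameter set

        def rms_cost(params):                         # stand-in for model-vs-observation error
            return np.sqrt(np.mean((params - target) ** 2))

        pop = rng.normal(size=(20, 23))               # parent population
        for generation in range(200):
            children = pop + 0.1 * rng.normal(size=pop.shape)   # perturb parents
            both = np.vstack([pop, children])
            costs = np.array([rms_cost(p) for p in both])
            pop = both[np.argsort(costs)[:20]]        # keep the 20 fittest
        print(rms_cost(pop[0]))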

  6. Simultaneous image compression, fusion and encryption algorithm based on compressive sensing and chaos

    NASA Astrophysics Data System (ADS)

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2016-05-01

    In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are firstly measured with the key-controlled pseudo-random measurement matrix constructed using logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including improved random pixel exchanging technique and fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.

  7. Large spatial, temporal, and algorithmic adaptivity for implicit nonlinear finite element analysis

    SciTech Connect

    Engelmann, B.E.; Whirley, R.G.

    1992-07-30

    The development of effective solution strategies to solve the global nonlinear equations which arise in implicit finite element analysis has been the subject of much research in recent years. Robust algorithms are needed to handle the complex nonlinearities that arise in many implicit finite element applications such as metalforming process simulation. The authors' experience indicates that robustness can best be achieved through adaptive solution strategies. In the course of their research, this adaptivity and flexibility have been refined into a production tool through the development of a solution control language called ISLAND. This paper discusses aspects of adaptive solution strategies including iterative procedures to solve the global equations and remeshing techniques to extend the domain of Lagrangian methods. Examples using the newly developed ISLAND language are presented to illustrate the advantages of embedding temporal, algorithmic, and spatial adaptivity in a modern implicit nonlinear finite element analysis code.

  8. Authentication Procedures - The Procedures and Integration Working Group

    SciTech Connect

    Kouzes, Richard T.; Bratcher, Leigh; Gosnell, Tom; Langner, Diana; MacArthur, D.; Mihalczo, John T.; Pura, Carolyn; Riedy, Alex; Rexroth, Paul; Scott, Mary; Springarn, Jay

    2001-05-31

    Authentication is how we establish trust in monitoring systems and measurements to verify compliance with, for example, the storage of nuclear weapons material. Authentication helps assure the monitoring party that accurate and reliable information is provided by any measurement system and that any irregularities are detected. The U.S. is developing its point of view on the procedures for authentication of monitoring systems now planned or contemplated for arms reduction and control applications. The authentication of a system utilizes a set of approaches, including: functional testing using trusted calibration sources, evaluation of documentation, evaluation of software, evaluation of hardware, random selection of hardware and software, tamper-indicating devices, and operational procedures. Authentication of measurement systems should occur throughout their lifecycles, starting with the elements of design, and moving to off-site authentication, on-site authentication, and continuing with authentication following repair. The most important of these is the initial design of systems. Hardware and software design criteria and procurement decisions can make future authentication relatively straightforward or conversely very difficult. Facility decisions can likewise ease the procedures for authentication since reliable and effective monitoring systems and tamper-indicating devices can help provide the assurance needed in the integrity of such items as measurement systems, spare equipment, and reference sources. This paper will summarize the results of the U.S. Authentication Task Force discussion on the role of procedures in authentication.

  9. Toddler test or procedure preparation

    MedlinePlus

    ... procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... A, Franz BE. Practical communication guide for paediatric procedures. Emerg ... PMID: 19588390 www.ncbi.nlm.nih.gov/pubmed/19588390 .

  10. Solar Position Algorithm for Solar Radiation Applications (Revised)

    SciTech Connect

    Reda, I.; Andreas, A.

    2008-01-01

    This report is a step-by-step procedure for implementing an algorithm to calculate the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of ±0.0003°. It is written in a step-by-step format to simplify otherwise complicated steps, with a focus on the sun instead of the planets and stars in general. The algorithm is written in such a way as to accommodate solar radiation applications.
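
    The full algorithm in the report is lengthy; as a rough sanity check for an implementation, the textbook zenith-angle relation can be used (this is a simplification, not the report's procedure):

        # Textbook solar zenith angle from latitude, declination, and hour angle:
        # cos(theta_z) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(H).
        # NOT the report's full Solar Position Algorithm.
        import math

        def solar_zenith_deg(latitude_deg, declination_deg, hour_angle_deg):
            lat, dec, ha = (math.radians(v) for v in (latitude_deg, declination_deg, hour_angle_deg))
            cos_z = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
            return math.degrees(math.acos(cos_z))

        print(solar_zenith_deg(40.0, 23.44, 0.0))   # about 16.6 deg at solar noon, June solstice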

  11. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  12. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER computer program used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." The algorithms apply genetically inspired operators to populations of potential solutions in iterative fashion, creating new populations while searching for an optimal or nearly optimal solution to the problem at hand. Written in Think C.

  13. Experimental procedure for the evaluation of tooth stiffness in spline coupling including angular misalignment

    NASA Astrophysics Data System (ADS)

    Curà, Francesca; Mura, Andrea

    2013-11-01

    Tooth stiffness is a very important parameter in studying both static and dynamic behaviour of spline couplings and gears. Many works concerning tooth stiffness calculation are available in the literature, but experimental results are very rare, especially for spline couplings. In this work, experimental values of spline coupling tooth stiffness have been obtained by means of a special hexapod measuring device. Experimental results have been compared with the corresponding theoretical and numerical ones. The effect of angular misalignments between hub and shaft has also been investigated in the experimental planning.

  14. New data evaluation procedure including advanced background subtraction for radiography using the example of insect mandibles

    NASA Astrophysics Data System (ADS)

    Mangold, Stefan; van de Kamp, Thomas; Steininger, Ralph

    2016-05-01

    The usefulness of full field transmission spectroscopy is shown using the example of the mandible of the stick insect Peruphasma schultei. An advanced data evaluation tool chain with an energy drift correction and a highly reproducible automatic background correction is presented. The results show a significant difference between the top and the bottom of the mandible of an adult stick insect.

  15. Establishing a Sentinel Lymph Node Mapping Algorithm for the Treatment of Early Cervical Cancer

    PubMed Central

    Cormier, Beatrice; Diaz, John P.; Shih, Karin; Sampson, Rachael M.; Sonoda, Yukio; Park, Kay J.; Alektiar, Khaled; Chi, Dennis S.; Barakat, Richard R.; Abu-Rustum, Nadeem R.

    2016-01-01

    Objective To establish an algorithm that incorporates sentinel lymph node (SLN) mapping into the surgical treatment of early cervical cancer, ensuring that lymph node (LN) metastases are accurately detected but minimizing the need for complete lymphadenectomy (LND). Methods A prospectively maintained database of all patients who underwent an SLN procedure followed by a complete bilateral pelvic LND for cervical cancer (FIGO stages IA1 with LVI to IIA) from 03/2003 to 09/2010 was analyzed. The surgical algorithm we evaluated included the following: 1. SLNs are removed and submitted to ultrastaging; 2. Any suspicious LN is removed regardless of mapping; 3. If only unilateral mapping is noted, a contralateral side-specific pelvic LND is performed (including inter-iliac nodes); 4. Parametrectomy en bloc with primary tumor resection is done in all cases. We retrospectively applied the algorithm to determine how it would have performed. Results One hundred twenty-two patients were included. Median SLN count was 3 and median total LN count was 20. At least one SLN was identified in 93% of cases (114/122), while optimal (bilateral) mapping was achieved in 75% (91/122). SLN correctly diagnosed 21 of 25 patients with nodal spread. When the algorithm was applied, all patients with LN metastases were detected, and bilateral pelvic LND could have been spared in the 75% of cases with optimal mapping. Conclusions In the surgical treatment of early cervical cancer, the algorithm we propose allows for comprehensive detection of all patients with nodal disease and spares complete LND in the majority of cases. PMID:21570713

  16. An Efficient Implementation of the Gliding Box Lacunarity Algorithm

    SciTech Connect

    Charles R. Tolle; Timothy R. McJunkin; David J. Gorsich

    2008-03-01

    Lacunarity is a measure of how data fills space. It complements fractal dimension, which measures how much space is filled. Currently, many researchers use the gliding box algorithm for calculating lacunarity. This paper introduces a fast algorithm for making this calculation. The algorithm presented is akin to fast box counting algorithms used by some researchers in estimating fractal dimension. A simplified gliding box measure equation along with key pseudo code implementations for the algorithm are presented. Applications for the gliding box lacunarity measure have included subjects that range from biological community modeling to target detection.
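
    As a concrete reference, the following is a minimal brute-force sketch of the gliding box measure for a 2-D binary array, assuming the usual definition Lambda(r) = E[M^2] / E[M]^2 over the box-mass distribution. It shows the quantity being computed, not the fast implementation described in the paper, and the random test pattern is purely illustrative.

```python
import numpy as np

def gliding_box_lacunarity(image, box_size):
    """Slide a box_size x box_size window over `image` and return the
    lacunarity of the resulting box-mass distribution."""
    rows, cols = image.shape
    masses = []
    for i in range(rows - box_size + 1):
        for j in range(cols - box_size + 1):
            masses.append(image[i:i + box_size, j:j + box_size].sum())
    masses = np.asarray(masses, dtype=float)
    mean = masses.mean()
    return (masses ** 2).mean() / mean ** 2 if mean > 0 else np.nan

# Example: lacunarity of a random binary pattern at box size 4 (illustrative data).
pattern = (np.random.rand(64, 64) < 0.3).astype(int)
print(gliding_box_lacunarity(pattern, 4))
```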

  17. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    Tetrahedral mesh generation, as a prerequisite of many soft tissue simulation methods, is very important in virtual surgery programs because of their real-time requirements. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance among tetrahedron quality, boundary preservation and time complexity, with many improved methods. Another mesh algorithm named Space-Disassembling is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  18. 7 CFR 930.32 - Procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... MICHIGAN, NEW YORK, PENNSYLVANIA, OREGON, UTAH, WASHINGTON, AND WISCONSIN Order Regulating Handling Administrative Body § 930.32 Procedure. (a) Two-thirds of the members of the Board, including alternates...

  19. 49 CFR 193.2903 - Security procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY LIQUEFIED NATURAL GAS FACILITIES... to be taken, including notification of other appropriate plant personnel and law enforcement...) Liaison with local law enforcement officials to keep them informed about current security procedures...

  20. 17 CFR 10.92 - Shortened procedure.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, the term “statement” includes (1) Statements of fact signed and sworn to by persons having... shortened procedure must be sworn to by persons having knowledge thereof and, except under...

  1. 17 CFR 10.92 - Shortened procedure.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... section, the term “statement” includes (1) Statements of fact signed and sworn to by persons having... shortened procedure must be sworn to by persons having knowledge thereof and, except under...

  2. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations with galaxies and comparing the results to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economical procedure for obtaining galaxy mock catalogues down to the faint magnitudes that are necessary to understand and interpret galaxy surveys.

  3. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  4. Deciphering and generalizing Demiański-Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold

    2016-05-01

    In the case of vanishing cosmological constant, Demiański has shown that the Janis-Newman algorithm can be generalized in order to include a NUT charge and another parameter c, in addition to the angular momentum. Moreover, it was proved that only a NUT charge can be added for non-vanishing cosmological constant. However, despite the fact that the form of the coordinate transformations was obtained, it was not explained how to perform the complexification of the metric function, and the procedure does not follow directly from the usual Janis-Newman rules. The goal of our paper is threefold: explain the hidden assumptions of Demiański's analysis, generalize the computations to topological horizons (spherical and hyperbolic) and to charged solutions, and explain how to perform the complexification of the function. In particular we present a new solution which is an extension of the Demiański metric to hyperbolic horizons. These different results open the door to applications to (gauged) supergravity since they allow for a systematic application of the Demiański-Janis-Newman algorithm.

  5. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector-selection and state-decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, no update is performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
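
    For orientation, the following is a minimal sketch of the standard affine projection update that the selective-update variant builds on (it is not the authors' algorithm); the filter length, step size, projection order and system model below are illustrative assumptions.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One standard APA step: w <- w + mu * X^T (X X^T + delta I)^-1 e.
    X: (P, L) matrix of the last P input vectors, d: (P,) desired outputs, w: (L,) coefficients."""
    e = d - X @ w                                                   # a-priori errors
    g = np.linalg.solve(X @ X.T + delta * np.eye(X.shape[0]), e)
    return w + mu * X.T @ g, e

# Hypothetical usage: identify an unknown 4-tap system from noisy observations.
rng = np.random.default_rng(0)
h_true = np.array([1.0, -0.5, 0.25, 0.1])
w = np.zeros(4)
x = rng.standard_normal(2000)
for n in range(8, 2000):
    X = np.array([x[n - k - np.arange(4)] for k in range(4)])       # last 4 input vectors
    d = X @ h_true + 0.01 * rng.standard_normal(4)
    w, _ = apa_update(w, X, d)
print(np.round(w, 3))                                               # close to h_true
```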

  6. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  7. Treatment for cartilage injuries of the knee with a new treatment algorithm.

    PubMed

    Ozmeriç, Ahmet; Alemdaroğlu, Kadir Bahadır; Aydoğan, Nevres Hürriyet

    2014-11-18

    Treatment of articular cartilage injuries to the knee remains a considerable challenge today. Current procedures succeed in providing relief of symptoms; however, damaged articular tissue is not replaced with new tissue of the same biomechanical properties and long-term durability as normal hyaline cartilage. Although many arthroscopic procedures often manage to achieve these goals, results are far from perfect and there is no agreement on which of these procedures are appropriate, particularly when full-thickness chondral defects are considered. Therefore, the search for a biological solution offering long-term functional healing and an increased quality of the injured cartilage continues. Scaffolds have been developed to achieve this goal and to treat wide defects. The rationale of using a scaffold is to create an environment with biodegradable polymers for the in vitro growth of living cells and their subsequent implantation into the lesion area. A few surgical treatment algorithms have previously been described, but none of them included one-step or two-step scaffolds. The ultimate aim of this article was to review various arthroscopic treatment options for different stage lesions and to develop a new treatment algorithm that includes scaffolds. PMID:25405097

  8. Heat Capacity Mapping Radiometer (HCMR) data processing algorithm, calibration, and flight performance evaluation

    NASA Technical Reports Server (NTRS)

    Bohse, J. R.; Bewtra, M.; Barnes, W. L.

    1979-01-01

    The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.

  9. Current procedural terminology; a primer.

    PubMed

    Hirsch, Joshua A; Leslie-Mazwi, Thabele M; Nicola, Gregory N; Barr, Robert M; Bello, Jacqueline A; Donovan, William D; Tu, Raymond; Alson, Mark D; Manchikanti, Laxmaiah

    2015-04-01

    In 1966, the American Medical Association (AMA), working with multiple major medical specialty societies, developed an iterative coding system for describing medical procedures and services using uniform language, the Current Procedural Terminology (CPT) system. The current code set, CPT IV, forms the basis of reporting most of the services performed by healthcare providers, physicians and non-physicians, as well as facilities, allowing effective, reliable communication among physicians and other providers, third parties and patients. This coding system and its maintenance have evolved significantly since its inception, and now go well beyond its readily perceived role in reimbursement. Additional roles include administrative management, tracking new and investigational procedures, and evolving aspects of 'pay for performance'. The system also allows for local, regional and national utilization comparisons for medical education and research. Neurointerventional specialists use CPT category I codes regularly--for example, 36215 for first-order cerebrovascular angiography, 36216 for second-order vessels, and 37184 for acute stroke treatment by mechanical means. Additionally, physicians add relevant modifiers to the CPT codes, such as '-26' to indicate 'professional charge only,' or '-59' to indicate a distinct procedural service performed on the same day. PMID:24589819

  10. Genetic algorithm dose minimization for an operational layout.

    SciTech Connect

    McLawhorn, S. L.; Kornreich, D. E.; Dudziak, Donald J.

    2002-01-01

    In an effort to reduce the dose to operating technicians performing fixed-time procedures on encapsulated source material, a program has been developed to optimize the layout of workstations within a facility by use of a genetic algorithm. Taking into account the sources present at each station and the time required to complete each procedure, the program utilizes a point kernel dose calculation tool for dose estimates. The genetic algorithm driver employs the dose calculation code as a cost function to determine the optimal spatial arrangement of workstations to minimize the total worker dose.

  11. Five-dimensional Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts at a generalization to higher dimensions have already been made for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta, using the prescription of Giampieri, through two specific examples: the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescription to other dimensions and to the maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d=3,4,5 of the Janis-Newman algorithm, from which several examples are exposed, including the BTZ black hole.

  12. Cell list algorithms for nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Dobson, Matthew; Fox, Ian; Saracino, Alexandra

    2016-06-01

    We present two modifications of the standard cell list algorithm that handle molecular dynamics simulations with deforming periodic geometry. Such geometry naturally arises in the simulation of homogeneous, linear nonequilibrium flow modeled with periodic boundary conditions, and recent progress has been made developing boundary conditions suitable for general 3D flows of this type. Previous works focused on the planar flows handled by Lees-Edwards or Kraynik-Reinelt boundary conditions, while the new versions of the cell list algorithm presented here are formulated to handle the general 3D deforming simulation geometry. As in the case of equilibrium, for short-ranged pairwise interactions, the cell list algorithm reduces the computational complexity of the force computation from O(N^2) to O(N), where N is the total number of particles in the simulation box. We include a comparison of the complexity and efficiency of the two proposed modifications of the standard algorithm.
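
    For reference, the following is a minimal sketch of the standard (non-deforming) cell list neighbour search in 2-D with periodic boundaries that both modifications build on; the box size, cutoff and random configuration are illustrative assumptions, and the deforming-geometry bookkeeping of the paper is not modeled.

```python
import numpy as np
from collections import defaultdict

def cell_list_pairs(positions, box, cutoff):
    """Return the set of pairs (i, j), i < j, closer than `cutoff` under periodic boundaries."""
    ncell = int(box // cutoff)        # assumes box/cutoff >= 3 so each cell has 8 distinct neighbours
    size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // size).astype(int) % ncell)].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get(((cx + dx) % ncell, (cy + dy) % ncell), ()):
                        if i < j:
                            d = positions[i] - positions[j]
                            d -= box * np.round(d / box)      # minimum-image convention
                            if d @ d < cutoff ** 2:
                                pairs.add((i, j))
    return pairs

pos = np.random.rand(500, 2) * 10.0
print(len(cell_list_pairs(pos, box=10.0, cutoff=1.0)))
```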

  13. A comparison of binary and continuous genetic algorithm in parameter estimation of a logistic growth model

    NASA Astrophysics Data System (ADS)

    Windarto, Indratno, S. W.; Nuraini, N.; Soewono, E.

    2014-02-01

    The genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The algorithm begins by defining the optimization variables, defining the cost function (in a minimization problem) or the fitness function (in a maximization problem) and selecting the genetic algorithm parameters. The main procedures in a genetic algorithm are generating the initial population, selecting some chromosomes (individuals) as parents, mating, and mutation. In this paper, binary and continuous genetic algorithms were implemented to estimate the growth rate and carrying capacity parameters of a logistic model from poultry data cited from the literature. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are held constant throughout the run. It was found that, with a suitable mutation rate, both algorithms can estimate these parameters well. The suitable range of mutation rates for the continuous genetic algorithm is wider than for the binary one.
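
    The following is a minimal sketch of a continuous (real-coded) genetic algorithm fitting the growth rate r and carrying capacity K of a logistic model to data; the population size, rates, bounds and synthetic data are illustrative assumptions, not the settings or data used in the paper.

```python
import numpy as np

def logistic(t, r, K, n0=10.0):
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

def fit_logistic_ga(t, y, pop=50, gens=200, mut_rate=0.2, rng=np.random.default_rng(1)):
    lo, hi = np.array([0.01, 1.0]), np.array([2.0, 5000.0])     # bounds on (r, K)
    P = rng.uniform(lo, hi, size=(pop, 2))                      # initial population of real pairs
    for _ in range(gens):
        cost = np.array([np.mean((logistic(t, r, K) - y) ** 2) for r, K in P])
        parents = P[np.argsort(cost)[: pop // 2]]               # selection: keep the best half
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b                         # blend crossover
            if rng.random() < mut_rate:                         # mutation
                child += rng.normal(0, 0.05, size=2) * (hi - lo)
            kids.append(np.clip(child, lo, hi))
        P = np.vstack([parents, kids])
    return P[np.argmin([np.mean((logistic(t, r, K) - y) ** 2) for r, K in P])]

t = np.arange(0, 20.0, 1.0)
y = logistic(t, 0.4, 1000.0) + np.random.default_rng(2).normal(0, 10, t.size)   # synthetic growth data
print(np.round(fit_logistic_ga(t, y), 2))                                       # estimated (r, K)
```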

  14. Design procedures for fiber composite box beams

    NASA Technical Reports Server (NTRS)

    Chamis, Cristos C.; Murthy, Pappu L. N.

    1989-01-01

    Step-by-step procedures are described which can be used for the preliminary design of fiber composite box beams subjected to combined loadings. These procedures include a collection of approximate closed-form equations so that all the required calculations can be performed using pocket calculators. Included is an illustrative example of a tapered cantilever box beam subjected to combined loads. The box beam is designed to satisfy strength, displacement, buckling, and frequency requirements.

  15. Phase unwrapping algorithms in laser propagation simulation

    NASA Astrophysics Data System (ADS)

    Du, Rui; Yang, Lijia

    2013-08-01

    Simulations of laser propagation in the atmosphere usually have to deal with beams in strong turbulence; when the transmission is simulated via Fourier transform, part of the information may be lost and the beam phase, stored as a 2-D array, becomes wrapped modulo 2π. An effective unwrapping algorithm is therefore needed for continuous results and faster calculation. Unwrapping algorithms for atmospheric propagation are similar to, but not the same as, those used in radar or 3-D surface reconstruction. In this article, three classic unwrapping algorithms, block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD), are tested in wave-front reconstruction simulations. Each algorithm is tested 100 times under six conditions: low (64x64), medium (128x128), and high (256x256) resolution phase arrays, each with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable in the low-resolution case without noise. MCUT is more accurate, though it slows down as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it requires a large amount of memory. Finally, the article presents a new algorithm based on an Active-On-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds the minimal discontinuity solution. The AOV algorithm is faster than MCUT for high-resolution phase arrays, with accuracy comparable to FMD in the tests.
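
    To make the underlying operation concrete, the following is a minimal sketch of 1-D phase unwrapping, i.e., removing 2π jumps between adjacent samples; the 2-D algorithms compared in the paper decide the order and paths of such corrections, so this is not BLS, MCUT, FMD or the AOV method, and the test ramp is illustrative.

```python
import numpy as np

def unwrap_1d(phase):
    """Remove the nearest multiple of 2*pi from each step so the result is continuous."""
    out = np.array(phase, dtype=float)
    for i in range(1, len(out)):
        diff = out[i] - out[i - 1]
        out[i] -= 2 * np.pi * np.round(diff / (2 * np.pi))
    return out

true_phase = np.linspace(0, 12 * np.pi, 200)          # a smooth ramp spanning several cycles
wrapped = np.angle(np.exp(1j * true_phase))           # wrapped into (-pi, pi]
print(np.allclose(unwrap_1d(wrapped), true_phase))    # True: the ramp is recovered
```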

  16. Hemispherectomy Procedure in Proteus Syndrome.

    PubMed

    Gunawan, Prastiya Indra; Lusiana, Lusiana; Saharso, Darto

    2016-01-01

    Objective Proteus syndrome is a rare overgrowth disorder involving bone, soft tissue, and skin. Central nervous system manifestations have been reported in about 40% of patients, including hemimegalencephaly with resultant hemicranial hyperplasia, convulsions and mental deficiency. We report a 1-month-old male infant referred to the Pediatric Neurology Clinic, Soetomo Hospital, Surabaya, Indonesia in 2014, who presented with recurrent seizures since birth, an asymmetric dysmorphic face with the right side larger than the left, a subcutaneous mass and linear nevi. Craniocervical MRI revealed hemimegalencephaly of the right cerebral hemisphere. Triple antiepileptic drugs had already been given, as well as the ketogenic diet, but the seizures persisted. The seizures resolved after a hemispherectomy procedure. PMID:27375761

  17. Incorporating Spatial Models in Visual Field Test Procedures

    PubMed Central

    Rubinstein, Nikki J.; McKendrick, Allison M.; Turpin, Andrew

    2016-01-01

    Purpose To introduce a perimetric algorithm (Spatially Weighted Likelihoods in Zippy Estimation by Sequential Testing [ZEST] [SWeLZ]) that uses spatial information on every presentation to alter visual field (VF) estimates, to reduce test times without affecting output precision and accuracy. Methods SWeLZ is a maximum likelihood Bayesian procedure, which updates probability mass functions at VF locations using a spatial model. Spatial models were created from empirical data, computational models, nearest neighbor, random relationships, and interconnecting all locations. SWeLZ was compared to an implementation of the ZEST algorithm for perimetry using computer simulations on 163 glaucomatous and 233 normal VFs (Humphrey Field Analyzer 24-2). Output measures included number of presentations and visual sensitivity estimates. Results There was no significant difference in accuracy or precision of SWeLZ for the different spatial models relative to ZEST, either when collated across whole fields or when split by input sensitivity. Inspection of VF maps showed that SWeLZ was able to detect localized VF loss. SWeLZ was faster than ZEST for normal VFs: median number of presentations reduced by 20% to 38%. The number of presentations was equivalent for SWeLZ and ZEST when simulated on glaucomatous VFs. Conclusions SWeLZ has the potential to reduce VF test times in people with normal VFs, without detriment to output precision and accuracy in glaucomatous VFs. Translational Relevance SWeLZ is a novel perimetric algorithm. Simulations show that SWeLZ can reduce the number of test presentations for people with normal VFs. Since many patients have normal fields, this has the potential for significant time savings in clinical settings. PMID:26981329

  18. The impact of reconstruction algorithms and time of flight information on PET/CT image quality

    PubMed Central

    Suljic, Alen; Tomse, Petra; Jensterle, Luka; Skrk, Damijan

    2015-01-01

    Background The aim of the study was to explore the influence of various time-of-flight (TOF) and non-TOF reconstruction algorithms on positron emission tomography/computed tomography (PET/CT) image quality. Materials and methods Measurements were performed with a triple line source phantom, consisting of capillaries with an internal diameter of ~1 mm, and with a standard Jaszczak phantom. Each of the data sets was reconstructed using the analytical filtered back projection (FBP) algorithm, the iterative ordered subsets expectation maximization (OSEM) algorithm (4 iterations, 24 subsets) and the iterative True-X algorithm incorporating a specific point spread function (PSF) correction (4 iterations, 21 subsets). Baseline OSEM (2 iterations, 8 subsets) was included for comparison. Procedures were undertaken following the National Electrical Manufacturers Association (NEMA) NU-2-2001 protocol. Results Measured spatial resolution in full width at half maximum (FWHM) was 5.2 mm, 4.5 mm and 2.9 mm for FBP, OSEM and True-X, and 5.1 mm, 4.5 mm and 2.9 mm for FBP+TOF, OSEM+TOF and True-X+TOF, respectively. Assessment of reconstructed Jaszczak images at different concentration ratios showed that incorporation of TOF information improves cold contrast, and hot contrast only slightly; the most prominent improvement, however, was seen in background variability, i.e., noise reduction. Conclusions On the basis of these results we conclude that incorporating TOF information in the reconstruction algorithm mainly reduces background variability (image noise), while the improvement in spatial resolution due to TOF information is negligible. Comparison of traditional and modern reconstruction algorithms showed that analytical FBP yields comparable results for some parameters, such as cold contrast and relative count error. Iterative methods show the highest levels of hot contrast when TOF and PSF corrections were applied.

  19. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm of Ren et al., without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036

  20. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
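
    To illustrate the core idea, the following is a minimal sketch of linearly stretching or contracting a baseline per-period resource distribution to a new schedule length while preserving the total estimate; the PACE card format and work breakdown structure details are not modeled, and the sample profile is hypothetical.

```python
import numpy as np

def redistribute(baseline, new_periods):
    """Resample `baseline` (cost per period) onto `new_periods` periods, keeping total cost unchanged."""
    old = np.asarray(baseline, dtype=float)
    cum = np.concatenate([[0.0], np.cumsum(old)])        # cumulative cost vs. normalized schedule time
    old_t = np.linspace(0.0, 1.0, len(old) + 1)
    new_t = np.linspace(0.0, 1.0, new_periods + 1)
    new_cum = np.interp(new_t, old_t, cum)               # stretch/contract the cumulative curve
    return np.diff(new_cum)                              # back to per-period costs

baseline = [10, 20, 40, 20, 10]                          # 5-period baseline profile, total = 100
stretched = redistribute(baseline, 8)                    # expanded to an 8-period schedule
print(stretched, stretched.sum())                        # total remains 100
```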

  1. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  2. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  3. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  4. Aerodynamic optimum design of transonic turbine cascades using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Li, Jun; Feng, Zhenping; Chang, Jianzhong; Shen, Zuda

    1997-06-01

    This paper presents an aerodynamic optimum design method for transonic turbine cascades based on Genetic Algorithms coupled to an inviscid Euler flow solver and a boundary-layer calculation. The Genetic Algorithms control the evolution of a population of cascades towards an optimum design. The fitness value of each string is evaluated using the flow solver. The design procedure has been developed and the behavior of the genetic algorithms has been tested. The objective functions of the design examples are the minimum mean-square deviation between the target pressure and the computed pressure and the minimum amount of user expertise.

  5. Algorithm implementation on the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Zang, Thomas A.

    1987-01-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  6. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
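
    As a hedged usage illustration, the PSLQ routine shipped in the mpmath Python library (an independent implementation, not the authors' code) recovers the known relation behind phi^2 = phi + 1 for the golden ratio phi:

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50                              # work at 50 decimal digits of precision
phi = (1 + sqrt(5)) / 2                  # golden ratio, satisfies phi^2 = phi + 1
relation = pslq([mpf(1), phi, phi**2])   # find integers a with a0 + a1*phi + a2*phi^2 = 0
print(relation)                          # e.g. [1, 1, -1]
```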

  7. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  8. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
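
    As a hedged usage sketch, FLANN is commonly accessed through OpenCV's FlannBasedMatcher; the index parameters below (a randomized k-d forest with 8 trees, 64 checks) and the random descriptors are illustrative assumptions, not settings from the paper.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
database = rng.random((10000, 128)).astype(np.float32)   # e.g. SIFT-like descriptors
queries = rng.random((5, 128)).astype(np.float32)

index_params = dict(algorithm=1, trees=8)    # algorithm=1 selects the k-d tree index in OpenCV's FLANN binding
search_params = dict(checks=64)              # number of leaves visited: speed/accuracy trade-off
matcher = cv2.FlannBasedMatcher(index_params, search_params)
matches = matcher.knnMatch(queries, database, k=2)

for m, n in matches:
    print(m.trainIdx, m.distance, n.distance)  # nearest and second-nearest neighbour per query
```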

  9. Dynamic alarm response procedures

    SciTech Connect

    Martin, J.; Gordon, P.; Fitch, K.

    2006-07-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache, IIS, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape, Microsoft Internet Explorer, Mozilla Firefox, Opera, and others. (authors)

  10. Definition of "experimental procedures".

    PubMed

    2009-11-01

    This Practice Committee Opinion provides a revised definition of "experimental procedures." This version replaces the document "Definition of Experimental" that was published most recently in November 2008. PMID:19836733

  11. Common Interventional Radiology Procedures

    MedlinePlus

    ... of common interventional techniques is below. Common Interventional Radiology Procedures Angiography An X-ray exam of the ... into the vertebra.

  12. Cardiac ablation procedures

    MedlinePlus

    ... Accessory pathway, such as Wolff-Parkinson-White Syndrome Atrial fibrillation and atrial flutter Ventricular tachycardia ... consensus statement on catheter and surgical ablation of atrial fibrillation: ... for personnel, policy, procedures and follow-up. ...

  13. Enhanced Sea Ice Concentration and Ice Temperature Algorithms for AMSR

    NASA Technical Reports Server (NTRS)

    Comiso, Josefino C.; Manning, Will; Gersten, Robert

    1998-01-01

    Accurate quantification of sea ice concentration and ice temperature from satellite passive microwave data is important because they provide the only long term, spatially detailed and consistent data set needed to study the climatology of the polar regions. Sea ice concentration data are used to derive large-scale daily ice extents that are utilized in trend analysis of the global sea ice cover. They are also used to quantify the amount of open water and thin ice in polynya and divergence regions which, together with ice temperatures, are in turn needed to estimate vertical heat and salinity fluxes in these regions. Sea ice concentrations have been derived from the NASA Team and Bootstrap algorithms, while a separate technique for deriving ice temperature has been reported. An integrated technique that utilizes most of the channels of AMSR (Advanced Microwave Scanning Radiometer) has been developed. The technique uses data from the 6 GHz and 37 GHz channels at vertical polarization to obtain an initial estimate of sea ice concentration and ice temperature. The derived ice temperature is then utilized to estimate the emissivities for the corresponding observations at all the other channels. A procedure for calculating the ice concentration similar to the Bootstrap technique is then used, but with emissivities as the variables instead of brightness temperatures, to minimize errors associated with spatial changes in ice temperatures within the ice pack. Comparative studies of the ice concentration results with those from other algorithms, including the original Bootstrap algorithm, and with those from high resolution satellite visible and infrared data will be presented. Also, results from a simulation study that demonstrates the effectiveness of the technique in correcting for spatial variations in ice temperatures will be shown. The ice temperature results are likewise compared with satellite infrared and buoy data, with the latter adjusted to account for the effects of the snow

  14. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  15. Automatic Procedure for the Registration of Thermographic Images with Point Clouds

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; Armesto, J.; Arias, P.; Zakhor, A.

    2012-07-01

    This paper presents a procedure for the automatic registration of thermographies with laser scanning point clouds. Given the heterogeneous nature of the two modalities, we propose a feature-based approach, satisfying the requirement that the extracted features be invariant not only to rotation, translation and scale, but also to changes in illumination and dimensionality. As speed and minimum operator interaction are prerequisites for the viability of the process in the building industry, our automatic registration procedure includes automatic feature extraction with no human intervention. With this aim, a line segment detector is used to extract 2D lines from thermographies, and 3D lines are extracted through segmentation of the point cloud. Feature matching and the relative pose between the thermographies and the point cloud are obtained from an iterative procedure that detects and rejects outliers; this includes calculation of the rotation matrix and translation vector and application of the RANSAC algorithm to find a consistent set of matches. An automatically textured thermographic 3D model is the expected result of these procedures once the point cloud is filtered and triangulated.
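
    To illustrate the outlier-rejection step, the following is a minimal sketch of the generic RANSAC idea applied to rigid pose estimation from candidate point correspondences: repeatedly fit a rotation and translation to a random minimal sample and keep the model with the most inliers. Feature extraction and the 2D-line / 3D-line specifics of the paper are not modeled, and the synthetic correspondences are illustrative.

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch algorithm)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)          # minimal sample
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers

# Synthetic correspondences: a rotation about z plus translation, with 20 outliers.
rng = np.random.default_rng(1)
P = rng.random((100, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])
Q[:20] += rng.random((20, 3))
(R, t), n_in = ransac_pose(P, Q)
print(n_in)                           # about 80 inliers recovered
```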

  16. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
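
    For context, the following is a minimal sketch of the classic sequential greedy 1/2-approximation for weighted matching (sort edges by weight, take an edge if both endpoints are still free); it is the kind of baseline the paper's faster multithreaded algorithms compete with, and the tiny example graph is illustrative.

```python
def greedy_matching(edges):
    """edges: iterable of (weight, u, v). Returns a list of matched (u, v) pairs."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):   # consider heaviest edges first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

edges = [(5.0, 'a', 'b'), (4.0, 'b', 'c'), (3.0, 'c', 'd'), (6.0, 'a', 'd')]
print(greedy_matching(edges))   # [('a', 'd'), ('b', 'c')], total weight 10
```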

  17. 40 CFR 86.1235-96 - Dynamometer procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Emission Test Procedures for New Gasoline-Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1235-96 Dynamometer procedure. Section 86.1235-96 includes...

  18. Algebraic Procedures Used by 13-to-15-Year-Olds.

    ERIC Educational Resources Information Center

    Demby, Agnieszka

    1997-01-01

    Investigates different types of procedures used by students (N=108) to simplify certain algebraic expressions. Findings indicate seven types of procedures including automatization, formulas, guessing-substituting, preparatory modification, concretization, rules, and quasi-rules. Contains 30 references. (JRH)

  19. Automated training for algorithms that learn from genomic data.

    PubMed

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms which thereby can become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable. PMID:25695053

  20. Designing Integrated Fuzzy Guidance Law for Aerodynamic Homing Missiles Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Omar, Hanafy M.

    The fuzzy logic controller (FLC) is well known for its robustness to parameter variations and its ability to reject noise. However, its design requires the definition of many parameters. This work proposes a systematic and simple procedure to develop an integrated fuzzy-based guidance law which consists of three FLCs, each activated in a different region of the interception. Another fuzzy-based switching system is introduced to allow smooth transitions between these controllers. The parameters of all the fuzzy controllers, which include the distribution of the membership functions and the rules, are obtained simply by observing the function of each controller. Furthermore, these parameters are tuned by genetic algorithms by solving an optimization problem to minimize the interception time, missile acceleration commands, and miss distance. The simulation results show that the proposed procedure can generate a guidance law with satisfactory performance.

  1. A fuzzy finite element procedure for the calculation of uncertain frequency-response functions of damped structures: Part 1—Procedure

    NASA Astrophysics Data System (ADS)

    Moens, David; Vandepitte, Dirk

    2005-12-01

    This work introduces a numerical algorithm to calculate frequency-response functions (FRFs) of damped finite element (FE) models with fuzzy uncertain parameters. Part one of this paper describes the numerical algorithm for the solution of the underlying interval finite element (IFE) problem. First, the IFE procedure for the calculation of undamped envelope FRFs is discussed. Starting from the undamped procedure, a strategy is developed to analyse damped structures based on the principle of Rayleigh damping. This is achieved by analysing the effect of the proportional damping coefficients on the subsequent steps of the undamped procedure. This finally results in a procedure for the calculation of fuzzy damped FRFs based on an analytical extension of the undamped algorithm. Part one of this paper introduces the numerical procedure. Part two of this paper illustrates the application of the methodology on four numerical case studies.

  2. Full potential unsteady computations including aeroelastic effects

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Ide, Hiroshi

    1989-01-01

    A unified formulation is presented based on the full potential framework coupled with an appropriate structural model to compute steady and unsteady flows over rigid and flexible configurations across the Mach number range. The unsteady form of the full potential equation in conservation form is solved using an implicit scheme maintaining time accuracy through internal Newton iterations. A flux biasing procedure based on the unsteady sonic reference conditions is implemented to compute hyperbolic regions with moving sonic and shock surfaces. The wake behind a trailing edge is modeled using a mathematical cut across which the pressure is satisfied to be continuous by solving an appropriate vorticity convection equation. An aeroelastic model based on the generalized modal deflection approach interacts with the nonlinear aerodynamics and includes both static as well as dynamic structural analyses capability. Results are presented for rigid and flexible configurations at different Mach numbers ranging from subsonic to supersonic conditions. The dynamic response of a flexible wing below and above its flutter point is demonstrated.

  3. The selection of optimal ICA algorithm parameters for robust AEP component estimates using 3 popular ICA algorithms.

    PubMed

    Castañeda-Villa, N; James, C J

    2008-01-01

    Many authors have used Auditory Evoked Potential (AEP) recordings to evaluate the performance of their ICA algorithms and have demonstrated that this procedure can remove the typical EEG artifacts in these recordings (i.e. blinking, muscle noise, line noise, etc.). However, there is little work in the literature about the optimal parameters of each of those algorithms for the estimation of the AEP components, i.e. for reliably recovering both the auditory response and the specific artifacts generated by the normal function of a Cochlear Implant (CI), used for the rehabilitation of deaf people. In this work we determine the optimal parameters of three ICA algorithms, each based on a different independence criterion, and assess the resulting estimates of both the auditory response and the CI artifact. We show that an algorithm utilizing temporal structure, such as TDSEP-ICA, is better at estimating the components of the auditory response, in recordings contaminated by CI artifacts, than higher-order-statistics-based algorithms. PMID:19163893
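
    As a hedged, generic usage sketch, one higher-order-statistics ICA (scikit-learn's FastICA) separating synthetic mixed signals is shown below; this is an illustration of ICA source separation in general, not the AEP/CI pipeline or the TDSEP-ICA algorithm evaluated in the paper, and the signals and mixing matrix are made up.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 7 * t),                # smooth "response"-like component
                np.sign(np.sin(2 * np.pi * 50 * t))]      # square-wave "artifact"-like component
mixing = rng.random((4, 2))                               # 4 simulated recording channels
observed = sources @ mixing.T + 0.05 * rng.standard_normal((2000, 4))

ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(observed)                   # estimated independent components
print(estimated.shape)                                    # (2000, 2)
```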

  4. An Eligibility Determination Algorithm for Part C Early Intervention Enrollment. TRACE Practice Guide, Volume 1, Number 1

    ERIC Educational Resources Information Center

    Dunst, Carl J.

    2006-01-01

    Procedures for using a decision algorithm for determining whether an infant or toddler is eligible for Part C early intervention is the focus of this eligibility determination practice guideline. An algorithm is a step-by-step problem-solving procedure or decision-making process that results in a solution or accurate decision in a finite number of…

  5. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  6. Continuation of advanced crew procedures development techniques

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Evans, M. E.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.; Tatum, I. C.

    1976-01-01

    An operational computer program, the Procedures and Performance Program (PPP), which operates in conjunction with the Phase I Shuttle Procedures Simulator to provide a procedures recording and crew/vehicle performance monitoring capability, was developed. A technical synopsis of each task resulting in the development of the Procedures and Performance Program is provided. Conclusions and recommendations for actions leading to improvements in the production of crew procedures and in crew training support are included. The PPP provides real-time CRT displays and post-run hardcopy output of procedures, difference procedures, performance data, parametric analysis data, and training script/training status data. During post-run, the program is designed to support evaluation through the reconstruction of displays to any point in time. A permanent record of the simulation exercise can be obtained via hardcopy output of the display data and via transfer to the Generalized Documentation Processor (GDP). Reference procedures data may be transferred from the GDP to the PPP. Interface is provided with the all-digital trajectory program, the Space Vehicle Dynamics Simulator (SVDS), to support initial procedures timeline development.

  7. Advances in Procedural Techniques - Antegrade

    PubMed Central

    Wilson, William; Spratt, James C.

    2014-01-01

    There have been many technological advances in antegrade CTO PCI, but perhaps most importantly has been the evolution of the 'hybrid' approach where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use as well as procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms, which have been developed to guide CTO operators, allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  8. Testing Intelligently Includes Double-Checking Wechsler IQ Scores

    ERIC Educational Resources Information Center

    Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas

    2011-01-01

    The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…

  9. 12 CFR 516.120 - What information should a comment include?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 CFR § 516.120 (Banks and Banking; Application Processing Procedures; Comment Procedures): What information should a comment include? (a) A comment should recite relevant facts, including any demographic, economic, or financial...

  10. Simulations of Dynamical Friction Including Spatially-Varying Magnetic Fields

    SciTech Connect

    Bell, G. I.; Bruhwiler, D. L.; Busby, R.; Abell, D. T.; Messmer, P.; Veitzer, S.; Litvinenko, V. N.; Cary, J. R.

    2006-03-20

    A proposed luminosity upgrade to the Relativistic Heavy Ion Collider (RHIC) includes a novel electron cooling section, which would use approximately 55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. We consider the dynamical friction force exerted on individual ions due to a relevant electron distribution. The electrons may be focussed by a strong solenoid field, with sensitive dependence on errors, or by a wiggler field. In the rest frame of the relativistic co-propagating electron and ion beams, where the friction force can be simulated for nonrelativistic motion and electrostatic fields, the Lorentz transform of these spatially-varying magnetic fields includes strong, rapidly-varying electric fields. Previous friction force simulations for unmagnetized electrons or error-free solenoids used a 4th-order Hermite algorithm, which is not well-suited for the inclusion of strong, rapidly-varying external fields. We present here a new algorithm for friction force simulations, using an exact two-body collision model to accurately resolve close interactions between electron/ion pairs. This field-free binary-collision model is combined with a modified Boris push, using an operator-splitting approach, to include the effects of external fields. The algorithm has been implemented in the VORPAL code and successfully benchmarked.
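
    The binary-collision half of the split handles close electron/ion encounters exactly; the external-field half uses a Boris-type push. The sketch below shows a standard (unmodified) Boris step for one particle in given E and B fields, purely to illustrate that part of the operator splitting; it is not the VORPAL implementation, and all values are illustrative.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One standard Boris step: half electric kick, magnetic rotation,
    half electric kick, then a position drift. Nonrelativistic, SI units."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                      # first half acceleration
    t = qmdt2 * B                                # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # rotated velocity
    v_new = v_plus + qmdt2 * E                   # second half acceleration
    x_new = x + v_new * dt                       # position drift
    return x_new, v_new

# Illustrative electron gyrating in a uniform solenoidal field.
x = np.zeros(3); v = np.array([1e5, 0.0, 0.0])
E = np.zeros(3); B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    x, v = boris_push(x, v, E, B, q=-1.602e-19, m=9.109e-31, dt=1e-13)
print(x, v)
```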

  11. Staggered solution procedures for multibody dynamics simulation

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.; Downer, J. D.

    1990-01-01

    The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD system in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles by employing a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange
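
    A minimal sketch of the rotational update described above, assuming the body-frame angular velocity is already available at the mid-step from the explicit stage: the Euler parameters (a unit quaternion) are advanced with a mid-point evaluation of the quaternion kinematics, solved here by a short fixed-point iteration and then renormalized. This illustrates the idea only; it is not the paper's exact staggered scheme.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product a*b of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def midpoint_quat_update(q, omega_mid, dt, iters=3):
    """Advance Euler parameters q using the body-frame angular velocity at
    the mid-step: q_{n+1} = q_n + dt * 0.5 * q_mid * [0, omega_mid],
    solved by fixed-point iteration and renormalized to unit length."""
    w_quat = np.concatenate(([0.0], omega_mid))
    q_new = q.copy()
    for _ in range(iters):
        q_mid = 0.5 * (q + q_new)
        q_new = q + dt * 0.5 * quat_mult(q_mid, w_quat)
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
omega_mid = np.array([0.0, 0.0, 0.1])       # rad/s about the body z axis
print(midpoint_quat_update(q, omega_mid, dt=0.01))
```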

  12. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm allows any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
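
    The record gives no implementation detail; the sketch below shows one common way such a follow behavior is realized, steering toward the tracked target while adding a repulsive term for nearby obstacles (a simple potential-field scheme). All names and gains are hypothetical.

```python
import numpy as np

def follow_step(robot_pos, target_pos, obstacles, attract_gain=1.0,
                repulse_gain=0.5, influence_radius=2.0, max_speed=1.0):
    """Return a velocity command that pursues the target while reactively
    veering around obstacles inside the influence radius."""
    cmd = attract_gain * (target_pos - robot_pos)            # pull toward target
    for obs in obstacles:
        diff = robot_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence_radius:                      # push away from close obstacles
            cmd += repulse_gain * (1.0 / d - 1.0 / influence_radius) * diff / d**2
    speed = np.linalg.norm(cmd)
    if speed > max_speed:                                    # respect a speed limit
        cmd *= max_speed / speed
    return cmd

cmd = follow_step(np.array([0.0, 0.0]), np.array([5.0, 1.0]),
                  obstacles=[np.array([2.0, 0.2])])
print(cmd)
```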

  13. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  14. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to the higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparisons of results to the binary case are provided. PMID:10021767

  15. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  16. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
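
    The report describes a C implementation; for reference only, computing a SHA-1 digest with a standard library looks like the following (Python's hashlib shown here, which is unrelated to the paper's code). The string "abc" is the well-known SHA-1 test vector.

```python
import hashlib

digest = hashlib.sha1(b"abc").hexdigest()
print(digest)  # expected: a9993e364706816aba3e25717850c26c9cd0d89d
```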

  17. CHASTE: incorporating a novel multi-scale spatial and temporal algorithm into a large-scale open source library.

    PubMed

    Bernabeu, Miguel O; Bordas, Rafel; Pathmanathan, Pras; Pitt-Francis, Joe; Cooper, Jonathan; Garny, Alan; Gavaghan, David J; Rodriguez, Blanca; Southern, James A; Whiteley, Jonathan P

    2009-05-28

    Recent work has described the software engineering and computational infrastructure that has been set up as part of the Cancer, Heart and Soft Tissue Environment (CHASTE) project. CHASTE is an open source software package that currently has heart and cancer modelling functionality. This software has been written using a programming paradigm imported from the commercial sector and has resulted in a code that has been subject to a far more rigorous testing procedure than is usual in this field. In this paper, we explain how new functionality may be incorporated into CHASTE. Whiteley has developed a numerical algorithm for solving the bidomain equations that uses the multi-scale (MS) nature of the physiology modelled to enhance computational efficiency. Using a simple geometry in two dimensions and a purpose-built code, this algorithm was reported to give an increase in computational efficiency of more than two orders of magnitude. In this paper, we begin by reviewing numerical methods currently in use for solving the bidomain equations, explaining how these methods may be developed to use the MS algorithm discussed above. We then demonstrate the use of this algorithm within the CHASTE framework for solving the monodomain and bidomain equations in a three-dimensional realistic heart geometry. Finally, we discuss how CHASTE may be developed to include new physiological functionality, such as modelling a beating heart and fluid flow in the heart, and how new algorithms aimed at increasing the efficiency of the code may be incorporated. PMID:19380318

  18. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
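
    A minimal sketch of the variable-length representation and the cut-and-splice recombination that distinguish messy GAs from fixed-coding simple GAs; the encoding of genes as (locus, allele) pairs and the first-come-first-served expression against a competitive template follow the general mGA idea, while all parameter values are illustrative.

```python
import random

# A messy chromosome is a variable-length list of (locus, allele) pairs;
# loci may be missing (underspecified) or repeated (overspecified).
def random_messy_chromosome(n_loci, length):
    return [(random.randrange(n_loci), random.randint(0, 1)) for _ in range(length)]

def cut_and_splice(parent_a, parent_b, cut_prob=0.5):
    """Cut each parent at a random point (with some probability) and splice
    the pieces, producing variable-length offspring."""
    def cut(chrom):
        if len(chrom) > 1 and random.random() < cut_prob:
            point = random.randrange(1, len(chrom))
            return chrom[:point], chrom[point:]
        return chrom, []
    a1, a2 = cut(parent_a)
    b1, b2 = cut(parent_b)
    return a1 + b2, b1 + a2

def express(chromosome, n_loci, template=0):
    """First-come-first-served expression: earlier genes win ties;
    unspecified loci fall back to a competitive template value."""
    bits = [None] * n_loci
    for locus, allele in chromosome:
        if bits[locus] is None:
            bits[locus] = allele
    return [b if b is not None else template for b in bits]

p1 = random_messy_chromosome(8, 6)
p2 = random_messy_chromosome(8, 6)
c1, c2 = cut_and_splice(p1, p2)
print(express(c1, 8), express(c2, 8))
```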

  19. Application of a fast and efficient algorithm to assess landslide-prone areas in sensitive clays in Sweden

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Tryggvason, A.

    2015-12-01

    We refine and test an algorithm for landslide susceptibility assessment in areas with sensitive clays. The algorithm uses soil data and digital elevation models to identify areas which may be prone to landslides and has been applied in Sweden for several years. The algorithm is very computationally efficient and includes an intelligent filtering procedure for identifying and removing small-scale artifacts in the hazard maps produced. Where information on bedrock depth is available, this can be included in the analysis, as can information on several soil-type-based cross-sectional angle thresholds for slip. We evaluate how processing choices such as filtering parameters, local cross-sectional angle thresholds, and inclusion of bedrock depth information affect model performance. The specific cross-sectional angle thresholds used were derived by analyzing the relationship between landslide scarps and the quick-clay susceptibility index (QCSI). We tested the algorithm in the Göta River valley. Several different verification measures were used to compare results with observed landslides and thereby identify the optimal algorithm parameters. Our results show that even though a relationship between the cross-sectional angle threshold and the QCSI could be established, no significant improvement of the overall modeling performance could be achieved by using these geographically specific, soil-based thresholds. Our results indicate that lowering the cross-sectional angle threshold from 1 : 10 (the general value used in Sweden) to 1 : 13 improves results slightly. We also show that an application of the automatic filtering procedure that removes areas initially classified as prone to landslides not only removes artifacts and makes the maps visually more appealing, but also improves the model performance.
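
    A minimal sketch of the core screening step, assuming the cross-sectional angle is approximated by the local slope ratio computed from a gridded DEM; the Swedish algorithm's filtering step and QCSI handling are not reproduced, and the grid, cell size, and threshold values are illustrative.

```python
import numpy as np

def slope_ratio(dem, cell_size):
    """Local slope expressed as rise over run (e.g. 0.1 corresponds to 1:10)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.hypot(dz_dx, dz_dy)

def flag_prone_cells(dem, cell_size, threshold_ratio=1.0 / 13.0):
    """Flag cells whose slope exceeds the cross-sectional angle threshold
    (1:13 here, versus the general Swedish value of 1:10)."""
    return slope_ratio(dem, cell_size) > threshold_ratio

# Tiny synthetic DEM (metres), 10 m cells, dropping 50 m across the grid.
dem = np.outer(np.linspace(50.0, 0.0, 50), np.ones(50))
print(flag_prone_cells(dem, cell_size=10.0).sum(), "cells flagged")
```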

  20. The Applications of Genetic Algorithms in Medicine.

    PubMed

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of sciences. In medicine, however, the use of these algorithms is not known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  1. The Applications of Genetic Algorithms in Medicine

    PubMed Central

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-01-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of sciences. In medicine, however, the use of these algorithms is not known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  2. Advance crew procedures development techniques: Procedures generation program requirements document

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.

    1974-01-01

    The Procedures Generation Program (PGP) is described as an automated crew procedures generation and performance monitoring system. Computer software requirements to be implemented in PGP for the Advanced Crew Procedures Development Techniques are outlined.

  3. New algorithms for ring artifact removal

    NASA Astrophysics Data System (ADS)

    Ketcham, Richard A.

    2006-08-01

    This paper describes a set of algorithms that enable virtually complete ring artifact removal from tomographic imagery with minimal to negligible contamination of the underlying data. These procedures were created specifically to deal with data as acquired at the University of Texas high-resolution X-ray CT facility, but are likely to be applicable in other settings as well. In most cases corrections are optimally applied to sinogram data before reconstruction, but a variant is developed for correcting already-reconstructed images. The algorithms make particular use of repetitive aspects of the artifact across images to improve behavior. However, fully utilizing this functionality requires processing entire data sets simultaneously, rather than one image at a time. A number of parameters may be adjusted to optimize results for particular data sets.
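
    A minimal sketch of one common sinogram-domain correction in the same spirit: ring artifacts appear as stripes that are constant across projection angles, so the deviation of each detector column's mean from a smoothed version of that mean profile is subtracted. This is a generic technique for illustration, not necessarily the exact algorithms of the paper, and the synthetic sinogram is hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_stripes(sinogram, smooth_width=9):
    """Suppress ring artifacts by removing per-detector offsets.

    sinogram: 2D array (n_angles, n_detectors). A stripe that persists over
    all angles shows up as a bias in the column mean; subtracting the
    difference between the raw and median-smoothed column means removes it
    while leaving slowly varying (real) structure largely untouched.
    """
    col_mean = sinogram.mean(axis=0)
    smooth = median_filter(col_mean, size=smooth_width, mode="nearest")
    return sinogram - (col_mean - smooth)[np.newaxis, :]

# Synthetic example: smooth sinogram plus a one-detector stripe.
angles, dets = 180, 128
sino = np.sin(np.linspace(0, np.pi, dets))[np.newaxis, :] * np.ones((angles, 1))
sino[:, 64] += 0.3                         # simulated defective detector element
corrected = remove_stripes(sino)
print(abs(corrected[:, 64] - sino[:, 63]).max())
```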

  4. Parallel Implementation of Katsevich's FBP Algorithm

    PubMed Central

    Guo, Xiaohu; Kong, Qiang; Zhou, Tie; Jiang, Ming

    2006-01-01

    For spiral cone-beam CT, parallel computing is an effective approach to resolving the problem of heavy computation burden. It is well known that the major computation time is spent in the backprojection step for either filtered-backprojection (FBP) or backprojected-filtration (BPF) algorithms. By the cone-beam cover method [1], the backprojection procedure is driven by cone-beam projections, and every cone-beam projection can be backprojected independently. Based on this fact, we develop a parallel implementation of Katsevich's FBP algorithm. We do all the numerical experiments on a Linux cluster. In one typical experiment, the sequential reconstruction time is 781.3 seconds, while the parallel reconstruction time is 25.7 seconds with 32 processors. PMID:23165019

  5. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than by direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
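
    A minimal sketch of the FFT-convolution idea, using the Fresnel approximation in its transfer-function form; it does not include the authors' extensions for tilted or off-axis shifted output planes, and all parameter values are illustrative.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a sampled complex field a distance z using the Fresnel
    transfer function applied in the Fourier domain (convolution theorem)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2.0 * np.pi / wavelength
    # Fresnel transfer function H(fx, fy) = exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Square aperture illuminated by a unit plane wave (values illustrative).
n, dx, wavelength, z = 512, 10e-6, 633e-9, 0.1
aperture = np.zeros((n, n), dtype=complex)
aperture[n//2 - 25:n//2 + 25, n//2 - 25:n//2 + 25] = 1.0
intensity = np.abs(fresnel_propagate(aperture, wavelength, z, dx))**2
print(intensity.max())
```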

  6. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.

  7. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation. PMID:19272859

  8. Towards an automatic coronary artery segmentation algorithm.

    PubMed

    Fallavollita, Pascal; Cheriet, Farida

    2006-01-01

    A method is presented that aims at minimizing image processing time during X-ray fluoroscopy interventions. First, an automatic frame extraction algorithm is proposed in order to extract relevant image frames with respect to their cardiac phase (systole or diastole). Secondly, a 4-step filter is suggested in order to enhance vessel contours. The reciprocal of the enhanced image is used as an alternative speed function to initialize the fast marching method. The complete algorithm was tested on eight clinical angiographic data sets and comparisons with two other vessel enhancement filters (Lorenz and Frangi) are made for the centerline extraction procedure. In order to assess the suitability of our filter the extracted centerline coordinates are compared with the manually traced axis. PMID:17946540

  9. Mobile Energy Laboratory Procedures

    SciTech Connect

    Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.

    1993-09-01

    Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.

  10. Procedural learning and dyslexia.

    PubMed

    Nicolson, R I; Fawcett, A J; Brookes, R L; Needle, J

    2010-08-01

    Three major 'neural systems', specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed (Trends Neurosci., 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief overview of the increasing evidence relating to the hypothesis, noting that the framework involves two main claims: first, that 'neural systems' provides a productive level of description, avoiding the underspecificity of cognitive descriptions and the overspecificity of brain structural accounts; and second, that a distinctive feature of procedural learning is its extended time course, covering from minutes to months. In this article, we focus on the second claim. Three studies (speeded single word reading, long-term response learning, and overnight skill consolidation) are reviewed which together provide clear evidence of difficulties in procedural learning for individuals with dyslexia, even when the tasks are outside the literacy domain. The educational implications of the results are then discussed, and in particular the potential difficulties that impaired overnight procedural consolidation would entail. It is proposed that response to intervention could be better predicted if diagnostic tests on the different forms of learning were first undertaken. PMID:20680991

  11. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
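
    A minimal sketch of the diamond-square algorithm mentioned above: seeding the four corner values steers the broad shape of the terrain, while a roughness parameter controls how quickly the random displacement decays at finer scales. All parameter values are illustrative.

```python
import numpy as np

def diamond_square(n, roughness=0.6, corners=(0.0, 0.0, 0.0, 0.0), seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap by diamond-square."""
    rng = np.random.default_rng(seed)
    size = 2**n + 1
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = corners   # seed the corners
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: the centre of each square gets the corner average plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half, x - half] + h[y - half, x + half] +
                       h[y + half, x - half] + h[y + half, x + half]) / 4.0
                h[y, x] = avg + rng.uniform(-scale, scale)
        # Square step: each edge midpoint gets the average of its existing neighbours.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neigh = []
                if y - half >= 0:
                    neigh.append(h[y - half, x])
                if y + half < size:
                    neigh.append(h[y + half, x])
                if x - half >= 0:
                    neigh.append(h[y, x - half])
                if x + half < size:
                    neigh.append(h[y, x + half])
                h[y, x] = np.mean(neigh) + rng.uniform(-scale, scale)
        step //= 2
        scale *= roughness          # shrink the displacement at finer scales
    return h

terrain = diamond_square(6, roughness=0.55, corners=(0.0, 5.0, 5.0, 10.0))
print(terrain.shape, terrain.min(), terrain.max())
```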

  12. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
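
    A minimal sketch of the sampling idea described above: run Lloyd's iterations on a random subsample, then assign the full dataset to the resulting centers. This is a generic illustration rather than the authors' code, and the sample size, k, and synthetic data are arbitrary.

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Plain Lloyd's algorithm (illustrative, no convergence test)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def sampled_kmeans(data, k, sample_size=1000, seed=0):
    """Cluster a subsample, then label the full dataset with those centers."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=min(sample_size, len(data)), replace=False)
    centers = kmeans(data[idx], k, seed=seed)
    labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

# Synthetic two-cluster data to exercise the sketch.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (5000, 2)), rng.normal(6, 1, (5000, 2))])
centers, labels = sampled_kmeans(data, k=2)
print(centers)
```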

  13. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from being definitive and it rapidly evolves, and some erroneous annotations may be present. Since the curation process of novel annotations is a costly procedure, both in economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and we propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose the improvement of these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of the genes of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability in predicting novel gene annotations, and the weighting procedures were demonstrated to lead to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
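
    A minimal sketch of the latent semantic indexing step on a toy gene-by-term annotation matrix: a truncated SVD gives a low-rank reconstruction, and the largest scores among currently unannotated pairs are candidate novel annotations. The matrix and rank are illustrative; the weighting schemes and the clustering step of the Semantic IMproved variant are omitted.

```python
import numpy as np

# Toy binary gene-by-term annotation matrix (rows: genes, cols: GO terms).
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

def lsi_scores(annotations, rank=2):
    """Low-rank reconstruction of the annotation matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(annotations, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

scores = lsi_scores(A, rank=2)
# Candidate novel annotations: unannotated (gene, term) pairs with the highest scores.
candidates = [(g, t, scores[g, t]) for g in range(A.shape[0])
              for t in range(A.shape[1]) if A[g, t] == 0]
for gene, term, score in sorted(candidates, key=lambda c: -c[2])[:3]:
    print(f"gene {gene} -> term {term}: score {score:.2f}")
```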

  14. Reasoning about procedural knowledge

    NASA Technical Reports Server (NTRS)

    Georgeff, M. P.

    1985-01-01

    A crucial aspect of automated reasoning about space operations is that knowledge of the problem domain is often procedural in nature - that is, the knowledge is often in the form of sequences of actions or procedures for achieving given goals or reacting to certain situations. In this paper a system is described that explicitly represents and reasons about procedural knowledge. The knowledge representation used is sufficiently rich to describe the effects of arbitrary sequences of tests and actions, and the inference mechanism provides a means for directly using this knowledge to reach desired operational goals. Furthermore, the representation has a declarative semantics that provides for incremental changes to the system, rich explanatory capabilities, and verifiability. The approach also provides a mechanism for reasoning about the use of this knowledge, thus enabling the system to choose effectively between alternative courses of action.

  15. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 32, 3986 (1989)] with chain lengths N=16, 18, and 32.

  16. Operational Implementation of Space Debris Mitigation Procedures

    NASA Astrophysics Data System (ADS)

    Gicquel, Anne-Helene; Bonaventure, Francois

    2013-08-01

    During the spacecraft lifetime, Astrium supports its customers in managing collision risk alerts from the Joint Space Operations Center (JSpOC). This was previously done with hot-line support and a manual operational procedure. Today, it is automated and integrated in QUARTZ, the Astrium Flight Dynamics operational tool. The algorithms and process details for this new 5-step functionality are provided in this paper. To improve this functionality, some R&D activities are ongoing, such as the study of the dilution phenomenon and of low relative velocity encounters. Regarding end-of-life disposal, recent operational experience as well as study results are presented.

  17. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
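
    A minimal sketch of a hybrid (memetic) genetic algorithm on a toy bitstring problem: each offspring is improved by a simple hill-climbing local search before it enters the population. The objective and parameters are illustrative, not the geometric model-matching application discussed in the presentation.

```python
import random

TARGET_LEN = 40

def fitness(bits):                       # toy objective: count of ones
    return sum(bits)

def hill_climb(bits, tries=20):
    """Local search: accept single-bit flips that improve fitness."""
    best = bits[:]
    for _ in range(tries):
        i = random.randrange(len(best))
        candidate = best[:]
        candidate[i] ^= 1
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def hybrid_ga(pop_size=30, generations=50, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TARGET_LEN)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(hill_climb(child))               # local search step
        pop = children
    return max(pop, key=fitness)

best = hybrid_ga()
print(fitness(best), "of", TARGET_LEN)
```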

  18. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
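
    A minimal sketch of the bang-off-bang parameterization described above, reduced to one axis: full acceleration for a time t1, coast for t2, then full opposing acceleration until stopped, with the trajectory class searched over (t1, t2). The look-up table indexed by collision geometry in the real algorithm is not reproduced, and the numbers are illustrative.

```python
def bang_off_bang_displacement(a_max, t1, t2):
    """Displacement and peak velocity for full thrust (t1), coast (t2),
    then full opposing thrust until the velocity returns to zero."""
    v_peak = a_max * t1
    d_accel = 0.5 * a_max * t1**2
    d_coast = v_peak * t2
    d_brake = v_peak**2 / (2.0 * a_max)     # symmetric braking distance
    return d_accel + d_coast + d_brake, v_peak

# Smallest t1 (with a fixed coast time) achieving a required lateral miss distance.
a_max, t_coast, required_miss = 0.05, 10.0, 25.0   # m/s^2, s, m (illustrative)
t1 = 0.0
while bang_off_bang_displacement(a_max, t1, t_coast)[0] < required_miss:
    t1 += 0.1
print(f"t1 = {t1:.1f} s gives {bang_off_bang_displacement(a_max, t1, t_coast)[0]:.1f} m")
```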

  19. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  20. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
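
    A minimal sketch of the interface being transformed: a DEVS atomic model exposes internal and external transition functions, an output function, and a time advance, which a logical process wraps into event handling. This is a generic illustration of the DEVS structure only; the paper's Time Warp machinery (rollback, anti-messages) is not shown, and the example model is hypothetical.

```python
class Processor:
    """Toy DEVS atomic model: jobs arrive, are served one at a time,
    and a 'done' output is emitted at each internal transition."""
    INFINITY = float("inf")

    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.queue = []                      # state: pending jobs
        self.sigma = self.INFINITY           # time until the next internal event

    def time_advance(self):
        return self.sigma

    def output(self):                        # called just before delta_int
        return ("done", self.queue[0])

    def delta_int(self):                     # internal transition
        self.queue.pop(0)
        self.sigma = self.service_time if self.queue else self.INFINITY

    def delta_ext(self, elapsed, job):       # external transition on an input
        if self.queue:
            self.sigma -= elapsed            # keep working on the current job
        else:
            self.sigma = self.service_time
        self.queue.append(job)

# A logical process would repeatedly take the earlier of the next input event
# and now + time_advance(), calling delta_ext or output()/delta_int accordingly.
m = Processor()
m.delta_ext(0.0, "job-1")
print(m.time_advance())   # 2.0: the internal 'done' event is now scheduled
```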