In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high-level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
Algorithm for Video Summarization of Bronchoscopy Procedures
2011-01-01
Background: The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract caused by breathing or coughing, and secretions, which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders.
Methods: The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value.
Results: The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions.
Conclusions: The paper focuses on the
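The abstract's "non-informative" frame criterion is typically implemented as a focus measure. A minimal sketch, under the assumption that blur is scored by the variance of a Laplacian response (the paper does not specify its exact detector, so the threshold and measure here are illustrative):

```python
import numpy as np

def laplacian_variance(frame: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian; low values suggest a blurry frame."""
    f = frame.astype(float)
    lap = (-4.0 * f[1:-1, 1:-1]
           + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def informative_frames(frames, threshold=100.0):
    """Indices of frames whose focus measure exceeds a (tunable) threshold."""
    return [i for i, fr in enumerate(frames) if laplacian_variance(fr) > threshold]
```

In a full summarization pipeline this filter would run before shot detection, so that only in-focus frames are considered for the abstract.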
Using an admittance algorithm for bone drilling procedures.
Accini, Fernando; Díaz, Iñaki; Gil, Jorge Juan
2016-01-01
Bone drilling is a common procedure in many types of surgeries, including orthopedic, neurological and otologic surgeries. Several technologies and control algorithms have been developed to help the surgeon automatically stop the drill before it goes through the boundary of the tissue being drilled. However, most of them rely on thrust force and cutting torque to detect bone layer transitions which has many drawbacks that affect the reliability of the process. This paper describes in detail a bone-drilling algorithm based only on the position control of the drill bit that overcomes such problems and presents additional advantages. The implication of each component of the algorithm in the drilling procedure is analyzed and the efficacy of the algorithm is experimentally validated with two types of bones. PMID:26516110
A dynamic programming algorithm for RNA structure prediction including pseudoknots.
Rivas, E; Eddy, S R
1999-02-01
We describe a dynamic programming algorithm for predicting optimal RNA secondary structure, including pseudoknots. The algorithm has a worst case complexity of O(N^6) in time and O(N^4) in storage. The description of the algorithm is complex, which led us to adopt a useful graphical representation (Feynman diagrams) borrowed from quantum field theory. We present an implementation of the algorithm that generates the optimal minimum energy structure for a single RNA sequence, using standard RNA folding thermodynamic parameters augmented by a few parameters describing the thermodynamic stability of pseudoknots. We demonstrate the properties of the algorithm by using it to predict structures for several small pseudoknotted and non-pseudoknotted RNAs. Although the time and memory demands of the algorithm are steep, we believe this is the first algorithm to be able to fold optimal (minimum energy) pseudoknotted RNAs with the accepted RNA thermodynamic model. PMID:9925784
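The Rivas-Eddy recursion itself is too involved to reproduce here, but the non-pseudoknot baseline it extends is the classic O(N^3) base-pair-maximization dynamic program (Nussinov-style). This sketch shows that simpler recursion only, not the paper's O(N^6) gapped-matrix algorithm:

```python
def nussinov(seq: str, min_loop: int = 3) -> int:
    """Maximum number of nested (pseudoknot-free) base pairs, O(N^3) DP.

    dp[i][j] = best pair count on seq[i..j]; either j is unpaired, or j
    pairs with some k, splitting the interval into two independent parts.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                 # j left unpaired
            for k in range(i, j - min_loop):    # j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Pseudoknots break exactly the "two independent subintervals" assumption in the inner loop, which is why the full algorithm needs gapped matrices and the much steeper complexity quoted above.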
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motion for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
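The energy-conservation property described above can be illustrated on a much smaller problem: the implicit midpoint rule conserves quadratic invariants (such as the energy of a linear oscillator) exactly. This toy analogue is not the paper's beam formulation; it only demonstrates what "computational preservation of total energy" means for a time integrator:

```python
def implicit_midpoint_step(q: float, p: float, dt: float,
                           k: float = 1.0, m: float = 1.0):
    """One implicit-midpoint step for the linear oscillator m*q'' = -k*q.

    For a linear force the implicit equations solve in closed form:
    with c = dt^2*k/(4m),  p1 = ((1-c)*p0 - dt*k*q0) / (1+c).
    The scheme conserves E = p^2/(2m) + k*q^2/2 to machine precision.
    """
    c = dt * dt * k / (4.0 * m)
    p_new = ((1.0 - c) * p - dt * k * q) / (1.0 + c)
    q_new = q + dt * (p + p_new) / (2.0 * m)
    return q_new, p_new
```

An explicit Euler step on the same problem gains energy every step; the midpoint update above does not, which is the discrete analogue of the paper's undamped-system energy preservation.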
[Algorithm of nursing procedure in debridement protocol].
Fumić, Nera; Marinović, Marin; Brajan, Dolores
2014-10-01
Debridement is an essential act in the treatment of various wounds: it removes devitalized and colonized necrotic tissue, poorly healing tissue, and all foreign bodies from the wound in order to enhance the formation of healthy granulation tissue and accelerate the process of wound healing. Nowadays, debridement is the basic procedure in the management of acute and chronic wounds, where the questions remain of which method to use, how extensively, how often, and who should perform it. Many parameters affect the decision on what method of debridement to use. It is important to consider the patient's age, environment, choice, presence of pain, quality of life, the skills and resources of wound and patient care providers, and also a variety of regulations and guidelines. Irrespective of the level and setting where the care is provided (hospital patients, ambulatory or stationary, home care), care for patients suffering from some form of acute or chronic wound and requiring different interventions and frequent bandaging and wound care is most often provided by nurses/technicians. With timely and systematic interventions in these patients, current and potential problems in health functioning can be minimized or eliminated in accordance with the resources. Along with daily wound toilette and bandaging, it is important to recognize in a timely manner changes in the wound status and the need for tissue debridement. Nurse/technician interventions are focused on preparation of the patient (physical, psychological, education), preparation of materials, personnel and space, assisting in or performing procedures of wound care, and documenting the procedures performed. The assumption that having an experienced and competent person for wound care and a variety of methods and approaches in wound treatment is in the patient's best interest poses the need of defining common terms and developing comprehensive guidelines that will lead to universal algorithms in the field
Simulation of Accident Sequences Including Emergency Operating Procedures
Queral, Cesar; Exposito, Antonio; Hortal, Javier
2004-07-01
Operator actions play an important role in accident sequences. However, design analysis (Safety Analysis Report, SAR) seldom includes consideration of operator actions, although operators are required by compulsory Emergency Operating Procedures (EOP) to perform some checks and actions from the very beginning of the accident. The basic aim of the project is to develop a procedure validation system that combines three elements: a plant transient simulation code, TRETA (a C-based modular program) developed by the CSN; a computerized procedure system, COPMA-III (a Java-based program) developed by the OECD-Halden Reactor Project and adapted for simulation with the contribution of our group; and a software interface that provides the communication between COPMA-III and TRETA. The new combined system is going to be applied in a pilot study in order to analyze sequences initiated by secondary side breaks in a Pressurized Water Reactor (PWR) plant. (authors)
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility to distinguish between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
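Comparison-based testing of reconstructed networks often reduces to measuring set overlap between the reaction sets of two models, and consistency testing to checking that replicate inputs yield similar models. A minimal sketch, assuming the Jaccard index as the similarity measure (the review discusses several; this is one common choice, not a prescribed one):

```python
def jaccard_similarity(model_a: set, model_b: set) -> float:
    """Overlap between two reconstructed reaction sets, in [0, 1]."""
    union = model_a | model_b
    return len(model_a & model_b) / len(union) if union else 1.0

def consistency_score(models: list) -> float:
    """Mean pairwise Jaccard similarity across models built from replicate
    or noise-perturbed inputs; values near 1 indicate robustness."""
    pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
    return sum(jaccard_similarity(a, b) for a, b in pairs) / len(pairs)
```

Running such a score over models built from noise-perturbed expression data gives a simple, algorithm-agnostic benchmark of the kind the review advocates.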
78 FR 57639 - Request for Comments on Pediatric Planned Procedure Algorithm
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... Procedure Algorithm AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of request for comments on pediatric planned procedure algorithm from the members of the public. SUMMARY... from the public on an algorithm for identifying pediatric planned procedures as part of the...
Dipole splitting algorithm: A practical algorithm to use the dipole subtraction procedure
NASA Astrophysics Data System (ADS)
Hasegawa, K.
2015-11-01
The Catani-Seymour dipole subtraction is a general and powerful procedure to calculate the QCD next-to-leading order corrections for collider observables. We clearly define a practical algorithm to use the dipole subtraction, called the dipole splitting algorithm (DSA). The DSA is applied to an arbitrary process by following well-defined steps. The subtraction terms created by the DSA can be summarized in a compact form by tables, and we present a template for these summary tables. One advantage of the DSA is that it allows a straightforward algorithm to prove the consistency relation of all the subtraction terms; the proof algorithm is presented in the following paper [K. Hasegawa, arXiv:1409.4174]. We demonstrate the DSA in two collider processes, pp → μ⁻μ⁺ and 2 jets. Further, as a confirmation of the DSA, it is shown that the analytical results obtained by the DSA in the Drell-Yan process exactly agree with the well-known results obtained by the traditional method.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
...--20 CFR 601 Including Form MA 8-7; Comment Request on Extension Without Change AGENCY: Employment and..., Employment and Training Administration regulations, 20 CFR 601, Administrative Procedures,...
NASA Technical Reports Server (NTRS)
Giles, G. L.
1980-01-01
A substructure procedure to include the flexibility of the tile in the stress analysis of the shuttle thermal protection system (TPS) is described. In this procedure, the TPS is divided into substructures of (1) the tile, which is modeled by linear finite elements, and (2) the SIP (strain isolation pad), which is modeled as a nonlinear continuum. This procedure was applied for loading cases of uniform pressure, uniform moment, and an aerodynamic shock on various tile thicknesses. The ratios of through-the-thickness stresses in the SIP calculated using a flexible tile, compared to those using a rigid tile, were found to be less than 1.05 for the cases considered.
An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.
2014-01-01
NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.
Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization
NASA Astrophysics Data System (ADS)
Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad
2015-05-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Johnson, J. K.
1979-01-01
An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
NASA Technical Reports Server (NTRS)
Beyst, Brian; Rezvani, Ali; Young, Bin; Friauf, Robert J.
1991-01-01
Previous Monte Carlo simulations provide a data base for properties of secondary electron emission (SEE) from insulators and metals. Incident primary electrons are considered at energies up to 1200 eV. The behavior of secondary electrons is characterized by (1) yield vs. primary energy E(sub p), (2) distribution vs. secondary energy E(sub s), and (3) distribution vs. angle of emission theta. Special attention is paid to the low energy range E(sub p) up to 50 eV, where the number and energy of secondary electrons is limited by the finite band gap of the insulator. For primary energies above 50 eV the SEE yield curve can be conveniently parameterized by a Haffner formula. The energy distribution of secondary electrons is described by an empirical formula with average energy about 8.0 eV. The angular distribution of secondaries is slightly more peaked in the forward direction than the customary cos theta distribution. Empirical formulas and parameters are given for all yield and distribution curves. Procedures and algorithms are described for using these results to find the SEE yield, and then to choose the energy and angle of emergence of each secondary electron. These procedures can readily be incorporated into numerical simulations of plasma-solid surface interactions in low earth orbit.
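The abstract describes choosing the angle of emergence of each secondary electron from an angular distribution close to cos θ. As an illustration of that sampling step only, this sketch inverse-transform samples the plain cosine (Lambert) law; the paper's actual distribution is slightly more forward-peaked, and its empirical parameters are not reproduced here:

```python
import math
import random

def sample_emission_angle(u: float) -> float:
    """Inverse-transform sample of the polar angle for a cos(theta) law.

    Over the hemisphere, pdf(theta) ∝ cos(theta)*sin(theta) on [0, pi/2],
    so the CDF is sin^2(theta) and theta = asin(sqrt(u)) for uniform u.
    """
    return math.asin(math.sqrt(u))

def sample_many(n: int, seed: int = 0) -> list:
    """Draw n emission angles with a reproducible generator."""
    rng = random.Random(seed)
    return [sample_emission_angle(rng.random()) for _ in range(n)]
```

The same inverse-transform pattern applies to the secondary-energy distribution once its empirical CDF is tabulated.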
Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure?
Dutton, Spencer M.; Mendell, Mark J.; Chan, Wanyu R.
2013-05-13
Minimum outdoor air ventilation rates (VRs) for buildings are specified in standards, including California's Title 24 standards. The ASHRAE ventilation standard includes two options for mechanically-ventilated buildings: a prescriptive ventilation rate procedure (VRP) that specifies minimum VRs that vary among occupancy classes, and a performance-based indoor air quality procedure (IAQP) that may result in lower VRs than the VRP, with associated energy savings, if IAQ meeting specified criteria can be demonstrated. The California Energy Commission has been considering the addition of an IAQP to the Title 24 standards. This paper, based on a review of prior data and new analyses of the IAQP, evaluates four future options for Title 24: no IAQP; adding an alternate VRP; adding an equivalent indoor air quality procedure (EIAQP); and adding an improved ASHRAE-like IAQP. Criteria were established for selecting among options, and feedback was obtained in a workshop of stakeholders. Based on this review, the addition of an alternate VRP is recommended. This procedure would allow lower minimum VRs if a specified set of actions were taken to maintain acceptable IAQ. An alternate VRP could also be a valuable supplement to ASHRAE's ventilation standard.
The Relationship between the Bock-Aitkin Procedure and the EM Algorithm for IRT Model Estimation.
ERIC Educational Resources Information Center
Hsu, Yaowen; Ackerman, Terry A.; Fan, Meichu
It has previously been shown that the Bock-Aitkin procedure (R. Bock and M. Aitkin, 1981) is an instance of the EM algorithm when trying to find the marginal maximum likelihood estimate for a discrete latent ability variable (latent trait). In this paper, it is shown that the Bock-Aitkin procedure is a numerical implementation of the EM algorithm…
Best Estimate Radiation Flux Value-Added Procedure. Algorithm Operational Details and Explanations
Shi, Y.; Long, C. N.
2002-10-01
This document describes some specifics of the algorithm for best estimate evaluation of radiation fluxes at Southern Great Plains (SGP) Central Facility (CF). It uses the data available from the three co-located surface radiometer platforms at the SGP CF to automatically determine the best estimate of the irradiance measurements available. The Best Estimate Flux (BEFlux) value-added procedure (VAP) was previously named Best Estimate ShortWave (BESW) VAP, which included all of the broadband and spectral shortwave (SW) measurements for the SGP CF. In BESW, multiple measurements of the same quantities were handled simply by designating one as the primary measurement and using all others to merely fill in any gaps. Thus, this “BESW” is better termed “most continuous,” since no additional quality assessment was applied. We modified the algorithm in BESW to use the average of the closest two measurements as the best estimate when possible, if these measurements pass all quality assessment criteria. Furthermore, we included longwave (LW) fields in the best estimate evaluation to include all major components of the surface radiative energy budget, and renamed the VAP to Best Estimate Flux (BEFLUX1LONG).
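The core BEFlux idea described above, averaging the closest two quality-checked measurements rather than designating one primary instrument, can be sketched compactly. The QC interface and fallback rules below are illustrative assumptions, not the operational VAP's actual criteria:

```python
def best_estimate(values, qc_pass, max_diff=None):
    """Best-estimate flux from co-located instruments at one time step.

    values: simultaneous readings; qc_pass: per-reading QC flags.
    Averages the closest pair of QC-passing measurements; falls back to
    the single passing value, or None when nothing usable remains.
    """
    good = [v for v, ok in zip(values, qc_pass) if ok]
    if not good:
        return None          # no usable measurement this time step
    if len(good) == 1:
        return good[0]       # only one instrument passed QC
    good.sort()
    # pick the adjacent pair with the smallest disagreement
    a, b = min(zip(good, good[1:]), key=lambda p: p[1] - p[0])
    if max_diff is not None and b - a > max_diff:
        return None          # instruments disagree too much to trust
    return (a + b) / 2.0
```

Compared with the older "most continuous" BESW approach, a gap appears whenever QC fails everywhere, but every reported value reflects agreement between at least two instruments when possible.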
Timmins, S.
1991-01-01
Walker Branch Watershed is a forested, research watershed marked throughout by a 264 ft grid that was surveyed in 1967 using the Oak Ridge National Laboratory (X-10) coordinate system. The Tennessee Valley Authority (TVA) prepared a contour map of the watershed in 1987, and an ARC/INFO version of the TVA topographic map with the X-10 grid superimposed has since been used as the primary geographic information system (GIS) data base for the watershed. However, because of inaccuracies observed in mapped locations of some grid markers and permanent research plots, portions of the watershed were resurveyed in 1989 and an extensive investigation was conducted of the coordinates used in creating both the TVA map and the ARC/INFO data base and of coordinate transformation procedures currently in use on the Oak Ridge Reservation. The investigation determined that the positional errors resulted from the field orientation of the blazed grid rather than problems in mapmaking. In resurveying the watershed, previously surveyed control points were located or noted as missing, and 25 new control points along the perimeter roads were surveyed. In addition, 67 of 156 grid line intersections (pegs) were physically located and their positions relative to mapped landmarks were recorded. As a result, coordinates for the Walker Branch Watershed grid lines and permanent research plots were revised, and a revised map of the watershed was produced. In conjunction with this work, existing procedures for converting between the local grid systems, Tennessee state plane, and the 1927 and 1983 North American Datums were updated and compiled along with illustrative examples and relevant historical information. Alternative algorithms were developed for several coordinate conversions commonly used on the Oak Ridge Reservation.
Viscous microstructural dampers with aligned holes: design procedure including the edge correction.
Homentcovschi, Dorel; Miles, Ronald N
2007-09-01
The paper is a continuation of the works "Modelling of viscous damping of perforated planar micromechanical structures. Applications in acoustics" [Homentcovschi and Miles, J. Acoust. Soc. Am. 116, 2939-2947 (2004)] and "Viscous Damping of Perforated Planar Micromechanical Structures" [Homentcovschi and Miles, Sensors Actuators, A119, 544-552 (2005)], where design formulas for the case of an offset (staggered) system of holes were provided. The present work contains design formulas for perforated planar microstructures used in MEMS devices (such as proof-masses in accelerometers, backplates in microphones, micromechanical switches, resonators, tunable microoptical interferometers, etc.) in the case of aligned (nonstaggered) holes of circular and square section. The given formulas assure a minimum total damping coefficient (including the squeeze film damping and the direct and indirect resistance of the holes) for an assigned open area. The paper also gives a simple edge correction, making it possible to consider real (finite) perforated planar microstructures. The proposed edge correction is validated by comparison with the results obtained by FEM simulations: the relative error is found to be smaller than 0.04%. By putting together the design formulas with the edge correction, a simple integrated design procedure for obtaining viscous perforated dampers with assigned properties is obtained. PMID:17927414
Belwin Edward, J; Rajasekar, N; Sathiyasekar, K; Senthilnathan, N; Sarjila, R
2013-09-01
Obtaining an optimal power flow solution is a strenuous task for any power system engineer. The inclusion of FACTS devices in the power system network adds to its complexity. The dual objective of OPF with fuel cost minimization along with FACTS device location for the IEEE 30-bus system is considered and solved using the proposed Enhanced Bacterial Foraging Algorithm (EBFA). The conventional Bacterial Foraging Algorithm (BFA) has the difficulty of optimal parameter selection. Hence, in this paper, BFA is enhanced by including the Nelder-Mead (NM) algorithm for better performance. A MATLAB code for EBFA is developed and the problem of optimal power flow with inclusion of FACTS devices is solved. After several runs with different initial values, it is found that the inclusion of FACTS devices such as SVC and TCSC in the network reduces the generation cost along with increased voltage stability limits. It is also observed that the proposed algorithm requires less computational time compared to earlier proposed algorithms. PMID:23759251
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 1 2011-07-01 2011-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false What provisions must be included in a local educational... IMPACT AID PROGRAMS Special Provisions for Local Educational Agencies That Claim Children Residing on... educational agency's Indian policies and procedures? (a) An LEA's Indian policies and procedures (IPPs)...
Why McNemar's Procedure Needs to Be Included in the Business Statistics Curriculum
ERIC Educational Resources Information Center
Berenson, Mark L.; Koppel, Nicole B.
2005-01-01
In business research situations it is often of interest to examine the differences in the responses in repeated measurements of the same subjects or from among matched or paired subjects. A simple and useful procedure for comparing differences between proportions in two related samples was devised by McNemar (1947) nearly 60 years ago. Although…
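McNemar's procedure itself is short enough to state in full: it uses only the two discordant cell counts of the paired 2×2 table. A minimal sketch (the continuity-corrected chi-square approximation shown here is the textbook form, one of several variants of the test):

```python
import math

def mcnemar(b: int, c: int, correction: bool = True):
    """McNemar's test for a difference in paired proportions.

    b = pairs changing yes->no, c = pairs changing no->yes; the concordant
    cells of the 2x2 table do not enter the statistic. Returns (chi2, p)
    under the chi-square(1) approximation, with Edwards' continuity
    correction by default.
    """
    if b + c == 0:
        return 0.0, 1.0
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    # P(chi2_1 > x) = erfc(sqrt(x/2)), since a chi-square(1) variable is
    # the square of a standard normal
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p
```

For a repeated-measures marketing survey in which 15 respondents switched from "no" to "yes" and 5 switched the other way, the uncorrected statistic is (15−5)²/20 = 5.0, significant at the 5% level, which is exactly the kind of before/after comparison the article argues belongs in the curriculum.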
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.
2011-01-01
Implementations of a model for equilibrium, steady-state ablation boundary conditions are tested for the purpose of providing strong coupling with a hypersonic flow solver. The objective is to remove correction factors or film cooling approximations that are usually applied in coupled implementations of the flow solver and the ablation response. Three test cases are considered - the IRV-2, the Galileo probe, and a notional slender, blunted cone launched at 10 km/s from the Earth's surface. A successive substitution is employed and the order of succession is varied as a function of surface temperature to obtain converged solutions. The implementation is tested on a specified trajectory for the IRV-2 to compute shape change under the approximation of steady-state ablation. Issues associated with stability of the shape change algorithm caused by explicit time step limits are also discussed.
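The successive substitution employed above is, at its core, a relaxed fixed-point iteration between the flow solver and the ablation boundary condition. A toy sketch of that coupling strategy on a scalar problem (the relaxation factor standing in for the paper's temperature-dependent order of succession; this is not the flow-solver implementation):

```python
def successive_substitution(f, x0, relax=0.5, tol=1e-10, max_iter=200):
    """Relaxed fixed-point iteration x <- (1 - w)*x + w*f(x).

    Converges to a solution of x = f(x) when the relaxed map is
    contractive; smaller w trades speed for stability, much as varying
    the succession order stabilizes the coupled ablation iteration.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - relax) * x + relax * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In the coupled setting, f would map a wall temperature to the temperature implied by the surface energy balance at the current flow solution; divergence of the unrelaxed iteration is what motivates varying the succession.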
A procedure for testing the quality of LANDSAT atmospheric correction algorithms
NASA Technical Reports Server (NTRS)
Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.
1982-01-01
There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. To select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using this proposed procedure are presented.
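The two image statistics driving the parameter selection can be computed directly. A minimal sketch, assuming RMS contrast and the Pearson correlation coefficient (the abstract does not fix the exact contrast definition, so that choice is an assumption):

```python
import numpy as np

def contrast(img: np.ndarray) -> float:
    """RMS contrast: standard deviation of pixel values over their mean.
    Better atmospheric correction should raise this score."""
    img = img.astype(float)
    return float(img.std() / img.mean())

def subimage_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two co-registered subimages of the
    same (unchanged) scene taken at different times."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

A parameter search would then keep the correction whose output maximizes both scores over the candidate parameter combinations.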
Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure
NASA Astrophysics Data System (ADS)
Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu
2006-01-01
Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from the knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging the performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main drawbacks were that it split the whole procedure into two parts and made it difficult to grasp the overall behavior of the model during calibration. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and exposing the intrinsic behavior of the observed data as a whole. Comparison of results with the two-step procedure shows that the current methodology gives similar results to the previous method and is also feasible and robust, but simpler and easier to apply in practice.
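TOPSIS itself is a small, well-defined procedure: normalize and weight the criteria matrix, locate the ideal and anti-ideal points, and rank alternatives by relative closeness. A sketch of that ranking step alone (equal weights by default; inside the paper's method this score would rank candidate parameter sets produced by the GA):

```python
import numpy as np

def topsis(scores, weights=None, benefit=None):
    """Closeness-to-ideal coefficients for TOPSIS ranking.

    scores: (n_alternatives, n_criteria) matrix; benefit[j] is True when
    larger is better for criterion j. Returns values in [0, 1], where
    higher means closer to the ideal solution.
    """
    m = np.asarray(scores, dtype=float)
    n_alt, n_crit = m.shape
    w = np.ones(n_crit) / n_crit if weights is None else np.asarray(weights, float)
    ben = [True] * n_crit if benefit is None else benefit
    # vector-normalize each criterion column, then apply the weights
    v = w * m / np.linalg.norm(m, axis=0)
    ideal = np.where(ben, v.max(axis=0), v.min(axis=0))
    anti = np.where(ben, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)
```

In the integrated calibration, each GA generation's candidate parameter sets are scored on the multiple performance criteria and the TOPSIS closeness drives selection, which is what removes the two-step split of the earlier method.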
NASA Astrophysics Data System (ADS)
Schneider, Florian; Rascher, Rolf; Stamp, Richard; Smith, Gordon
2013-09-01
The modern optical industry requires objects with complex topographical structures. Free-form shaped objects are of great interest in many branches, especially for size-reduced, modern lifestyle products like digital cameras. State-of-the-art multi-axis coordinate measurement machines (CMMs), like the topographical measurement machine TII-3D, are in principle suitable for measuring free-form shaped objects. The only limitation is the software package. This paper illustrates a simple way to enhance coordinate measurement machines with a free-form measurement function. Besides a coordinate measurement machine, only a state-of-the-art CAD† system and a simple piece of software are necessary. For this paper, the CAD software CREO‡ was used. CREO enables the user to develop a 3D object in two different ways. With the first method, the user designs the shape by drawing one or more 2D sketches and putting an envelope around them. With the second method, the user defines one or more formulas in the editor to describe the favoured surface. Both procedures lead to the required three-dimensional shape. Further features of CREO enable the user to export the XYZ coordinates of the created surface. A specially designed software tool, developed with Matlab§, converts the XYZ file into a measurement matrix which can be used as a reference file. Finally, the result of the free-form measurement, carried out with a CMM, is loaded into the software tool and the two files are processed together. The result is an error profile which gives the deviation between the measurement and the target geometry.
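The final comparison step reduces to an element-wise deviation between the measurement matrix and the CAD-derived reference. A minimal sketch, with toy matrices standing in for the real TII-3D and CREO exports:

```python
def error_profile(measured, reference):
    """Element-wise deviation map between a measured surface and the
    target geometry, plus peak-to-valley and RMS summary figures."""
    dev = [[m - r for m, r in zip(mr, rr)]
           for mr, rr in zip(measured, reference)]
    flat = [v for row in dev for v in row]
    pv = max(flat) - min(flat)
    rms = (sum(v * v for v in flat) / len(flat)) ** 0.5
    return dev, pv, rms
```

A real tool would first register the two point sets against each other; the sketch assumes the grids are already aligned.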
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false What administrative and management procedures must a Tribe or Tribal organization include in a Tribal IV-D plan? 309.75 Section 309.75 Public Welfare Regulations Relating to Public Welfare OFFICE OF CHILD SUPPORT ENFORCEMENT (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN...
Cassini VIMS observations of the Galilean satellites including the VIMS calibration procedure
McCord, T.B.; Coradini, A.; Hibbitts, C.A.; Capaccioni, F.; Hansen, G.B.; Filacchione, G.; Clark, R.N.; Cerroni, P.; Brown, R.H.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Buratti, B.J.; Bussoletti, E.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Jaumann, R.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.
2004-01-01
The Visual and Infrared Mapping Spectrometer (VIMS) observed the Galilean satellites during the Cassini spacecraft's 2000/2001 flyby of Jupiter, providing compositional and thermal information about their surfaces. The Cassini spacecraft approached the jovian system no closer than about 126 Jupiter radii, about 9 million kilometers, at a phase angle of <90°, resulting in only sub-pixel observations by VIMS of the Galilean satellites. Nevertheless, most of the spectral features discovered by the Near Infrared Mapping Spectrometer (NIMS) aboard the Galileo spacecraft during more than four years of observations have been identified in the VIMS data analyzed so far, including a possible ¹³C absorption. In addition, VIMS made observations in the visible part of the spectrum and at several new phase angles for all the Galilean satellites, and the calculated phase functions are presented. In the process of analyzing these data, the VIMS radiometric and spectral calibrations were better determined in preparation for entry into the Saturn system. Treatment of these data is presented as an example of the VIMS data reduction, calibration and analysis process, and a detailed explanation is given of the calibration process applied to the Jupiter data. © 2004 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, B.; Ye, Z. F.; Xu, X.
2016-01-01
The data processing procedures currently used on most multi-object fiber spectroscopic telescopes, such as the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), the Sloan Digital Sky Survey (SDSS), the Anglo-Australian Telescope (AAT), etc., are based on one-dimensional (1-D) algorithms. In this paper, LAMOST is taken as an example to illustrate the proposed multi-object fiber spectral data processing procedure. In the processing procedure currently used on LAMOST, after the pretreatment process, the two-dimensional (2-D) raw observed data are extracted into 1-D intermediate data based simply on a 1-D model; the subsequent key steps are then all performed by 1-D algorithms. However, this processing procedure does not accord with the formation mechanism of the observed spectra, and it therefore introduces a considerable error at each step. To solve this problem, we propose a novel processing procedure that has not been used on LAMOST or other telescopes. The modules of the procedure are reordered, and the main steps are all based on 2-D algorithms. The principles of the core algorithms are explained in detail, and partial experimental results are shown to demonstrate the effectiveness and superiority of the 2-D algorithms.
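The 1-D extraction step the authors criticize can be sketched as a profile-weighted collapse of the 2-D frame along the cross-dispersion direction. The fixed Gaussian fiber profile below is an assumption for illustration, not LAMOST's actual profile model:

```python
import math

def extract_1d(frame, center, sigma=1.0):
    """Collapse a 2-D frame (rows = wavelength, cols = cross-dispersion)
    into a 1-D spectrum using a fixed Gaussian fiber profile as weights
    (uniform noise assumed)."""
    ncol = len(frame[0])
    prof = [math.exp(-0.5 * ((j - center) / sigma) ** 2) for j in range(ncol)]
    norm = sum(p * p for p in prof)
    return [sum(p * v for p, v in zip(prof, row)) / norm for row in frame]
```

The 2-D approach advocated in the paper instead models the whole frame (profile, blending, scattered light) jointly rather than assuming a fixed per-fiber profile like this.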
Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich
2016-05-10
All-atom molecular dynamics (MD) simulations have a long history of application to ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scale and in their ability to simulate physiological membrane potentials or asymmetric salt solutions, and they require substantial computational power. While several approaches have been developed to circumvent these limitations, Brownian dynamics simulations remain an attractive option for the field. Their main limitation, however, is an apparent lack of protein flexibility, which is important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme that includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties, which will become even more important in future applications to substrate translocation. PMID:27088446
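The core update in such a scheme is an overdamped Langevin step on a potential of mean force. A one-dimensional sketch, where the force field, diffusion constant, and reduced units are placeholder assumptions rather than BROMOCEA's actual model:

```python
import math
import random

def langevin_step(x, force, diff, dt, rng):
    """One overdamped Langevin (Brownian dynamics) step:
    dx = D * F(x) * dt + sqrt(2 * D * dt) * N(0, 1),
    with the thermal factor beta folded into force (i.e. F is force/kT)."""
    return x + diff * force(x) * dt + math.sqrt(2.0 * diff * dt) * rng.gauss(0.0, 1.0)

def simulate(x0, force, diff, dt, nsteps, seed=1):
    """Generate a 1-D Brownian trajectory on the given potential of mean force."""
    rng = random.Random(seed)
    x = x0
    traj = [x]
    for _ in range(nsteps):
        x = langevin_step(x, force, diff, dt, rng)
        traj.append(x)
    return traj
```

For a harmonic potential of mean force U/kT = x²/2 (force −x), the stationary distribution is a unit Gaussian, which gives a quick sanity check of the integrator.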
Gousheh, S.S.
1996-01-01
I have used the shooting method to find the eigenvalues (bound-state energies) of a set of strongly coupled Schrödinger-type equations. I discuss the advantages of the shooting method when the potentials include δ-functions. I also discuss some points which are universal in this kind of problem and whose use makes the algorithm much more efficient. These points include mapping the domain of the ODE onto a finite one, using the asymptotic form of the solutions, making the best use of the normalization freedom, and converting the δ-functions into boundary conditions.
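As a toy version of the approach (a single equation for an infinite square well on [0, 1] with ħ = m = 1, rather than the coupled equations of the paper), the shooting method integrates from one boundary and bisects on the energy until the far-boundary condition is met:

```python
import math

def shoot(E, n=2000):
    """Integrate u'' = -2 E u on [0, 1] with u(0) = 0, u'(0) = 1 (RK4);
    return u(1). Eigenvalues are the energies at which u(1) = 0."""
    h = 1.0 / n
    u, v = 0.0, 1.0

    def f(u, v):
        return v, -2.0 * E * u

    for _ in range(n):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = f(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = f(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return u

def eigenvalue(lo, hi, tol=1e-10):
    """Bisect on E between a bracketing pair until shoot(E) changes sign."""
    flo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = shoot(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

For this well the exact ground-state energy is π²/2, which the bracketed search recovers; the δ-function trick mentioned in the abstract would replace the δ-term by a jump condition on u' at the spike.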
Belmonti, Simone; Lombardi, Francesca; Morandi, Matteo; Fabbiani, Massimiliano; Tordini, Giacinta; Cauda, Roberto; De Luca, Andrea; Di Giambenedetto, Simona; Montagnani, Francesca
2016-01-01
The 13-valent pneumococcal conjugate vaccine (PCV13) is recommended for HIV-infected people, although its effectiveness in this population remains under evaluation. In this study, we describe the development, optimization, and analytical validation of an ELISA procedure to measure specific antibodies against the pneumococcal polysaccharide serotypes included in the PCV13 vaccine, testing sera obtained from HIV-infected outpatients (n = 30) who received the vaccine. The protocol followed the latest version of the WHO guidelines, based on the new standard 007sp, with the modification of employing Statens Serum Institut (SSI) antigens. We report the assay performance validation in terms of sensitivity, reproducibility, precision and accuracy. In addition, we detail the optimal antigen-coating concentrations and ELISA conditions common to all 13 serotypes, suitable for laboratories performing these assays, in order to standardize the method. Our procedure showed reproducibility and reliability, making it a valid alternative for evaluating the response to the pneumococcal serotypes included in the PCV13 vaccine. PMID:26506438
Enhanced 3-D-reconstruction algorithm for C-arm systems suitable for interventional procedures.
Wiesent, K; Barth, K; Navab, N; Durlak, P; Brunner, T; Schuetz, O; Seissler, W
2000-05-01
Increasingly, three-dimensional (3-D) imaging technologies are used in medical diagnosis, for therapy planning, and during interventional procedures. We describe the possibilities of fast 3-D reconstruction of high-contrast objects with high spatial resolution from only a small series of two-dimensional (2-D) planar radiographs. The special problems arising from the intended use of an open, mechanically unstable C-arm system are discussed. For the description of the irregular sampling geometry, homogeneous coordinates are used throughout. The well-known Feldkamp algorithm is modified to incorporate the corresponding projection matrices without any decomposition into intrinsic and extrinsic parameters. Some approximations to speed up the whole reconstruction procedure, and the tradeoff between image quality and computation time, are also considered. Using standard hardware, the reconstruction of a 256³ cube is now possible within a few minutes, a time that is acceptable during interventions. Examples of cranial vessel imaging from some clinical test installations are shown, as well as promising results for bone imaging with a laboratory C-arm system. PMID:11021683
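The projection-matrix machinery mentioned above can be sketched directly: a 3×4 matrix P maps a homogeneous voxel position to detector coordinates without any decomposition into intrinsic and extrinsic parameters. The matrices below are made-up examples, not a calibrated C-arm geometry:

```python
def project(P, x, y, z):
    """Map a 3-D point to 2-D detector coordinates with a 3x4
    projection matrix in homogeneous coordinates."""
    X = (x, y, z, 1.0)
    u, v, w = (sum(P[i][j] * X[j] for j in range(4)) for i in range(3))
    return u / w, v / w  # perspective division

def backproject(volume_pts, views):
    """Voxel-driven accumulation: for each voxel, sum the (nearest-pixel)
    detector samples it projects onto in every view.
    views: list of (P, detector) pairs, detector a 2-D list."""
    out = []
    for (x, y, z) in volume_pts:
        acc = 0.0
        for P, det in views:
            u, v = project(P, x, y, z)
            iu, iv = int(round(u)), int(round(v))
            if 0 <= iv < len(det) and 0 <= iu < len(det[0]):
                acc += det[iv][iu]
        out.append(acc)
    return out
```

A Feldkamp-type reconstruction would additionally filter the projections and apply distance weighting before this accumulation step; the sketch shows only the geometric core.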
NASA Astrophysics Data System (ADS)
Batzias, F. A.; Sidiras, D. K.; Giannopoulos, Ch.; Spetsidis, I.
2009-08-01
This work deals with a methodological framework designed and developed in the form of a spatio-temporal algorithmic procedure for environmental policymaking at the local level. The procedure includes 25 activity stages and 9 decision nodes, putting emphasis on (i) mapping water supply/demand on GIS layers and modeling aquatic pollution from point and non-point sources, (ii) environmental monitoring by periodically measuring the main pollutants in situ and in the laboratory, (iii) design of environmental projects, decomposition of them into sub-projects, and combination of the latter to form attainable alternatives, (iv) multicriteria ranking of alternatives, according to a modified Delphi method, using as criteria the expected environmental benefit, the attitude of inhabitants, the priority within the programme of regional development, the capital required for the investment and the operating cost, and (v) Knowledge Base (KB) operation/enrichment, functioning in combination with a data mining mechanism to extract knowledge/information/data from external bases. An implementation is presented for the Municipality of Arkalochori on the Greek island of Crete.
NASA Technical Reports Server (NTRS)
Tappa, M. J.; Mills, R. D.; Ware, B.; Simon, J. I.
2014-01-01
The isotopic compositions of elements are often used to characterize nucleosynthetic contributions to early Solar System objects. Coordinated measurements of multiple middle-mass elements with differing volatilities may provide information regarding the location of condensation of early Solar System solids. Here we detail new procedures that we have developed to make high-precision multi-isotope measurements of chromium and calcium using thermal ionization mass spectrometry, and characterize a suite of chondritic and terrestrial materials including two fragments of the Chelyabinsk LL-chondrite.
2014-01-01
Background Developing suitable methods for the identification of protein complexes remains an active research area. It is important since it allows better understanding of cellular functions as well as malfunctions, and consequently leads to more effective cures for diseases. In this context, various computational approaches have been introduced to complement high-throughput experimental methods, which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm which sorts proteins according to their importance in the interaction network, and a merging procedure which refines the detected complexes in terms of their protein members. ProRank+ was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions The experimental results achieved by ProRank+ show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073
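The ranking stage can be illustrated with a generic PageRank-style power iteration over the interaction network. The damping factor and the toy network are assumptions; ProRank+'s actual ranking and merging rules are given in the paper:

```python
def rank_proteins(adj, damping=0.85, iters=100):
    """Importance scores for nodes of an undirected interaction network
    via power iteration on the degree-normalized adjacency (PageRank-style).
    adj: {node: [neighbors]} with every node present as a key."""
    nodes = sorted(adj)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each neighbor m spreads its score evenly over its own edges.
            incoming = sum(score[m] / len(adj[m]) for m in adj[n])
            new[n] = (1.0 - damping) / len(nodes) + damping * incoming
        score = new
    return sorted(nodes, key=lambda n: -score[n]), score
```

A complex-detection step would then grow candidate complexes around the top-ranked proteins and merge heavily overlapping candidates.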
NASA Technical Reports Server (NTRS)
Kankam, M. David; Benjamin, Owen
1991-01-01
The development of computer software for performance prediction and analysis of voltage-fed, variable-frequency AC drives for space power applications is discussed. The AC drives discussed include the pulse width modulated inverter (PWMI), a six-step inverter and the pulse density modulated inverter (PDMI), each individually connected to a wound-rotor induction motor. Various d-q transformation models of the induction motor are incorporated for user-selection of the most applicable model for the intended purpose. Simulation results of selected AC drives correlate satisfactorily with published results. Future additions to the algorithm are indicated. These improvements should enhance the applicability of the computer program to the design and analysis of space power systems.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
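A sketch of the underlying step, run over a prime field GF(p) for readability (RS codes normally use GF(2^m), and the polynomials below are arbitrary examples rather than a worked (15, 9) decoding): the Euclidean algorithm is applied to (x^(2t), syndrome polynomial) and stopped as soon as the remainder degree drops below t, with the tracked cofactor playing the role of the errata locator and the remainder that of the errata evaluator.

```python
def poly_trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mul(a, b, p):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return poly_trim(out)

def poly_sub(a, b, p):
    out = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
           for i in range(max(len(a), len(b)))]
    return poly_trim(out)

def poly_divmod(a, b, p):
    """Quotient and remainder of a / b over GF(p); coefficients low-to-high."""
    q = [0] * max(1, len(a) - len(b) + 1)
    r = list(a)
    inv = pow(b[-1], p - 2, p)  # inverse of the leading coefficient
    while r and len(r) >= len(b):
        shift = len(r) - len(b)
        c = (r[-1] * inv) % p
        q[shift] = c
        r = poly_sub(r, [0] * shift + [(c * bi) % p for bi in b], p)
    return poly_trim(q), r

def key_equation(a, b, t, p):
    """Run the Euclidean algorithm on (a, b) until deg(remainder) < t,
    tracking the cofactor v with r_i = u_i*a + v_i*b; returns (v, r),
    the analogue of the (errata locator, errata evaluator) pair."""
    r0, r1 = list(a), list(b)
    v0, v1 = [], [1]
    while r1 and len(r1) - 1 >= t:
        q, r = poly_divmod(r0, r1, p)
        r0, r1 = r1, r
        v0, v1 = v1, poly_sub(v0, poly_mul(q, v1, p), p)
    return v1, r1
```

The invariant r = u·a + v·b implies r ≡ v·b (mod a), which is the check used in the test below.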
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
An overview is presented of a model for describing data and control flow associated with the execution of large-grained, decision-free algorithms in a special distributed computer environment. The ATAMM (Algorithm-To-Architecture Mapping Model) model provides a basis for relating an algorithm to its execution in a dataflow multicomputer environment. The ATAMM model features a marked graph Petri net description of the algorithm behavior with regard to both data and control flow. The model provides an analytical basis for calculating performance bounds on throughput characteristics which are demonstrated here.
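The marked-graph behavior described above can be sketched as token bookkeeping: a transition fires when every input place holds a token, consuming one token from each input and producing one on each output. The three-place example below is made up for illustration, not an ATAMM benchmark graph:

```python
def enabled(transition, marking):
    """A transition is enabled when all of its input places hold a token."""
    ins, _ = transition
    return all(marking[p] > 0 for p in ins)

def fire(transition, marking):
    """Fire a transition: consume one token from each input place and
    deposit one token on each output place; returns the new marking."""
    ins, outs = transition
    m = dict(marking)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] += 1
    return m
```

Throughput bounds of the kind ATAMM computes come from analyzing token circulation times around the cycles of such a graph.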
NASA Astrophysics Data System (ADS)
Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang
2009-05-01
The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for application within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is used to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises results close to technically applied catalysts. The resulting performance of the optimisation process for the first catalyst generations is presented, along with the prospect of its auto-adaptation to specified optimisation goals. PMID:17266517
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2004-01-01
TRMM has been an eminently successful mission from an engineering standpoint, and even more so from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper presents the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper details parameter trending, geolocation quality control, and the procedures that support the preparation of the next versions of the algorithms used for reprocessing.
Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C
2006-03-01
We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced the cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we concluded that this algorithm would be effective. Over 6 months, our laboratories' expenses were US$143,000 less than if CCNA alone had been performed on all 5,887 specimens. PMID:16517916
NASA Astrophysics Data System (ADS)
Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak
2016-06-01
Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth's magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use data from both the Hubble Space Telescope Wide Field Camera-3 and the Wide-field Infrared Survey Explorer to develop an empirical understanding of the susceptibility of near-IR detectors in low Earth orbit, and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line-fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote sensing.
NASA Astrophysics Data System (ADS)
Chaudhury, Pinaki; Bhattacharyya, S. P.
1999-03-01
It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and with partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SPs) of arbitrary order. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires both the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar_4-Ar_30) in which argon atoms interact via a Lennard-Jones potential, and (iii) Ar_mX (m = 12) clusters where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Ar_n clusters.
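The pair interaction used for the argon clusters is the Lennard-Jones potential. A minimal sketch in reduced units (ε = σ = 1), with the cluster geometry as a toy example rather than one of the paper's optimized structures:

```python
def lj_pair(r):
    """Lennard-Jones pair energy in reduced units (epsilon = sigma = 1)."""
    s6 = (1.0 / r) ** 6
    return 4.0 * (s6 * s6 - s6)

def cluster_energy(coords):
    """Total cluster energy: sum of LJ energies over all atom pairs."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
            e += lj_pair(r)
    return e
```

This energy function is what the GA's fitness would be built from when searching for minima; the dimer minimum at r = 2^(1/6) with energy −ε is the standard sanity check.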
König, Julian; Möckel, Martin; Mueller, Eda; Bocksch, Wolfgang; Baid-Agrawal, Seema; Babel, Nina; Schindler, Ralf; Reinke, Petra; Nickel, Peter
2014-01-01
Background. Benefits of cardiac screening in kidney transplant candidates (KTC) will be dependent on the availability of effective interventions. We retrospectively evaluated characteristics and outcome of percutaneous coronary interventions (PCI) in KTC selected for revascularization by a cardiac screening approach. Methods. In 267 patients evaluated 2003 to 2006, screening tests performed were reviewed and PCI characteristics correlated with major adverse cardiovascular events (MACE) during a follow-up of 55 months. Results. Stress tests in 154 patients showed ischemia in 28 patients (89% high risk). Of 58 patients with coronary angiography, 38 had significant stenoses and 18 cardiac interventions (6.7% of all). 29 coronary lesions in 17/18 patients were treated by PCI. Angiographic success rate was 93.1%, but procedural success rate was only 86.2%. Long lesions (P = 0.029) and diffuse disease (P = 0.043) were associated with MACE. In high risk patients, cardiac screening did not improve outcome as 21.7% of patients with versus 15.5% of patients without properly performed cardiac screening had MACE (P = 0.319). Conclusion. The moderate procedural success of PCI and poor outcome in long and diffuse coronary lesions underscore the need to define appropriate revascularization strategies in KTC, which will be a prerequisite for cardiac screening to improve outcome in these high-risk patients. PMID:25045528
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T
2016-01-01
This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation was analyzed in both superior and inferior maxillary repositioning. In addition, transverse surgical expansion was compared with the postsurgical expansion obtained. An overall, high degree of linear accuracy between planned and postsurgical outcomes was found, but with a large standard deviation. Rotational difference showed an increase in pitch, mainly affecting the maxilla. Segmentation had no significant influence on maxillary placement. However, a posterior movement was observed in inferior maxillary repositioning. A lack of transverse expansion was observed in the segmented maxilla independent of the degree of expansion. PMID:26250603
Dora, Carlos; Racioppi, Francesca
2003-01-01
From the mid-1990s, research began to highlight the importance of a wide range of health impacts of transport policy decisions. The Third Ministerial Conference on Environment and Health adopted a Charter on Transport, Environment and Health based on four main components: bringing awareness of the nature, magnitude and costs of the health impacts of transport into intergovernmental processes; strengthening the arguments for integration of health into transport policies by developing in-depth analysis of the evidence; developing national case studies; and engaging ministries of environment, health and transport as well as intergovernmental and nongovernmental organizations. Negotiation of the Charter was based on two converging processes: the political process involved the interaction of stakeholders in transport, health and environment in Europe, which helped to frame the issues and the approaches to respond to them; the scientific process involved an international group of experts who produced state-of-the-art reviews of the health impacts resulting from transportation activities, identifying gaps in existing knowledge and methodological tools, specifying the policy implications of their findings, and suggesting possible targets for health improvements. Health arguments were used to strengthen environmental ones, clarify costs and benefits, and raise issues of health equity. The European experience shows that HIA can fulfil the need for simple procedures to be systematically applied to decisions regarding transport strategies at national, regional and local levels. Gaps were identified concerning models for quantifying health impacts and capacity building on how to use such tools. PMID:12894322
NASA Astrophysics Data System (ADS)
Biswas, Papun; Chakraborti, Debjani
2010-10-01
This paper describes how genetic algorithms (GAs) can be efficiently applied to fuzzy goal programming (FGP) formulations of optimal power flow problems with multiple objectives. In the proposed approach, the various constraints and relationships of the optimal power flow calculation are described fuzzily. In the model formulation, the membership functions of the defined fuzzy goals are first characterized, measuring the degree of achievement of the aspiration levels specified in the decision-making context. Then an achievement function is constructed that minimizes the regret for under-deviations from the highest membership value (unity) of the defined membership goals, to the extent possible on the basis of priorities. In the solution process, the GA is employed on the FGP formulation to attain the highest possible membership values in the decision-making environment. In the GA-based search, conventional roulette-wheel selection, arithmetic crossover and random mutation are used to reach a satisfactory decision. The developed method has been tested on the IEEE 6-generator, 30-bus system. Numerical results show that the method is promising for handling uncertain constraints in practical power systems.
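The membership-goal machinery can be sketched for a single minimization-type fuzzy goal: a linear membership function, and the under-deviation (regret) from full membership that the GA would minimize. The aspiration and tolerance values below are made-up numbers, not the IEEE 30-bus data:

```python
def membership_min_goal(value, aspired, tolerance):
    """Linear membership for a goal 'value should be about <= aspired':
    1 at or below the aspiration level, falling to 0 at aspired + tolerance."""
    if value <= aspired:
        return 1.0
    if value >= aspired + tolerance:
        return 0.0
    return (aspired + tolerance - value) / tolerance

def total_regret(values, goals):
    """Sum of under-deviations from full membership (unity) over all
    fuzzy goals; a GA would minimize this achievement function."""
    return sum(1.0 - membership_min_goal(v, a, t)
               for v, (a, t) in zip(values, goals))
```

In the priority-based version described above, the regret terms would additionally be weighted or lexicographically ordered by goal priority.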
ERIC Educational Resources Information Center
Nolan, R. O.; And Others
The Final Report, Volume 1, covers research results of the Michigan State University Driver Performance Measurement Project. This volume (Volume 2) constitutes a guide for training observers/raters in the driver performance measurement procedures developed in this research by MSU. The guide includes a training course plan and content materials…
Cristofolini, Andrea; Latini, Chiara; Borghi, Carlo A.
2011-02-01
This paper presents a technique for improving the convergence rate of a generalized minimum residual (GMRES) algorithm applied to the solution of the algebraic system produced by the discretization of an electrodynamic problem with a tensorial electrical conductivity. The electrodynamic solver considered in this work is part of a magnetohydrodynamic (MHD) code in the low magnetic Reynolds number approximation. The code has been developed for the analysis of MHD interaction during the re-entry phase of a space vehicle, a promising technique intensively investigated for shock mitigation and vehicle control in the upper layers of a planetary atmosphere. The medium in the considered application is a low-density plasma, characterized by a tensorial conductivity. This is a result of the behavior of the free electric charges, which tend to drift in a direction perpendicular both to the electric field and to the magnetic field. In the given approximation, the electrodynamics is described by an elliptical partial differential equation, which is solved by means of a finite element approach. The linear system obtained by discretizing the problem is solved by means of a GMRES iterative method with incomplete LU factorization threshold preconditioning. The convergence of the solver appears to be strongly affected by the tensorial character of the conductivity. In order to deal with this feature, bandwidth reduction in the coefficient matrix is considered and a novel technique is proposed and discussed. First, the standard reverse Cuthill-McKee (RCM) procedure is applied to the problem. Then a modification of the RCM procedure, the weighted RCM (WRCM) procedure, is developed, in which the reordering is performed taking into account the relation between the mesh geometry and the magnetic field direction. To investigate the effectiveness of the methods, two cases are considered, to which the RCM and WRCM procedures are applied.
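The RCM step itself is a breadth-first reordering of the matrix graph; a dependency-free sketch follows. The weighted WRCM variant, which folds in the magnetic-field direction, is specific to the paper and not reproduced here:

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as
    {node: [neighbors]}: BFS from a low-degree node, visiting neighbors
    in increasing degree order, then reverse the visit order."""
    deg = {n: len(adj[n]) for n in adj}
    visited, order = set(), []
    for start in sorted(adj, key=lambda n: deg[n]):
        if start in visited:
            continue
        queue = deque([start])
        visited.add(start)
        while queue:
            n = queue.popleft()
            order.append(n)
            for m in sorted(adj[n], key=lambda x: deg[x]):
                if m not in visited:
                    visited.add(m)
                    queue.append(m)
    return order[::-1]

def bandwidth(adj, order):
    """Half-bandwidth of the adjacency pattern under a given ordering."""
    pos = {n: i for i, n in enumerate(order)}
    return max((abs(pos[a] - pos[b]) for a in adj for b in adj[a]), default=0)
```

Reducing the bandwidth clusters the nonzeros near the diagonal, which is what makes the incomplete LU preconditioner retain more of the significant fill.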
NASA Astrophysics Data System (ADS)
Busoni, Lorenzo; Carlà, Marcello; Lanzi, Leonardo
2001-06-01
A set of fast algorithms for axisymmetric drop shape analysis measurements is described. Speed has been improved by more than one order of magnitude over previously available procedures. Frame analysis is performed and drop characteristics and interfacial tension γ are computed in less than 40 ms on a Pentium III 450 MHz PC, while preserving an overall accuracy in Δγ/γ close to 1×10⁻⁴. A new procedure is described to evaluate both the algorithms' performance and the contribution of each source of experimental error to the overall measurement accuracy.
NASA Astrophysics Data System (ADS)
Hsu, Chih-Ming
2014-12-01
Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling them. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market for 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
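The loop of selection, crossover and mutation that the abstract alludes to can be sketched as follows. This is a generic textbook genetic algorithm on the "one-max" toy problem (maximize the number of 1 bits), not the software tool described in the report; all parameter values are illustrative defaults.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=42):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # binary tournament: the fitter of two random individuals survives
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)   # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # independent bit-flip mutation
            child = [b ^ 1 if rng.random() < mutation_rate else b for b in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # one-max: fitness is simply the number of 1 bits
```

With these settings the population converges to a string that is all (or nearly all) ones within a few dozen generations.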
NASA Astrophysics Data System (ADS)
Leblanc, T.; Haefele, A.; Sica, R. J.; van Gijsel, A.
2014-12-01
A new lidar data processing algorithm for the retrieval of ozone, temperature and water vapor has been developed for centralized use within the Network for the Detection of Atmospheric Composition Change (NDACC) and the GCOS Reference Upper Air Network (GRUAN). The program is written with the objective that raw data from a large number of lidar instruments can be analyzed consistently. The uncertainty budget includes 13 sources of uncertainty that are explicitly propagated, taking into account vertical and inter-channel dependencies. Several standardized definitions of vertical resolution can be used, providing maximum flexibility and allowing the production of tropospheric ozone, stratospheric ozone, middle atmospheric temperature and tropospheric water vapor profiles optimized for multiple user needs such as long-term monitoring, process studies and model and satellite validation. A review of the program's functionalities as well as the first retrieved products will be presented.
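The core arithmetic of an uncertainty budget like the one described is the combination rule for individual components. The sketch below is a generic illustration, not the NDACC/GRUAN code: independent components add in quadrature, while fully correlated components (as can occur between channels or adjacent altitude bins) add linearly.

```python
import math

def combine_uncertainties(components, correlated=False):
    """Combine 1-sigma uncertainty components into a total uncertainty.

    Independent sources add in quadrature: u = sqrt(sum(u_i^2)).
    Fully correlated sources add linearly:  u = sum(u_i).
    """
    if correlated:
        return sum(components)
    return math.sqrt(sum(u * u for u in components))

# e.g. detector noise 0.3 K and calibration 0.4 K, assumed independent
total = combine_uncertainties([0.3, 0.4])
```

Real propagation through a retrieval additionally requires the sensitivity (Jacobian) of the product with respect to each source; the combination rule above applies once each component has been mapped into product units.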
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
Jiménez-Núñez, Francisco Gabriel; Manrique-Arija, Sara; Ureña-Garnica, Inmaculada; Romero-Barco, Carmen María; Panero-Lamothe, Blanca; Descalzo, Miguel Angel; Carmona, Loreto; Rodríguez-Pérez, Manuel; Fernández-Nebro, Antonio
2013-07-01
We evaluated the efficacy of a triage approach based on a combination of osteoporosis risk-assessment tools plus peripheral densitometry to identify low bone density accurately enough to be useful for clinical decision making in postmenopausal women. We conducted a cross-sectional diagnostic study in postmenopausal Caucasian women from primary and tertiary care. All women underwent dual-energy X-ray absorptiometric (DXA) measurement at the hip and lumbar spine and were categorized as osteoporotic or not. Additionally, patients had a nondominant heel densitometry performed with a PIXI densitometer. Four osteoporosis risk scores were tested: SCORE, ORAI, OST, and OSIRIS. All measurements were cross-blinded. We estimated the area under the curve (AUC) to predict the DXA results of 16 combinations of PIXI plus risk scores. A formula including the best combination was derived from a regression model and its predictability estimated. We included 505 women, in whom the prevalence of osteoporosis was 20 %, similar in both settings. The best algorithm was a combination of PIXI + OST + SCORE with an AUC of 0.826 (95 % CI 0.782-0.869). The proposed formula is Risk = (-12) × [PIXI + (-5)] × [OST + (-2)] × SCORE and showed little bias in the estimation (0.0016). If the formula had been implemented and the intermediate risk cutoff set at -5 to 20, the system would have saved
Sahlin, Eskil; Magnusson, Bertil
2012-08-15
A survey analysis and chemical characterization methodology for inhomogeneous solid waste samples of relatively large size (typically up to 100 g), using X-ray fluorescence following a general homogenization procedure, is presented. By using a combination of acid digestion and grinding, various materials can be homogenized, e.g. pure metals, alloys, salts, ores, plastics, and organics. In the homogenization step, solid material is fully or partly digested in a mixture of nitric acid and hydrochloric acid in an open vessel. The resulting mixture is then dried, ground, and finally pressed into a wax briquette. The briquette is analyzed using wavelength-dispersive X-ray fluorescence with fundamental parameters evaluation. The recovery of 55 elements was tested by preparing samples with known compositions using different alloys, pure metals or elements, oxides, salts and solutions of dissolved compounds. It was found that the methodology was applicable to 49 elements, including Na, Mg, Al, Si, P, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Se, Rb, Sr, Y, Zr, Nb, Mo, Ru, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, Cs, Ba, La, Ce, Ta, W, Re, Ir, Pt, Au, Tl, Pb, Bi, and Th, all of which had recoveries >0.8. Six elements were lost by volatilization: Br, I, Os, and Hg were completely lost, and S and Ge were partly lost. Since all lanthanides are chemically similar to La and Ce, all actinides are chemically similar to Th, and Hf is chemically similar to Zr, it is likely that the method is applicable to 77 elements. By using an internal standard such as strontium, added as strontium nitrate, samples containing relatively high concentrations of elements not measured by XRF (hydrogen to fluorine), e.g. samples containing plastics, can be analyzed. PMID:22841048
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F.; De, Suvranu
2014-01-01
Background High-frequency electricity is used in a majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. Methods We present a real-time and physically realistic simulation of electrosurgery, by modeling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. Results We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Conclusions Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. PMID:24357156
NASA Astrophysics Data System (ADS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena
2014-01-01
The atmospheric infrared sounder (AIRS) science team version-6 AIRS/advanced microwave sounding unit (AMSU) retrieval algorithm is now operational at the Goddard Data and Information Services Center (DISC). AIRS version-6 level-2 products are generated near real time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. Some of the significant improvements in retrieval methodology contained in the version-6 retrieval algorithm compared to that previously used in version-5 are described. In particular, the AIRS science team made major improvements with regard to the algorithms used to (1) derive surface skin temperature and surface spectral emissivity; (2) generate the initial state used to start the cloud clearing and retrieval procedures; and (3) derive error estimates and use them for quality control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, version-6 also operates in an AIRS only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. The improvements of some AIRS version-6 and version-6 AO products compared to those obtained using version-5 are also demonstrated.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
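Of the stable perceptron variants listed, the pocket algorithm is the simplest to sketch: run ordinary perceptron updates, but keep ("pocket") the best weight vector seen so far, so a usable classifier is retained even on non-separable data. The sketch below is a generic illustration with hypothetical names, not the paper's constructive framework.

```python
import random

def pocket_algorithm(samples, labels, epochs=500, seed=0):
    """Pocket variant of perceptron learning for labels in {-1, +1}.
    Returns weights [w_0, ..., w_{n-1}, bias]."""
    rng = random.Random(seed)
    n = len(samples[0])
    w = [0.0] * (n + 1)          # last entry is the bias

    def predict(w, x):
        s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
        return 1 if s >= 0 else -1

    def n_correct(w):
        return sum(predict(w, x) == y for x, y in zip(samples, labels))

    pocket, pocket_score = w[:], n_correct(w)
    for _ in range(epochs):
        i = rng.randrange(len(samples))
        x, y = samples[i], labels[i]
        if predict(w, x) != y:                      # mistake-driven update
            w = [wi + y * xi for wi, xi in zip(w[:-1], x)] + [w[-1] + y]
            score = n_correct(w)
            if score > pocket_score:                # stash the best weights
                pocket, pocket_score = w[:], score
    return pocket

# train on the (linearly separable) OR function
w = pocket_algorithm([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, 1, 1, 1])
preds = [1 if w[0] * x[0] + w[1] * x[1] + w[2] >= 0 else -1
         for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

In a constructive setting, each newly added threshold unit would be trained this way on the residual task before the next unit is added.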
NASA Astrophysics Data System (ADS)
Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.
2014-03-01
Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes, using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are, in general, the furthest from the others. Among the three atlases, one consistently outperforms the other two, with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.
Romijn, C A; Luttik, R; van de Meent, D; Slooff, W; Canton, J H
1993-08-01
Effect assessment on secondary poisoning can be an asset to effect assessments on direct poisoning in setting quality criteria for the environment. This study presents an algorithm for effect assessment on secondary poisoning. The water-fish-fish-eating bird or mammal pathway was analyzed as an example of a secondary poisoning pathway. Parameters used in this algorithm are the bioconcentration factor for fish (BCF) and the no-observed-effect concentration for the group of fish-eating birds and mammals (NOECfish-eater). For the derivation of reliable BCFs, preference is given to the use of experimentally derived BCFs over QSAR estimates. NOECs for fish eaters are derived by extrapolating toxicity data on single species. Because data on fish-eating species are seldom available, toxicity data on all bird and mammalian species were used. The proposed algorithm (MAR = NOECfish-eater/BCF) was used to calculate MARs (maximum acceptable risk levels) for the compounds lindane, dieldrin, cadmium, mercury, PCB153, and PCB118. By subsequently comparing these MARs to MARs derived by effect assessment for aquatic organisms, it was concluded that for methyl mercury and PCB153 secondary poisoning of fish-eating birds and mammals could be a critical pathway. For these compounds, effects on populations of fish-eating birds and mammals can occur at levels in surface water below the MAR calculated for aquatic ecosystems. Secondary poisoning of fish-eating birds and mammals is not likely to occur for cadmium at levels in water below the MAR calculated for aquatic ecosystems. PMID:7691536
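The algorithm itself is a one-line ratio, which makes a direct sketch straightforward. The function below implements MAR = NOECfish-eater/BCF exactly as stated in the abstract; the function name and the guard on BCF are ours, and units must be kept consistent (the NOEC expressed as a concentration in fish tissue, the BCF dimensionless as L/kg water-to-fish, giving a MAR as a water concentration).

```python
def maximum_acceptable_risk(noec_fish_eater, bcf):
    """MAR for secondary poisoning via the water -> fish -> fish-eater pathway.

    noec_fish_eater: no-observed-effect concentration for fish-eating
                     birds/mammals, expressed in the fish (diet).
    bcf:             bioconcentration factor water -> fish.
    Returns the maximum acceptable concentration in water.
    """
    if bcf <= 0:
        raise ValueError("BCF must be positive")
    return noec_fish_eater / bcf

# hypothetical compound: dietary NOEC of 10 (mass/kg fish), BCF of 1000
mar = maximum_acceptable_risk(10.0, 1000.0)
```

A strongly bioconcentrating compound (large BCF) thus yields a low MAR in water, which is why secondary poisoning can be the critical pathway even when direct aquatic toxicity is modest.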
Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C
2007-11-01
We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing). PMID:17804652
NASA Technical Reports Server (NTRS)
Smith, E. A.; Xiang, X.; Mugnai, A.; Hood, R. E.; Spencer, R. W.
1994-01-01
A microwave-based, profile-type precipitation retrieval algorithm has been used to analyze high-resolution passive microwave measurements over an ocean background, obtained by the Advanced Microwave Precipitation Radiometer (AMPR) flown on a NASA ER-2 aircraft. The analysis is designed to first determine the improvements that can be gained by adding brightness temperature information from the AMPR low-frequency channel (10.7 GHz) to a multispectral retrieval algorithm nominally run with satellite information at 19, 37, and 85 GHz. The impact of spatial resolution degradation of the high-resolution brightness temperature information on the retrieved rain/cloud liquid water contents and ice water contents is then quantified in order to assess the possible biases inherent to satellite-based retrieval. Careful inspection of the high-resolution aircraft dataset reveals five distinctive brightness temperature features associated with cloud structure and scattering effects that are not generally detectable in current passive microwave satellite measurements. Results suggest that the inclusion of 10.7-GHz information overcomes two basic problems associated with three-channel retrieval. Intercomparisons of retrievals carried out at high resolution and then averaged to a characteristic satellite scale with the corresponding retrievals in which the brightness temperatures are first convolved down to the satellite scale suggest that, with the addition of the 10.7-GHz channel, the rain liquid water contents will not be negatively impacted by spatial resolution degradation. That is not the case with the ice water contents, as they appear to be quite sensitive to the imposed scale, the implication being that as spatial resolution is reduced, ice water contents will become increasingly underestimated.
A Runge-Kutta Nystrom algorithm.
NASA Technical Reports Server (NTRS)
Bettis, D. G.
1973-01-01
A Runge-Kutta algorithm of order five is presented for the solution of the initial value problem where the system of ordinary differential equations is of second order and does not contain the first derivative. The algorithm includes the Fehlberg step control procedure.
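The structure of a Runge-Kutta-Nyström method, which integrates y'' = f(t, y) directly without rewriting it as a first-order system, can be illustrated with the classical fixed-step textbook scheme below. This is a sketch only: it is not Bettis' fifth-order method and omits the Fehlberg step-control procedure; the helper names are ours.

```python
import math

def nystrom_step(f, t, y, v, h):
    """One step of a classical Runge-Kutta-Nystrom scheme for y'' = f(t, y)
    (right-hand side independent of y'). y is the position, v the velocity."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * v + h * h / 8 * k1)
    k3 = f(t + h, y + h * v + h * h / 2 * k2)
    y_next = y + h * v + h * h / 6 * (k1 + 2 * k2)
    v_next = v + h / 6 * (k1 + 4 * k2 + k3)
    return y_next, v_next

def integrate(f, t0, y0, v0, t_end, h):
    """Fixed-step integration from t0 to t_end (no adaptive step control)."""
    t, y, v = t0, y0, v0
    while t < t_end - 1e-12:
        y, v = nystrom_step(f, t, y, v, min(h, t_end - t))
        t = min(t + h, t_end)
    return y, v

# harmonic oscillator y'' = -y, y(0) = 0, y'(0) = 1, exact solution sin(t)
y, v = integrate(lambda t, y: -y, 0.0, 0.0, 1.0, 1.0, 1e-3)
```

A Fehlberg-style step controller, as in the cited report, would add an embedded lower-order estimate per step and adjust h from the difference between the two solutions.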
Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon
2005-11-01
A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use a string-array method that runs in main memory during HL7 message parsing. Specifically, messages with image and multimedia data create a long string array and thus cause critical and fatal failures in the computer system. Consequently, HL7 messages cannot handle the image and multimedia data necessary in modern medical records. This study aims to solve this problem with a 'streaming algorithm' method. This new method for HL7 message parsing applies a character-stream object that processes data character by character between main memory and the hard disk, so that the processing load on main memory is alleviated. The main functions of this new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. The engine can also parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals. PMID:16181703
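The streaming idea can be sketched as a generator that reads the message from a stream and yields one segment at a time, so the full message (which may embed large image data) never has to be held in memory at once. This is a generic illustration of the character-stream approach, not the engine described in the paper; it uses only the fact that HL7 v2 segments are terminated by carriage returns.

```python
import io

def iter_hl7_segments(stream, chunk_size=4096):
    """Yield HL7 v2 segments one at a time from a character stream.

    Reads the stream in bounded chunks and splits on the segment
    terminator (carriage return), keeping at most one segment in memory.
    """
    buf = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        for ch in chunk:
            if ch == '\r':            # HL7 segment terminator
                if buf:
                    yield ''.join(buf)
                    buf = []
            else:
                buf.append(ch)
    if buf:                           # trailing segment without terminator
        yield ''.join(buf)

msg = "MSH|^~\\&|SENDER|RECEIVER\rPID|1||12345\rOBX|1|ED|image-data\r"
segments = list(iter_hl7_segments(io.StringIO(msg)))
```

In a real engine the same generator could be fed directly from a socket or a temporary file on disk, and each yielded segment would then be split into fields on `|` before processing.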
NASA Astrophysics Data System (ADS)
Paton, F. L.; Maier, H. R.; Dandy, G. C.
2014-08-01
Cities around the world are increasingly involved in climate action and mitigating greenhouse gas (GHG) emissions. However, in the context of responding to climate pressures in the water sector, very few studies have investigated the impacts of changing water use on GHG emissions, even though water resource adaptation often requires greater energy use. Consequently, reducing GHG emissions, and thus focusing on both mitigation and adaptation responses to climate change in planning and managing urban water supply systems, is necessary. Furthermore, the minimization of GHG emissions is likely to conflict with other objectives. Thus, applying a multiobjective evolutionary algorithm (MOEA), which can evolve an approximation of entire trade-off (Pareto) fronts of multiple objectives in a single run, would be beneficial. Consequently, the main aim of this paper is to incorporate GHG emissions into a MOEA framework to take into consideration both adaptation and mitigation responses to climate change for a city's water supply system. The approach is applied to a case study based on Adelaide's southern water supply system to demonstrate the framework's practical management implications. Results indicate that trade-offs exist between GHG emissions and risk-based performance, as well as GHG emissions and economic cost. Solutions containing rainwater tanks are expensive, while GHG emissions greatly increase with increased desalinated water supply. Consequently, while desalination plants may be good adaptation options to climate change due to their climate-independence, rainwater may be a better mitigation response, albeit more expensive.
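The trade-off (Pareto) fronts that an MOEA approximates are defined by the dominance relation, which can be sketched directly. The example below is a generic non-dominated filter over hypothetical (cost, GHG emissions) objective vectors, both minimized; it illustrates the concept only and is not the MOEA framework of the paper.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimised):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (economic cost, GHG emissions) values for candidate
# water-supply portfolios (e.g. mixes of desalination and rainwater tanks)
candidates = [(4, 9), (5, 5), (7, 3), (6, 6), (8, 8)]
front = pareto_front(candidates)
```

Here (6, 6) and (8, 8) are dominated by (5, 5) and drop out, leaving the trade-off curve; an MOEA evolves an approximation of this set when exhaustive enumeration is infeasible.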
Young, John F; Luecke, Richard H; Pearce, Bruce A; Lee, Taewon; Ahn, Hongshik; Baek, Songjoon; Moon, Hojin; Dye, Daniel W; Davis, Thomas M; Taylor, Susan J
2009-01-01
Physiologically based pharmacokinetic (PBPK) models need the correct organ/tissue weights to match various total body weights in order to be applied to children and the obese individual. Baseline data from Reference Man for the growth of human organs (adrenals, brain, heart, kidneys, liver, lungs, pancreas, spleen, thymus, and thyroid) were augmented with autopsy data to extend the describing polynomials to include the morbidly obese individual (up to 250 kg). Additional literature data similarly extend the growth curves for blood volume, muscle, skin, and adipose tissue. Collectively, these polynomials were used to calculate blood/organ/tissue weights for males and females from birth to 250 kg, which can be directly used to help parameterize PBPK models. In contrast to other anthropometric measurements, the data demonstrated no observable or statistically significant difference in weight for any organ/tissue between individuals identified as black or white in the autopsy reports. PMID:19267313
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis; Couso, Inés; Sánchez, Luciano
2014-01-01
When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
NASA Astrophysics Data System (ADS)
Martinaitis, S. M.; Fuelberg, H. E.; Sullivan, J. L.; Pathak, C.
2007-12-01
Individual gauge sites will also be evaluated. Intervals of precipitation are analyzed to see how each scheme handles light, moderate, and heavy rainfall events. Finally, case studies describe how each scheme estimates particular rainfall events, including land-falling tropical cyclones. In summary, this paper will describe which procedure compares best with the NCDC independent gauges, and whether the OneRain and MPE products can be used interchangeably.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
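The shift-and-mask subalgorithm described above can be sketched as a small search: try combinations of right-shift and mask until every key in the static set maps to a distinct value, so that membership can later be tested in constant time with no collision handling. This is a simplified illustration of the idea (one shift, one contiguous mask, no rotation or offsets), with hypothetical names, not the synthesized code of the NASA report.

```python
def synthesize_hash(keys, max_shift=32, max_mask_bits=16):
    """Search for (shift, mask) such that (k >> shift) & mask is unique
    for every key. Returns the first such pair, or None if none is found."""
    for shift in range(max_shift):
        for bits in range(1, max_mask_bits + 1):
            mask = (1 << bits) - 1
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):   # injective on the static key set
                return shift, mask
    return None

# static key set; the synthesized (shift, mask) gives collision-free slots
keys = [0x10, 0x24, 0x38, 0x4C]
result = synthesize_hash(keys)
```

Once a pair is found, membership testing compiles down to one shift, one AND, and one table lookup per query, with no secondary hashing or probing, which is the constant-time guarantee the abstract refers to.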
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
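The flavor of a sampling-based stopping rule can be illustrated with a simple CLT-based sketch: keep drawing Monte Carlo samples of a bound estimator until the confidence interval around its mean is narrower than a relative tolerance. This is a generic illustration, not the rules developed in the thesis (which address the subtler question of asymptotic validity when the samples feed an optimization).

```python
import math
import random

def estimate_with_stopping_rule(sample_fn, initial_n=30, batch=30,
                                max_n=10000, rel_halfwidth=0.05, z=1.96):
    """Sample until the approximate 95% CI half-width is below a relative
    tolerance, or a sample budget is exhausted.
    Returns (mean, half-width, number of samples used)."""
    data = [sample_fn() for _ in range(initial_n)]
    while True:
        n = len(data)
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
        halfwidth = z * math.sqrt(var / n)                  # CLT-based CI
        if halfwidth <= rel_halfwidth * abs(mean) or n >= max_n:
            return mean, halfwidth, n
        data.extend(sample_fn() for _ in range(batch))      # draw another batch

# hypothetical scenario cost, uniform on [50, 150] (true mean 100)
rng = random.Random(1)
mean, hw, n = estimate_with_stopping_rule(lambda: rng.uniform(50.0, 150.0))
```

In a decomposition algorithm the sampled quantity would be an optimality-gap bound rather than a raw cost, which is precisely where naive CLT intervals can lose validity and the thesis's stopping rule theory is needed.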
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Serranti, Silvia
2014-03-01
In the secondary raw materials and recycling sectors, product quality increasingly represents the key issue to pursue in order to remain competitive in an ever more demanding market, where quality standards and product certification play a preeminent role. These goals assume particular importance when recycling actions are applied. Recovered products, resulting from the processing of waste materials and/or dismissed products, are in fact always viewed with a certain suspicion. An adequate response of the industry to the market can only be given through the utilization of equipment and procedures ensuring pure, high-quality production and efficient, cost-effective operation. All these goals can be reached by adopting not only more efficient equipment and layouts, but also new processing logics able to realize full control of the handled material flow streams, fulfilling at the same time: i) easy management of the procedures, ii) efficient use of energy, iii) the definition and set-up of reliable and robust procedures, iv) the possibility to implement network connectivity capabilities for remote monitoring and control of the processes, and v) full data storage, analysis and retrieval. Furthermore, ongoing legislation and regulation require the implementation of recycling infrastructure characterised by high resource efficiency and low environmental impact, both aspects being strongly linked to the original characteristics of the waste materials and/or dismissed products. For these reasons, an optimal recycling infrastructure design primarily requires full knowledge of the characteristics of the input waste. What was outlined above requires the introduction of an important new concept to apply in solid waste recycling, the recycling-oriented characterization, that is, the set of actions addressed to strategically determine selected attributes, in order to get goal-oriented data on waste for the development, implementation or improvement of recycling
Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L
2013-12-01
ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766
Evaluation of feedback-reduction algorithms for hearing aids.
Greenberg, J E; Zurek, P M; Brantley, M
2000-11-01
Three adaptive feedback-reduction algorithms were implemented in a laboratory-based digital hearing aid system and evaluated with dynamic feedback paths and hearing-impaired subjects. The evaluation included measurements of maximum stable gain and subjective quality ratings. The continuously adapting CNN algorithm (Closed-loop processing with No probe Noise) provided the best performance: 8.5 dB of added stable gain (ASG) relative to a reference algorithm averaged over all subjects, ears, and vent conditions. Two intermittently adapting algorithms, ONO (Open-loop with Noise when Oscillation detected) and ONQ (Open-loop with Noise when Quiet detected), provided an average of 5 dB of ASG. Subjects with more severe hearing losses received greater benefits: 13 dB average ASG for the CNN algorithm and 7-8 dB average ASG for the ONO and ONQ algorithms. These values are conservative estimates of ASG because the fitting procedure produced a frequency-gain characteristic that already included precautions against feedback. Speech quality ratings showed no substantial algorithm effect on pleasantness or intelligibility, although subjects informally expressed strong objections to the probe noise used by the ONO and ONQ algorithms. This objection was not reflected in the speech quality ratings because of limitations of the experimental procedure. The results clearly indicate that the CNN algorithm is the most promising choice for adaptive feedback reduction in hearing aids. PMID:11108377
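The open-loop algorithms above (ONO, ONQ) adapt a filter to the acoustic feedback path while a probe noise is played. A minimal sketch of that identification step, using a normalized LMS update on a simulated feedback path; the 3-tap path, filter length, and step size are illustrative assumptions, not the paper's system:

```python
import random

def nlms_identify(x, d, taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt an FIR estimate w of an unknown feedback path
    from probe input x and measured return signal d."""
    w = [0.0] * taps
    buf = [0.0] * taps                 # newest sample first
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # filter output
        e = dn - y                                   # a-priori error
        norm = sum(b * b for b in buf) + eps
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
    return w

# Simulated 3-tap acoustic feedback path driven by white probe noise.
random.seed(0)
path = [0.4, -0.2, 0.1]
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
d = [sum(p * x[n - k] for k, p in enumerate(path) if n >= k) for n in range(len(x))]
w = nlms_identify(x, d, taps=4)
```

In a closed-loop scheme like CNN the same adaptation must run on the ordinary signal path without probe noise, which is what makes it harder but preferable to listeners.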
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
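The Pareto optimality underlying the selection step can be sketched as a non-dominated filter over candidate objective vectors. This is a generic exhaustive filter for minimization problems, not the paper's binning selection algorithm:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and differs in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and q != p

def pareto_front(points):
    """Return the non-dominated subset of `points` (all objectives minimized)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

In a multi-objective GA, a filter like this (or a faster non-dominated sorting variant) ranks the population each generation; selection schemes such as binning then decide which front members reproduce.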
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Memetic algorithm for community detection in networks.
Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng
2011-11-01
Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467
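The modularity that such algorithms optimize can be computed directly from its definition, Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * [c_i == c_j]. The sketch below computes standard Newman modularity, not the modularity density variant that Meme-Net optimizes:

```python
def modularity(adj, comm):
    """Newman modularity Q of a partition. `adj` is an undirected graph as
    {node: set(neighbors)}; `comm` maps each node to its community id."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2m: each edge counted twice
    q = 0.0
    for i in adj:
        for j in adj:
            if comm[i] == comm[j]:
                a = 1.0 if j in adj[i] else 0.0
                q += a - len(adj[i]) * len(adj[j]) / m2
    return q / m2

# Two triangles joined by a single bridge edge (2-3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q_split = modularity(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
q_single = modularity(adj, {n: 0 for n in adj})
```

Placing everything in one community always gives Q = 0, so any partition with positive Q has detected some structure; a hill-climbing local search, as in Meme-Net, moves single nodes between communities while such a quality function improves.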
An optimized procedure for determining incremental heat rate characteristics
Noyola, A.H.; Grady, W.M.; Viviani, G.L.
1990-05-01
This paper describes an optimized procedure for producing generator incremental heat rate curves from continually sampled unit performance data. A generalized reduced gradient algorithm is applied to optimally locate break points in incremental heat rate curves. The advantages include the ability to automatically take into consideration slow time-varying effects such as unit aging and temperature variations in combustion air and cooling water. The procedure is tested using actual fuel rate data for four generators.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Goldberg, S Nahum
2012-03-01
In this basic research study, Ganapathy-Kanniappan et al. advance our understanding of how to block the glycolytic pathway to inhibit tumor progression by using image-guided procedures (1). This was accomplished by demonstrating their ability to perform molecular targeting of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in human hepatocellular carcinoma (HCC) by using percutaneous injection of either inhibitor, 3-bromopyruvate (3-BrPA) or short hairpin RNA (shRNA). They take the critical step of providing further rationale for potentially advancing this therapy into clinical trials by demonstrating that GAPDH expression strongly correlates with c-jun, a proto-oncogene involved in liver tumorigenesis in human HCC (2). PMID:22357877
Algorithmic Procedure for Finding Semantically Related Journals.
ERIC Educational Resources Information Center
Pudovkin, Alexander I.; Garfield, Eugene
2002-01-01
Using citations, papers, and references as parameters, a relatedness factor (RF) is computed for a series of journals. Sorting these journals by the RF produces a list of journals most closely related to a specified starting journal. The method appears to select a set of journals that are semantically most similar to the target journal. The…
Al-Massaedh, Ayat Allah; Pyell, Ute
2013-04-19
A new synthesis procedure for highly crosslinked macroporous amphiphilic N-adamantyl-functionalized mixed-mode acrylamide-based monolithic stationary phases for capillary electrochromatography (CEC) is investigated employing solubilization of the hydrophobic monomer by complexation with a cyclodextrin. N-(1-adamantyl)acrylamide is synthesized and characterized as a hydrophobic monomer forming a water-soluble inclusion complex with statistically methylated-β-cyclodextrin. The stoichiometry, the complex formation constant and the spatial arrangement of the formed complex are determined. Mixed-mode monolithic stationary phases are synthesized by in situ free radical copolymerization of cyclodextrin-solubilized N-adamantyl acrylamide, a water soluble crosslinker (piperazinediacrylamide), a hydrophilic monomer (methacrylamide), and a negatively charged monomer (vinylsulfonic acid) in aqueous medium in bind silane-pretreated fused silica capillaries. The synthesized monolithic stationary phases are amphiphilic and can be employed in the reversed- and in the normal-phase mode (depending on the composition of the mobile phase), which is demonstrated with polar and non-polar analytes. Observations made with polar analytes and polar mobile phase can only be explained by a mixed-mode retention mechanism. The influence of the total monomer concentration (%T) on the chromatographic properties, the electroosmotic mobility, and on the specific permeability is investigated. With a homologous series of alkylphenones it is confirmed that the hydrophobicity (methylene selectivity) of the stationary phase increases with increasing mass fraction of N-(1-adamantyl)acrylamide in the synthesis mixture. PMID:23489493
Algorithmic commonalities in the parallel environment
NASA Technical Reports Server (NTRS)
Mcanulty, Michael A.; Wainer, Michael S.
1987-01-01
The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.
2010-01-01
Background Total joint replacements represent a considerable part of day-to-day orthopaedic routine and a substantial proportion of patients undergoing unilateral total hip arthroplasty require a contralateral treatment after the first operation. This report compares complications and functional outcome of simultaneous versus early and delayed two-stage bilateral THA over a five-year follow-up period. Methods The study is a post hoc analysis of prospectively collected data in the framework of the European IDES hip registry. The database query resulted in 1819 patients with 5801 follow-ups treated with bilateral THA between 1965 and 2002. According to the timing of the two operations the sample was divided into three groups: I) 247 patients with simultaneous bilateral THA, II) 737 patients with two-stage bilateral THA within six months, III) 835 patients with two-stage bilateral THA between six months and five years. Results Whereas postoperative hip pain and flexion did not differ between the groups, the best walking capacity was observed in group I and the worst in group III. The rate of intraoperative complications in the first group was comparable to that of the second. The frequency of postoperative local and systemic complication in group I was the lowest of the three groups. The highest rate of complications was observed in group III. Conclusions From the point of view of possible intra- and postoperative complications, one-stage bilateral THA is equally safe or safer than two-stage interventions. Additionally, from an outcome perspective the one-stage procedure can be considered to be advantageous. PMID:20973941
Public Sector Impasse Procedures.
ERIC Educational Resources Information Center
Vadakin, James C.
The subject of collective bargaining negotiation impasse procedures in the public sector, which includes public school systems, is a broad one. In this speech, the author introduces the various procedures, explains how they are used, and lists their advantages and disadvantages. Procedures discussed are mediation, fact-finding, arbitration,…
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
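The basic veto algorithm analyzed in this formalism can be sketched as follows: candidate scales are drawn from an analytically invertible overestimate g >= f, and each candidate is accepted with probability f(t)/g(t), which restores the true Sudakov distribution for f. The kernels, cutoff, and starting scale below are illustrative assumptions:

```python
import random

def veto_sample(f, g, g_step, t_start, t_cut, rng):
    """Sudakov veto algorithm: evolve downward from t_start, drawing candidate
    scales from the overestimate g via its invertible Sudakov step g_step, and
    accepting with probability f(t)/g(t). Returns the emission scale, or None
    if the evolution falls below the cutoff t_cut (no emission)."""
    t = t_start
    while True:
        t = g_step(t, rng.random())      # candidate scale below the current one
        if t < t_cut:
            return None                  # no emission above the cutoff
        if rng.random() < f(t) / g(t):   # veto step restores the true kernel f
            return t

# Kernel f(t) = 0.3/t with overestimate g(t) = 0.5/t; for g = b/t the Sudakov
# factor inverts analytically: t_next = t_prev * r**(1/b).
rng = random.Random(1)
samples = [veto_sample(lambda t: 0.3 / t, lambda t: 0.5 / t,
                       lambda tp, r: tp * r ** 2.0, 1.0, 1e-3, rng)
           for _ in range(200)]
```

Competition between channels, one of the variations the paper analyzes, amounts to running such evolutions for each channel and keeping the highest accepted scale.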
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Pyroshock prediction procedures
NASA Astrophysics Data System (ADS)
Piersol, Allan G.
2002-05-01
Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
A direct element resequencing procedure
NASA Technical Reports Server (NTRS)
Akin, J. E.; Fulford, R. E.
1978-01-01
Element-by-element frontal solution algorithms are utilized in many of the existing finite element codes. The overall computational efficiency of this type of procedure is directly related to the element data input sequence. Thus, it is important to have a pre-processor which will resequence these data so as to reduce the element wavefronts to be encountered in the solution algorithm. A direct element resequencing algorithm is detailed for reducing element wavefronts. It also generates computational by-products that can be utilized in pre-front calculations and in various post-processors. Sample problems are presented and compared with other algorithms.
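The wavefront that such a resequencer tries to reduce can be measured for any given element ordering: a node joins the front when its first element is assembled and retires after its last. This is a generic illustration of the cost measure, not the paper's resequencing algorithm:

```python
def max_wavefront(elements):
    """Maximum frontal wavefront for an element ordering, where `elements`
    is a sequence of tuples of node indices."""
    last_use = {}
    for pos, elem in enumerate(elements):
        for node in elem:
            last_use[node] = pos       # last element that references each node
    active, peak = set(), 0
    for pos, elem in enumerate(elements):
        active.update(elem)            # nodes enter the front at first use
        peak = max(peak, len(active))
        active -= {n for n in elem if last_use[n] == pos}  # retire finished nodes
    return peak
```

Reordering the same elements changes the peak: a chain of 2-node elements visited in order keeps the front at 2, while an out-of-order visit forces a larger front, which is exactly what a resequencing pre-processor tries to avoid.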
34 CFR 303.15 - Include; including.
Code of Federal Regulations, 2010 CFR
2010-07-01
34 Education 2 2010-07-01 false Include; including. 303.15 Section 303.15 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS...
Algorithms Could Automate Cancer Diagnosis
NASA Technical Reports Server (NTRS)
Baky, A. A.; Winkler, D. G.
1982-01-01
Five new algorithms constitute a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. The objective of the procedure is to assign each cell an atypia status index (ASI), which quantifies its level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.
FOHI-D: An iterative Hirshfeld procedure including atomic dipoles
Geldof, D.; Blockhuys, F.; Van Alsenoy, C.; Krishtal, A.
2014-04-14
In this work, a new partitioning method based on the FOHI method (fractional occupation Hirshfeld-I method) will be discussed. The new FOHI-D method uses an iterative scheme in which both the atomic charge and atomic dipole are calculated self-consistently. In order to induce the dipole moment on the atom, an electric field is applied during the atomic SCF calculations. Based on two sets of molecules, the atomic charge and intrinsic atomic dipole moment of hydrogen and chlorine atoms are compared using the iterative Hirshfeld (HI) method, the iterative Stockholder atoms (ISA) method, the FOHI method, and the FOHI-D method. The results obtained are further analyzed as a function of the group electronegativity of Boyd et al. [J. Am. Chem. Soc. 110, 4182 (1988); Boyd et al., J. Am. Chem. Soc. 114, 1652 (1992)] and De Proft et al. [J. Phys. Chem. 97, 1826 (1993)]. The molecular electrostatic potential (ESP) based on the HI, ISA, FOHI, and FOHI-D charges is compared with the ab initio ESP. Finally, the effect of adding HI, ISA, FOHI, and FOHI-D atomic dipoles to the multipole expansion as a function of the precision of the ESP is analyzed.
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Improved piecewise orthogonal signal correction algorithm.
Feudale, Robert N; Tan, Huwei; Brown, Steven D
2003-10-01
Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It will be demonstrated how the mathematics of this updated OSC algorithm were derived from the previous version and why some OSC versions may not be as appropriate to use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746
Procedural Quantum Programming
NASA Astrophysics Data System (ADS)
Ömer, Bernhard
2002-09-01
While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no difference between a hydrogen atom and a quantum computer. This paper investigates how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adapted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
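The half-interval search at the heart of that example can be sketched directly; the test function and tolerance below are illustrative:

```python
def half_interval_root(f, lo, hi, tol=1e-10):
    """Half-interval (bisection) search: repeatedly halve [lo, hi], keeping
    the half on which f changes sign. Requires f(lo) and f(hi) to have
    opposite signs, which guarantees a root in between (for continuous f)."""
    if f(lo) * f(hi) > 0:
        raise ValueError("f must change sign on [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # sign change in the left half
            hi = mid
        else:                     # otherwise the root lies in the right half
            lo = mid
    return (lo + hi) / 2.0
```

The loop invariant that f changes sign on [lo, hi] is exactly the property one verifies when proving the algorithm correct, which is the article's pedagogical point.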
Wavelet periodicity detection algorithms
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Pfander, Goetz E.
1998-10-01
This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect occurrence and period of these periodicities. The algorithm is formulated to provide real time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
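The temperature-correction step can be sketched arithmetically: mix ice and water emissivities by a prior concentration, infer a physical surface temperature from the 6 GHz brightness temperature via Tb = e * T, then divide that temperature out at the higher frequencies. The emissivity values and function name below are illustrative placeholders, not the algorithm's actual tie points:

```python
def channel_emissivities(tb6, tb18, tb37, conc, e_ice6=0.92, e_water6=0.55):
    """Convert 18 and 37 GHz brightness temperatures to emissivities using a
    surface temperature inferred from the 6 GHz channel.
    `conc` is a prior ice concentration in [0, 1]."""
    e_eff = conc * e_ice6 + (1.0 - conc) * e_water6   # linear emissivity mixing
    t_surf = tb6 / e_eff                              # Rayleigh-Jeans: Tb = e * T
    return tb18 / t_surf, tb37 / t_surf
```

Working in emissivities rather than brightness temperatures removes the first-order dependence on physical temperature, which is why the corrected algorithm improves where ice temperature departs strongly from average.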
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Vassault, A; Arnaud, J; Szymanovicz, A
2010-12-01
Examination procedures have to be written for each examination according to the standard requirements. When CE-marked devices are used, the manufacturers' technical inserts can serve as documentation, but because of their lack of homogeneity it may be easier to document their use as a standard procedure. The document control policy applies to those procedures, the content of which could be as provided in this document. Electronic manuals can be used as well. PMID:21613016
Morton, D.P.
1994-01-01
Handling uncertainty in natural inflow is an important part of a hydroelectric scheduling model. In a stochastic programming formulation, natural inflow may be modeled as a random vector with known distribution, but the size of the resulting mathematical program can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We develop an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of stochastic hydroelectric scheduling problems. Stochastic programming, Hydroelectric scheduling, Large-scale Systems.
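The enhancements above build on the standard L-shaped (Benders) decomposition loop: solve a relaxed master, evaluate the recourse subproblems, and add optimality cuts from their duals until the bounds close. The sketch below is a minimal single-cut illustration on a toy two-stage problem with an analytic recourse function; the problem data, the grid-search master, and the aggregated cut are deliberate simplifications, not the paper's hydroelectric model (whose multicut variant would add one cut per scenario).

```python
import math

# Toy two-stage problem (illustrative data only):
#   min_x  x + E_d[Q(x, d)],  x in [0, 10]
#   Q(x, d) = min { y : y >= d - x, y >= 0 } = max(d - x, 0)
scenarios = [(0.5, 1.0), (0.5, 3.0)]        # (probability, random inflow d)

def subproblem(x, d):
    """Recourse LP solved analytically; returns its value and the dual
    multiplier of the coupling constraint y >= d - x."""
    return max(d - x, 0.0), (1.0 if d - x > 0 else 0.0)

cuts = []                                    # optimality cuts: theta >= a - b*x
upper = math.inf
grid = [i * 0.01 for i in range(1001)]       # master solved by enumeration (toy)

for _ in range(20):
    # Master problem: min x + theta subject to all cuts and theta >= 0
    x = min(grid, key=lambda v: v + max([0.0] + [a - b * v for a, b in cuts]))
    theta = max([0.0] + [a - b * x for a, b in cuts])
    lower = x + theta                        # lower bound from the relaxation
    q = sum(p * subproblem(x, d)[0] for p, d in scenarios)
    upper = min(upper, x + q)                # upper bound from a feasible x
    if upper - lower < 1e-6:
        break
    # Build one aggregated optimality cut from the scenario duals
    a = sum(p * subproblem(x, d)[1] * d for p, d in scenarios)
    b = sum(p * subproblem(x, d)[1] for p, d in scenarios)
    cuts.append((a, b))
```

On this toy instance the bounds meet at the optimal value 2.0 after two master solves.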
Science Safety Procedure Handbook.
ERIC Educational Resources Information Center
Lynch, Mervyn A.; Offet, Lorna
This booklet outlines general safety procedures in the areas of: (1) student supervision; (2) storage safety regulations, including lists of incompatible chemicals and techniques of disposal and storage; (3) fire; and (4) first aid. Specific sections exist for the elementary, junior high school, and senior high school levels, in which special procedures are…
ERIC Educational Resources Information Center
Nevada State Dept. of Education, Carson City.
The procedure described herein entails the use of an educational planning consultant, statements of educational and service problems to be solved by proposed construction, a site plan, and architect selection. Also included in the outline of procedures is a tentative statement of specifications, tentative cost estimates and matrices for conducting…
The E-MS Algorithm: Model Selection with Incomplete Data
Jiang, Jiming; Nguyen, Thuan; Rao, J. Sunil
2014-01-01
We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains. PMID:26783375
A Monotonically Convergent Algorithm for FACTALS.
ERIC Educational Resources Information Center
Kiers, Henk A. L.; And Others
1993-01-01
A new procedure is proposed for handling nominal variables in the analysis of variables of mixed measurement levels, and a procedure is developed for handling ordinal variables. Using these procedures, a monotonically convergent algorithm is constructed for the FACTALS method for any mixture of variables. (SLD)
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) that uses grouping-based selection of input vectors. To improve on the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. In the selection procedure, the few input vectors that carry enough information for the coefficient update are then selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors than the existing algorithms.
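For reference, the conventional APA update that the proposed algorithm modifies can be sketched as follows. The unknown system, step size, and projection order below are illustrative, and the paper's grouping/selection procedures are not implemented here; this is only the baseline update w += mu * X^T (X X^T + delta*I)^{-1} e.

```python
import random

random.seed(0)

L, K = 4, 2                     # filter length, projection order (illustrative)
h_true = [0.5, -0.3, 0.2, 0.1]  # unknown system to identify (hypothetical)
mu, delta = 0.5, 1e-3           # step size, regularization

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule (enough for K = 2)."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

w = [0.0]*L                     # adaptive filter coefficients
u = [0.0]*(L + K - 1)           # input delay line, newest sample first
for n in range(3000):
    u = [random.gauss(0, 1)] + u[:-1]
    # K most recent input vectors (rows of X) and desired outputs
    X = [u[k:k+L] for k in range(K)]
    d = [sum(h_true[i]*X[k][i] for i in range(L)) for k in range(K)]
    e = [d[k] - sum(w[i]*X[k][i] for i in range(L)) for k in range(K)]
    # G = X X^T + delta*I  (K x K Gram matrix), then w += mu * X^T G^{-1} e
    G = [[sum(X[a][i]*X[b][i] for i in range(L)) + (delta if a == b else 0.0)
          for b in range(K)] for a in range(K)]
    g = solve2(G, e)
    for i in range(L):
        w[i] += mu * sum(g[k]*X[k][i] for k in range(K))
```

In this noiseless system-identification setting the coefficients converge to `h_true`.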
YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.
2016-05-01
State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
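As background, the basic matching-pursuit iteration that YAMPA and related algorithms build on can be sketched as follows: repeatedly pick the dictionary atom most correlated with the residual and subtract its contribution. The toy dictionary and signal are illustrative, and YAMPA's coherence-based threshold rule is not implemented here.

```python
# Generic matching pursuit on a small dictionary of unit-norm atoms.
atoms = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.5, 0.5, 0.5, 0.5],
]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def matching_pursuit(y, atoms, n_iter=10, tol=1e-8):
    residual = list(y)
    coeffs = {}
    for _ in range(n_iter):
        # greedily pick the atom most correlated with the current residual
        j = max(range(len(atoms)), key=lambda k: abs(dot(residual, atoms[k])))
        c = dot(residual, atoms[j])
        if abs(c) < tol:
            break
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c*a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

y = [2.0, 0.0, -1.0, 0.0]        # 2-sparse in the standard-basis atoms
coeffs, res = matching_pursuit(y, atoms)
```

For this orthonormal-leaning example the pursuit recovers the two active atoms exactly and drives the residual to zero.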
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
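One of the figures of merit compared above, the Matthews Correlation Coefficient, is straightforward to compute from a 2-category confusion matrix; the counts below are hypothetical, not from the JCSD trials.

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient for a 2-category classification."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion counts from a detection trial
mcc = matthews_cc(tp=90, tn=80, fp=20, fn=10)
```

MCC ranges from -1 to +1 and, unlike raw accuracy, stays informative when the two classes are imbalanced, which is why it is attractive for trading off false alarms against detections.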
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
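The third-order polynomial scaling described above can be sketched generically: each degree-of-freedom input is passed through a cubic before the cueing filters. The coefficients here are illustrative placeholders, not the tuned NASA gain sets.

```python
def nonlinear_gain(u, k1, k3):
    """Scale one degree-of-freedom input with a third-order polynomial.
    k1 and k3 are hypothetical tuning coefficients, chosen per DOF."""
    return k1 * u + k3 * u**3

# With k3 < 0 small inputs pass nearly unchanged while large inputs
# are compressed; tuning picks the coefficient set per degree of freedom.
shaped = [nonlinear_gain(u, k1=1.0, k3=-0.3) for u in (0.1, 0.5, 1.0)]
```
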
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M. (Dept. of Computer Science; Oak Ridge National Lab., TN)
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
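A standard Maximum Entropy (autoregressive) spectral estimator is Burg's method; the sketch below shows the core lattice recursion on a synthetic AR(1) signal. It is a generic illustration, not the FORTRAN 77 code reported above.

```python
import random

def burg_ar(x, order):
    """Estimate AR coefficients with Burg's maximum-entropy method.
    Returns a with a[0] = 1 for the prediction-error filter A(z)."""
    n = len(x)
    f = list(x)               # forward prediction errors
    b = list(x)               # backward prediction errors
    a = [1.0]
    for m in range(order):
        # reflection coefficient minimizing forward+backward error power
        num = -2.0 * sum(f[i] * b[i-1] for i in range(m+1, n))
        den = (sum(f[i]**2 for i in range(m+1, n)) +
               sum(b[i-1]**2 for i in range(m+1, n)))
        k = num / den
        # Levinson-style update of the AR polynomial
        a = [1.0] + [a[i] + k * a[m+1-i] for i in range(1, m+1)] + [k]
        for i in range(n-1, m, -1):
            fi = f[i]
            f[i] = fi + k * b[i-1]
            b[i] = b[i-1] + k * fi
    return a

# Synthetic AR(1) process x[t] = 0.6 x[t-1] + noise (hypothetical test signal)
random.seed(1)
x, prev = [], 0.0
for _ in range(4000):
    prev = 0.6 * prev + random.gauss(0, 1)
    x.append(prev)
a = burg_ar(x, 1)
```

The power spectrum then follows from the AR coefficients as sigma^2 / |A(e^{j*omega})|^2, which is where the method's resolution advantage over the periodogram comes from.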
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
NASA Technical Reports Server (NTRS)
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Pump apparatus including deconsolidator
Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew
2014-10-07
A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. A wide variety of dental issues are addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. A review of the most common dental blocks and dental procedures will give the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment and with tooth and mouth anatomy will help prepare the practitioner to perform these dental procedures. PMID:27482994
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
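The basic generational loop that a tool like Splicer wraps can be sketched in a few lines. This toy one-max example (maximize the number of 1 bits) illustrates only the core mechanics of selection, crossover, and mutation; it is not the Splicer tool itself, and the population sizes and rates are arbitrary.

```python
import random

random.seed(42)

N_BITS, POP, GENS = 20, 40, 60   # illustrative parameters

def fitness(ind):
    return sum(ind)              # one-max: count the 1 bits

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, N_BITS)        # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = max(pop, key=fitness)
```

Selection pressure from the tournaments, recombination, and low-rate mutation together drive the population toward the all-ones optimum.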
Optical modulator including graphene
Liu, Ming; Yin, Xiaobo; Zhang, Xiang
2016-06-07
The present invention provides a graphene optical modulator with one or more graphene layers. In a first exemplary embodiment, the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. PMID:25782189
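Of the four algorithms compared, phase correlation is the simplest to illustrate: normalize the cross-power spectrum of the two images and the inverse transform peaks at the relative shift. The 1-D sketch below estimates a circular shift on a toy signal with a naive O(n^2) DFT; the clinical systems apply the 2-D analogue to full radiographs.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (fine for tiny toy signals)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlation_shift(f, g):
    """Estimate the circular shift s such that g[t] = f[t - s]."""
    F, G = dft(f), dft(g)
    cross = []
    for a, b in zip(F, G):
        c = a.conjugate() * b              # cross-power spectrum
        cross.append(c / abs(c) if abs(c) > 1e-12 else 0j)
    corr = dft(cross, inverse=True)        # impulse at the shift
    return max(range(len(corr)), key=lambda i: corr[i].real)

f = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
g = f[-3:] + f[:-3]                        # f circularly shifted by 3 samples
shift = phase_correlation_shift(f, g)
```

Because only the phase is kept, the estimator is insensitive to overall intensity scaling, one reason the technique works well on x-ray images.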
Algorithm for Identifying Erroneous Rain-Gauge Readings
NASA Technical Reports Server (NTRS)
Rickman, Doug
2005-01-01
An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
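A simple nonparametric screen in the same spirit, though without the spatial modeling the NASA algorithm relies on, is an iterative median/MAD test: flag readings far from the median in robust-deviation units, remove them, and repeat. The gauge values below are hypothetical.

```python
import statistics

def iterative_mad_outliers(readings, k=3.5, max_iter=10):
    """Flag outliers by an iterative median/MAD screen (a simple
    nonparametric stand-in, not the spatially aware NASA algorithm)."""
    data = list(readings)
    flagged = []
    for _ in range(max_iter):
        med = statistics.median(data)
        mad = statistics.median(abs(v - med) for v in data)
        if mad == 0:
            break
        # 1.4826 rescales MAD to a std-deviation-like unit
        bad = [v for v in data if abs(v - med) / (1.4826 * mad) > k]
        if not bad:
            break
        flagged.extend(bad)
        data = [v for v in data if v not in bad]
    return flagged, data

gauges = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 25.0, 2.2, 2.1]  # one suspect reading
flagged, kept = iterative_mad_outliers(gauges)
```
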
Traffic Noise Ground Attenuation Algorithm Evaluation
NASA Astrophysics Data System (ADS)
Herman, Lloyd Allen
The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee in order to both study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strength and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers which resulted in an evaluation of this standard. The entire 7.2 mile length of I-440 was modeled using STAMINA 2.0. A multiple run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm was incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas, the STAMINA with ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers
Nursing Procedures. NAVMED P-5066.
ERIC Educational Resources Information Center
Bureau of Medicine and Surgery (Navy), Washington, DC.
The revised manual of nursing procedures covers fundamental nursing care, admission and discharge of the patient, assisting with therapeutic measures, pre- and postoperative care, diagnostic tests and procedures, and isolation technique. Each of the over 300 topics includes the purpose, equipment, and procedure to be used and, where relevant, such…
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Old And New Algorithms For Toeplitz Systems
NASA Astrophysics Data System (ADS)
Brent, Richard P.
1988-02-01
Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
On-line learning algorithms for locally recurrent neural networks.
Campolucci, P; Uncini, A; Piazza, F; Rao, B D
1999-01-01
This paper focuses on on-line learning procedures for locally recurrent neural networks with emphasis on multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations which include generalized output and activation feedback multilayer networks (MLN's). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of the CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performances and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement, e.g., CRBP is local in space and in time while RTRL is not local in space. PMID:18252525
A Generalization of Takane's Algorithm for DEDICOM.
ERIC Educational Resources Information Center
Kiers, Henk A. L.; And Others
1990-01-01
An algorithm is described for fitting the DEDICOM model (proposed by R. A. Harshman in 1978) for the analysis of asymmetric data matrices. The method modifies a procedure proposed by Y. Takane (1985) to provide guaranteed monotonic convergence. The algorithm is based on a technique known as majorization. (SLD)
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work covers development of generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual Level 1 instrument data for specific occultation events.
Patel, Aalpen A; Glaiberman, Craig; Gould, Derek A
2007-06-01
In the past few decades, medicine has started to look at the potential use of simulators in medical education. Procedural medicine lends itself well to the use of simulators. Efforts are under way to establish national agendas to change the way medical education is approached and thereby improve patient safety. Universities, credentialing organizations, and hospitals are investing large sums of money to build and use simulation centers for undergraduate and graduate medical education. PMID:17574195
Optimization of the double dosimetry algorithm for interventional cardiologists
NASA Astrophysics Data System (ADS)
Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena
2014-11-01
A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.
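Double-dosimetry algorithms are commonly expressed as a linear combination of the under-apron and over-apron dosimeter readings. The sketch below shows only that generic form; the weights are placeholders for illustration and are not the coefficients derived in the paper.

```python
# Generic two-dosimeter effective-dose estimate: E = a*Hu + b*Ho, where Hu is
# the under-apron reading and Ho the over-apron (collar) reading, both in mSv.
# The weights a and b below are HYPOTHETICAL placeholders, not the paper's.

def effective_dose(h_under_mSv, h_over_mSv, a=1.0, b=0.1):
    return a * h_under_mSv + b * h_over_mSv

E = effective_dose(0.20, 3.0)   # 1.0*0.20 + 0.1*3.0 = 0.50 mSv
```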
34 CFR 303.170 - Procedural safeguards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... process procedures in 34 CFR 300.506 through 300.512; or (2) The procedures that the State has developed... 34 Education 2 2011-07-01 2010-07-01 true Procedural safeguards. 303.170 Section 303.170 Education... Procedural safeguards. Each application must include procedural safeguards that— (a) Are consistent...
34 CFR 303.170 - Procedural safeguards.
Code of Federal Regulations, 2010 CFR
2010-07-01
...— (1) The due process procedures in 34 CFR 300.506 through 300.512; or (2) The procedures that the... 34 Education 2 2010-07-01 2010-07-01 false Procedural safeguards. 303.170 Section 303.170... Requirements § 303.170 Procedural safeguards. Each application must include procedural safeguards that— (a)...
Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos
2015-01-01
Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” through which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) shorter hospital stays; (III) less scarring; and (IV) less pain. In this mini-review we present the minimally invasive procedures for thoracic surgery. PMID:25861610
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
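Activity selection, one of the examples named in the abstract, makes the dominance idea concrete: among the remaining compatible activities, the one finishing earliest dominates the rest, since it can replace any of them in an optimal schedule. The sketch below is the standard greedy algorithm, not the synthesis framework itself.

```python
def select_activities(intervals):
    """intervals: list of (start, finish). Returns a maximum compatible subset.
    Greedy choice justified by dominance: the earliest-finishing compatible
    activity leaves at least as much room for the rest as any alternative."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda sf: sf[1]):  # earliest finish first
        if s >= last_finish:        # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
picked = select_activities(acts)
```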
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
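One round of interactive-convergence style averaging can be simulated in a few lines. This is a toy sketch under idealized assumptions (perfect clock reading, a single faulty clock with a huge skew), not the validated hardware experiment: each clock averages its perceived skews to the others, treating any reading outside a window delta as zero, which bounds the influence of a malicious clock.

```python
def sync_round(clocks, delta):
    """One egocentric-averaging round: clocks outside the window delta are
    treated as reading 0 skew, limiting a faulty clock's influence."""
    corrected = []
    for c in clocks:
        diffs = [other - c for other in clocks]
        diffs = [d if abs(d) <= delta else 0.0 for d in diffs]  # clip outliers
        corrected.append(c + sum(diffs) / len(diffs))
    return corrected

clocks = [0.0, 0.3, -0.2, 0.1, 5.0]   # last clock is faulty (huge skew)
after = sync_round(clocks, delta=1.0)
good = after[:4]                       # skews of the non-faulty clocks
```

After the round, the spread among the good clocks shrinks despite the faulty participant, which is the qualitative behavior the experiments were designed to bound.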
A subzone reconstruction algorithm for efficient staggered compatible remapping
Starinshak, D.P.; Owen, J.M.
2015-09-01
Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
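The constrained-optimization idea admits a one-zone, two-subzone toy instance. This is only a minimal sketch of the structure (equality constraint from conservation, quadratic deviation from target values), not the full mesh algorithm: with two subzone masses constrained to sum to the remapped zonal mass, the Lagrange conditions give each subzone an equal share of the conservation defect.

```python
def reconstruct_subzones(Z, t1, t2):
    """Minimize (s1-t1)**2 + (s2-t2)**2 subject to s1 + s2 = Z.
    Closed form from the Lagrange conditions: split the defect equally."""
    defect = Z - (t1 + t2)
    return t1 + defect / 2.0, t2 + defect / 2.0

# Remapped zonal mass Z = 10, target (reference) subzone values 3 and 5.
s1, s2 = reconstruct_subzones(Z=10.0, t1=3.0, t2=5.0)
```

Conservation holds exactly while the reconstructed values stay as close as possible to the targets, which is the trade the paper's under-constrained optimization formalizes at scale.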
47 CFR 65.820 - Included items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 3 2013-10-01 2013-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...
47 CFR 65.820 - Included items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 3 2014-10-01 2014-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...
47 CFR 65.820 - Included items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Included items. 65.820 Section 65.820... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items. (a... allowance either by performing a lead-lag study of interstate revenue and expense items or by using...
47 CFR 65.820 - Included items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Included items. 65.820 Section 65.820 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items....
47 CFR 65.820 - Included items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Included items. 65.820 Section 65.820 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.820 Included items....
Search properties of some sequential decoding algorithms.
NASA Technical Reports Server (NTRS)
Geist, J. M.
1973-01-01
Sequential decoding procedures are studied in the context of selecting a path through a tree. Several algorithms are considered, and their properties are compared. It is shown that the stack algorithm introduced by Zigangirov (1966) and by Jelinek (1969) is essentially equivalent to the Fano algorithm with regard to the set of nodes examined and the path selected, although the description, implementation, and action of the two algorithms are quite different. A modified Fano algorithm is introduced, in which the quantizing parameter is eliminated. It can be inferred from limited simulation results that, at least in some applications, the new algorithm is computationally inferior to the old. However, it is of some theoretical interest since the conventional Fano algorithm may be considered to be a quantized version of it.
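The stack algorithm's path-through-a-tree view can be sketched as a best-first search: keep an ordered stack of partial paths and always extend the one with the best metric. The code below is a generic illustration with an arbitrary toy metric, not a real Fano metric or channel model.

```python
import heapq

def stack_search(metric, depth, branching=2):
    """Zigangirov/Jelinek-style stack search: repeatedly extend the partial
    path with the best cumulative metric until a full-depth path is on top.
    metric(path) -> cumulative metric of a partial path (tuple of branch ids)."""
    heap = [(-metric(()), ())]              # max-heap via negated metrics
    while heap:
        neg_m, path = heapq.heappop(heap)
        if len(path) == depth:              # best full-depth path reached
            return path
        for b in range(branching):
            child = path + (b,)
            heapq.heappush(heap, (-metric(child), child))

# Toy metric rewarding branch 1 at every level; the search finds (1, 1, 1, 1).
best = stack_search(lambda p: sum(p), depth=4)
```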
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
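The order-recursive idea can be sketched with an orthogonalized polynomial basis: raising the model order by one reuses the previous fit's residual, so only one new projection is computed per order. This is a generic sketch of the technique, not the paper's exact recursions.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def fit_increasing_order(x, y, max_order):
    """Yield (order, residual sum of squares) as the model order grows.
    Each new order adds one Gram-Schmidt-orthogonalized basis vector and one
    coefficient (a single projection of the running residual)."""
    basis, residual = [], list(y)
    for k in range(max_order + 1):
        q = [xi ** k for xi in x]
        for b in basis:                        # orthogonalize against lower orders
            c = dot(q, b) / dot(b, b)
            q = [qi - c * bi for qi, bi in zip(q, b)]
        basis.append(q)
        c = dot(residual, q) / dot(q, q)       # new coefficient: one projection
        residual = [ri - c * qi for ri, qi in zip(residual, q)]
        yield k, dot(residual, residual)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]                  # exactly linear: y = 1 + 2x
rss = dict(fit_increasing_order(x, y, 2))
```

The residual sum of squares drops to zero at order 1 and stays there, which is how the minimum satisfactory order is read off.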
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures have had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transformation (FFT) of the residual k dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, which has resulted in a substantial reduction in computation time.
Quarantine document system indexing procedure
NASA Technical Reports Server (NTRS)
1972-01-01
The Quarantine Document System (QDS) is described including the indexing procedures and thesaurus of indexing terms. The QDS consists of these functional elements: acquisition, cataloging, indexing, storage, and retrieval. A complete listing of the collection, and the thesaurus are included.
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Ando, Masami
2013-09-01
We propose a reconstruction algorithm for analyzer-based phase-contrast computed tomography (CT) applicable to biological samples including hard tissue that may generate conspicuous artifacts with the conventional reconstruction method. The algorithm is an iterative procedure that goes back and forth between a tomogram and its sinogram through the Radon transform and CT reconstruction, while imposing a priori information in individual regions. We demonstrate the efficacy of the algorithm using synthetic data generated by computer simulation reflecting actual experimental conditions and actual data acquired from a rat foot by a dark field imaging system.
NASA Technical Reports Server (NTRS)
Georgeff, Michael P.; Lansky, Amy L.
1986-01-01
Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.
Parliamentary Procedure Made Easy.
ERIC Educational Resources Information Center
Hayden, Ellen T.
Based on the newly revised "Robert's Rules of Order," these self-contained learning activities will help students successfully and actively participate in school, social, civic, political, or professional organizations. There are 13 lessons. Topics studied include the what, why, and history of parliamentary procedure; characteristics of the ideal…
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Procedures and Policies Manual
ERIC Educational Resources Information Center
Davis, Jane M.
2006-01-01
This document was developed by the Middle Tennessee State University James E. Walker Library Collection Management Department to provide policies and procedural guidelines for the cataloging and processing of bibliographic materials. This document includes policies for cataloging monographs, serials, government documents, machine-readable data…
Bretland, P M
1988-01-01
The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
NOSS altimeter algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.
1982-01-01
A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.
A Pressure Based Multi-Fluid Algorithm for Multiphase Flow
NASA Astrophysics Data System (ADS)
Ming, P. J.; Zhang, W. P.; Lei, G. D.; Zhu, M. G.
A new finite volume-based numerical algorithm for predicting multiphase flow phenomena is presented. The method is formulated on an orthogonal coordinate system in collocated primitive variables. The SIMPLE-like algorithms are based on the prediction and correction procedure, and they are extended for all speed range. The object of the present work is to extent single phase SIMPLE algorithm to multiphase flow. The overview of the algorithm is described and relevant numerical issues are discussed extensively, including implicit process of the moment interaction with “partial elimination” (of the drag term), introduction of under-relaxation factor, formulation of momentum interpolation, and pressure correction equation. This model is based on the k-ɛ model assumed that the turbulence is dictated by the continuous phase. Thus only the transport equation for the continuous phase turbulence energy kc needed to be solved while a algebraic turbulence model is used for dispersed phase. The present author also designed a general program with FORTRAN90 program language for the new algorithm based on the household code General Transport Equation Analyzer (GTEA). The performance of the new method is assessed by solving a 3D bubbly two-phase flow in a vertical pipe. A good agreement is achieved between the numerical result and experimental data in the literature.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
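The fixed-size "function gas" dynamic can be sketched in miniature: pick two ensemble members at random, compose them, and let the product replace a random member so the ensemble size is conserved. The code below uses plain unary functions on small integers as a toy stand-in for the paper's lambda-calculus-derived language.

```python
import random

def step(gas, rng):
    """One collision: compose two random members; the product replaces a
    random member, keeping the ensemble size fixed."""
    f, g = rng.choice(gas), rng.choice(gas)
    product = lambda n, f=f, g=g: f(g(n)) % 97   # composition, kept bounded
    gas[rng.randrange(len(gas))] = product
    return gas

rng = random.Random(0)                            # fixed seed for repeatability
gas = [lambda n, k=k: (n + k) % 97 for k in range(10)]
for _ in range(100):
    step(gas, rng)
```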
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center (ESTSC)
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing-front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Efficient estimation algorithms for a satellite-aided search and rescue mission
NASA Technical Reports Server (NTRS)
Argentiero, P.; Garza-Robles, R.
1977-01-01
It has been suggested that a search and rescue orbiting satellite system be established as a means of locating distress signals from downed aircraft, small boats, and overland expeditions. Emissions from Emergency Locator Transmitters (ELTs), now available in most U.S. aircraft, are to be utilized in the positioning procedure. A description is presented of a set of Doppler navigation algorithms for extracting ELT position coordinates from Doppler data. The algorithms have been programmed for a small computing machine, and the resulting system has successfully processed both real and simulated Doppler data. A software system for solving the Doppler navigation problem must include an orbit propagator, a first-guess algorithm, and an algorithm for estimating longitude and latitude from Doppler data. Each of these components is considered.
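The first-guess stage can be sketched as a grid search for the emitter position that best explains the observed Doppler curve over a satellite pass. The geometry below (flat ground, straight-line track, illustrative carrier frequency and orbit numbers) is an assumption of this sketch, not the mission algorithms.

```python
import math

F0, C = 121.5e6, 3.0e8            # assumed ELT carrier (Hz) and speed of light (m/s)
H, V = 850e3, 7500.0              # assumed satellite altitude (m) and speed (m/s)

def doppler(ex, ey, t):
    """Doppler shift seen from a satellite at (V*t, 0, H) for an emitter at
    (ex, ey, 0): -f0/c times the range-rate."""
    sx = V * t
    rng = math.sqrt((sx - ex) ** 2 + ey ** 2 + H ** 2)
    return -F0 * V * (sx - ex) / rng / C

def first_guess(times, shifts, grid):
    """Coarse grid search minimizing squared Doppler residuals.
    Note: a single straight pass cannot distinguish ey from -ey (the classic
    two-sided ambiguity), so only |ey| is recoverable here."""
    best, best_err = None, float("inf")
    for ex in grid:
        for ey in grid:
            err = sum((doppler(ex, ey, t) - s) ** 2 for t, s in zip(times, shifts))
            if err < best_err:
                best, best_err = (ex, ey), err
    return best

times = [-100.0, -50.0, 0.0, 50.0, 100.0]
truth = (200e3, 300e3)
shifts = [doppler(*truth, t) for t in times]        # synthetic observations
grid = [i * 100e3 for i in range(-5, 6)]            # 100 km search grid
guess = first_guess(times, shifts, grid)
```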
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.
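The "minimal predictive set" idea can be illustrated with a toy greedy forward selection that adds whichever predictor most reduces the target's residual variance. This is only a stand-in for intuition; the actual Markov Blanket Fan Search uses conditional-independence tests over graphical models, not this score.

```python
def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def residual_after(y, cols):
    """Residual of y after a least-squares fit on cols (no intercept),
    computed by cheap cyclic coordinate descent."""
    r = list(y)
    for _ in range(200):
        for c in cols:
            den = sum(ci * ci for ci in c) or 1.0
            b = sum(ri * ci for ri, ci in zip(r, c)) / den
            r = [ri - b * ci for ri, ci in zip(r, c)]
    return r

def forward_select(y, columns, k):
    """Greedily pick k predictor indices that most shrink residual variance."""
    chosen = []
    for _ in range(k):
        best_j, best_v = None, float("inf")
        for j, c in enumerate(columns):
            if j in chosen:
                continue
            v = variance(residual_after(y, [columns[i] for i in chosen] + [c]))
            if v < best_v:
                best_j, best_v = j, v
        chosen.append(best_j)
    return chosen

x0 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x1 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
x2 = [5.0] * 6                       # irrelevant constant column
x3 = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0]  # irrelevant distractor
y = [a + b for a, b in zip(x0, x1)]  # target depends only on x0 and x1
chosen = forward_select(y, [x0, x1, x2, x3], k=2)
```

The selection recovers exactly the two relevant columns, the toy analogue of a Markov blanket for this synthetic target.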
Environmental Test Screening Procedure
NASA Technical Reports Server (NTRS)
Zeidler, Janet
2000-01-01
This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.
Antialiasing procedural shaders with reduction maps.
Van Horn, R Brooks; Turk, Greg
2008-01-01
Both image textures and procedural textures suffer from minification aliasing; however, unlike image textures, there is no good automatic method to anti-alias procedural textures. Given a procedural texture on a surface, we present a method that automatically creates an anti-aliased version of the procedural texture. The new procedural texture maintains the original texture's details, but reduces minification aliasing artifacts. This new algorithm creates a pyramid similar to MIP-Maps to represent the texture. Instead of storing per-texel color, our texture hierarchy stores weighted sums of reflectance functions, allowing a wider range of effects to be anti-aliased. The stored reflectance functions are automatically selected based on an analysis of the different reflectances found over the surface. When the texture is viewed at close range, the original texture is used, but as the texture footprint grows, the algorithm gradually replaces the texture's result with an anti-aliased one. PMID:18369263
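The pyramid idea can be sketched in 1D: precompute levels whose coarse entries store averaged shading values of a procedural texture, then answer each lookup from the level matching the screen-space footprint. Averaging precomputed values (here plain scalars, a toy stand-in for the paper's weighted reflectance sums) rather than point-sampling the procedure is what suppresses minification aliasing.

```python
def stripe(u, period=8):
    """Procedural 1D stripe texture: 1.0 on even stripes, 0.0 on odd."""
    return 1.0 if int(u // period) % 2 == 0 else 0.0

def build_pyramid(n=64):
    """Level 0 samples the procedure per texel; each coarser level averages
    pairs, like a MIP-map of shading values."""
    levels = [[stripe(u + 0.5) for u in range(n)]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[2 * i] + prev[2 * i + 1]) / 2.0
                       for i in range(len(prev) // 2)])
    return levels

def lookup(levels, u, footprint):
    """Pick the level whose texel size matches the integer footprint."""
    level = min(max(footprint.bit_length() - 1, 0), len(levels) - 1)
    return levels[level][min(int(u) >> level, len(levels[level]) - 1)]

pyr = build_pyramid()
fine = lookup(pyr, 3.0, footprint=1)     # close range: raw procedural value
coarse = lookup(pyr, 3.0, footprint=64)  # heavy minification: stripe average
```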
Laboratory test interpretations and algorithms in utilization management.
Van Cott, Elizabeth M
2014-01-01
Appropriate assimilation of laboratory test results into patient care is enhanced when pathologist interpretations of the laboratory tests are provided for clinicians, and when reflex algorithm testing is utilized. Benefits of algorithms and interpretations include avoidance of misdiagnoses, reducing the number of laboratory tests needed, reducing the number of procedures, transfusions and admissions, shortening the amount of time needed to reach a diagnosis, reducing errors in test ordering, and providing additional information about how the laboratory results might affect other aspects of a patient's care. Providing interpretations can be challenging for pathologists, therefore mechanisms to facilitate the successful implementation of an interpretation service are described. These include algorithm-based testing and interpretation, optimizing laboratory requisitions and/or order-entry systems, proficiency testing programs that assess interpretations and provide constructive feedback, utilization of a collection of interpretive sentences or paragraphs that can be building blocks ("coded comments") for constructing preliminary interpretations, middleware, and pathology resident participation and education. In conclusion, the combination of algorithms and interpretations for laboratory testing has multiple benefits for the medical care for the patient. PMID:24080245
Fast algorithms for combustion kinetics calculations: A comparison
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes EPISODE and LSODE and three special-purpose (for chemical kinetic calculations) codes CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. Algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release and equilibration. Variations of the temperature and species mole fraction are given with time for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.
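Why classical explicit methods struggle with kinetic rate equations can be seen on a stiff linear test equation: explicit Euler blows up at a step size where implicit (backward) Euler remains stable. This is a generic stiffness illustration, not one of the benchmarked codes.

```python
def explicit_euler(y, k, h, steps):
    """Explicit Euler on y' = -k*(y - 1); unstable when h > 2/k."""
    for _ in range(steps):
        y = y + h * (-k * (y - 1.0))
    return y

def backward_euler(y, k, h, steps):
    """Implicit update y_{n+1} = y_n + h*(-k*(y_{n+1} - 1)), solved exactly
    here because the right-hand side is linear."""
    for _ in range(steps):
        y = (y + h * k) / (1.0 + h * k)
    return y

k, h = 1000.0, 0.01           # h is far above the explicit stability limit 2/k
ye = explicit_euler(0.0, k, h, steps=50)   # diverges
yi = backward_euler(0.0, k, h, steps=50)   # converges to the equilibrium y = 1
```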
47 CFR 1.9005 - Included services.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to 47 CFR 90.187(b)(2)(v)); (z) The 218-219 MHz band (part 95 of this chapter); (aa) The Local... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...
47 CFR 1.9005 - Included services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... to 47 CFR 90.187(b)(2)(v)); (z) The 218-219 MHz band (part 95 of this chapter); (aa) The Local... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Spectrum Leasing Scope and Authority § 1.9005 Included services. The spectrum leasing policies and rules of this subpart apply to...
Proposed first-generation WSQ bit allocation procedure
Bradley, J.N.; Brislawn, C.M.
1993-09-08
The Wavelet/Scalar Quantization (WSQ) gray-scale fingerprint image compression algorithm involves a symmetric wavelet transform (SWT) image decomposition followed by uniform scalar quantization of each subband. The algorithm is adaptive insofar as the bin widths for the scalar quantizers are image-specific and are included in the compressed image format. Since the decoder requires only the actual bin width values -- but not the method by which they were computed -- the standard allows for future refinements of the WSQ algorithm by improving the method used to select the scalar quantizer bin widths. This report proposes a bit allocation procedure for use with the first-generation WSQ encoder. In previous work a specific formula is provided for the relative sizes of the scalar quantizer bin widths in terms of the variances of the SWT subbands. An explicit specification for the constant of proportionality, q, that determines the absolute bin widths was not given. The actual compression ratio produced by the WSQ algorithm will generally vary from image to image depending on the amount of coding gain obtained by the run-length and Huffman coding stages of the algorithm, but testing performed by the FBI established that WSQ compression produces archival quality images at compression ratios of around 20 to 1. The bit allocation procedure described in this report possesses a control parameter, r, that can be set by the user to achieve a predetermined amount of lossy compression, effectively giving the user control over the amount of distortion introduced by quantization noise. The variability observed in final compression ratios is thus due only to differences in lossless coding gain from image to image, chiefly a result of the varying amounts of blank background surrounding the print area in the images. Experimental results are presented that demonstrate the proposed method's effectiveness.
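The relationship between subband variances, a global constant q, and uniform scalar quantization can be sketched as follows. The specific width formula below (width proportional to q/sqrt(variance), so higher-variance subbands get finer bins) is an illustrative assumption of this sketch, not the formula from the WSQ specification.

```python
import math

def bin_widths(variances, q):
    """Illustrative variance-driven widths: finer bins where variance is high,
    all scaled by the global proportionality constant q."""
    return [q / math.sqrt(v) for v in variances]

def quantize(samples, width):
    return [round(s / width) for s in samples]

def dequantize(indices, width):
    return [i * width for i in indices]

variances = [4.0, 1.0, 0.25]               # per-subband variances
widths = bin_widths(variances, q=0.5)       # [0.25, 0.5, 1.0]
samples = [1.3, -0.7, 0.2]
rt = dequantize(quantize(samples, widths[0]), widths[0])
```

Uniform scalar quantization bounds the reconstruction error of each sample by half the bin width, which is why the choice of widths (and hence of q) directly controls the distortion budget.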
[Neural basis of procedural memory].
Mochizuki-Kawai, Hiroko
2008-07-01
Procedural memory is acquired by trial and error. Our daily life is supported by a number of procedural memories, such as those for riding a bicycle, typing, and reading words. Procedural memory is divided into 3 types: motor, perceptual, and cognitive. Here, the author reviews the cognitive and neural basis of procedural memory according to these 3 types. It is reported that basal ganglia or cerebellum dysfunction causes deficits in procedural memory. Compared with age-matched healthy participants, patients with Parkinson disease (PD), Huntington disease (HD), or spinocerebellar degeneration (SCD) show diminished improvement on motor-type procedural memory tasks. Previous neuroimaging studies have reported that motor-type procedural memory may be supported by multiple brain regions, including the frontal and parietal regions as well as the basal ganglia (cerebellum); this was found with a serial reaction time (SRT) task. Although the 2 other types of procedural memory are also maintained by multiple brain regions, the related cerebral areas depend on the type of memory. For example, it was suggested that acquisition of the perceptual type of procedural memory (e.g., the ability to read mirror images of words) might be maintained by the bilateral fusiform region, while the acquisition of cognitive procedural memory might be supported by the frontal, parietal, or cerebellar regions as well as the basal ganglia. In the future, we need to clearly understand the neural "network" related to procedural memory. PMID:18646622
Abstract models for the synthesis of optimization algorithms.
NASA Technical Reports Server (NTRS)
Meyer, G. G. L.; Polak, E.
1971-01-01
Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.
An ROLAP Aggregation Algorithm with the Rules Being Specified
NASA Astrophysics Data System (ADS)
Zhengqiu, Weng; Tai, Kuang; Lina, Zhang
This paper introduces the basic theory of data warehouses and ROLAP, and presents a new ROLAP aggregation algorithm in which the aggregation rules can be specified. It addresses a shortcoming of the traditional aggregation algorithm, which aggregates only by addition and therefore has low accuracy: by aggregating according to business rules, the proposed ROLAP aggregation with calculation algorithm improves accuracy. Key designs and procedures are presented, and an experiment demonstrates its efficiency compared with the traditional method.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
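For a finite-state Markov model such as the genetic mutation example, the retrodictive question has an exact Bayes'-rule answer, which gives a useful reference point for a stochastic algorithm. A minimal sketch, with a hypothetical two-state mutation model:

```python
def mat_mult(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_pow(p, n):
    result = [[float(i == j) for j in range(len(p))] for i in range(len(p))]
    for _ in range(n):
        result = mat_mult(result, p)
    return result

def retrodict(p, prior, final_state, steps):
    """Posterior over initial states given the observed final state:
    posterior(i) ~ prior(i) * (P^steps)[i][final]  (Bayes' rule)."""
    pn = mat_pow(p, steps)
    unnorm = [prior[i] * pn[i][final_state] for i in range(len(p))]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical two-state mutation model: state 0 is sticky, state 1 mutates often.
P = [[0.9, 0.1],
     [0.4, 0.6]]
post = retrodict(P, prior=[0.5, 0.5], final_state=0, steps=3)
```

The retrodictive SSA of the paper serves the same purpose by sampling, which scales to master equations whose state space is far too large to enumerate like this.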
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
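The pareto-optimality test underlying such a multi-objective GA is simple to state. A minimal sketch for minimization (the binning selection and gene-space transformation features are not shown):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep exactly the points that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical two-objective values for four candidate designs.
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)
```

Here (3.0, 3.0) is dominated by (2.0, 2.0) and drops out; the remaining three points form the pareto front that the GA's selection operators try to populate evenly.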
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans. PMID:17926698
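As a much simpler stand-in for the PLSR weighting, a fixed weighted-sum fusion of per-algorithm similarity scores already illustrates the hybrid idea. All weights, scores, and function names below are hypothetical:

```python
def fuse_scores(score_lists, weights):
    """Weighted-sum fusion of per-algorithm similarity scores for each
    face pair (a simplified stand-in for the optimal PLSR weighting)."""
    z = sum(weights)
    return [sum(w * s[i] for w, s in zip(weights, score_lists)) / z
            for i in range(len(score_lists[0]))]

def verify(fused, threshold=0.5):
    # Declare "same person" when the fused similarity clears a threshold.
    return [s >= threshold for s in fused]

alg_a = [0.9, 0.2, 0.7]   # hypothetical similarity scores, one per face pair
alg_b = [0.8, 0.4, 0.3]
fused = fuse_scores([alg_a, alg_b], weights=[2.0, 1.0])
decisions = verify(fused)
```

PLSR improves on such fixed weights by learning them from labeled pairs, and the paper's jackknife procedure then checks that the learned weighting generalizes.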
A Procedure for Morphological Analysis.
ERIC Educational Resources Information Center
Chapin, Paul G.; Norton, Lewis M.
A procedure, designated "MORPH," has been developed for the automatic morphological analysis of complex English words. Each word is reduced to a stem in canonical or dictionary form, plus affixes, inflectional and derivational, represented as morphemes or as syntactic features of the stem. The procedure includes the task of analyzing as many…
Ultrasound-Guided Hip Procedures.
Payne, Jeffrey M
2016-08-01
This article describes the techniques for performing ultrasound-guided procedures in the hip region, including intra-articular hip injection, iliopsoas bursa injection, greater trochanter bursa injection, ischial bursa injection, and piriformis muscle injection. The common indications, pitfalls, accuracy, and efficacy of these procedures are also addressed. PMID:27468669
Evaluation of Mechanical Losses in Piezoelectric Plates using Genetic algorithm
NASA Astrophysics Data System (ADS)
Arnold, F. J.; Gonçalves, M. S.; Massaro, F. R.; Martins, P. S.
Numerical methods are used for the characterization of piezoelectric ceramics. A procedure based on a genetic algorithm is applied to find the physical coefficients and mechanical losses. The coefficients are estimated by minimizing a cost function. Electric impedances are calculated from Mason's model, including mechanical losses that are either constant or depend linearly on frequency. The results show that the percentage error of the electric impedance over the investigated frequency interval decreases when frequency-dependent mechanical losses are inserted in the model. For a more accurate characterization of piezoelectric ceramics, mechanical losses should therefore be treated as frequency dependent.
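A minimal real-coded genetic algorithm of the kind described can be sketched as follows. The toy quadratic cost stands in for the impedance-error cost computed from Mason's model, and all parameter choices (population size, mutation scale, operators) are assumptions:

```python
import random

def ga_minimize(cost, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. A sketch of the characterization procedure, with
    a toy cost in place of Mason's-model impedance error."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)            # tournament of two
            parent1 = a if cost(a) < cost(b) else b
            c, d = rng.sample(pop, 2)
            parent2 = c if cost(c) < cost(d) else d
            child = 0.5 * (parent1 + parent2)    # blend crossover
            child += rng.gauss(0.0, 0.1 * (hi - lo))  # mutation
            new_pop.append(min(max(child, lo), hi))
        pop = new_pop
    return min(pop, key=cost)

# Toy cost: recover the loss coefficient that generated the "measurements".
true_c = 0.37
cost = lambda c: (c - true_c) ** 2
best = ga_minimize(cost, bounds=(0.0, 1.0))
```

In the actual procedure the cost would compare measured electric impedances against those predicted by Mason's model over the frequency interval of interest.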
Optimal Design of Geodetic Network Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vajedian, Sanaz; Bagheri, Hosein
2010-05-01
A geodetic network is a network measured precisely by terrestrial surveying techniques based on angle and distance measurements; it can monitor the stability of dams and towers and their surrounding terrain, and can monitor surface deformation. The main goals of an optimal geodetic network design process are to find the proper locations of the control stations (first-order design) and the proper weights of the observations (second-order design) so as to satisfy all the quality criteria for the network, where quality is evaluated by the network's accuracy, reliability (internal and external), sensitivity, and cost. The first-order design problem can be treated as a numeric optimization problem, in which finding the unknown coordinates of the network stations is the central issue. To find these unknown values, the network's geodetic observations, that is, angle and distance measurements, must be entered into an adjustment method, which requires inverse problem algorithms. Inverse problem algorithms are methods to find optimal solutions for given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful in finding the optimum of a continuous and differentiable function. Least squares (LS) is one of the classical techniques; it derives estimates of stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive procedures of optimization and search that find solutions to problems inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points, statistically moving toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. This algorithm starts with the definition of an initial population, and then the operators of selection, replication and variation are applied
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Improved algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2013-02-01
Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places.
New version program summary
Program title: CHANDRAS_v2
Catalogue identifier: AEMC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 976
No. of bytes in distributed program, including test data, etc.: 11416
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any computer with a Fortran 90 compiler
Operating system: Windows 7, Windows XP, Unix/Linux
RAM: 0.7 MB
Classification: 2.4, 7.2
Catalogue identifier of previous version: AEMC_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773
Does the new version supersede the old program: Yes
Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter.
Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is edited by mixing these two
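For isotropic scattering, the Chandrasekhar function satisfies the nonlinear integral equation H(x) = 1 + (omega/2) x H(x) ∫₀¹ H(mu)/(x+mu) dmu. A fixed-point iteration on a quadrature grid gives a simple reference implementation; this sketch is nowhere near the speed or 12-decimal accuracy of CHANDRAS_v2, and the grid sizes are assumptions:

```python
def chandrasekhar_h(x, omega, n=100, iters=80):
    """H(x, omega) for isotropic scattering via fixed-point iteration on
    H(x) = 1 + (omega/2) * x * H(x) * integral_0^1 H(mu)/(x+mu) dmu,
    rearranged as H = 1 / (1 - (omega/2) * x * integral).
    Midpoint quadrature; a slow, low-accuracy reference sketch only."""
    mu = [(i + 0.5) / n for i in range(n)]
    h = [1.0] * n                      # start from the omega = 0 solution
    for _ in range(iters):
        integral = [sum(h[j] / (mu[i] + mu[j]) for j in range(n)) / n
                    for i in range(n)]
        h = [1.0 / (1.0 - 0.5 * omega * mu[i] * integral[i])
             for i in range(n)]
    # Evaluate at x by one more application of the fixed-point map.
    integ_x = sum(h[j] / (x + mu[j]) for j in range(n)) / n
    return 1.0 / (1.0 - 0.5 * omega * x * integ_x)
```

Starting from H = 1 (exact for omega = 0), the iterates increase monotonically toward the physical solution; H grows with the single-scattering albedo omega, which provides a basic sanity check.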
Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms
Roychowdhury, V.P.; Kailath, T.
1989-01-01
The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs; non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree, we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning
Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.
2010-09-15
Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization. PMID:20964213
NASA Astrophysics Data System (ADS)
Bolognesi, Tommaso
2011-07-01
In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
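Gillespie's direct-method SSA, which the project builds on, can be sketched in a few lines. The birth-death system below is a toy example and all rate constants are hypothetical:

```python
import random

def gillespie(rates, stoich, x0, t_max, seed=0):
    """Gillespie's direct-method SSA: simulate every reaction event of a
    well-stirred chemical system. Toy example below: a birth-death
    process (0 -> S at rate k1, S -> 0 at rate k2 * count)."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while True:
        props = [r(x) for r in rates]          # reaction propensities a_j(x)
        a0 = sum(props)
        if a0 == 0.0:                          # no reaction can fire
            break
        t += rng.expovariate(a0)               # exponential waiting time
        if t > t_max:
            break
        u, j = rng.uniform(0.0, a0), 0         # pick reaction j with prob a_j/a0
        while j < len(props) - 1 and u > props[j]:
            u -= props[j]
            j += 1
        for i, change in enumerate(stoich[j]):
            x[i] += change
        history.append((t, tuple(x)))
    return history

k1, k2 = 5.0, 0.5                              # hypothetical rate constants
hist = gillespie(rates=[lambda x: k1, lambda x: k2 * x[0]],
                 stoich=[(+1,), (-1,)], x0=(0,), t_max=10.0)
```

Because every event is simulated individually, runtime grows with the total number of reaction firings; this is exactly the inefficiency that tau-leaping and the project's multiscale algorithms address.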
Solar Eclipse Monitoring for Solar Energy Applications Using the Solar and Moon Position Algorithms
Reda, I.
2010-03-01
This report includes a procedure for implementing an algorithm (described by Jean Meeus) to calculate the moon's zenith angle with uncertainty of +/-0.001 degrees and azimuth angle with uncertainty of +/-0.003 degrees. The step-by-step format presented here simplifies the complicated steps Meeus describes to calculate the Moon's position, and focuses on the Moon instead of the planets and stars. It also introduces some changes to accommodate solar radiation applications.
NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1
NASA Technical Reports Server (NTRS)
Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)
2002-01-01
This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
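The conjunction rules for several of the listed dependency conditions have one-line probabilistic forms. A sketch (labeling the no-knowledge interval as the Fréchet bounds is my gloss, not wording from the abstract):

```python
def and_independent(pa, pb):
    # Statistically independent assertions: P(A and B) = P(A) * P(B).
    return pa * pb

def and_mutually_exclusive(pa, pb):
    # Mutually exclusive assertions cannot both hold.
    return 0.0

def and_fuzzy(pa, pb):
    # Maximum overlap within the state space (fuzzy logic): min rule.
    return min(pa, pb)

def and_no_knowledge(pa, pb):
    # No knowledge of dependency: only an interval is justified
    # (the Frechet bounds), covering worst and best case analyses.
    return (max(0.0, pa + pb - 1.0), min(pa, pb))

lo_b, hi_b = and_no_knowledge(0.6, 0.5)
```

Each rule is a different, axiomatically justified answer to the same question, which is exactly why an environment offering a choice of algorithms is preferable to a single ad hoc strategy.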
A Frequency-Domain Substructure System Identification Algorithm
NASA Technical Reports Server (NTRS)
Blades, Eric L.; Craig, Roy R., Jr.
1996-01-01
A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface degree-of-freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics are accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance is explored with input frequency-response data covering narrow and broad frequency bandwidths. Its performance when noise is added to the frequency-response data, and the use of different least-squares solution techniques, are also examined. The identified reduced-order models are compared for accuracy with other test-analysis models, and a formulation for a Craig-Bampton test-analysis model is also presented.
Proper bibeta ROC model: algorithm, software, and performance evaluation
NASA Astrophysics Data System (ADS)
Chen, Weijie; Hu, Nan
2016-03-01
Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
Library of Continuation Algorithms
Energy Science and Technology Software Center (ESTSC)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
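The core of parameter continuation, one ingredient LOCA provides, is warm-starting Newton's method as the parameter is stepped along a branch. A minimal scalar sketch (LOCA itself targets large-scale systems and adds bifurcation tracking and stability analysis, which this omits):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Standard Newton iteration for a scalar equation f(x) = 0.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(f, df, x_start, lambdas):
    """Natural-parameter continuation: solve f(x, lam) = 0 for each lam,
    warm-starting Newton's method from the previous solution."""
    x, branch = x_start, []
    for lam in lambdas:
        x = newton(lambda v: f(v, lam), lambda v: df(v, lam), x)
        branch.append((lam, x))
    return branch

# Trace solutions of x^3 + x - lam = 0 as lam grows (monotone branch).
f = lambda x, lam: x**3 + x - lam
df = lambda x, lam: 3 * x**2 + 1
branch = continuation(f, df, x_start=0.0, lambdas=[0.1 * i for i in range(11)])
```

Warm-starting keeps each Newton solve in the basin of attraction of the branch being traced, which is what makes stepping through folds and bifurcations (LOCA's specialty) tractable in the large-scale setting.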
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There is also an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
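The sliding-window estimation idea can be sketched for the simplest case, a relation y = a*u + b that is linear in the unknown coefficients. The function names are assumptions, not the NAS interface:

```python
def fit_window(u, y):
    """Least-squares estimate of (a, b) in y = a*u + b over one frame.
    Closed-form simple linear regression; the relation need only be
    linear in the unknown coefficients, as in NAS."""
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    suu = sum((x - mu) ** 2 for x in u)
    suy = sum((x - mu) * (v - my) for x, v in zip(u, y))
    a = suy / suu
    return a, my - a * mu

def sliding_identify(u, y, window):
    """Moving-average identification: re-estimate the coefficients over
    a window that slides along the time history."""
    return [fit_window(u[i:i + window], y[i:i + window])
            for i in range(len(u) - window + 1)]

u = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [2 * x + 1 for x in u]          # true relation: a = 2, b = 1
ests = sliding_identify(u, y, window=3)
```

With time-varying systems, each windowed estimate tracks the locally valid coefficients, which is the point of the sliding-window option described above.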
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Improvements of HITS Algorithms for Spam Links
NASA Astrophysics Data System (ADS)
Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao
The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
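The original HITS iteration that the trust-score variants build on can be sketched directly from Kleinberg's definition: authority and hub scores reinforce each other through the link structure. The tiny link graph in the test is illustrative; the spam-resistant extensions described above are not reproduced here.

```python
# A minimal sketch of Kleinberg's HITS power iteration.
# links: dict mapping each node to the list of nodes it points to.

def hits(links, iters=50):
    nodes = set(links) | {v for vs in links.values() for v in vs}
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # authority score: sum of hub scores of pages linking to the node
        auth = {n: sum(hub[u] for u, vs in links.items() if n in vs)
                for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub score: sum of authority scores of pages the node links to
        hub = {n: sum(auth[v] for v in links.get(n, ())) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return auth, hub
```

A linkfarm inflates exactly these sums, which is why the trust-score filtering step is applied before scoring.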
A Short Survey of Document Structure Similarity Algorithms
Buttler, D
2004-02-27
This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
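The "simple weighted tag similarity" idea can be illustrated by comparing two documents through their tag-frequency vectors. The cosine weighting below is an assumption for illustration; the survey's exact weighting scheme may differ.

```python
# Hedged sketch of tag-based structural similarity: cosine similarity
# over tag-frequency vectors (the weighting choice is illustrative).
from collections import Counter

def tag_similarity(tags_a, tags_b):
    """Similarity in [0, 1] between two documents given as tag sequences."""
    ca, cb = Counter(tags_a), Counter(tags_b)
    dot = sum(ca[t] * cb[t] for t in ca.keys() & cb.keys())
    na = sum(v * v for v in ca.values()) ** 0.5
    nb = sum(v * v for v in cb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Such a measure ignores tag nesting entirely, which is exactly why it is cheap; the surprising result above is that this can still be effective in practice.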
An algorithmic approach for clinical management of chronic spinal pain.
Manchikanti, Laxmaiah; Helm, Standiford; Singh, Vijay; Benyamin, Ramsin M; Datta, Sukdeb; Hayek, Salim M; Fellows, Bert; Boswell, Mark V
2009-01-01
Interventional pain management, and the interventional techniques which are an integral part of that specialty, are subject to widely varying definitions and practices. How interventional techniques are applied by various specialties is highly variable, even for the most common procedures and conditions. At the same time, many payors, publications, and guidelines are showing increasing interest in the performance and costs of interventional techniques. There is a lack of consensus among interventional pain management specialists with regard to how to diagnose and manage spinal pain and the type and frequency of spinal interventional techniques which should be utilized to treat spinal pain. Therefore, an algorithmic approach is proposed, providing a step-by-step procedure for managing chronic spinal pain patients based upon evidence-based guidelines. The algorithmic approach is developed based on the best available evidence regarding the epidemiology of various identifiable sources of chronic spinal pain. Such an approach to spinal pain includes an appropriate history, examination, and medical decision making in the management of low back pain, neck pain and thoracic pain. This algorithm also provides diagnostic and therapeutic approaches to clinical management utilizing case examples of cervical, lumbar, and thoracic spinal pain. An algorithm for investigating chronic low back pain without disc herniation commences with a clinical question, examination and imaging findings. If there is evidence of radiculitis, spinal stenosis, or other demonstrable causes resulting in radiculitis, one may proceed with diagnostic or therapeutic epidural injections. In the algorithmic approach, facet joints are considered first in the algorithm because of their commonality as a source of chronic low back pain, followed by sacroiliac joint blocks if indicated and provocation discography as the last step. Based on the literature, in the United States, in patients without disc
Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2016-05-01
OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes. PMID:26799412
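The validation metrics reported above are the standard positive predictive value and sensitivity formulas. The helper below shows the computation; the counts in the test are made-up numbers chosen only to reproduce the 92% PPV figure for a 625-admission cohort, not data from the study.

```python
# Standard validation metrics for a code-based case-finding algorithm,
# computed against a chart-review gold standard.

def ppv_sensitivity(true_pos, false_pos, false_neg):
    """PPV: fraction of algorithm-flagged cases that are real.
    Sensitivity: fraction of real cases the algorithm flags."""
    ppv = true_pos / (true_pos + false_pos)
    sensitivity = true_pos / (true_pos + false_neg)
    return ppv, sensitivity
```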
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-11-01
This is a parallelized algorithm performing a decomposition of a noisy time series into a number of sinusoidal components. The algorithm analyses all suspicious periodicities that can be revealed, including the ones that look like an alias or noise at a glance, but later may prove to be a real variation. After the selection of the initial candidates, the algorithm performs a complete pass through all their possible combinations and computes the rigorous multifrequency statistical significance for each such frequency tuple. The largest combinations that still survived this thresholding procedure represent the outcome of the analysis.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA performs best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
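The parameters the abstract lists (population size, search space, crossover and mutation probabilities, fitness criterion) appear explicitly as arguments in the minimal GA sketch below. It is a generic illustration for unconstrained one-dimensional minimization, not the authors' implementation; all defaults and the selection/crossover choices are assumptions.

```python
# Minimal GA sketch for unconstrained minimization of f on [lo, hi].
# Selection: truncation; crossover: averaging; mutation: Gaussian perturbation.
import random

def ga_minimize(f, lo, hi, pop_size=40, p_cross=0.8, p_mut=0.1,
                generations=200, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]   # search space
    for _ in range(generations):
        pop.sort(key=f)                     # fitness criterion: smaller is better
        survivors = pop[:pop_size // 2]     # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            x = (a + b) / 2 if rng.random() < p_cross else a   # crossover
            if rng.random() < p_mut:                           # mutation
                x += rng.gauss(0, (hi - lo) * 0.05)
            children.append(min(max(x, lo), hi))
        pop = survivors + children
    return min(pop, key=f)
```

A preprocessor in the abstract's sense would tune `pop_size`, `p_cross`, `p_mut`, and the bounds before this loop is ever run.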
Promoting Understanding of Linear Equations with the Median-Slope Algorithm
ERIC Educational Resources Information Center
Edwards, Michael Todd
2005-01-01
The preliminary findings obtained when an invented algorithm is used with entry-level students to introduce linear equations are described. Because the calculations are accessible, the algorithm is preferable to more rigorous statistical procedures in entry-level classrooms.
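A median-slope fit can be sketched as follows: pair each point in the lower half of the data (sorted by x) with the corresponding point in the upper half, take the median of the pairwise slopes, then take the median residual as the intercept. The exact pairing used in the classroom algorithm is not specified in the abstract, so this pairing is an assumption.

```python
# Hedged sketch of a median-slope line fit (pairing scheme is an assumption).
from statistics import median

def median_slope_fit(points):
    """Fit y = m*x + b from a list of (x, y) points using median slopes."""
    pts = sorted(points)
    half = len(pts) // 2
    # pair the i-th point of the lower half with the i-th of the upper half
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(pts[:half], pts[half:])]
    m = median(slopes)
    b = median(y - m * x for x, y in pts)   # intercept from median residual
    return m, b
```

Every step is a subtraction, a division, or a median, which is what makes the method accessible without statistical machinery.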
Jet-calculus approach including coherence effects
Jones, L.M.; Migneron, R.; Narayanan, K.S.S.
1987-01-01
We show how integrodifferential equations typical of jet calculus can be combined with an averaging procedure to obtain jet-calculus-based results including the Mueller interference graphs. Results in longitudinal-momentum fraction x for physical quantities are higher at intermediate x and lower at large x than with the conventional ''incoherent'' jet calculus. These results resemble those of Marchesini and Webber, who used a Monte Carlo approach based on the same dynamics.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
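The conventional SA loop described above (random proposal, always accept improvements, accept worse moves with a temperature-dependent probability, shrink the search as the temperature cools) can be sketched directly. This is the plain SA core, not the recursive-branching variant; the cooling schedule and step-size choices are illustrative.

```python
# Conventional simulated-annealing sketch, matching the description above.
import math, random

def anneal(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = x + rng.gauss(0, t)          # proposal; region shrinks as t drops
        fy = f(y)
        # accept better moves always; worse moves with probability exp(-df/t)
        if fy < fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                     # lower the annealing temperature
    return best
```

RBSA, as described, runs branches of this loop recursively in parallel rather than a single chain.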
Myers, Timothy
2006-09-01
The use of protocols or care algorithms in medical facilities has increased in the managed care environment. The definition and application of care algorithms, with a particular focus on the treatment of acute bronchospasm, are explored in this review. The benefits and goals of using protocols, especially in the treatment of asthma, to standardize patient care based on clinical guidelines and evidence-based medicine are explained. Ideally, evidence-based protocols should translate research findings into best medical practices that would serve to better educate patients and their medical providers who are administering these protocols. Protocols should include evaluation components that can monitor, through some mechanism of quality assurance, the success and failure of the instrument so that modifications can be made as necessary. The development and design of an asthma care algorithm can be accomplished by using a four-phase approach: phase 1, identifying demographics, outcomes, and measurement tools; phase 2, reviewing, negotiating, and standardizing best practice; phase 3, testing and implementing the instrument and collecting data; and phase 4, analyzing the data and identifying areas of improvement and future research. The experiences of one medical institution that implemented an asthma care algorithm in the treatment of pediatric asthma are described. Their care algorithms served as tools for decision makers to provide optimal asthma treatment in children. In addition, the studies that used the asthma care algorithm to determine the efficacy and safety of ipratropium bromide and levalbuterol in children with asthma are described. PMID:16945065
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences impairing the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluation. One approach to resolving this problem of random effects would be to collect a vast number of single images, to combine them appropriately, and to process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp line structures. This inevitably makes visually controlled processing, image by image, unavoidable. Particularly in tomographic studies, it would be by far too tedious to treat each single projection in this way. Alternatively, it would be not only more comfortable but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without the loss of fine structures and without a general blurring of the image. It is an iterative, parameter-free filtering algorithm, suitable for batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
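The key idea, replacing only pixels that are inconsistent with their neighbourhood while leaving everything else untouched, and iterating until no such pixels remain, can be sketched as below. Note the sketch is not the NECTAR algorithm itself: it uses an explicit threshold and a fixed 3x3 window, whereas the published algorithm is parameter-free; both choices here are assumptions for illustration.

```python
# Hedged sketch of selective despeckling: replace a pixel with its
# neighbourhood median only if it deviates strongly; iterate to convergence.

def despeckle(img, thresh, max_iter=10):
    """img: 2-D list of numbers; returns a filtered copy."""
    h, w = len(img), len(img[0])
    for _ in range(max_iter):
        changed = False
        out = [row[:] for row in img]
        for i in range(h):
            for j in range(w):
                nbrs = [img[a][b]
                        for a in range(max(0, i - 1), min(h, i + 2))
                        for b in range(max(0, j - 1), min(w, j + 2))
                        if (a, b) != (i, j)]
                nbrs.sort()
                med = nbrs[len(nbrs) // 2]
                if abs(img[i][j] - med) > thresh:   # dotted interference?
                    out[i][j] = med                  # replace; keep the rest
                    changed = True
        img = out
        if not changed:
            break
    return img
```

Because unchanged pixels pass through verbatim, fine structures survive where a plain median filter would blur them.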
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta
2014-10-01
An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state was developed based on the intensity changes of the fundus reflex.
Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli
2012-01-01
OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This study performed quality-improvement research. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation even though more severely ill patients were subjected to videothoracoscopic surgery. PMID:22760892
A spreadsheet algorithm for stagewise solvent extraction
Leonard, R.A.; Regalbuto, M.C.
1993-01-01
Part of the novelty is the way in which the problem is organized in the spreadsheet. In addition, to facilitate spreadsheet setup, a new calculational procedure has been developed. The resulting Spreadsheet Algorithm for Stagewise Solvent Extraction (SASSE) can be used with either IBM or Macintosh personal computers as a simple yet powerful tool for analyzing solvent extraction flowsheets.
Interventional radiology neck procedures.
Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L
2016-05-01
Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation. PMID:27138033
Practical pearls for oral procedures.
Davari, Parastoo; Fazel, Nasim
2016-01-01
We provide an overview of clinically relevant principles of oral surgical procedures required in the workup and management of oral mucosal diseases. An understanding of the fundamental concepts of how to perform safely and effectively minor oral procedures is important to the practicing dermatologist and can minimize the need for patient referrals. This chapter reviews the principles of minor oral procedures, including incisional, excisional, and punch biopsies, as well as minor salivary gland excision. Pre- and postoperative patient care is also discussed. PMID:27343958
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
Genetic algorithms and MCML program for recovery of optical properties of homogeneous turbid media
Morales Cruzado, Beatriz; y Montiel, Sergio Vázquez; Atencio, José Alberto Delgado
2013-01-01
In this paper, we present and validate a new method for recovering the optical properties of turbid media with slab geometry. It is an iterative method that compares diffuse reflectance and transmittance, measured using integrating spheres, with those obtained using the well-known MCML algorithm. The search procedure is based on the evolution of a population through selection of the best individuals, i.e., a genetic algorithm. This new method includes several corrections, such as non-linear effects in integrating-sphere measurements and loss of light due to the finite size of the sample. As a potential application and proof-of-principle experiment of this new method, we use the algorithm to recover the optical properties of blood samples at different degrees of coagulation. PMID:23504404
A comparative study of algorithms for radar imaging from gapped data
NASA Astrophysics Data System (ADS)
Xu, Xiaojian; Luan, Ruixue; Jia, Li; Huang, Ying
2007-09-01
In ultra wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted due to various reasons, either periodically or randomly. Such interruption produces phase history data gaps, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. The higher level artifacts severely degrade the radar images. In this work, several novel techniques for artifacts suppression in gapped data imaging were discussed. These include: (1) A maximum entropy based gap filling technique using a modified Burg algorithm (MEBGFT); (2) An alternative iteration deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) A windowed coherent CLEAN algorithm; and (4) Two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. Performance of various techniques is comparatively studied.
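Several of the techniques above are built on the classic CLEAN loop: find the strongest response, subtract a scaled copy of the point-spread function (psf) at that location, record it as a clean component, and repeat. The 1-D sketch below shows that plain loop only; the windowed coherent weighting of technique (3) is not reproduced, and the psf and gain are illustrative.

```python
# Minimal 1-D CLEAN sketch: iteratively peel off the psf response of the
# strongest peak and accumulate it as a clean component.

def clean(dirty, psf, gain=0.1, iters=200):
    """dirty: measured profile; psf: point-spread function centred at len(psf)//2."""
    residual = list(dirty)
    comps = [0.0] * len(dirty)
    c = len(psf) // 2
    for _ in range(iters):
        k = max(range(len(residual)), key=lambda i: abs(residual[i]))
        amp = gain * residual[k]
        comps[k] += amp
        for m, p in enumerate(psf):          # subtract shifted, scaled psf
            idx = k + m - c
            if 0 <= idx < len(residual):
                residual[idx] -= amp * p
        if max(abs(r) for r in residual) < 1e-6:
            break
    return comps, residual
```

With gapped data the sidelobes of the psf are what produce the image artifacts, and the gap-filling and entropy techniques above attack the same problem from the spectral-estimation side.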
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new methods depends
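The Givens rotation at the heart of the QRD-based updates zeroes one matrix element by rotating two rows; applied repeatedly, it performs the QR update each new data sample requires. A minimal pure-Python sketch, for illustration only:

```python
# Givens rotation sketch: zero M[j][k] by rotating rows i and j of M.
import math

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] applied to (a, b) giving (r, 0)."""
    if b == 0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def apply_givens(M, i, j, k):
    """In place: rotate rows i and j of M so that M[j][k] becomes zero."""
    c, s = givens(M[i][k], M[j][k])
    for col in range(len(M[0])):
        ri, rj = M[i][col], M[j][col]
        M[i][col] = c * ri + s * rj
        M[j][col] = -s * ri + c * rj
```

Because each rotation touches only two rows, a systolic array can pipeline a sequence of them, which is what the architectures compared in the thesis exploit.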
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036
SamACO: variable sampling ant colony optimization algorithm for continuous optimization.
Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou
2010-12-01
An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution construction and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to continuous optimization problems by focusing on continuous variable sampling as the key to transforming ACO from discrete to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409
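The three steps above (candidate generation, solution construction, pheromone update) can be illustrated with a much-simplified archive-based continuous ACO in the spirit of ACO_R. This is not the SamACO algorithm itself; the function name and every parameter (`continuous_aco`, `n_ants`, `archive_size`, `iters`) are assumptions made for the sketch.

```python
import random

def continuous_aco(f, bounds, n_ants=10, archive_size=10, iters=300, seed=0):
    """Minimize f over box bounds with a simplified archive-based continuous ACO.

    Each iteration mirrors the three steps described above:
      1. candidate values are sampled around archived solutions (the
         "pheromone" is implicit in the rank-weighted archive),
      2. ants assemble new solutions from those samples,
      3. the archive keeps the best solutions (pheromone update).
    """
    rng = random.Random(seed)
    archive = sorted(
        ([rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(archive_size)),
        key=f)
    for _ in range(iters):
        ants = []
        for _ in range(n_ants):
            # Rank-biased guide choice: taking the min of two random
            # indices favors better-ranked (lower-index) solutions.
            guide = archive[min(rng.randrange(archive_size),
                                rng.randrange(archive_size))]
            sol = []
            for d, (lo, hi) in enumerate(bounds):
                # Sampling spread tracks the archive's dispersion in dimension d.
                sigma = sum(abs(s[d] - guide[d]) for s in archive) \
                        / (archive_size - 1) + 1e-12
                sol.append(min(max(rng.gauss(guide[d], sigma), lo), hi))
            ants.append(sol)
        archive = sorted(archive + ants, key=f)[:archive_size]
    return archive[0]
```

As the archive converges, the per-dimension dispersion shrinks, so the sampling automatically narrows from exploration toward exploitation.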
Reference Policies and Procedures Manual.
ERIC Educational Resources Information Center
George Mason Univ., Fairfax, VA.
This guide to services of the reference department of Fenwick Library, George Mason University, is intended for use by staff in the department, as well as the general public. Areas covered include (1) reference desk services to users; (2) reference desk support procedures; (3) off desk services; (4) collection development, including staff…
A Parallel Algorithm for the Vehicle Routing Problem
Groer, Christopher S; Golden, Bruce; Edward, Wasil
2011-01-01
The vehicle routing problem (VRP) is a difficult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly find high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of different scenarios.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
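A minimal example of the single-step explicit structure is the classical second-order Lax-Wendroff scheme for the 1D linear advection equation u_t + c u_x = 0. The paper's algorithms reach far higher order and resolution, but the update shape, one explicit formula advancing the solution a full time step, is analogous; the function below is a sketch under that assumption.

```python
def lax_wendroff_step(u, cfl):
    """One single-step explicit update for u_t + c u_x = 0 on a periodic grid.

    Lax-Wendroff achieves second order in both space and time from a
    single Taylor expansion; cfl = c * dt / dx. Python's negative
    indexing (u[-1]) supplies the periodic wrap on the left boundary.
    """
    n = len(u)
    return [u[i]
            - 0.5 * cfl * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * cfl * cfl * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
            for i in range(n)]
```

At cfl = 1 the scheme reduces to an exact one-cell shift, a convenient sanity check for any explicit advection update.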
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
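The Dijkstra step at the heart of such centerline algorithms can be sketched on a small 2D grid. The per-cell cost field standing in for the distance-transform-derived weights is an assumption, and this sketch omits the segmentation and BVC stages entirely.

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Cheapest path on a 2D grid of per-cell costs (4-connected).

    In centerline extraction, the per-cell cost is typically derived from
    a distance transform so that voxels far from the organ wall are
    cheap, pulling the optimal path toward the centerline.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a cost field that is low in the interior and high near the boundary, the returned path hugs the middle of the free space, which is exactly the centerline behavior the distance mapping is meant to induce.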
36 CFR 908.32 - Review procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...
36 CFR 908.32 - Review procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Review procedures. 908.32... DEVELOPMENT AREA Review Procedure § 908.32 Review procedures. (a) Upon receipt of a request for review, the... applicable regulations; (2) Information submitted by the applicant including the request for review and...
46 CFR 148.5 - Alternative procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...
46 CFR 148.5 - Alternative procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-5223) may authorize the use of an alternative procedure, including exemptions to the IMSBC...
46 CFR 148.5 - Alternative procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...
46 CFR 148.5 - Alternative procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Alternative procedures. 148.5 Section 148.5 Shipping... MATERIALS THAT REQUIRE SPECIAL HANDLING General § 148.5 Alternative procedures. (a) The Commandant (CG-ENG-5) may authorize the use of an alternative procedure, including exemptions to the IMSBC...
Medical Service Clinical Laboratory Procedures--Bacteriology.
ERIC Educational Resources Information Center
Department of the Army, Washington, DC.
This manual presents laboratory procedures for the differentiation and identification of disease agents from clinical materials. Included are procedures for the collection of specimens, preparation of culture media, pure culture methods, cultivation of the microorganisms in natural and simulated natural environments, and procedures in…
7 CFR 15b.25 - Procedural safeguards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 1 2011-01-01 2011-01-01 false Procedural safeguards. 15b.25 Section 15b.25... Education § 15b.25 Procedural safeguards. A recipient that provides a public elementary or secondary... related services, a system of procedural safeguards that includes notice, an opportunity for the...
45 CFR 84.36 - Procedural safeguards.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Procedural safeguards. 84.36 Section 84.36 Public... Secondary Education § 84.36 Procedural safeguards. A recipient that operates a public elementary or... need special instruction or related services, a system of procedural safeguards that includes...
34 CFR 104.36 - Procedural safeguards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 1 2011-07-01 2011-07-01 false Procedural safeguards. 104.36 Section 104.36 Education... Preschool, Elementary, and Secondary Education § 104.36 Procedural safeguards. A recipient that operates a... procedural safeguards that includes notice, an opportunity for the parents or guardian of the person...
Procedures for Peer Review of Grant Applications
ERIC Educational Resources Information Center
US Department of Education, 2006
2006-01-01
This guide presents information on the procedures for peer review of grant applications. It begins with an overview of the review process for grant application submission and review. The review process includes: (1) pre-submission procedures that enable the Institute to plan for specific review sessions; (2) application processing procedures; (3)…