A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real-arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. Linear programming is then applied to solve the converted problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity of the linear optimization approach to initial conditions is also demonstrated.
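The first-order eigenvalue sensitivities that drive the linearization can be sketched as follows (a minimal illustration, not the authors' code; it assumes a symmetric generalized eigenproblem K·φ = λ·M·φ with design-parameter derivatives dK/dp and dM/dp supplied by the caller):

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_sensitivity(K, M, dK_dp, dM_dp, mode=0):
    """First-order sensitivity d(lambda)/dp of the generalized eigenproblem
    K @ phi = lambda * M @ phi with respect to a design parameter p:
    dlam/dp = phi^T (dK/dp - lam * dM/dp) phi / (phi^T M phi)."""
    vals, vecs = eigh(K, M)
    lam, phi = vals[mode], vecs[:, mode]
    return (phi @ (dK_dp - lam * dM_dp) @ phi) / (phi @ M @ phi)
```

Sensitivities of this form for each design parameter supply the coefficients of the linear inequality constraints handed to the LP at every continuation step.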
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved time reductions of orders of magnitude in computing EMs. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
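The IP/LP alternation can be sketched with SciPy's `milp` and `linprog` on a toy three-reaction chain (a hedged illustration, not the authors' Matlab implementation; a real EM computation needs additional conditions to guarantee elementarity of the LP solution):

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds

def hitting_set_ip(em_supports, n_rxns):
    """IP step: a minimal set of reactions whose deletion disables
    every previously identified EM (each EM support must be hit)."""
    if not em_supports:
        return set()
    A = np.zeros((len(em_supports), n_rxns))
    for i, support in enumerate(em_supports):
        A[i, list(support)] = 1.0
    res = milp(c=np.ones(n_rxns),
               constraints=LinearConstraint(A, lb=1.0),
               integrality=np.ones(n_rxns), bounds=Bounds(0, 1))
    return {j for j in range(n_rxns) if res.x[j] > 0.5}

def lp_step(S, deleted):
    """LP step: a nonzero steady-state flux (S v = 0, v >= 0) that
    avoids the deleted reactions; None signals a minimal cut set."""
    n = S.shape[1]
    bnds = [(0.0, 0.0) if j in deleted else (0.0, None) for j in range(n)]
    res = linprog(c=np.zeros(n),
                  A_eq=np.vstack([S, np.ones((1, n))]),
                  b_eq=np.append(np.zeros(S.shape[0]), 1.0),  # total flux = 1
                  bounds=bnds)
    return res.x if res.status == 0 else None
```

On a linear chain with one EM, the first LP returns the chain flux; the IP then deletes one chain reaction, after which the infeasible LP exposes that deletion set as an MCS.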
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), which uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
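The sequential Gaussian simulation that gstat performs can be illustrated in one dimension (a hedged sketch of the general idea using simple kriging, not gstat's implementation; the covariance function `cov` and the random visiting path are assumptions):

```python
import numpy as np

def sgs_1d(xs, cov, rng):
    """Unconditional sequential Gaussian simulation on a 1-D grid:
    visit nodes in a random path; at each node, simple-krige the mean
    and variance from already-simulated nodes, then draw from that
    conditional normal distribution."""
    n = len(xs)
    z = np.zeros(n)
    done = []                                 # indices simulated so far
    for idx in rng.permutation(n):
        if done:
            C = cov(np.abs(np.subtract.outer(xs[done], xs[done])))
            c0 = cov(np.abs(xs[done] - xs[idx]))
            w = np.linalg.solve(C, c0)        # simple kriging weights
            mu = w @ z[done]
            var = max(cov(0.0) - w @ c0, 1e-12)
        else:
            mu, var = 0.0, cov(0.0)
        z[idx] = rng.normal(mu, np.sqrt(var))
        done.append(idx)
    return z
```

Conditional simulation differs only in seeding `done` with the observed data points before the random path begins.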
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.
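One of the one-dimensional search options mentioned, the Golden Section method, can be sketched as follows (a generic textbook version, not the ADS FORTRAN routine):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]:
    shrink the bracket by the factor 1/phi each iteration, reusing one
    interior evaluation point per step."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                       # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                       # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

In a layered code like ADS, a routine of this shape sits at the lowest level, called by the optimizer to pick the step length along each search direction.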
NASA Astrophysics Data System (ADS)
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important to design electrical machinery with high efficiency from the standpoint of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of structural freedom, it can yield novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, which has many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
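The sequential-linear-programming loop with an adaptive move limit can be sketched generically (an unconstrained toy version, not the authors' topology-optimization code; the box half-width `move` and shrink factor are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def slp(f, grad, x0, lb, ub, move=0.5, shrink=0.7, iters=30):
    """Sequential linear programming with an adaptive move limit:
    minimize the linearized objective inside a box of half-width
    `move` around x_k; shrink the box whenever the step fails to
    improve the true objective."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        lo = np.maximum(lb, x - move)
        hi = np.minimum(ub, x + move)
        res = linprog(c=grad(x), bounds=list(zip(lo, hi)))
        x_new = res.x
        if f(x_new) < fx:                 # accept the improving step
            x, fx = x_new, f(x_new)
        else:                             # relax (shrink) the move limit
            move *= shrink
    return x
```

The move limit is what keeps each LP step inside the region where the linearization is trustworthy; adapting it is the practical core of the method.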
A comparison of SuperLU solvers on the intel MIC architecture
NASA Astrophysics Data System (ADS)
Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.
2016-10-01
In many science and engineering applications, problems may result in solving a sparse linear system AX=B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices for 2D problems and hepta-diagonal matrices for 3D problems coming from an incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of the SuperLU solvers (see [2]) are examined on Intel Xeon Phi coprocessors using the offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and randomly located matrices. This architecture can benefit from high parallelism and large vectors. We find that the sequential SuperLU gained up to a 45% performance improvement from offload programming, depending on the sparse matrix type and the size of transferred and processed data.
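For reference, the sequential SuperLU factorization is exposed in SciPy as `splu`; a small penta-diagonal example of the AX=B solve the abstract describes (a generic illustration, not the EURORA benchmark setup):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Penta-diagonal test matrix like those arising from 2-D flow stencils
n = 100
bands = [np.full(n - 2, -1.0), np.full(n - 1, -1.0), np.full(n, 5.0),
         np.full(n - 1, -1.0), np.full(n - 2, -1.0)]
A = sp.diags(bands, offsets=[-2, -1, 0, 1, 2], format="csc")

lu = splu(A)               # sparse LU factorization via sequential SuperLU
b = A @ np.ones(n)
x = lu.solve(b)            # should recover the all-ones vector
```

The factor object can be reused for many right-hand sides, which is the usual pattern in time-stepping flow simulations.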
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
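The parallel, binary-tree summation that replaced the sequential reduction can be sketched in its combination pattern (a serial Python sketch of the tree order only; in the coarray Fortran program each image would hold one leaf and the halving would happen across images):

```python
def tree_sum(xs):
    """Pairwise (binary-tree) reduction: the combination pattern that
    lets a collective sum finish in O(log N) parallel steps instead of
    the N-1 steps of a left-to-right sequential sum."""
    n = len(xs)
    if n == 1:
        return xs[0]
    mid = n // 2
    return tree_sum(xs[:mid]) + tree_sum(xs[mid:])
```

Beyond the parallel step count, pairwise combination also tends to reduce floating-point rounding error relative to a strictly sequential accumulation.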
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well a manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Traversing Theory and Transgressing Academic Discourses: Arts-Based Research in Teacher Education
ERIC Educational Resources Information Center
Dixon, Mary; Senior, Kim
2009-01-01
Pre-service teacher education is marked by linear and sequential programming which offers a plethora of strategies and methods (Cochran-Smith & Zeichner, 2005; Darling-Hammond & Bransford, 2005; Grant & Zeichner, 1997). This paper emerges from a three-year study within a core education subject in pre-service teacher education in…
Learning directed acyclic graphs from large-scale genomics data.
Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos
2017-09-20
In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numeric simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
Analysis of Optimal Sequential State Discrimination for Linearly Independent Pure Quantum States.
Namkung, Min; Kwon, Younghun
2018-04-25
Recently, J. A. Bergou et al. proposed sequential state discrimination as a new quantum state discrimination scheme. In the scheme, by successful sequential discrimination of a qubit state, receivers Bob and Charlie can share the information of the qubit prepared by a sender Alice. A merit of the scheme is that a quantum channel is established between Bob and Charlie even though no classical communication is allowed. In this report, we present a method for extending the original sequential state discrimination of two qubit states to a scheme for N linearly independent pure quantum states. Specifically, we obtain the conditions for the sequential state discrimination of N = 3 pure quantum states. We can analytically provide conditions when there is a special symmetry among the N = 3 linearly independent pure quantum states. Additionally, we show that the scenario proposed in this study can be applied to quantum key distribution. Furthermore, we show that the sequential state discrimination of three qutrit states performs better than the strategy of probabilistic quantum cloning.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of defining an objective function and constraints, while retaining the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
G-sequentially connectedness for topological groups with operations
NASA Astrophysics Data System (ADS)
Mucuk, Osman; Cakalli, Huseyin
2016-08-01
It is a well-known fact that for a Hausdorff topological group X, the limits of convergent sequences in X define a function, denoted by lim, from the set of all convergent sequences in X to X. This notion has been modified by Connor and Grosse-Erdmann for real functions by replacing lim with an arbitrary linear functional G defined on a linear subspace of the vector space of all real sequences. Recently some authors have extended the concept to the topological group setting and introduced the concepts of G-sequential continuity, G-sequential compactness and G-sequential connectedness. In this work, we present some results about G-sequential closures, G-sequential connectedness and fundamental systems of G-sequentially open neighbourhoods for topological groups with operations, which include topological groups, topological rings without identity, R-modules, Lie algebras, Jordan algebras, and many others.
ERIC Educational Resources Information Center
Ayalon, Michal; Watson, Anne; Lerman, Steve
2015-01-01
This study investigates students' ways of attending to linear sequential data in two tasks, and conjectures possible relationships between those ways and elements of the task design. Drawing on the substantial literature about such situations, we focus for this paper on linear rate of change, and on covariation and correspondence approaches to…
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper, the problem of damping linear system oscillations with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of the Hill equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm further develops the approach proposed in [1], with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is then used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
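A minimal SQP example, using SciPy's SLSQP rather than the thesis algorithm, shows the kind of constrained subproblem solved at each iterate (the quadratic objective and single inequality constraint are assumed toy data):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize x0^2 + x1^2 subject to the inequality constraint x0 + x1 >= 1,
# a stand-in for the path constraints of an optimal control transcription.
res = minimize(
    fun=lambda x: x[0]**2 + x[1]**2,
    x0=np.array([2.0, 0.0]),
    method="SLSQP",          # SciPy's sequential quadratic programming variant
    constraints=[{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}],
)
# The optimum lies on the constraint boundary at x = (0.5, 0.5)
```

In a direct transcription of an optimal control problem, the decision vector would stack the discretized states and controls, and the path constraints would appear as one such inequality per time node, which is why the number of time intervals drives the cost.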
Programmable polyproteams built using twin peptide superglues.
Veggiani, Gianluca; Nakamura, Tomohiko; Brenner, Michael D; Gayet, Raphaël V; Yan, Jun; Robinson, Carol V; Howarth, Mark
2016-02-02
Programmed connection of amino acids or nucleotides into chains introduced a revolution in control of biological function. Reacting proteins together is more complex because of the number of reactive groups and delicate stability. Here we achieved sequence-programmed irreversible connection of protein units, forming polyprotein teams by sequential amidation and transamidation. SpyTag peptide is engineered to spontaneously form an isopeptide bond with SpyCatcher protein. By engineering the adhesin RrgA from Streptococcus pneumoniae, we developed the peptide SnoopTag, which formed a spontaneous isopeptide bond to its protein partner SnoopCatcher with >99% yield and no cross-reaction to SpyTag/SpyCatcher. Solid-phase attachment followed by sequential SpyTag or SnoopTag reaction between building-blocks enabled iterative extension. Linear, branched, and combinatorial polyproteins were synthesized, identifying optimal combinations of ligands against death receptors and growth factor receptors for cancer cell death signal activation. This simple and modular route to programmable "polyproteams" should enable exploration of a new area of biological space.
Binary tree eigen solver in finite element analysis
NASA Technical Reports Server (NTRS)
Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.
1993-01-01
This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of an associative operation on an arbitrary set of N elements requires on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed alongside the transputer to address parallel programming constructs and to provide communications between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
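The least-order eigenpairs of the generalized problem K·x = λ·M·x can be computed for a small test case with SciPy's shift-invert Lanczos solver (an illustration of the eigenproblem itself, not the transputer subspace implementation; the tridiagonal K and identity M are assumed test matrices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 1-D stiffness-like K (tridiagonal) and identity mass matrix M
n = 50
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")
M = sp.identity(n, format="csc")

# Three least-order eigenpairs of K x = lambda M x, shift-inverted about 0
vals, vecs = eigsh(K, k=3, M=M, sigma=0.0)
```

For this K the eigenvalues are known in closed form, 4 sin²(kπ / (2(n+1))), which makes the example easy to check.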
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010), which employs Sequential Quadratic Programming (SQP) as its core algorithm, was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out and verified to function as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability to minimize a metric of all, or a subset, of the hub loads, as well as the capability to use all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
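The discrete linear quadratic regulator being designed can be sketched via the discrete algebraic Riccati equation (a standard formulation, not the authors' sequential root-locus procedure; the double-integrator model and unit weights are assumed example data):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete linear quadratic regulator: solve the discrete algebraic
    Riccati equation and return the optimal gain K for the state
    feedback u_k = -K x_k, plus the Riccati solution P."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0], [1.0]])
K, P = dlqr(A, B, np.eye(2), np.array([[1.0]]))
```

The sequential method in the abstract works in the opposite direction: rather than picking Q and R up front, it builds unity-rank weighting matrices step by step so that the closed-loop eigenvalues land where the root-locus analysis places them.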
Classical and sequential limit analysis revisited
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of the optimizers.
Derivation of sequential, real-time, process-control programs
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Schneider, Fred B.; Budhiraja, Navin
1991-01-01
The use of weakest-precondition predicate transformers in the derivation of sequential, process-control software is discussed. Only one extension to Dijkstra's calculus for deriving ordinary sequential programs was found to be necessary: function-valued auxiliary variables. These auxiliary variables are needed for reasoning about states of a physical process that exists during program transitions.
Single-Photon-Sensitive HgCdTe Avalanche Photodiode Detector
NASA Technical Reports Server (NTRS)
Huntington, Andrew
2013-01-01
The purpose of this program was to develop single-photon-sensitive short-wavelength infrared (SWIR) and mid-wavelength infrared (MWIR) avalanche photodiode (APD) receivers based on linear-mode HgCdTe APDs, for application by NASA in light detection and ranging (lidar) sensors. Linear-mode photon-counting APDs are desired for lidar because they have a shorter pixel dead time than Geiger APDs and can detect sequential pulse returns from multiple objects that are closely spaced in range. Linear-mode APDs can also measure photon number, which Geiger APDs cannot, adding an extra dimension to lidar scene data for multi-photon returns. High-gain APDs with low multiplication noise are required for efficient linear-mode detection of single photons because of APD gain statistics: a low-excess-noise APD will generate detectable current pulses from single-photon input at a much higher rate of occurrence than will a noisy APD operated at the same average gain. MWIR and LWIR electron-avalanche HgCdTe APDs have been shown to operate in linear mode at high average avalanche gain (M > 1000) without excess multiplication noise (F = 1), and are therefore very good candidates for linear-mode photon counting. However, detectors fashioned from these narrow-bandgap alloys require aggressive cooling to control thermal dark current. Wider-bandgap SWIR HgCdTe APDs were investigated in this program as a strategy to reduce detector cooling requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, because it avoids the need for global mesh refinement. Its implementation on distributed-memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent in data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overhead compared to the "best" sequential implementation of the BW algorithm.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
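For the Gaussian-mean case, the core of one monitoring step can be sketched as follows; the function name, the fixed known sigma, and the simple capped likelihood ratio are illustrative assumptions rather than the paper's exact algebra.

```python
import math

def no_change_probability(baseline, new_obs, sigma=1.0):
    """Likelihood of the new observations under the existing (baseline) mean,
    relative to the mean re-estimated from the augmented data set. Values
    near 1 support 'no change'; a progressive decrease signals a shift."""
    mu0 = sum(baseline) / len(baseline)            # existing parameter
    pooled = list(baseline) + list(new_obs)
    mu1 = sum(pooled) / len(pooled)                # augmented-set parameter

    def loglik(xs, mu):
        return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in xs)

    ratio = math.exp(loglik(new_obs, mu0) - loglik(new_obs, mu1))
    return min(1.0, ratio)
```

A monitor would call this after each batch of new observations and, once the probability shows a progressive decrease, restart with parameters re-estimated from the new observations alone.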
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for the basic algorithms for solving linear systems: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, the algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a lightweight runtime system. The use of lightweight runtime systems keeps scheduling overhead low while enabling the expression of parallelism through otherwise sequential code. This simplifies the development effort and allows the exploitation of the unique strengths of the various hardware components.
DE and NLP Based QPLS Algorithm
NASA Astrophysics Data System (ADS)
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been shown to be an effective optimization method for complex problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, in which DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP), in terms of both fitting accuracy and computational cost.
Piezoelectric actuator uses sequentially-excited multiple elements: A concept
NASA Technical Reports Server (NTRS)
Sabelman, E. E.
1972-01-01
Arrays of sequentially excited piezoelectric elements provide motion in a nonmagnetic motor, offering the built-in redundancy and long life required for deployment or actuation of devices on spacecraft. Linear-motion motor devices can also be fabricated.
2013-03-30
Abstract: We study multi-robot routing problems (MR-LDR) where a team of robots has to visit a set of given targets with linearly decreasing rewards over time, such as required for the delivery of goods to rescue sites after disasters. The objective of MR-LDR is to find an assignment of targets to ... We develop a mixed integer program that solves MR-LDR optimally with a flow-type formulation and can be solved faster than the standard TSP-type ...
Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching
NASA Astrophysics Data System (ADS)
Lipka, Paula Ann
This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow (ACOPF) solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. Transmission networks in practice often have lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power, as is the case in the Polish test systems, to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the ACOPF problem is used to find an AC-feasible generator dispatch. In this sequential linearization, parameters are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted.
The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction governing the accuracy and AC-feasibility of the solution. This linearization was tested on the IEEE and Polish systems, which range from 14 to 3375 buses and 20 to 4161 transmission lines. It had an accuracy of 0.5% or less for all but the 30-bus system. It also solved in linear time with CPLEX, while the non-linear version solved in O(n^1.11) to O(n^1.39) time. The sequential linearization is slower than the nonlinear formulation for smaller problems but faster for larger problems, and its linear computational time means it would continue solving faster as problems grow. A major consideration in implementing algorithms for the optimal generator dispatch is ensuring that the resulting prices will support the market. Since the sequential linearization is linear, it is convex, its marginal values are well-defined, and there is no duality gap. The prices and settlements obtained from the sequential linearization can therefore be used to run a market. This market will include extra prices and settlements for reactive power and voltage, compared to the present-day market, which is based on real power. An advantage of this is that there is a very clear pool that can be used for reactive power/voltage support payments, whereas presently there is not a clear pool to take them out of. This method also reveals how valuable reactive power and voltage are at different locations, which can enable better planning of reactive resource construction. Transmission switching enlarges the feasible region of the generator dispatch, which means there may be a better solution than without switching. Power flows on transmission lines are not directly controllable; rather, power flows according to how it is injected and the physical characteristics of the lines.
Changing the network topology changes the physical characteristics, which changes the flows. This means that generator dispatches that were previously infeasible because flows exceeded line limits may become feasible, since the flows will be different and may satisfy the line constraints. However, transmission switching is a mixed-integer problem, which may have a very slow solution time. For economic switching, we examine a series of heuristics: the congestion rent heuristic in detail, and then many other heuristics at a higher level. Post-contingency corrective switching aims to fix issues in the power network after a line or generator outage. In Chapter 7, we show that using the sequential linear program with corrective switching helps solve voltage and excessive-flow issues. (Abstract shortened by UMI.)
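The pattern of re-linearizing at the previous optimum while shrinking the allowed step generalizes to any sequential linear programming loop; the toy quadratic objective, single linear constraint, and shrink factor below are illustrative assumptions, not the ACOPF current-voltage formulation itself.

```python
import numpy as np
from scipy.optimize import linprog

def slp(x, radius=1.0, shrink=0.7, iters=40):
    """Sequential linear programming with a shrinking trust region,
    shown on: minimize x1^2 + x2^2 subject to x1 + x2 >= 1."""
    for _ in range(iters):
        grad = 2 * x  # gradient of the objective at the current point
        # Linearized constraint (x1+d1) + (x2+d2) >= 1  ->  -d1 - d2 <= x1 + x2 - 1
        step = linprog(grad, A_ub=[[-1.0, -1.0]], b_ub=[x[0] + x[1] - 1.0],
                       bounds=[(-radius, radius)] * 2, method="highs")
        x = x + step.x
        radius *= shrink  # restrict movement further on each iteration
    return x
```

Starting from a feasible point such as (2, 0), the iterates remain feasible (the linearized constraint is exact here because the true constraint is linear) and the objective falls toward its constrained optimum of 0.5.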
Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina
2012-02-01
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is concerned with the visual impairment per se or the processing style that the dominant perceptual modalities used to acquire spatial information impose, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenital. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect could be revealing of processing limitations due to the need of integrating and updating spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.
PETSc Users Manual Revision 3.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers.
The resulting code, however, will not be scalable, because MATLAB is inherently not scalable. PETSc should also not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers.
The resulting code, however, will not be scalable, because MATLAB is inherently not scalable. PETSc should also not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear and nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran, and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers.
The resulting code, however, will not be scalable, because MATLAB is inherently not scalable. PETSc should also not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly not all parts of a previously sequential code need be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications a list of publications and web sites that feature work involving PETSc may be found. We welcome any reports of corrections for this document.
Stochastic Control of Multi-Scale Networks: Modeling, Analysis and Algorithms
2014-10-20
Publications include: B. T. Swapna, A. Eryilmaz, N. B. Shroff, "Throughput-Delay Analysis of Random Linear Network Coding for Wireless ..." (2012); J. S. Baras, S. Zheng, "Sequential Anomaly Detection in Wireless Sensor Networks and Effects of Long-Range Dependent Data," Sequential Analysis (2012), doi: 10.1080/07474946.2012.719435; Stefano ...
Level-Set Topology Optimization with Aeroelastic Constraints
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
Maissan, Francois; Pool, Jan; Stutterheim, Eric; Wittink, Harriet; Ostelo, Raymond
2018-06-02
Neck pain is the fourth major cause of disability worldwide but sufficient evidence regarding treatment is not available. This study is a first exploratory attempt to gain insight into and consensus on the clinical reasoning of experts in patients with non-specific neck pain. First, we aimed to inventory expert opinions regarding the indication for physiotherapy when, other than neck pain, no positive signs and symptoms and no positive diagnostic tests are present. Secondly, we aimed to determine which measurement instruments are being used and when they are used to support and objectify the clinical reasoning process. Finally, we wanted to establish consensus among experts regarding the use of unimodal interventions in patients with non-specific neck pain, i.e. their sequential linear clinical reasoning. A Delphi study. A Web-based Delphi study was conducted. Fifteen experts (teachers and researchers) participated. Pain alone was deemed not be an indication for physiotherapy treatment. PROMs are mainly used for evaluative purposes and physical tests for diagnostic and evaluative purposes. Eighteen different variants of sequential linear clinical reasoning were investigated within our Delphi study. Only 6 out of 18 variants of sequential linear clinical reasoning reached more than 50% consensus. Pain alone is not an indication for physiotherapy. Insight has been obtained into which measurement instruments are used and when they are used. Consensus about sequential linear lines of clinical reasoning was poor. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is defined as a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. Exact results for two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence intervals, and p-values concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
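In the equal-weight case the accuracy index reduces to Youden's J, and the early-stopping logic can be sketched as below; the count-tuple interface and the threshold handling are illustrative assumptions, not the paper's exact stopping boundaries.

```python
def youdens_j(tp, fn, tn, fp):
    """Youden's index J = sensitivity + specificity - 1 for a binary marker."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

def two_stage_decision(stage1, threshold, stage2=None):
    """Stage 1: stop early (accept that accuracy is below the minimal level
    of acceptance) if the observed index does not exceed the threshold;
    otherwise pool the stage-2 counts and report the final index."""
    if youdens_j(*stage1) <= threshold:
        return None                    # early stop under the null
    if stage2 is None:
        return "continue"              # recruit stage-2 subjects
    pooled = tuple(a + b for a, b in zip(stage1, stage2))
    return youdens_j(*pooled)
```

Stopping early when the marker looks unpromising is what drives the smaller maximum expected sample size relative to a fixed design.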
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the ... flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion ... input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors ... that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple
Networked Workstations and Parallel Processing Utilizing Functional Languages
1993-03-01
program. This frees the programmer to concentrate on what the program is to do, not how the program is ... traditional 'von Neumann' architecture uses a timer-based (e.g., the program counter), sequentially programmed, single-processor approach to problem ...
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
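The pairwise distance computation is an ordinary dynamic-programming recurrence; the systolic array evaluates its cells along anti-diagonals, one per clock cycle. A sequential sketch, using Levenshtein distance as a stand-in for the tool's actual distance measure:

```python
def edit_distance(a, b):
    """Pairwise sequence distance by dynamic programming (Levenshtein),
    kept to two rolling rows as a hardware-friendly formulation."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # match / substitution
        prev = curr
    return prev[-1]
```

Each cell depends only on its left, upper, and upper-left neighbors, which is exactly the dependency pattern a linear systolic array can pipeline.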
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stress, and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
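The design sensitivities that drive either procedure can be illustrated with a finite-difference sketch; note this is only a stand-in, since the paper computes these derivatives analytically inside the finite element method.

```python
def design_sensitivities(response, x, h=1e-6):
    """Central-difference derivative of a scalar response (e.g. structural
    weight or a stress measure) with respect to each design variable
    (e.g. a contour coordinate of the disk)."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((response(xp) - response(xm)) / (2 * h))
    return grads
```

Both the feasible direction method and sequential linear programming consume exactly this gradient information to choose the next design step.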
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
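The first two statistics in that list (transitional frequencies and transitional probabilities) can be sketched in a few lines; the function name and lag handling are illustrative assumptions, not the programs' actual SAS/SPSS syntax.

```python
from collections import Counter

def lag_stats(codes, lag=1):
    """Transitional frequencies and conditional probabilities at a given lag
    from a stream of behavior codes, as in lag-sequential analysis."""
    pairs = Counter(zip(codes, codes[lag:]))   # (given, target) frequencies
    givens = Counter(codes[:-lag])             # occurrences of each 'given' code
    probs = {(a, b): n / givens[a] for (a, b), n in pairs.items()}
    return pairs, probs
```

Expected frequencies, adjusted residuals, and the other statistics in the paper build on these same counts.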
Sequential Service Restoration for Unbalanced Distribution Systems and Microgrids
Chen, Bo; Chen, Chen; Wang, Jianhui; ...
2017-07-07
The resilience and reliability of modern power systems are threatened by increasingly severe weather events and cyber-physical security events. An effective restoration methodology is desired to optimally integrate emerging smart grid technologies and pave the way for developing self-healing smart grids. In this paper, a sequential service restoration (SSR) framework is proposed to generate restoration solutions for distribution systems and microgrids in the event of large-scale power outages. The restoration solution contains a sequence of control actions that properly coordinate switches, distributed generators, and switchable loads to form multiple isolated microgrids. The SSR can be applied to three-phase unbalanced distribution systems and microgrids and can adapt to various operation conditions. Mathematical models are introduced for three-phase unbalanced power flow, voltage regulators, transformers, and loads. Furthermore, the SSR problem is formulated as a mixed-integer linear programming model, and its effectiveness is evaluated via the modified IEEE 123 node test feeder.
An Aid for Planning Programs in Career Education.
ERIC Educational Resources Information Center
Illinois State Board of Vocational Education and Rehabilitation, Springfield. Div. of Vocational and Technical Education.
Offered as an aid for developing sequential occupational education programs, the publication presents a concept in career education planning beginning with kindergarten and continuing through adult years. Career education goals are defined, and steps in planning sequential programs are outlined as follows: (1) organization of the occupational…
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than is required for their sequential counterparts. This creates a new approach: evolving a feasible problem solution in parallel program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Takayanagi, Toshio; Inaba, Yuya; Kanzaki, Hiroyuki; Jyoichi, Yasutaka; Motomizu, Shoji
2009-09-15
The catalytic effect of metal ions on luminol chemiluminescence (CL) was investigated by sequential injection analysis (SIA). The SIA system was set up with two solenoid micropumps, an eight-port selection valve, and a photosensor module with a fountain-type chemiluminescence cell. The SIA system was controlled and the CL signals were collected by a LabVIEW program. Aqueous solutions of luminol, H₂O₂, and a sample solution containing a metal ion were sequentially aspirated into the holding coil, and the zones were immediately propelled to the detection cell. After optimizing the parameters using a 1 × 10⁻⁵ M Fe³⁺ solution, the catalytic effect of several metal species was compared. Among the 16 metal species examined, relatively strong CL responses were obtained with Fe³⁺, Fe²⁺, VO²⁺, VO₃⁻, MnO₄⁻, Co²⁺, and Cu²⁺. The limits of detection of the present SIA system were comparable to those of FIA systems. Permanganate ion showed the highest CL sensitivity among the metal species examined; the calibration graph for MnO₄⁻ was linear at the 10⁻⁸ M concentration level, and the limit of detection for MnO₄⁻ was 4.0 × 10⁻¹⁰ M (S/N = 3).
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Correlated sequential tunneling in Tomonaga-Luttinger liquid quantum dots
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-02-01
We investigate tunneling through a quantum dot formed by two strong impurities in a spinless Tomonaga-Luttinger liquid. Upon employing a Markovian master equation approach, we compute the linear conductance due to sequential tunneling processes. Besides the previously used lowest-order Golden Rule rates describing uncorrelated sequential tunneling (UST) processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects are shown to dominate over UST. Focusing mainly on the temperature dependence of the conductance maximum, we discuss the relation of our results to previous theoretical and experimental results.
Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng
2018-05-03
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
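For contrast with the SMEKF, the traditional sequential (extended) Kalman filter pattern mentioned above, in which both the state and the covariance are updated at each scalar observation, can be sketched for a generic linear measurement model. This is an illustrative baseline with hypothetical numbers, not the paper's quaternion-based attitude filter:

```python
def sequential_update(x, P, obs):
    """Traditional sequential Kalman measurement update: each scalar
    observation z = h.x + v is processed in turn, updating both the state
    estimate x and the covariance P.
    x: state vector (list), P: covariance (list of lists),
    obs: list of (h, z, r) with h a row vector and r the noise variance."""
    n = len(x)
    for h, z, r in obs:
        Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
        s = sum(h[i] * Ph[i] for i in range(n)) + r          # innovation variance
        k = [Ph[i] / s for i in range(n)]                     # Kalman gain
        innov = z - sum(h[i] * x[i] for i in range(n))
        x = [x[i] + k[i] * innov for i in range(n)]
        P = [[P[i][j] - k[i] * Ph[j] for j in range(n)] for i in range(n)]
    return x, P

# Hypothetical 2-state example: one accurate direct observation of each state
x, P = sequential_update([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                         [([1.0, 0.0], 1.0, 1e-6), ([0.0, 1.0], -2.0, 1e-6)])
```

The SMEKF departs from this pattern by re-linearizing the attitude after each vector observation while deferring the covariance update until all observations have been processed.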
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and efficiently determine effective green and red times for a signalized intersection.
A mathematical programming approach for sequential clustering of dynamic networks
NASA Astrophysics Data System (ADS)
Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia
2016-02-01
A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot, and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
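The modularity metric used above as the snapshot quality measure can be sketched for an undirected, unweighted graph. This is illustrative only; SeqMod embeds the metric in a MINLP together with the historical-cost term:

```python
def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected, unweighted
    graph given as an edge list: for each community c,
    Q += (intra-community edges of c)/m - (total degree of c / 2m)^2."""
    m = len(edges)
    deg, intra = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    deg_c = {}
    for node, d in deg.items():
        c = community[node]
        deg_c[c] = deg_c.get(c, 0) + d
    return sum(intra.get(c, 0) / m - (deg_c[c] / (2 * m)) ** 2 for c in deg_c)

# Two triangles joined by one bridge edge, partitioned into the two triangles
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
Q = modularity(edges, {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'})
```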
20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.
Code of Federal Regulations, 2011 CFR
2011-04-01
... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...
20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.
Code of Federal Regulations, 2013 CFR
2013-04-01
... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...
20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.
Code of Federal Regulations, 2010 CFR
2010-04-01
... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...
20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.
Code of Federal Regulations, 2014 CFR
2014-04-01
... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...
20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.
Code of Federal Regulations, 2012 CFR
2012-04-01
... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...
Sun, Zeyu; Hamilton, Karyn L.; Reardon, Kenneth F.
2014-01-01
We evaluated a sequential elution protocol from immobilized metal affinity chromatography (SIMAC) employing gallium-based immobilized metal affinity chromatography (IMAC) in conjunction with titanium-dioxide-based metal oxide affinity chromatography (MOAC). The quantitative performance of this SIMAC enrichment approach, assessed in terms of repeatability, dynamic range, and linearity, was evaluated using a mixture composed of tryptic peptides from caseins, bovine serum albumin, and phosphopeptide standards. While our data demonstrate the overall consistent performance of the SIMAC approach under various loading conditions, the results also revealed that the method had limited repeatability and linearity for most phosphopeptides tested, and different phosphopeptides were found to have different linear ranges. These data suggest that, unless additional strategies are used, SIMAC should be regarded as a semi-quantitative method when used in large-scale phosphoproteomics studies in complex backgrounds. PMID:24096195
Pistón, Mariela; Mollo, Alicia; Knochen, Moisés
2011-01-01
A fast and efficient automated method using a sequential injection analysis (SIA) system, based on the Griess reaction, was developed for the determination of nitrate and nitrite in infant formulas and milk powder. The system enables mixing a measured amount of sample (previously reconstituted in liquid form and deproteinized) with the chromogenic reagent to produce a colored substance whose absorbance was recorded. For nitrate determination, an on-line prereduction step was added by passing the sample through a Cd minicolumn. The system was controlled from a PC by means of a user-friendly program. Figures of merit include linearity (r2 > 0.999 for both analytes), limits of detection (0.32 mg kg−1 NO3-N and 0.05 mg kg−1 NO2-N), and precision (sr%) of 0.8–3.0. Results were statistically in good agreement with those obtained with the reference ISO-IDF method. The sampling frequency was 30 hour−1 (nitrate) and 80 hour−1 (nitrite) when performed separately. PMID:21960750
Sequential injection system with multi-parameter analysis capability for water quality measurement.
Kaewwonglom, Natcha; Jakmunee, Jaroon
2015-11-01
A simple sequential injection (SI) system with capability to determine multi-parameter has been developed for the determination of iron, manganese, phosphate and ammonium. A simple and compact colorimeter was fabricated in the laboratory to be employed as a detector. The system was optimized for suitable conditions for determining each parameter by changing the software program and without reconfiguration of the hardware. Under the optimum conditions, the methods showed linear ranges of 0.2–10 mg L⁻¹ for iron and manganese determinations, and 0.3–5.0 mg L⁻¹ for phosphate and ammonium determinations, with correlation coefficients of 0.9998, 0.9973, 0.9987 and 0.9983, respectively. The system provided detection limits of 0.01, 0.14, 0.004 and 0.02 mg L⁻¹ for iron, manganese, phosphate and ammonium, respectively. The proposed system has good precision, low chemical consumption and high throughput. It was applied for monitoring water quality of the Ping river in Chiang Mai, Thailand. Recoveries of the analysis were obtained in the range of 82–119%. Copyright © 2015 Elsevier B.V. All rights reserved.
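The figures of merit quoted above (linearity as a correlation coefficient, detection limits) can be computed from a calibration run. An illustrative sketch with hypothetical data; the common 3-sigma detection-limit convention is assumed:

```python
import math

def calibrate(conc, signal):
    """Least-squares calibration line and correlation coefficient r."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in signal)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)
    return slope, intercept, r

def detection_limit(sd_blank, slope, k=3):
    """Detection limit as k standard deviations of the blank divided by
    the calibration slope (k = 3 is the usual convention)."""
    return k * sd_blank / slope

# Hypothetical, perfectly linear response: signal = 0.05 * conc + 0.01
slope, intercept, r = calibrate([0.2, 1.0, 2.0, 5.0, 10.0],
                                [0.02, 0.06, 0.11, 0.26, 0.51])
lod = detection_limit(0.0005, slope)
```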
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
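A minimal sketch of the second stage, generalized K-means refinement of the initial clusters produced by the sequential pass. This is a 1-D toy with hypothetical data; the actual program operates on multispectral vectors:

```python
def kmeans(points, centers, iters=20):
    """Iterative K-means refinement: assign each point to its nearest
    center, then recompute each center as its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Initial centers would come from the sequential statistical clustering;
# here they are arbitrary starting values.
centers, clusters = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 5.0])
```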
Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M
2017-01-01
Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.
Sequential two-photon double ionization of noble gases by circularly polarized XUV radiation
NASA Astrophysics Data System (ADS)
Gryzlova, E. V.; Grum-Grzhimailo, A. N.; Kuzmina, E. I.; Strakhova, S. I.
2014-10-01
Photoelectron angular distributions (PADs) and angular correlations between two emitted electrons in sequential two-photon double ionization (2PDI) of atoms by circularly polarized radiation are studied theoretically. In particular, the sequential 2PDI of the valence np⁶ shell in noble gas atoms (neon, argon, krypton) is analyzed, accounting for the first-order corrections to the dipole approximation. Due to different selection rules in ionization transitions, the circular polarization of photons causes some new features of the cross sections, PADs and angular correlation functions in comparison with the case of linearly polarized photons.
Probabilistic Guidance of Swarms using Sequential Convex Programming
2014-01-01
quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using
S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.
ERIC Educational Resources Information Center
CICIARELLI, V; LEONARD, JOSEPH
A SEQUENTIAL MATHEMATICS PROGRAM BEGINNING WITH THE BASIC FUNDAMENTALS ON THE FOURTH GRADE LEVEL IS PRESENTED. INCLUDED ARE AN UNDERSTANDING OF OUR NUMBER SYSTEM, AND THE BASIC OPERATIONS OF WORKING WITH WHOLE NUMBERS--ADDITION, SUBTRACTION, MULTIPLICATION, AND DIVISION. COMMON FRACTIONS ARE TAUGHT IN THE FIFTH, SIXTH, AND SEVENTH GRADES. A…
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
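For uniform interval constraints, the two-step sequential solution described above reduces to solving the unconstrained problem and then clipping. A minimal sketch of the thresholding step; the unconstrained TV solve itself is assumed to come from an existing solver and is represented here by a precomputed list:

```python
def clip_to_interval(x, lo, hi):
    """Step two of the sequential solution: threshold (clip) the
    unconstrained solution into the uniform interval [lo, hi]. Under the
    paper's conditions this yields the constrained minimizer."""
    return [min(max(v, lo), hi) for v in x]

# Hypothetical unconstrained denoising output, clipped to a physical range
# (e.g. non-negative attenuation values as in the x-ray CT example):
x_unconstrained = [-0.2, 0.0, 0.7, 1.4]
x = clip_to_interval(x_unconstrained, 0.0, 1.0)  # → [0.0, 0.0, 0.7, 1.0]
```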
Optimal mode transformations for linear-optical cluster-state generation
Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...
2015-06-15
In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entangling of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n−1) and (1/4)^(m−1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.
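The closed-form success rates quoted in the abstract can be evaluated exactly. A small sketch using exact rational arithmetic (the function name is illustrative):

```python
from fractions import Fraction

def cluster_success_rate(n_qubits=None, m_bell_pairs=None):
    """Maximal postselected success rates reported in the paper:
    (1/2)**(n-1) for fusing n photonic qubits into a cluster and
    (1/4)**(m-1) for fusing m Bell pairs, with no ancilla photons."""
    if n_qubits is not None:
        return Fraction(1, 2) ** (n_qubits - 1)
    return Fraction(1, 4) ** (m_bell_pairs - 1)

p_qubits = cluster_success_rate(n_qubits=4)       # (1/2)^3 = 1/8
p_bell = cluster_success_rate(m_bell_pairs=3)     # (1/4)^2 = 1/16
```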
Structure of weakly 2-dependent siphons
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh; Chen, Jiun-Ting
2013-09-01
Deadlocks arising from insufficiently marked siphons in flexible manufacturing systems can be controlled by adding monitors to each siphon, but this yields too many monitors for large systems. Li and Zhou add monitors to elementary siphons only, controlling the remaining (dependent) siphons by adjusting the control depth variables of the elementary siphons, so that only a linear number of monitors is required. The control of weakly dependent siphons (WDSs) is rather conservative since only positive terms were considered. The structure of strongly dependent siphons (SDSs) has been studied earlier; based on this structure, the optimal sequence of adding monitors was discovered, better controllability was achieved for faster and more permissive control, and the results were extended to S3PGR2 (systems of simple sequential processes with general resource requirements). This paper explores the structure of WDSs, which, as found in this paper, involve elementary resource circuits interconnecting at more than one resource place (for SDSs, exactly one). This saves the time needed to compute compound siphons, their complementary sets, and T-characteristic vectors. It also allows us (1) to improve the controllability of WDSs and control siphons and (2) to avoid the time needed to find independent vectors for elementary siphons. We propose a sufficient and necessary test for adjusting control depth variables in S3PR (systems of simple sequential processes with resources) that avoids the sufficient-only, time-consuming linear integer programming (LIP) test (an NP-complete problem) required previously for some cases.
Energy-aware virtual network embedding in flexi-grid optical networks
NASA Astrophysics Data System (ADS)
Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng; Chen, Bin
2018-01-01
The virtual network embedding (VNE) problem is to map multiple heterogeneous virtual networks (VNs) onto a shared substrate network, which mitigates the ossification of the substrate network. Meanwhile, energy efficiency has been widely considered in network design. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the power increment of each arriving VN request. We also propose a polynomial-time heuristic algorithm where virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain low energy consumption. Numerical results show the functionality of the heuristic algorithm in a 24-node network.
Evaluation of an antibiotic intravenous to oral sequential therapy program.
Pablos, Ana I; Escobar, Ismael; Albiñana, Sandra; Serrano, Olga; Ferrari, José M; Herreros de Tejada, Alberto
2005-01-01
This study was designed to analyse the drug consumption difference and economic impact of an antibiotic sequential therapy focused on quinolones. We studied the consumption of quinolones (ofloxacin/levofloxacin and ciprofloxacin) 6 months before and after the implementation of a sequential therapy program in hospitalised patients. Consumption was calculated for each antibiotic, in its oral and intravenous forms, in defined daily doses (DDD/100 stays per day) and in economic terms (drug acquisition cost). At the beginning of the program, ofloxacin was replaced by levofloxacin and, since their clinical uses are similar, the consumption of both drugs was compared during the period. In economic terms, the consumption of intravenous quinolones decreased 60% whereas the consumption of oral quinolones increased 66%. In DDD/100 stays per day, intravenous forms decreased 53% and oral forms increased 36%. Focusing on quinolones, the implementation of a sequential therapy program based on promoting an early switch from the intravenous to the oral regimen has proved its capacity to alter the utilisation profile of these antibiotics. The program permitted the hospital a global saving of 41,420 dollars for these drugs during the period considered. Copyright (c) 2004 John Wiley & Sons, Ltd.
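The consumption unit used in the study can be computed directly. A hypothetical sketch (function name and numbers are illustrative; the WHO ATC/DDD convention of normalizing total drug use by the defined daily dose and by patient stays is assumed):

```python
def ddd_per_100_stays(total_grams, ddd_grams, stays):
    """Antibiotic consumption in defined daily doses per 100 stays per day:
    total drug dispensed divided by the WHO defined daily dose, normalized
    per 100 patient stays."""
    return (total_grams / ddd_grams) / stays * 100

# e.g. 500 g of a drug with a 1 g DDD over 2000 patient stays:
rate = ddd_per_100_stays(500, 1.0, 2000)   # → 25.0 DDD/100 stays
```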
Algorithms for Large-Scale Astronomical Problems
2013-08-01
implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as...one MapReduce job; the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone Java program. The...Merging. The merging stage is implemented as a sequential Java program that reads the files with the shell information, which were generated by
Topics in the Sequential Design of Experiments
1992-03-01
Report documentation page residue; recoverable fields: Distribution: Approved for public release. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory. Cited works include "...distributions for one parameter exponential families," by Michael Woodroofe, Stntca, 2 (1991), 91-112, and [6] "A non linear renewal theory for a functional of..."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akyildiz, Halil I.; Jur, Jesse S., E-mail: jsjur@ncsu.edu
2015-03-15
The effect of exposure conditions and surface area on hybrid material formation during sequential vapor infiltrations of trimethylaluminum (TMA) into polyamide 6 (PA6) and polyethylene terephthalate (PET) fibers is investigated. Mass gain of the fabric samples after infiltration was examined to elucidate the reaction extent with an increasing number of sequential TMA single exposures, defined as the times for a TMA dose and a hold period. An interdependent relationship between dosing time and holding time on the hybrid material formation is observed for TMA exposure of PET, exhibited as a linear trend between the mass gain and total exposure (dose time × hold time × number of sequential exposures). Deviation from this linear relationship is only observed under very long dose or hold times. In comparison, the amount of hybrid material formed during sequential exposures to PA6 fibers is found to be highly dependent on the amount of TMA dosed. Increasing the surface area of the fiber by altering its cross-sectional dimension is shown to have little effect on the reaction behavior but does allow for improved diffusion of the TMA into the fiber. This work allows for the projection of exposure parameters necessary for future high-throughput hybrid modifications to polymer materials.
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers are modifications of programming languages designed many years ago for sequential machines. A new programming language should be developed, based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. There is also an approximate sensitivity update method included which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
The Aggregation of Single-Case Results Using Hierarchical Linear Models
ERIC Educational Resources Information Center
Van den Noortgate, Wim; Onghena, Patrick
2007-01-01
To investigate the generalizability of the results of single-case experimental studies evaluating the effect of one or more treatments, various simultaneous and sequential replication strategies are used in applied research. We discuss one approach for aggregating the results of single cases: the use of hierarchical linear models. This approach…
Thinking Style, Browsing Primes and Hypermedia Navigation
ERIC Educational Resources Information Center
Fiorina, Lorenzo; Antonietti, Alessandro; Colombo, Barbara; Bartolomeo, Annella
2007-01-01
There is a common assumption that hypermedia navigation is influenced by a learner's style of thinking, so people who are inclined to apply sequential and analytical strategies (left-thinkers) are thought to browse hypermedia in a linear way, whereas those who prefer holistic and intuitive strategies (right-thinkers) tend towards non-linear paths.…
Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.
2017-01-01
Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
ERIC Educational Resources Information Center
Brodhecker, Shirley G.
This practicum report addresses the need to supply Head Start teachers with: (1) specific preschool music objectives; (2) a sequential preschool developmental program in music to match the child's cognitive level; (3) how to choose instructional material to encourage specific basic school readiness skills; and (4) workshops to accomplish these…
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular FORTRAN implementation of the SQP method (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation, available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
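For readers unfamiliar with SQP, the scheme these optimizers share can be illustrated on a toy equality-constrained problem. The sketch below is a hypothetical illustration, not CSOLNP's code: it projects a point onto the unit circle by repeatedly solving the KKT system of a local quadratic model; because the Lagrangian Hessian here is a multiple of the identity, each subproblem has a closed-form solution.

```python
import math

def sqp_project_to_circle(target, iters=25):
    """Toy SQP iteration: minimize ||p - target||^2 subject to ||p||^2 = 1,
    i.e. project a point onto the unit circle.

    Each step solves the KKT system of the local quadratic model
        H d + A^T lam_new = -grad_f,   A d = -h,
    where H = (2 + 2*lam) I is the Lagrangian Hessian (diagonal here),
    A is the constraint Jacobian and h the constraint value, so the
    subproblem is solvable in closed form."""
    x, y = 1.0, 0.0          # starting point on the circle
    lam = 0.0                # Lagrange multiplier estimate
    tx, ty = target
    for _ in range(iters):
        gx, gy = 2 * (x - tx), 2 * (y - ty)    # gradient of the objective
        ax, ay = 2 * x, 2 * y                  # constraint Jacobian
        h = x * x + y * y - 1.0                # constraint violation
        hd = 2.0 + 2.0 * lam                   # Hessian diagonal
        aa = ax * ax + ay * ay
        # Eliminate d from the KKT system to get the new multiplier,
        # then recover the step componentwise.
        lam = (-(ax * gx + ay * gy) + h * hd) / aa
        dx = (-gx - ax * lam) / hd
        dy = (-gy - ay * lam) / hd
        x, y = x + dx, y + dy
    return x, y
```

For target = (1, 2) the iterates converge to (1, 2)/√5, the closest point on the circle. Production SQP codes such as NPSOL, SLSQP, and CSOLNP additionally handle inequality constraints, quasi-Newton Hessian updates, and line searches on a merit function.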
C-quence: a tool for analyzing qualitative sequential data.
Duncan, Starkey; Collier, Nicholson T
2002-02-01
C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
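The core operation described here — matching a user-specified sequential pattern against categorical data and computing its rate of occurrence — can be sketched in a few lines. This is an illustrative reimplementation, not C-quence's actual query engine:

```python
def pattern_rate(events, pattern):
    """Count (possibly overlapping) occurrences of a categorical
    sequential pattern in a list of events, returning the count
    together with the rate of occurrence per event."""
    n, m = len(events), len(pattern)
    count = sum(1 for i in range(n - m + 1) if events[i:i + m] == pattern)
    return count, (count / n if n else 0.0)
```

For example, in the interaction record ['gaze', 'speak', 'gaze', 'speak', 'gaze'] the pattern ['gaze', 'speak'] occurs twice, a rate of 0.4 per event.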
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
Brown, Raymond J.
1977-01-01
The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, so that the integer linear programming problems can be solved efficiently while maintaining maximal permissiveness, using a vector-covering approach. This paper improves on previous work and achieves the simplest supervisor structure with the minimal number of monitors.
Sequential self-assembly of DNA functionalized droplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yin; McMullen, Angus; Pontani, Lea-Laetitia
Complex structures and devices, both natural and manmade, are often constructed sequentially. From crystallization to embryogenesis, a nucleus or seed is formed and built upon. Sequential assembly allows for initiation, signaling, and logical programming, which are necessary for making enclosed, hierarchical structures. Though biology relies on such schemes, they have not been available in materials science. We demonstrate programmed sequential self-assembly of DNA functionalized emulsions. The droplets are initially inert because the grafted DNA strands are pre-hybridized in pairs. Active strands on initiator droplets then displace one of the paired strands and thus release its complement, which in turn activates the next droplet in the sequence, akin to living polymerization. This strategy provides time and logic control during the self-assembly process, and offers a new perspective on the synthesis of materials.
Sequential self-assembly of DNA functionalized droplets
Zhang, Yin; McMullen, Angus; Pontani, Lea-Laetitia; ...
2017-06-16
Complex structures and devices, both natural and manmade, are often constructed sequentially. From crystallization to embryogenesis, a nucleus or seed is formed and built upon. Sequential assembly allows for initiation, signaling, and logical programming, which are necessary for making enclosed, hierarchical structures. Though biology relies on such schemes, they have not been available in materials science. We demonstrate programmed sequential self-assembly of DNA functionalized emulsions. The droplets are initially inert because the grafted DNA strands are pre-hybridized in pairs. Active strands on initiator droplets then displace one of the paired strands and thus release its complement, which in turn activates the next droplet in the sequence, akin to living polymerization. This strategy provides time and logic control during the self-assembly process, and offers a new perspective on the synthesis of materials.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system, activating a method for performing a sequential probability ratio test if the data source includes a single data (sensor) source, activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors and utilizing at least one of the first, second and third methods to accumulate sensor signals and determining the operating state of the system.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system, activating a method for performing a sequential probability ratio test if the data source includes a single data (sensor) source, activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors and utilizing at least one of the first, second and third methods to accumulate sensor signals and determining the operating state of the system.
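The sequential probability ratio test invoked here for single-sensor monitoring is Wald's classic procedure. A minimal sketch for a Bernoulli failure rate follows (illustrative only; the patented method applies the idea to sensor residual signals):

```python
import math

def sprt(samples, p0=0.1, p1=0.3, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate.

    Observations are processed one at a time; the accumulated
    log-likelihood ratio is compared against two fixed thresholds set
    by the desired error rates alpha and beta.  Returns the decision
    ('accept_p1', 'accept_p0', or 'continue') and the number of
    observations consumed."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (rate p1)
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0 (rate p0)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept_p1", n
        if llr <= lower:
            return "accept_p0", n
    return "continue", len(samples)
```

A run of anomalous observations drives the statistic across the upper threshold after only a few samples, which is why SPRT-style monitors can flag a degrading sensor long before a fixed-sample test would.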
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
The analytical methodologies for evaluating multi-component systems in traditional Chinese medicines (TCMs) have been inadequate. As a result, the unresolved complexity of these multi-component systems hinders full interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits and mass spectral trees similarity filter (MTSF), and isomerism discrimination. It was successfully applied in the HRMS data-acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ions (DPIs) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it proposes a novel approach for analyzing sequential constituents, and is promising for the quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.
Cuzzilla, R; Spittle, A J; Lee, K J; Rogerson, S; Cowan, F M; Doyle, L W; Cheong, J L Y
2018-06-01
Brain growth in the early postnatal period following preterm birth has not been well described. This study of infants born at <30 weeks' gestational age and without major brain injury aimed to accomplish the following: 1) assess the reproducibility of linear measures made from cranial ultrasonography, 2) evaluate brain growth using sequential cranial ultrasonography linear measures from birth to term-equivalent age, and 3) explore perinatal predictors of postnatal brain growth. Participants comprised 144 infants born at <30 weeks' gestational age at a single center between January 2011 and December 2013. Infants with major brain injury seen on cranial ultrasonography or congenital or chromosomal abnormalities were excluded. Brain tissue and fluid spaces were measured from cranial ultrasonography performed as part of routine clinical care. Brain growth was assessed in 3 time intervals: <7, 7-27, and >27 days' postnatal age. Data were analyzed using intraclass correlation coefficients and mixed-effects regression. A total of 429 scans were assessed for 144 infants. Several linear measures showed excellent reproducibility. All measures of brain tissue increased with postnatal age, except for the biparietal diameter, which decreased within the first postnatal week and increased thereafter. Gestational age of ≥28 weeks at birth was associated with slower growth of the biparietal diameter and ventricular width compared with gestational age of <28 weeks. Postnatal corticosteroid administration was associated with slower growth of the corpus callosum length, transcerebellar diameter, and vermis height. Sepsis and necrotizing enterocolitis were associated with slower growth of the transcerebellar diameter. Postnatal brain growth in infants born at <30 weeks' gestational age can be evaluated using sequential linear measures made from routine cranial ultrasonography and is associated with perinatal predictors of long-term development. 
© 2018 by American Journal of Neuroradiology.
NASA Astrophysics Data System (ADS)
Zhao, Dang-Jun; Song, Zheng-Yu
2017-08-01
This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints on Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches each waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase is preferred over other performance indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on the control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order cone programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.
Hernández-Torrano, Daniel; Ali, Syed; Chan, Chee-Kai
2017-08-08
Students commencing their medical training arrive with different educational backgrounds and a diverse range of learning experiences. Consequently, students would have developed preferred approaches to acquiring and processing information or learning style preferences. Understanding first-year students' learning style preferences is important to success in learning. However, little is understood about how learning styles impact learning and performance across different subjects within the medical curriculum. Greater understanding of the relationship between students' learning style preferences and academic performance in specific medical subjects would be valuable. This cross-sectional study examined the learning style preferences of first-year medical students and how they differ across gender. This research also analyzed the effect of learning styles on academic performance across different subjects within a medical education program in a Central Asian university. A total of 52 students (57.7% females) from two batches of first-year medical school completed the Index of Learning Styles Questionnaire, which measures four dimensions of learning styles: sensing-intuitive; visual-verbal; active-reflective; sequential-global. First-year medical students reported preferences for visual (80.8%) and sequential (60.5%) learning styles, suggesting that these students preferred to learn through demonstrations and diagrams and in a linear and sequential way. Our results indicate that male medical students have higher preference for visual learning style over verbal, while females seemed to have a higher preference for sequential learning style over global. Significant associations were found between sensing-intuitive learning styles and performance in Genetics [β = -0.46, B = -0.44, p < 0.01] and Anatomy [β = -0.41, B = -0.61, p < 0.05] and between sequential-global styles and performance in Genetics [β = 0.36, B = 0.43, p < 0.05]. 
More specifically, sensing learners were more likely to perform better than intuitive learners in the two subjects and global learners were more likely to perform better than sequential learners in Genetics. This knowledge will be helpful to individual students to improve their performance in these subjects by adopting new sensing learning techniques. Instructors can also benefit by modifying and adapting more appropriate teaching approaches in these subjects. Future studies to validate this observation will be valuable.
Sadeque, Farig; Xu, Dongfang; Bethard, Steven
2017-01-01
The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets. PMID:29075167
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Zhang, Zhichao; Ye, Zhibin
2012-08-18
Upon the addition of an equimolar amount of 2,2'-bipyridine, a cationic Pd-diimine complex capable of facilitating "living" ethylene polymerization is switched to catalyze "living" alternating copolymerization of 4-tert-butylstyrene and CO. This unique chemistry is thus employed to synthesize a range of well-defined treelike (hyperbranched polyethylene)-b-(linear polyketone) block polymers.
Linear motion device and method for inserting and withdrawing control rods
Smith, Jay E.
1984-01-01
A linear motion device, more specifically a control rod drive mechanism (CRDM) for inserting and withdrawing control rods into a reactor core, is capable of independently and sequentially positioning two sets of control rods with a single motor stator and rotor. The CRDM disclosed can control more than one control rod lead screw without incurring a substantial increase in the size of the mechanism.
Programing Procedures Manual (PPM).
1981-12-15
terms 'reel', 'unit', and 'volume' are synonymous and completely interchangeable in the CLOSE statement. Treatment of sequential mass storage files is … logically equivalent to the treatment of a file on tape or analogous sequential media. … For the purpose of showing the effect of various types of CLOSE … Overlay Area; CA6: Address of Abend relative to beginning of overlay segment. The programer can now refer to the compile source listing for the overlay …
Topology optimization of embedded piezoelectric actuators considering control spillover effects
NASA Astrophysics Data System (ADS)
Gonçalves, Juliano F.; De Leon, Daniel M.; Perondi, Eduardo A.
2017-02-01
This article addresses the problem of active structural vibration control by means of embedded piezoelectric actuators. The topology optimization method using the solid isotropic material with penalization (SIMP) approach is employed in this work to find the optimum design of actuators, taking into account the control spillover effects. A coupled finite element model of the structure is derived assuming a two-phase material, and this structural model is written in the state-space representation. The proposed optimization formulation aims to determine the distribution of piezoelectric material which maximizes the controllability for a given vibration mode. The undesirable effects of the feedback control on the residual modes are limited by including a spillover constraint term containing the residual controllability Gramian eigenvalues. The optimization of the shape and placement of the conventionally embedded piezoelectric actuators is performed using a Sequential Linear Programming (SLP) algorithm. Numerical examples are presented considering the control of the bending vibration modes for a cantilever and a fixed beam. A Linear-Quadratic Regulator (LQR) is synthesized for each case of controlled structure in order to compare the influence of the additional constraint.
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.
Van Parijs, Hilde; Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark
2014-01-01
Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.
Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark
2014-01-01
Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine. PMID:25162031
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ
2012-01-01
Background: Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives: The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods: Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results: Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998).
In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion: Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration: Dutch Trial Register NTR2168. PMID: 22403770
Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J
2012-03-08
Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). 
In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Dutch Trial Register NTR2168.
NASA Astrophysics Data System (ADS)
Gao, J.; Lythe, M. B.
1996-06-01
This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
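The core of the MCC approach described above (reported there as a C program) can be sketched in Python; the window and search sizes below are arbitrary illustration values, and the synthetic random field stands in for a pair of AVHRR-derived sea-surface temperature images:

```python
import numpy as np

def mcc_displacement(img1, img2, center, win=8, search=4):
    """Shift of the window centred at `center` between two sequential
    images, chosen to maximize the normalized cross-correlation over all
    candidate displacements within +/- `search` pixels."""
    r, c = center
    t = img1[r - win:r + win, c - win:c + win].astype(float)
    t -= t.mean()
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w = img2[r + dr - win:r + dr + win,
                     c + dc - win:c + dc + win].astype(float)
            w -= w.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            if denom == 0.0:
                continue
            score = (t * w).sum() / denom      # normalized cross-correlation
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best

# synthetic "temperature" field advected by a uniform (2, 3)-pixel motion
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
img2 = np.roll(np.roll(base, 2, axis=0), 3, axis=1)
shift, score = mcc_displacement(base, img2, center=(32, 32))
```

Applied per window over the whole field, this yields the translational motion vectors that the abstract describes.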
Bahnasy, Mahmoud F; Lucy, Charles A
2012-12-07
A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we show the method development and optimization for capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using pH 3-10 carrier ampholyte at 2.5% (v/v). Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.
Efficient partitioning and assignment on programs for multiprocessor execution
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1993-01-01
The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance, since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed, which may be used to analyze multiprocessor architectures that take a shared-memory approach to partitioning. The properties of sequentially written programs form obstacles to large-scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product form solutions.
Transition play in team performance of volleyball: a log-linear analysis.
Eom, H J; Schutz, R W
1992-09-01
The purpose of this study was to develop and test a method to analyze and evaluate sequential skill performances in a team sport. An on-line computerized system was developed to record and summarize the sequential skill performances in volleyball. Seventy-two sample games from the third Federation of International Volleyball Cup men's competition were videotaped and grouped into two categories according to the final team standing and game outcome. Log-linear procedures were used to investigate the nature and degree of the relationship in the first-order (pass-to-set, set-to-spike) and second-order (pass-to-spike) transition plays. Results showed that there was a significant dependency in both the first-order and second-order transition plays, indicating that the outcome of a skill performance is highly influenced by the quality of a preceding skill performance. In addition, the pattern of the transition plays was stable and consistent, regardless of the classification status: Game Outcome, Team Standing, or Transition Process. The methodology and subsequent results provide valuable aids for a thorough understanding of the characteristics of transition plays in volleyball. In addition, the concept of sequential performance analysis may serve as an example for sport scientists in investigating probabilistic patterns of motor performance.
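The dependency tested by the log-linear procedures above can be illustrated with a plain chi-square test of independence on a transition table; the counts below are invented for illustration and are not taken from the study:

```python
import numpy as np

# hypothetical pass-to-set transition table (rows: pass quality poor/ok/good,
# columns: subsequent set quality); counts invented for illustration only
table = np.array([[30.0, 5.0, 5.0],
                  [5.0, 30.0, 5.0],
                  [5.0, 5.0, 30.0]])

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row @ col / table.sum()            # counts expected under independence
chi2 = ((table - expected) ** 2 / expected).sum()
df = (table.shape[0] - 1) * (table.shape[1] - 1)
# chi2 far exceeds the df = 4 critical value of about 13.28 at the 0.01
# level, so in this table set quality depends strongly on pass quality
```

A log-linear analysis generalizes this two-way test to the higher-order interactions (e.g., pass-to-spike) examined in the study.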
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
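The two-grid machinery behind such a V-cycle can be sketched for a 1D model problem; this illustration uses -u'' = f rather than the beam's fourth-order operator to stay short, with linear interpolation as prolongation and full weighting (its scaled transpose) as restriction, mirroring the energy-conserving pairing noted above:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for -u'' = f with homogeneous
    Dirichlet boundary conditions."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def two_grid_cycle(u, f, h):
    """One V(1,1) two-grid cycle: pre-smooth, restrict the residual by
    full weighting, solve the coarse problem exactly, prolongate the
    correction by linear interpolation, post-smooth."""
    gauss_seidel(u, f, h, 1)                                   # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # residual
    n2 = (len(u) - 1) // 2                                     # coarse intervals
    rc = np.zeros(n2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    H = 2.0 * h
    A = (np.diag(2.0 * np.ones(n2 - 1)) - np.diag(np.ones(n2 - 2), 1)
         - np.diag(np.ones(n2 - 2), -1)) / H**2
    ec = np.zeros(n2 + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                    # exact coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                               # coincident coarse/fine points
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])      # linear interpolation
    u += e                                    # coarse-grid correction
    gauss_seidel(u, f, h, 1)                  # post-smoothing

n = 32
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
A_fine = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
          - np.diag(np.ones(n - 2), -1)) / h**2
u_star = np.zeros(n + 1)
u_star[1:-1] = np.linalg.solve(A_fine, f[1:-1])  # exact discrete solution
u = np.zeros(n + 1)
for _ in range(30):
    two_grid_cycle(u, f, h)
err = np.abs(u - u_star).max()                   # algebraic error after cycling
```

A full multigrid method recurses on the coarse solve instead of solving it exactly, but the smoothing/restriction/prolongation roles are the same.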
2013-08-01
in Sequential Design Optimization with Concurrent Calibration-Based Model Validation
Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa
Development of a Multileaf Collimator for Proton Radiotherapy
2006-06-01
voxel size and slice thickness can be adjusted and determine the resolution. Each voxel is assigned a CT number, in Hounsfield units, which is a measure of the linear attenuation of the material in that voxel. The Hounsfield unit is a comparison of the linear attenuation coefficient of some... a header, which contains relevant patient and scan information, and the data, which is a sequential listing of the Hounsfield units of each voxel
Linear motion device and method for inserting and withdrawing control rods
Smith, J.E.
Disclosed is a linear motion device and more specifically a control rod drive mechanism (CRDM) for inserting and withdrawing control rods into a reactor core. The CRDM and method disclosed is capable of independently and sequentially positioning two sets of control rods with a single motor stator and rotor. The CRDM disclosed can control more than one control rod lead screw without incurring a substantial increase in the size of the mechanism.
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM), following aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state-space evolution due to convectively driven processes. Particle filter methods are one way to take full account of nonlinear state developments; their basic idea is to represent the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, a particle filter with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e., the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state-space clustering, and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system-state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system-state developments (e.g., sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions.
We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated. During the model evolution of each particle pair, one particle evolves using the forward model alone; the second particle is nudged toward the radar and satellite observations during its forward-model evolution.
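A toy version of this SIR-like scheme, with a distance metric replacing the likelihood, can be sketched as follows; the scalar state, metric, and parameters are invented for illustration and bear no relation to the authors' convection-permitting setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, obs, step_fn, dist_fn, tau=1.0):
    """One cycle of SIR with distance-based weights: propagate every
    particle with the forward model, weight by a metric quantifying the
    distance to the observed state (small distance -> large weight),
    then resample with replacement to restore the ensemble size."""
    particles = np.array([step_fn(p) for p in particles])
    d = np.array([dist_fn(p, obs) for p in particles])
    w = np.exp(-d / tau)          # distance metric replaces the likelihood
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# invented scalar toy system relaxing toward the "observed" state 5.0
step = lambda p: p + 0.1 * (5.0 - p) + rng.normal(scale=0.1)
dist = lambda p, obs: abs(p - obs)
parts = rng.normal(0.0, 1.0, size=200)
for _ in range(50):
    parts = sir_step(parts, 5.0, step, dist)
```

The clustering and nudging safeguards described above would act on top of this basic cycle to keep the resampled ensemble from collapsing.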
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
Preparing the Teacher of Tomorrow
ERIC Educational Resources Information Center
Hemp, Paul E.
1976-01-01
Suggested ways of planning and conducting high quality teacher preparation programs are discussed under major headings of student selection, sequential courses and experiences, and program design. (HD)
Shteingart, Hanan; Loewenstein, Yonatan
2016-01-01
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonically with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population.
These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
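The lagged logistic-regression analysis can be sketched on synthetic data; here a "random" binary sequence with an alternation bias (a typical deviation from randomness) is generated, and the previous five choices serve as predictors, so the recovered lag-1 weight comes out negative:

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged_design(seq, k):
    """Design matrix whose j-th column is the choice made j+1 trials back
    (choices coded as +/-1)."""
    X = np.column_stack([seq[k - j - 1:len(seq) - j - 1] for j in range(k)])
    return X, seq[k:]

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient ascent on the Bernoulli log-likelihood; no intercept,
    since the synthetic sequence is symmetric in +/-."""
    w = np.zeros(X.shape[1])
    t = (y + 1) / 2.0                        # map +/-1 -> 0/1 targets
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (t - p) / len(t)
    return w

# synthetic "random" generation with an alternation bias: the previous
# choice is repeated with probability 0.35 instead of 0.5
seq = [1]
for _ in range(4000):
    seq.append(seq[-1] if rng.random() < 0.35 else -seq[-1])
seq = np.array(seq)

X, y = lagged_design(seq, k=5)
w = fit_logistic(X, y)   # w[0] (lag 1) comes out negative: alternation bias
```

Fitting such a model per participant, rather than on pooled data, is what exposes the heterogeneity the abstract describes.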
Sequential Online Wellness Programming Is an Effective Strategy to Promote Behavior Change
ERIC Educational Resources Information Center
MacNab, Lindsay R.; Francis, Sarah L.
2015-01-01
The growing number of United States youth and adults categorized as overweight or obese illustrates a need for research-based family wellness interventions. Sequential, online, Extension-delivered family wellness interventions offer a time- and cost-effective approach for both participants and Extension educators. The 6-week, online Healthy…
Apollo experience report: Command and service module sequential events control subsystem
NASA Technical Reports Server (NTRS)
Johnson, G. W.
1975-01-01
The Apollo command and service module sequential events control subsystem is described, with particular emphasis on the major systems and component problems and solutions. The subsystem requirements, design, and development and the test and flight history of the hardware are discussed. Recommendations to avoid similar problems on future programs are outlined.
Application of a Curriculum Hierarchy Evaluation (CHE) Model to Sequentially Arranged Tasks.
ERIC Educational Resources Information Center
O'Malley, J. Michael
A curriculum hierarchy evaluation (CHE) model was developed by combining a transfer paradigm with an aptitude-treatment-task interaction (ATTI) paradigm. Positive transfer was predicted between sequentially arranged tasks, and a programed or nonprogramed treatment was predicted to interact with aptitude and with tasks. Eighteen four and five…
LaRC-RP41: A Tough, High-Performance Composite Matrix
NASA Technical Reports Server (NTRS)
Pater, Ruth H.; Johnston, Norman J.; Smith, Ricky E.; Snoha, John J.; Gautreaux, Carol R.; Reddy, Rakasi M.
1991-01-01
New polymer exhibits increased toughness and resistance to microcracking. Cross-linked PMR-15 and linear LaRC-TPI are combined to provide a sequential semi-2-IPN designated LaRC-RP41. Synthesized from PMR-15 imide prepolymer undergoing cross-linking in the immediate presence of LaRC-TPI polyamic acid, which simultaneously undergoes imidization and linear chain extension. Potentially useful as high-temperature matrix resin, adhesive, and molding resin. Applications include automobiles, electronics, aircraft, and aerospace structures.
Knowledge outcomes within rotational models of social work field education.
Birkenmaier, Julie; Curley, Jami; Rowan, Noell L
2012-01-01
This study assessed knowledge outcomes among concurrent, concurrent/sequential, and sequential rotation models of field instruction. Posttest knowledge scores of students ( n = 231) in aging-related field education were higher for students who participated in the concurrent rotation model, and for those who completed field education at a long-term care facility. Scores were also higher for students in programs that infused a higher number of geriatric competencies in their curriculum. Recommendations are provided to programs considering rotation models of field education related to older adults.
The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-04-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded; under linear elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.
Forecasting daily streamflow using online sequential extreme learning machines
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single-hidden-layer feedforward neural networks - the online sequential extreme learning machine (OSELM) - is automatically updated inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With the online sequential multiple linear regression (OSMLR) as benchmark, we concluded that OSELM is an attractive approach as it easily outperformed OSMLR in forecast accuracy.
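A minimal OSELM for regression, assuming the standard recursive least-squares form of the update, can be sketched as follows; the toy target function, layer size, and chunk sizes are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELM:
    """Minimal online sequential extreme learning machine for regression:
    a fixed random hidden layer; only the linear output weights beta are
    updated, by recursive least squares, as new chunks arrive (old data
    can then be discarded)."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))   # random input weights (never trained)
        self.b = rng.normal(size=n_hidden)           # random biases
        self.beta = None                             # output weights
        self.P = None                                # inverse covariance for RLS

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_fit(self, X, y):
        """Batch least squares on the initial chunk (small ridge for stability)."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        """Recursive least-squares update; equivalent to refitting on all
        data seen so far, without storing it."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy stream: learn y = sin(x); initial batch of 100, then chunks of 100
X = rng.uniform(-2.0, 2.0, size=(600, 1))
y = np.sin(X[:, 0])
model = OSELM(n_in=1, n_hidden=40)
model.init_fit(X[:100], y[:100])
for i in range(100, 600, 100):
    model.partial_fit(X[i:i + 100], y[i:i + 100])
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
```

The per-chunk cost depends only on the hidden-layer and chunk sizes, which is what makes daily updating cheap in the streamflow setting.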
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fakcharoenphol, Perapon; Xiong, Yi; Hu, Litang
TOUGH2-EGS is a numerical simulation program coupling geomechanics and chemical reactions for fluid and heat flows in porous media and fractured reservoirs of enhanced geothermal systems. The simulator includes the fully coupled geomechanical (THM) module, the fully coupled geochemical (THC) module, and the sequentially coupled reactive geochemistry (THMC) module. The fully coupled flow-geomechanics model is developed from the linear elastic theory for the thermo-poro-elastic system and is formulated with the mean normal stress as well as pore pressure and temperature. The chemical reaction is sequentially coupled after solution of the flow equations, which provides the flow velocity and phase saturation for the solute transport calculation at each time step. In addition, reservoir rock properties, such as porosity and permeability, are subject to change due to rock deformation and chemical reactions. The relationships between rock properties and geomechanical and chemical effects from poro-elasticity theories and empirical correlations are incorporated into the simulator. This report provides the user with detailed information on both the mathematical models and instructions for using TOUGH2-EGS for THM, THC or THMC simulations. The mathematical models include the fluid and heat flow equations, geomechanical equation, reactive geochemistry equations, and discretization methods. Although TOUGH2-EGS has the capability for simulating fluid and heat flows coupled with both geomechanical and chemical effects, it is up to the users to select the specific coupling process, such as THM, THC, or THMC, in a simulation. There are several example problems illustrating the applications of this program. These example problems are described in detail and their input data are presented.
The results demonstrate that this program can be used for field-scale geothermal reservoir simulation with fluid and heat flow, geomechanical effects, and chemical reactions in porous and fractured media.
Sequential quantum cloning under real-life conditions
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Mardoukhi, Yousof
2012-05-01
We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing considerably fewer informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.
ERIC Educational Resources Information Center
Gooyers, Cobina; And Others
Designed for teachers to provide students with an awareness of the world of nature which surrounds them, the manual presents the philosophy of outdoor education, goals and objectives of the school program, planning for outdoor education, the Wildwood Programs, sequential program planning for students, program booking and resource list. Content…
Synthesis of concentric circular antenna arrays using dragonfly algorithm
NASA Astrophysics Data System (ADS)
Babayigit, B.
2018-05-01
Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs for low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) of two three-ring CCAA designs (with 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
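The array factor whose sidelobes such algorithms minimize can be evaluated directly; this sketch computes the pattern of a hypothetical uniformly excited three-ring (4-, 6-, 8-element) CCAA with assumed half-wavelength ring spacing, confirming the peak at broadside:

```python
import numpy as np

def ccaa_af(theta, radii, counts, weights=None, wavelength=1.0):
    """Magnitude of the array factor of a concentric circular antenna
    array, evaluated in the phi = 0 plane with theta measured from the
    broadside axis (normal to the array plane)."""
    k = 2.0 * np.pi / wavelength
    if weights is None:
        weights = np.ones(sum(counts))       # uniform excitation
    af = np.zeros_like(theta, dtype=complex)
    i = 0
    for r, n in zip(radii, counts):
        for phi in 2.0 * np.pi * np.arange(n) / n:   # element azimuths on the ring
            af += weights[i] * np.exp(1j * k * r * np.sin(theta) * np.cos(phi))
            i += 1
    return np.abs(af)

# hypothetical three-ring 4-, 6-, 8-element CCAA, half-wavelength ring spacing
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af = ccaa_af(theta, radii=[0.5, 1.0, 1.5], counts=[4, 6, 8])
```

An optimizer such as DA would search over the `weights` vector to push the off-broadside lobes of this pattern down.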
ERIC Educational Resources Information Center
American Association for Health, Physical Education, and Recreation, Washington, DC.
This report contains articles on research in kinesiology, the study of the principles of mechanics and anatomy in relation to human movement. Research on sequential timing, somatotype methodology, and linear measurement with cinematographical analysis are presented in the first section. Studies of the hip extensor muscles, kinetic energy, and…
Protocol Analysis as a Tool in Function and Task Analysis
1999-10-01
Autocontingency: The use of log-linear and logistic regression methods to analyse sequential data seems appealing, and is strongly advocated by... collection and analysis of observational data. Behavior Research Methods, Instruments, and Computers, 23(3), 415-429. Patrick, J. D. (1991). Snob: A
Face identification with frequency domain matched filtering in mobile environments
NASA Astrophysics Data System (ADS)
Lee, Dong-Su; Woo, Yong-Hyun; Yeom, Seokwon; Kim, Shin-Hwan
2012-06-01
Face identification at a distance is very challenging since captured images are often degraded by blur and noise. Furthermore, computational resources and memory are often limited in mobile environments. Thus, it is very challenging to develop a real-time face identification system on a mobile device. This paper discusses face identification based on frequency-domain matched filtering in mobile environments. Face identification is performed by a linear or phase-only matched filter and sequential verification stages. The candidate window regions are determined by the major peaks of the linear or phase-only matched filtering outputs. The sequential stages comprise a skin-color test and an edge-mask filtering test, which verify the color and shape information of the candidate regions in order to remove false alarms. All algorithms are built on the mobile device using the Android platform. The preliminary results show that face identification of East Asian people can be performed successfully in mobile environments.
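The phase-only variant of frequency-domain matched filtering can be sketched as follows; the scene, template, and embedding location are synthetic stand-ins for the face-detection setting:

```python
import numpy as np

def phase_only_correlation(scene, template):
    """Frequency-domain matched filtering with a phase-only filter: only
    the phase of the template spectrum is kept, which sharpens the
    correlation peak compared to the classical linear matched filter."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)
    H = np.conj(T) / (np.abs(T) + 1e-12)     # unit-magnitude (phase-only) filter
    return np.real(np.fft.ifft2(S * H))

# synthetic scene with a known template embedded at (20, 30)
rng = np.random.default_rng(0)
template = rng.normal(size=(16, 16))
scene = rng.normal(scale=0.1, size=(64, 64))
scene[20:36, 30:46] += template
corr = phase_only_correlation(scene, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)   # candidate window location
```

In the pipeline described above, each such peak would then pass through the skin-color and edge-mask verification stages.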
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
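For a linear model, one forecast/analysis cycle of the discrete Kalman filter takes the familiar two-step form sketched below; the two-variable oscillating system is invented for illustration and is not an NWP model:

```python
import numpy as np

def kalman_step(x, P, y, M, Q, H, R):
    """One forecast/analysis cycle of the (discrete) Kalman filter:
    propagate the state estimate and its error covariance with the
    linear model M, then update with the observation y."""
    xf = M @ x                              # forecast state
    Pf = M @ P @ M.T + Q                    # forecast error covariance
    S = H @ Pf @ H.T + R                    # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)         # Kalman gain
    xa = xf + K @ (y - H @ xf)              # analysis (updated) state
    Pa = (np.eye(len(x)) - K @ H) @ Pf      # analysis error covariance
    return xa, Pa

# invented two-variable slowly oscillating system; only the first
# component is observed, with noise of variance R
rng = np.random.default_rng(0)
th = 0.1
M = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

truth = np.array([1.0, 0.0])
x, P = np.zeros(2), np.eye(2)
errs = []
for _ in range(200):
    truth = M @ truth
    y = H @ truth + rng.normal(scale=0.5, size=1)
    x, P = kalman_step(x, P, y, M, Q, H, R)
    errs.append(np.linalg.norm(x - truth))
```

The initialization-aware variant mentioned in the abstract modifies this analysis step so that the update does not project onto the unobserved fast modes.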
Snyder, Dalton T; Szalwinski, Lucas J; Cooks, R Graham
2017-10-17
Methods of performing precursor ion scans as well as neutral loss scans in a single linear quadrupole ion trap have recently been described. In this paper we report methodology for performing permutations of MS/MS scan modes, that is, ordered combinations of precursor, product, and neutral loss scans following a single ion injection event. Only particular permutations are allowed; the sequences demonstrated here are (1) multiple precursor ion scans, (2) precursor ion scans followed by a single neutral loss scan, (3) precursor ion scans followed by product ion scans, and (4) segmented neutral loss scans. The common product ion scan can also be performed earlier in these sequences, under certain conditions. Simultaneous scans can also be performed. These include multiple precursor ion scans, precursor ion scans with an accompanying neutral loss scan, and multiple neutral loss scans. We argue that the new capability to perform complex simultaneous and sequential MSn operations on single ion populations represents a significant step in increasing the selectivity of mass spectrometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, S; Lu, WG; Chen, YP
2015-03-11
A unique strategy, sequential linker installation (SLI), has been developed to construct multivariate MOFs with functional groups precisely positioned. PCN-700, a Zr-MOF with eight-connected Zr6O4(OH)8(H2O)4 clusters, has been judiciously designed; the Zr6 clusters in this MOF are arranged in such a fashion that, by replacement of terminal OH-/H2O ligands, subsequent insertion of linear dicarboxylate linkers is achieved. We demonstrate that linkers with distinct lengths and functionalities can be sequentially installed into PCN-700. Single-crystal to single-crystal transformation is realized so that the positions of the subsequently installed linkers are pinpointed via single-crystal X-ray diffraction analyses. This methodology provides a powerful tool to construct multivariate MOFs with precisely positioned functionalities in the desired proximity, which would otherwise be difficult to achieve.
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
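The calibration idea in the abstract above, peak width varying linearly with the logarithm of acid concentration, can be sketched with a two-point calibration line. The peak widths used here are invented numbers; only the 1-9 g/100 mL range comes from the abstract.

```python
import math

# Sketch: peak width is linear in log10(concentration), so two standards
# fix the calibration line and an unknown's width is inverted to a
# concentration. The widths (12 s and 30 s) are hypothetical values.

def fit_line(x1, y1, x2, y2):
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# Standards: (concentration in g/100 mL, measured peak width in s)
slope, intercept = fit_line(math.log10(1.0), 12.0, math.log10(9.0), 30.0)

def concentration_from_width(width):
    return 10 ** ((width - intercept) / slope)

c = concentration_from_width(21.0)   # width halfway between the standards
```

Because the width axis is linear in log10 of concentration, the halfway width maps to the geometric mean of the two standards (3 g/100 mL here).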
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) typically relies on time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models, and it is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
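The sequential Monte Carlo idea in the abstract above can be sketched as a bootstrap particle filter for a toy 1-D nonlinear model. The dynamics, noise levels, and observations below are invented for illustration; this is not the hemodynamic model itself and omits the parameter-learning step.

```python
import random, math

# Illustrative bootstrap particle filter for a made-up 1-D nonlinear
# state-space model: propagate, weight by the likelihood, resample.

random.seed(0)

def particle_filter(observations, n=500):
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # Propagate each particle through the (toy) nonlinear dynamics.
        particles = [0.9 * p + 0.1 * math.sin(p) + random.gauss(0, 0.2)
                     for p in particles]
        # Weight by a Gaussian observation likelihood (sd = 0.3).
        w = [math.exp(-0.5 * ((y - p) / 0.3) ** 2) for p in particles]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        particles = random.choices(particles, weights=w, k=n)
    return estimates

est = particle_filter([0.5, 0.6, 0.55, 0.7])
```

Resampling after every step keeps the particle cloud concentrated where the likelihood is high, which is what lets the filter track nonlinear dynamics without linearization.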
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
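The credible-interval step described in the abstract above can be sketched with plain order statistics on posterior draws. The synthetic Gaussian samples here merely stand in for MCMC output over the linear coefficients.

```python
import random

# Sketch: equal-tailed Bayesian credible interval from posterior draws.

random.seed(1)

def credible_interval(samples, level=0.95):
    """Empirical equal-tailed interval from a list of posterior samples."""
    s = sorted(samples)
    lo = s[int((1 - level) / 2 * len(s))]
    hi = s[int((1 + level) / 2 * len(s)) - 1]
    return lo, hi

# Stand-in for MCMC output: 10,000 draws from a N(2.0, 0.5) "posterior".
draws = [random.gauss(2.0, 0.5) for _ in range(10000)]
lo, hi = credible_interval(draws)
```

For a Gaussian posterior with sd 0.5 the 95% interval width should be close to 2 x 1.96 x 0.5, which the empirical interval reproduces.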
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
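The convex optimization problem in the abstract above can be illustrated in 1-D: minimize ||Hx - y||^2 + lam ||x||^2 by plain gradient descent, with H a circular 3-point moving-average blur. All sizes, weights, and the step size are toy values chosen for the sketch, not the paper's networks.

```python
# 1-D sketch of iterative MAP/regularized restoration by gradient descent.

def blur(x):
    """Circular 3-point moving average (a symmetric linear blur H)."""
    n = len(x)
    return [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]

def restore(y, lam=0.01, step=0.4, iters=200):
    """Minimize ||H x - y||^2 + lam ||x||^2 by gradient descent."""
    x = list(y)                                  # start from degraded signal
    for _ in range(iters):
        r = [b - yi for b, yi in zip(blur(x), y)]   # residual H x - y
        g = blur(r)                              # H^T r (H is symmetric)
        x = [xi - step * (2.0 * gi + 2.0 * lam * xi)
             for xi, gi in zip(x, g)]
    return x

truth = [0, 0, 1, 1, 1, 0, 0, 0]
y = blur(truth)                                  # simulated degradation
x_hat = restore(y)
err_before = sum((a - b) ** 2 for a, b in zip(y, truth))
err_after = sum((a - b) ** 2 for a, b in zip(x_hat, truth))
```

Because the objective is convex and the step size is below the stability limit, the iteration provably converges, which mirrors the paper's emphasis on minimization algorithms that are first proved to converge.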
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Bradbury, K. R.; Kniffin, M.; Barlow, P. M.; Krause, J.; Westenbroek, S.; Leaf, A.
2015-12-01
The well-drained sandy soil in the Wisconsin Central Sands is ideal for growing potatoes, corn, and other vegetables. A shallow sand and gravel aquifer provides abundant water for agricultural irrigation but also supplies critical base flow to cold-water trout streams. These needs compete with one another, and stakeholders from various perspectives are collaborating to seek solutions. Stakeholders were engaged in providing and verifying data to guide construction of a groundwater flow model, which was used with linear and sequential linear programming to evaluate optimal tradeoffs between agricultural pumping and ecologically based minimum base flow values. The model can be used to evaluate the connection between streamflow depletion and individual irrigation wells as well as industrial and municipal supply. Rather than addressing thousands of wells individually, a variety of well management groups were established through k-means clustering. These groups are based on location, potential impact, water-use categories, depletion potential, and other factors. Through optimization, pumping rates were reduced to attain mandated minimum base flows. This formalization enables exploration of possible solutions for the stakeholders, and provides a tool which is transparent and forms a basis for discussion and negotiation.
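The k-means grouping step in the abstract above can be sketched in one dimension: cluster wells by a single feature so that pumping constraints can be written per group instead of per well. The feature (distance to the stream, in km) and the well values are invented for the sketch.

```python
# Sketch: 1-D k-means to group wells by a single feature (hypothetical
# distance-to-stream values in km), k = 2 groups for simplicity.

def kmeans_1d(values, k=2, iters=50):
    centers = [min(values), max(values)]        # simple initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest center.
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)
        # Move each center to the mean of its group (keep it if empty).
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

wells = [0.2, 0.3, 0.4, 5.1, 5.3, 5.0, 0.25, 4.9]
centers, groups = kmeans_1d(wells)
```

With these values the algorithm separates near-stream wells (high depletion potential) from distant ones, which is the kind of grouping the study feeds into the optimization.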
Evaluation of concurrent priority queue algorithms. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q.
1991-02-01
The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
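A coarse-grained baseline for the structures compared in the thesis above is a binary heap guarded by a single lock: correct but contended, which is exactly what fine-grained designs like the parallel Fibonacci heap try to avoid. This is an illustrative baseline, not the thesis's data structure.

```python
import heapq, threading

# Toy concurrent priority queue: a binary heap behind one global lock.
# Simple and correct, but every operation serializes on the same lock.

class LockedPriorityQueue:
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def push(self, priority, item):
        with self._lock:
            heapq.heappush(self._heap, (priority, item))

    def pop(self):
        with self._lock:
            return heapq.heappop(self._heap)

q = LockedPriorityQueue()
for p, it in [(3, "c"), (1, "a"), (2, "b")]:
    q.push(p, it)
order = [q.pop()[1] for _ in range(3)]
```

Distributing locks over the structure, as the parallel Fibonacci heap does, removes this single serialization point and is what makes linearly scalable throughput possible.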
ERIC Educational Resources Information Center
Ramaswamy, Ravishankar; Dix, Edward F.; Drew, Janet E.; Diamond, James J.; Inouye, Sharon K.; Roehl, Barbara J. O.
2011-01-01
Purpose of the Study: Delirium is a widespread concern for hospitalized seniors, yet is often unrecognized. A comprehensive and sequential intervention (CSI) aiming to effect change in clinician behavior by improving knowledge about delirium was tested. Design and Methods: A 2-day CSI program that consisted of progressive 4-part didactic series,…
Building Reliable Metaclassifiers for Text Learning
2006-05-01
outputs are often poor [Ben00, DP96] but can be improved [Ben00, ZE01, ZE02]. SVM For linear SVMs, we use the Smox toolkit which is based on Platt’s...and implementations are the same as discussed in Section 6.3. The exception is that for an implementation of linear SVMs, we used the Smox toolkit which...is based on Platt’s Sequential Minimal Optimization algorithm [Pla98]. Since Smox is the best base classifier in the experiments below, it is the
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
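A back-of-the-envelope version of the kind of performance model the paper above describes is Amdahl's law with an added per-processor overhead term. The parallel fraction and overhead coefficient below are invented, not measured on the Origin2000.

```python
# Simple speedup model: Amdahl's law plus a linear per-processor
# overhead term standing in for architecture-specific locality costs.

def speedup(p, parallel_fraction=0.95, overhead_per_proc=0.002):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / p + overhead_per_proc * p)

s4 = speedup(4)
s32 = speedup(32)
```

The overhead term captures the qualitative point of the paper: directive-based parallelization gains flatten, and eventually reverse, unless data-locality overhead is kept small as processor counts grow.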
Lin, Kunning; Ma, Jian; Yuan, Dongxing; Feng, Sichao; Su, Haitao; Huang, Yongming; Shangguan, Qipei
2017-05-15
An integrated system was developed for automatic and sequential determination of NO₂⁻, NO₃⁻, PO₄³⁻, Fe²⁺, Fe³⁺ and Mn²⁺ in natural waters based on reverse flow injection analysis combined with spectrophotometric detection. The system operation was controlled by a single-chip microcomputer and laboratory-programmed software written in LabVIEW. The experimental parameters for each nutrient element analysis were optimized based on a univariate experimental design, and interferences from common ions were evaluated. The upper limits of the linear range (with detection limits in parentheses, both in µmol L⁻¹) were 20 (0.03), 200 (0.7), 12 (0.3), 5 (0.03), 5 (0.03), and 9 (0.2) for NO₂⁻, NO₃⁻, PO₄³⁻, Fe²⁺, Fe³⁺ and Mn²⁺, respectively. The relative standard deviations were below 5% (n = 9-13) and the recoveries varied from 88.0 ± 1.0% to 104.5 ± 1.0% for spiked water samples. The sample throughput was about 20 h⁻¹. This system has been successfully applied to the determination of multi-nutrient elements in different kinds of water samples and showed good agreement with reference methods (slope 1.0260 ± 0.0043, R² = 0.9991, n = 50). Copyright © 2017 Elsevier B.V. All rights reserved.
Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas
2014-12-05
A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for the analysis of aroma compounds in aqueous samples was developed. The MVM consists of three different DHS method parameter sets, including the choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa) and/or hydrophilic character. After the three sequential DHS samplings from the same HS vial, the three traps are desorbed by thermal desorption in reverse order of the DHS sampling, and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of the 21 test aroma compounds for each DHS sampling and for the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM provided very good recoveries in the range of 91-111%. The method showed good linearity (r² > 0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL⁻¹) even in MS scan mode. The feasibility and benefit of the method were demonstrated by analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL⁻¹ (RSD < 10%, n = 5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
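The dispatch logic of the MVM described above, assigning each solute to one of the three DHS parameter sets by vapor pressure, can be sketched directly from the quoted thresholds (>20 kPa, 1-20 kPa, <1 kPa). The compound vapor pressures below are rough illustrative numbers, not values from the paper.

```python
# Sketch: assign a solute to a DHS step by its vapor pressure at 25 degC,
# following the three thresholds quoted in the abstract.

def dhs_step(vapor_pressure_kpa):
    if vapor_pressure_kpa > 20:
        return 1      # carbon trap, 25 degC: very volatile solutes
    if vapor_pressure_kpa >= 1:
        return 2      # carbon trap, 25 degC: moderately volatile solutes
    return 3          # Tenax TA trap, 80 degC: low-volatility / polar

# Rough, illustrative vapor pressures (kPa) for three coffee aroma
# compounds named in the abstract.
compounds = {"acetaldehyde": 120.0, "2,3-butanedione": 7.5,
             "vanillin": 0.001}
steps = {name: dhs_step(vp) for name, vp in compounds.items()}
```

Sampling the same vial three times with trap and temperature matched to each volatility class is what lets one GC-MS run cover the full 0.000088-120 kPa range.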
Stefan-van Staden, Raluca-Ioana; Bokretsion, Rahel Girmai; van Staden, Jacobus F; Aboul-Enein, Hassan Y
2006-01-01
Carbon paste based biosensors for the determination of creatine and creatinine have been integrated into a sequential injection system. Applying the multi-enzyme sequence of creatininase (CA), and/or creatinase (CI) and sarcosine oxidase (SO), hydrogen peroxide has been detected amperometrically. The linear concentration ranges are of pmol/L to nmol/L magnitude, with very low limits of detection. The proposed SIA system can be utilized reliably for the on-line simultaneous detection of creatine and creatinine in pharmaceutical products, as well as in serum samples, with a rate of 34 samples per hour and RSD values better than 0.16% (n=10).
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR, and their performance is compared. Based on a mutliplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. Each experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
Energy-aware virtual network embedding in flexi-grid networks.
Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng
2017-11-27
Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs on the substrate network is a major challenge on account of the VN embedding (VNE) problem. Meanwhile, energy efficiency has been widely considered in the network design in terms of operation expenses and the ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm where virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low electricity cost. Numerical results show that the heuristic algorithm performs closely to the ILP for a small size network, and we also demonstrate its applicability to larger networks.
A tool for efficient, model-independent management optimization under uncertainty
White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.
2018-01-01
To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
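The chance-constraint formation described above can be sketched numerically: a deterministic constraint bound is tightened by z x sigma, where sigma comes from the FOSM uncertainty estimate and z from the user-specified risk. The flow value and sigma below are invented; the inverse normal CDF is obtained by bisection on the error function so the sketch stays stdlib-only.

```python
import math

# Sketch: tighten a minimum-flow constraint into a chance constraint
# so it holds with probability 1 - risk under Gaussian uncertainty.

def chance_constraint_bound(required_flow, sigma, risk=0.05):
    """Return the tightened lower bound required_flow + z * sigma."""
    # Invert the standard normal CDF at (1 - risk) by bisection on erf.
    target = 1.0 - risk
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < target:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    return required_flow + z * sigma

# Hypothetical: 100 flow units required, FOSM sigma of 4 units, 5% risk.
b = chance_constraint_bound(required_flow=100.0, sigma=4.0, risk=0.05)
```

At 50% risk the bound reduces to the deterministic one (z = 0); lowering the risk inflates the bound, which is how model uncertainty enters the SLP solution as a "single answer".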
Design and Development of the Aircraft Instrument Comprehension Program.
ERIC Educational Resources Information Center
Higgins, Norman C.
The Aircraft Instrument Comprehension (AIC) Program is a self-instructional program designed to teach undergraduate student pilots to read instruments that indicate the position of the aircraft in flight, based on sequential instructional stages of information, prompted practice, and unprompted practice. The program includes a 36-item multiple…
Using Abstraction in Explicity Parallel Programs.
1991-07-01
However, we only rely on sequential consistency of memory operations. includ- ing reads. writes and any synchronization primitives provided by the...explicit synchronization primitives . This demonstrates the practical power of sequentially consistent memory, as opposed to weaker models of memory that...a small set of synchronization primitives , all pro- cedures have non-waiting specifications. This is in contrast to richer process-oriented
Dmitriy Volinskiy; John C Bergstrom; Christopher M Cornwell; Thomas P Holmes
2010-01-01
The assumption of independence of irrelevant alternatives in a sequential contingent valuation format should be questioned. Statistically, most valuation studies treat nonindependence as a consequence of unobserved individual effects. Another approach is to consider an inferential process in which any particular choice is part of a general choosing strategy of a survey...
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
Implications of Neuropsychological Research for School Psychology.
ERIC Educational Resources Information Center
Dean, Raymond S.; Gray, Jeffrey W.
Research has suggested that the two hemispheres of the brain serve specialized functions, with the most recent studies portraying the left hemisphere as processing information in a linear, serial, or sequential manner and the right hemisphere as processing information in a holistic, concrete, or visual mode. Although few systematic studies have…
Subsonic Aircraft With Regression and Neural-Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2004-01-01
At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. 
In an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min; it was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
Digital Circuit Analysis Using an 8080 Processor.
ERIC Educational Resources Information Center
Greco, John; Stern, Kenneth
1983-01-01
Presents the essentials of a program written in Intel 8080 assembly language for the steady state analysis of a combinatorial logic gate circuit. Program features and potential modifications are considered. For example, the program could also be extended to include clocked/unclocked sequential circuits. (JN)
Duarte, Ricardo Jordão; Cury, José; Oliveira, Luis Carlos Neves; Srougi, Miguel
2013-01-01
Medical literature offers little information to define a basic skills training program for laparoscopic surgery (peg transfer, cutting, clipping). The aim of this study was to determine the minimal number of simulator sessions of basic laparoscopic tasks necessary to elaborate an optimal virtual reality training curriculum. Eleven medical students with no previous laparoscopic experience were voluntarily enrolled. They underwent simulator training sessions starting at level 1 (Immersion Lap VR, San Jose, CA), sequentially including camera handling, peg transfer, clipping, and cutting. Each student trained twice a week until 10 sessions were completed, and the score indexes were registered and analyzed. The total number of errors in the evaluation sequences (camera, peg transfer, clipping, and cutting) was computed and then related to the total number of items evaluated in each step, yielding a percent success ratio for each student in each completed session. From these values the cumulative success rate over the 10 sessions was computed, providing a picture of the learning process. The learning curve was analyzed by nonlinear regression, yielding r² = 0.73 (p < 0.001); 4.26 sessions (about five) were necessary to reach the plateau of 80% of the estimated acquired knowledge, and 100% of the students reached this level of skill. From the fifth session to the 10th, the gain of knowledge was not significant, although some students reached 96% of the expected improvement. This study revealed that after five sequential simulator training sessions the students' learning curve reaches a plateau. Further sessions at the same difficulty level do not promote any improvement in basic laparoscopic surgical skills, and the students should then be introduced to a more difficult level of training tasks.
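The learning-curve analysis above can be sketched by fitting an exponential curve s(t) = smax (1 - exp(-kt)) to per-session success rates. A coarse grid search stands in for the study's nonlinear regression, and the score data below are fabricated for the sketch (generated near smax = 0.9, k = 0.35).

```python
import math

# Sketch: fit an exponential learning curve to made-up per-session
# success rates by grid search, then find when 80% of the plateau
# is reached: s(t)/smax = 0.8  =>  t = -ln(0.2) / k.

sessions = list(range(1, 11))
scores = [0.27, 0.45, 0.59, 0.68, 0.74, 0.79, 0.82, 0.85, 0.86, 0.87]

def sse(smax, k):
    return sum((s - smax * (1 - math.exp(-k * t))) ** 2
               for t, s in zip(sessions, scores))

# Coarse grid over plausible plateau heights and rate constants.
smax_hat, k_hat = min(((sm / 100.0, kk / 100.0)
                       for sm in range(50, 101) for kk in range(5, 101)),
                      key=lambda p: sse(*p))
t80 = -math.log(0.2) / k_hat     # sessions to reach 80% of the plateau
```

With these fabricated scores the fitted t80 lands near 4.6 sessions, illustrating the study's "about five sessions to plateau" conclusion.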
Prisant, L M; Resnick, L M; Hollenberg, S M
2001-06-01
The aim of this study was to assess the accuracy of sequential same-arm blood pressure measurement by the mercury sphygmomanometer against the oscillometric blood pressure measurements from a device that also determines arterial elasticity. A prospective, multicentre, clinical study evaluated sequential same-arm blood pressure measurements, using a mercury sphygmomanometer (Baumanometer, W. A. Baum Co., Inc., Copiague, New York, USA) and an oscillometric non-invasive device that calculates arterial elasticity (CVProfilor DO-2020 Cardiovascular Profiling System, Hypertension Diagnostics, Inc., Eagan, Minnesota, USA). Blood pressure was measured supine in triplicate, 3 min apart, in a randomized sequence after a period of rest. The study population of 230 normotensive and hypertensive subjects included 57% females, 51% Caucasians, and 33% African Americans. The mean difference between test methods for systolic blood pressure, diastolic blood pressure, and heart rate was -3.2 ± 6.9 mmHg, +0.8 ± 5.9 mmHg, and +1.0 ± 5.7 beats/minute, respectively. For systolic and diastolic blood pressure, 60.9% and 70.4% of sequential measurements by each method were within ±5 mmHg. Few or no points fell beyond the mean ± 2 standard deviation lines for each cuff bladder size. In sequential same-arm measurements, the CVProfilor DO-2020 Cardiovascular Profiling System, which measures blood pressure by an oscillometric method (dynamic linear deflation), showed reasonable agreement with a mercury sphygmomanometer.
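The agreement statistics reported above (mean difference, SD of differences, fraction of pairs within ±5 mmHg) can be sketched on a small set of paired readings. The readings below are invented, not study data.

```python
import math

# Sketch of method-agreement statistics for paired readings
# (hypothetical systolic values in mmHg).

mercury = [118, 126, 134, 142, 121, 130, 145, 138]
device  = [115, 124, 136, 138, 120, 127, 141, 136]

diffs = [d - m for m, d in zip(mercury, device)]
mean_diff = sum(diffs) / len(diffs)                      # bias
sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs)
                    / (len(diffs) - 1))                  # sample SD
within_5 = sum(abs(d) <= 5 for d in diffs) / len(diffs)  # agreement rate
```

The mean ± 2 SD band over the differences is the Bland-Altman-style check the study applies per cuff bladder size.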
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile, and efficient computer program, called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++ which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
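The Nelder-Mead electron-density step mentioned above can be illustrated on a drastically simplified stand-in for FastChem's coupled system: a single-element ionization equilibrium closed by charge balance, with made-up constants (the densities and mass-action constant below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-element ionization equilibrium (NOT FastChem's full system):
# n_ion * n_e / n_neutral = K, with n_e = n_ion by charge balance.
n_total = 1.0e12   # hypothetical total number density
K = 1.0e8          # hypothetical mass-action constant

def charge_imbalance(log_ne):
    """Squared residual of the charge-balance condition, in log space
    so Nelder-Mead can search across many orders of magnitude."""
    ne = 10.0 ** log_ne[0]
    n_ion = K * (n_total - ne) / ne   # from the law of mass action
    return (np.log10(max(n_ion, 1e-300)) - log_ne[0]) ** 2

res = minimize(charge_imbalance, x0=[6.0], method="Nelder-Mead")
n_e = 10.0 ** res.x[0]
```

Working in log densities is a common trick for equilibrium chemistry, where abundances span tens of decades; whether FastChem does exactly this is not stated in the abstract.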
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2017-08-01
This work analyses the sequential control system of a machine for separating and grouping workpieces for processing. The problem considered concerns verifying the operation of the actuator system of an electro-pneumatic control system equipped with a PLC controller; specifically, the operation of the actuators is verified against the logic relationships assumed in the control system. The actuators of the considered control system were three linear-motion drives (pneumatic cylinders), and the logical structure of the control system's operation is based on a signal flow graph. The tested logical structure of the electro-pneumatic control system was implemented in the Automation Studio software from B&R, which is used to create programs for PLC controllers. Next, a model of the actuator system of the machine's control system was created in the FluidSIM software. To verify the created PLC program by simulating the operation of the model, the two programs were integrated using an OPC server as the data-exchange tool.
Manual of Accreditation Standards for Adventure Programs 1995.
ERIC Educational Resources Information Center
Williamson, John E., Comp.; Gass, Michael, Comp.
This manual presents standards for adventure education programs seeking accreditation from the Association for Experiential Education. The manual is set up sequentially, focusing both on objective standards such as technical risk management aspects, and on subjective standards such as teaching approaches used in programs. Chapter titles provide…
Generalized bipartite quantum state discrimination problems with sequential measurements
NASA Astrophysics Data System (ADS)
Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki
2018-02-01
We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a convex programming problem involving only Alice's measurement, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, the dual problem and the necessary and sufficient optimality conditions have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful for obtaining analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
Aukema, Sietse M; Theil, Laura; Rohde, Marius; Bauer, Benedikt; Bradtke, Jutta; Burkhardt, Birgit; Bonn, Bettina R; Claviez, Alexander; Gattenlöhner, Stefan; Makarova, Olga; Nagel, Inga; Oschlies, Ilske; Pott, Christiane; Szczepanowski, Monika; Traulsen, Arne; Kluin, Philip M; Klapper, Wolfram; Siebert, Reiner; Murga Penas, Eva M
2015-09-01
Typical Burkitt lymphoma is characterized by an IG-MYC translocation and overall low genomic complexity. Clinically, Burkitt lymphoma has a favourable prognosis with very few relapses. However, the few patients experiencing disease progression and/or relapse have a dismal outcome. Here we report cytogenetic findings of seven cases of Burkitt lymphoma in which sequential karyotyping was performed at time of diagnosis and/or disease progression/relapse(s). After case selection, karyotype re-review and additional molecular analyses were performed in six paediatric cases, treated in Berlin-Frankfurt-Münster-Non-Hodgkin lymphoma study group trials, and one additional adult patient. Moreover, we analysed 18 cases of Burkitt lymphoma from the Mitelman database in which sequential karyotyping was performed. Our findings show secondary karyotypes to have a significant increase in load of cytogenetic aberrations with a mean number of 2, 5 and 8 aberrations for primary, secondary and third investigations. Importantly, this increase in karyotype complexity seemed to result from recurrent secondary chromosomal changes involving mainly trisomy 21, gains of 1q and 7q, losses of 6q, 11q, 13q, and 17p. In addition, our findings indicate a linear clonal evolution to be the predominant manner of cytogenetic evolution. Our data may provide a biological framework for the dismal outcome of progressive and relapsing Burkitt lymphoma. © 2015 John Wiley & Sons Ltd.
Lang, Qiaolin; Yin, Long; Shi, Jianguo; Li, Liang; Xia, Lin; Liu, Aihua
2014-01-15
A novel electrochemical sequential biosensor was constructed by co-immobilizing glucoamylase (GA) and glucose oxidase (GOD) on a multi-walled carbon nanotubes (MWNTs)-modified glassy carbon electrode (GCE) by a chemical crosslinking method, where glutaraldehyde and bovine serum albumin were used as the crosslinking and blocking agents, respectively. The proposed biosensor (GA/GOD/MWNTs/GCE) is capable of determining starch without using extra sensors such as a Clark-type oxygen sensor or an H2O2 sensor. The current decreased linearly with increasing starch concentration over the range 0.005% to 0.7% (w/w), with a limit of detection of 0.003% (w/w) starch. The as-fabricated sequential biosensor is applicable to the detection of starch content in real samples, with results in good accordance with traditional Fehling's titration. Finally, a stable starch/O2 biofuel cell was assembled using the GA/GOD/MWNTs/GCE as bioanode and laccase/MWNTs/GCE as biocathode, which exhibited an open circuit voltage of ca. 0.53 V and a maximum power density of 8.15 μW cm(-2) at 0.31 V, comparable with other glucose/O2 based biofuel cells reported recently. Therefore, the proposed biosensor exhibited attractive features such as good stability in weakly acidic buffer, good operational stability, a wide linear range, and the capability of determining starch in real samples, as well as serving as an optimal bioanode for the biofuel cell. Copyright © 2013 Elsevier B.V. All rights reserved.
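The linear range and detection limit reported above follow from an ordinary calibration-line fit. A minimal sketch with hypothetical calibration points (the signal values and blank standard deviation below are invented, chosen only to land near the reported figures):

```python
import numpy as np

# Hypothetical calibration points: starch %(w/w) vs magnitude of current decrease (μA).
conc = np.array([0.005, 0.05, 0.1, 0.2, 0.4, 0.7])
signal = np.array([0.06, 0.55, 1.08, 2.11, 4.15, 7.22])

# Least-squares calibration line over the linear range.
slope, intercept = np.polyfit(conc, signal, 1)

# Detection limit as 3 * (SD of blank) / slope; the blank SD is an assumption.
sd_blank = 0.01
lod = 3.0 * sd_blank / slope
```

The 3σ/slope convention is the usual IUPAC-style detection-limit estimate; the abstract does not state which convention the authors used.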
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
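The sequential estimator that the epoch-state estimator is compared against can be illustrated by a scalar Kalman filter with process noise. This is a generic textbook filter, not the paper's epoch-state formulation:

```python
def kalman_step(x, P, z, R, Q):
    """One predict/update cycle for a scalar random-walk state.
    x, P: state estimate and its variance; z: measurement;
    R: measurement noise variance; Q: process noise variance."""
    P = P + Q                 # predict: process noise inflates covariance
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update with measurement z
    P = (1.0 - K) * P         # posterior covariance
    return x, P

# Sequentially process a short batch of noisy measurements of a ~constant state.
x, P = 0.0, 10.0
for z in [1.2, 0.8, 1.1, 0.9, 1.0]:
    x, P = kalman_step(x, P, z, R=0.25, Q=0.01)
```

The paper's point is the conditions under which a batch (epoch-state) estimator with properly chosen gains reproduces exactly this sequential recursion.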
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2012-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
New Techniques in Numerical Analysis and Their Application to Aerospace Systems.
1979-01-01
employment of the sequential gradient-restoration algorithm and the modified quasilinearization algorithm in some problems of structural analysis (Refs. 6...and a state inequality constraint. The state inequality constraint is of a special type, namely, it is linear in some or all of the com- ponents of
Right Brain Activities to Improve Analytical Thinking.
ERIC Educational Resources Information Center
Lynch, Marion E.
Schools tend to have a built-in bias toward left brain activities (tasks that are linear and sequential in nature), so the introduction of right brain activities (functions related to music, rhythm, images, color, imagination, daydreaming, dimensions) brings a balance into the classroom and helps those students who may be right brain oriented. To…
Pointillist, Cyclical, and Overlapping: Multidimensional Facets of Time in Online Learning
ERIC Educational Resources Information Center
Ihanainen, Pekka; Moravec, John W.
2011-01-01
A linear, sequential time conception based on in-person meetings and pedagogical activities is not enough for those who practice and hope to enhance contemporary education, particularly where online interactions are concerned. In this article, we propose a new model for understanding time in pedagogical contexts. Conceptual parts of the model will…
Changing Career Patterns. ERIC Digest No. 219.
ERIC Educational Resources Information Center
Brown, Bettina Lankard
The linear career path that once kept people working in the same job is not the standard career route for today's workers. Instead, many workers are now pursuing varied career paths that reflect sequential career changes. Although job mobility no longer carries the stigma once associated with job change, it can still be emotionally stressful. Job…
The Christensen Rhetoric Program.
ERIC Educational Resources Information Center
Tufte, Virginia
1969-01-01
Designed to instruct teachers as well as high school or college students in improving their writing, the Christensen Rhetoric Program is a sequential, cumulative program, published in kit form. The kit includes a script with lectures for the teacher, directions for using 200 transparencies on an overhead projector, and student workbooks which…
Sequence and Uniformity in the High School Literature Program.
ERIC Educational Resources Information Center
Sauer, Edwin H.
A good, sequential literature program for secondary school students should deal simultaneously with literary forms, with the chronological development of literature, and with broad themes of human experience. By employing the abundance of teaching aids, texts, and improved foreign translations available today, an imaginatively planned program can…
Mathemagenic Activities Program: [Reports on Cognitive/Language Development].
ERIC Educational Resources Information Center
Smock, Charles D., Ed.
This set of 13 research reports, bulletins and papers is a product of the Mathemagenic Activities Program (MAP) for early childhood education of the University of Georgia Follow Through Program. Based on Piagetian theory, the MAP provides sequentially structured sets of curriculum materials and processes that are designed to continually challenge…
Efficient Controls for Finitely Convergent Sequential Algorithms
Chen, Wei; Herman, Gabor T.
2010-01-01
Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and which in practice usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
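The cyclic control described above can be sketched for the linear feasibility problem: sweep through the half-space constraints in order and orthogonally project onto any that are violated. This is an ART-style sketch of the control pattern, not the ART3 algorithm itself:

```python
import numpy as np

def cyclic_projection(A, b, x0, sweeps=100):
    """Cyclically project onto violated half-spaces a_i . x <= b_i.
    A sketch of a sequential algorithm with cyclic control (not ART3)."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            v = a_i @ x - b_i
            if v > 1e-12:                      # constraint violated
                x -= (v / (a_i @ a_i)) * a_i   # orthogonal projection onto its boundary
    return x

# Feasible region: x <= 1, y <= 1, x + y >= 0.5.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -0.5])
x = cyclic_projection(A, b, x0=[3.0, 3.0])
```

The paper's transformation replaces this fixed cyclic order with an automatically generated, non-cyclic control that still terminates finitely.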
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO(2)(-) and 0.4 μM NO(3)(-), respectively, with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering higher degrees of automation and low chemical consumption.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture; it targets a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
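The piecewise linear substitute for the exponential in the weight computation can be sketched numerically; the segment count and input range below are assumptions for illustration, not the paper's design:

```python
import numpy as np

def exp_piecewise(t, n_segments=16, t_min=-8.0):
    """Piecewise linear approximation of exp(t) on [t_min, 0], the kind of
    function a hardware weight unit can evaluate with a small lookup table
    and one multiply-add (segment count is an assumption)."""
    t = np.clip(t, t_min, 0.0)
    knots = np.linspace(t_min, 0.0, n_segments + 1)
    return np.interp(t, knots, np.exp(knots))

# Worst-case error of the approximation over its domain.
t = np.linspace(-8.0, 0.0, 1000)
max_err = np.abs(exp_piecewise(t) - np.exp(t)).max()
```

For linear interpolation the error per segment is bounded by h²/8 times the maximum second derivative, so halving the segment width quarters the worst-case error, which is the trade-off a hardware designer tunes.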
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
A Cursive Handwriting Skills Program for LD Students To Be Used by Regular and LD Teachers.
ERIC Educational Resources Information Center
McMillan, Ida L.
Many learning disabled students attending Avocado Elementary School in Homestead, Florida, were unable to write legibly when taught with available cursive handwriting programs. To redress the problem, a complete, sequential cursive handwriting program was devised for use with learning disabled and other students. The program combined tracing and…
FORTRAN IV Program to Determine the Proper Sequence of Records in a Datafile
ERIC Educational Resources Information Center
Jones, Michael P.; Yoshida, Roland K.
1975-01-01
This FORTRAN IV program executes an essential editing procedure which determines whether a datafile contains an equal number of records (cards) per case that are also in the intended sequential order. The program, which requires very little background in computer programming, is designed primarily for the user of packaged statistical procedures.…
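The editing check the program performs — an equal number of records per case, in the intended order — can be sketched in modern code (Python here for illustration; the original is FORTRAN IV and its record layout is not shown in the abstract):

```python
def check_datafile(records, per_case):
    """Report case IDs whose records are missing, extra, or out of sequence.
    Each record is modeled as a (case_id, seq_no) pair; this layout is an
    assumption for illustration, not the original program's card format."""
    problems = []
    by_case = {}
    for case_id, seq_no in records:
        by_case.setdefault(case_id, []).append(seq_no)
    for case_id, seqs in by_case.items():
        if seqs != list(range(1, per_case + 1)):
            problems.append(case_id)
    return problems

records = [(1, 1), (1, 2), (1, 3),
           (2, 1), (2, 3), (2, 2),   # out of order
           (3, 1), (3, 2)]           # missing a card
bad_cases = check_datafile(records, per_case=3)
```

Running the check before a packaged statistical procedure prevents silently misaligned cases, which is exactly the editing purpose the abstract describes.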
A suppression hierarchy among competing motor programs drives sequential grooming in Drosophila
Seeds, Andrew M; Ravbar, Primoz; Chung, Phuong; Hampel, Stefanie; Midgley, Frank M; Mensh, Brett D; Simpson, Julie H
2014-01-01
Motor sequences are formed through the serial execution of different movements, but how nervous systems implement this process remains largely unknown. We determined the organizational principles governing how dirty fruit flies groom their bodies with sequential movements. Using genetically targeted activation of neural subsets, we drove distinct motor programs that clean individual body parts. This enabled competition experiments revealing that the motor programs are organized into a suppression hierarchy; motor programs that occur first suppress those that occur later. Cleaning one body part reduces the sensory drive to its motor program, which relieves suppression of the next movement, allowing the grooming sequence to progress down the hierarchy. A model featuring independently evoked cleaning movements activated in parallel, but selected serially through hierarchical suppression, was successful in reproducing the grooming sequence. This provides the first example of an innate motor sequence implemented by the prevailing model for generating human action sequences. DOI: http://dx.doi.org/10.7554/eLife.02951.001 PMID:25139955
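The selection rule in the model above — movements evoked in parallel but chosen serially by hierarchical suppression, with grooming reducing the winner's sensory drive — can be sketched as a toy simulation (the body parts, drive values, and linear decay are illustrative assumptions, not the authors' parameters):

```python
def select_movement(drives, hierarchy):
    """Winner-take-all by hierarchical suppression: the highest-ranked motor
    program with remaining sensory drive suppresses all later ones."""
    for program in hierarchy:
        if drives.get(program, 0.0) > 0.0:
            return program
    return None

def run_grooming(drives, hierarchy, decay=1.0):
    """Each grooming bout removes dirt, reducing that program's drive and
    thereby releasing the next program in the hierarchy."""
    sequence = []
    move = select_movement(drives, hierarchy)
    while move is not None:
        sequence.append(move)
        drives[move] -= decay
        move = select_movement(drives, hierarchy)
    return sequence

hierarchy = ["eyes", "antennae", "abdomen", "wings"]     # hypothetical ranking
drives = {"eyes": 2.0, "antennae": 1.0, "abdomen": 1.0, "wings": 1.0}
sequence = run_grooming(drives, hierarchy)
```

The dirtier part ("eyes" here) is groomed repeatedly until its drive falls, and only then does the sequence progress down the hierarchy, reproducing the qualitative behavior the abstract describes.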
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm using adaptively and automatically selected start point and end point with especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% in comparison with the traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
Computer retrieval of bibliographies using an editing program
Brethauer, G.E.; Brokaw, V.L.
1979-01-01
A simple program permits use of the text editor 'qedx', part of many computer systems, to input bibliographic entries and to retrieve specific entries that contain keywords of interest. Multiple keywords may be used sequentially to find specific entries.
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
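The weighted least squares comparison described above regresses a patient-level outcome on the regimen indicator at the cluster level. A minimal sketch using cluster means weighted by cluster size, with invented numbers (the design matrix, weights, and data are illustrative assumptions, not the trial's analysis model):

```python
import numpy as np

# Hypothetical cluster-level data: regimen indicator, cluster size,
# and cluster mean of the patient-level outcome (not trial data).
regimen = np.array([0, 0, 0, 1, 1, 1])
n_j     = np.array([20, 35, 25, 30, 22, 28])
y_bar   = np.array([4.1, 4.5, 4.0, 5.2, 5.6, 5.1])

# Weighted least squares of cluster means on intercept + regimen,
# weighting each cluster by its size.
X = np.column_stack([np.ones_like(y_bar), regimen])
W = np.diag(n_j.astype(float))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_bar)
regimen_effect = beta[1]   # between-regimen difference in weighted means
```

With only a group indicator, this WLS estimate reduces to the difference of size-weighted group means; the paper's contribution is extending such regressions to embedded dynamic treatment regimens with baseline covariates and supplying the matching sample size formulas.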
NASA Astrophysics Data System (ADS)
Long, Kai; Wang, Xuan; Gu, Xianguang
2017-09-01
The present work introduces a novel concurrent optimization formulation to meet the requirements of lightweight design and various constraints simultaneously. Nodal displacement of the macrostructure and effective thermal conductivity of the microstructure are regarded as the constraint functions, which means taking into account both the load-carrying capabilities and the thermal insulation properties. The effective properties of the porous material derived from numerical homogenization are used for macrostructural analysis. Meanwhile, displacement vectors of macrostructures from the original and adjoint load cases are used for sensitivity analysis of the microstructure. Design variables in the form of reciprocal functions of relative densities are introduced and used for linearization of the constraint function. The objective function of total mass is approximately expressed by a second-order Taylor series expansion. Then, the proposed concurrent optimization problem is solved using a sequential quadratic programming algorithm, by splitting it into a series of sub-problems in the form of quadratic programs. Finally, several numerical examples are presented to validate the effectiveness of the proposed optimization method. The effects of initial designs, prescribed limits on nodal displacement, and effective thermal conductivity on the optimized designs are also investigated. A number of optimized macrostructures and their corresponding microstructures are obtained.
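The pattern above — minimize mass subject to a displacement limit, solved by sequential quadratic programming — can be sketched on a toy two-member sizing problem. This uses SciPy's SLSQP as a stand-in SQP solver; the structural model and numbers are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy sizing problem: two members in series, cross-sectional areas as design
# variables. Minimize mass subject to a tip-displacement (compliance) limit.
lengths = np.array([1.0, 2.0])

def mass(area):
    return lengths @ area

def displacement(area):
    # Series-spring model: tip displacement ~ sum(L_i / A_i) under unit load.
    return np.sum(lengths / area)

limit = 5.0
res = minimize(mass, x0=[1.0, 1.0],
               method="SLSQP",
               bounds=[(0.05, 10.0)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda a: limit - displacement(a)}])
optimal_area = res.x
```

At the optimum the displacement constraint is active and, by the stationarity conditions, both areas equal 0.6 here; the paper's method applies the same SQP machinery to the far larger concurrent macro/micro design problem.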
Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M
2015-08-01
Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: an increase in strict parental rule setting and in adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether the parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial included 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents, randomized over four conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. The outcome measure was the amount of weekly drinking measured at age 12 to 15: a baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent intervention on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is one of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need to involve parents in youth alcohol prevention programs and to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved.
ERIC Educational Resources Information Center
King, Paul; King, Eva
This language-through-literature program is designed to be used as a native language program (language arts/reading readiness), as a second language program, or as a combined native and second language program in early childhood education. Sequentially developed over the year and within each unit, the program is subdivided into 14 units of about…
NavP: Structured and Multithreaded Distributed Parallel Programming
NASA Technical Reports Server (NTRS)
Pan, Lei; Xu, Jingling
2006-01-01
This slide presentation reviews some of the issues around distributed parallel programming. It compares and contrasts two methods of programming: Single Program Multiple Data (SPMD) and Navigational Programming (NavP). It then reviews the distributed sequential computing (DSC) method and the methodology of NavP. Case studies are presented. It also reviews the work being done to enable the NavP system.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
The program assists the inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
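A plain linear-programming problem of the kind ALPS solves can be posed in a few lines with a modern solver; this example uses SciPy's linprog and is unrelated to ALPS's APL2 implementation:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so negate the objective coefficients.
res = linprog(c=[-3.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0, None), (0, None)])
x_opt, objective = res.x, -res.fun   # optimum at the vertex (4, 0), value 12
```

Formulating the problem this way — objective vector, inequality matrix, bounds — is exactly the structured input a menu-driven front end like ALPS helps an inexperienced user assemble.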
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
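A minimal sketch of the two ideas highlighted above, collapsing exact duplicates by sorting and then linking similar records via connected components of a similarity graph, assuming a hypothetical pairwise similarity predicate; this is an illustration, not the authors' parallel implementation:

```python
# Sketch: exact-duplicate elimination by sorting, then union-find over a
# similarity graph to extract linked clusters. The `similar` predicate is a
# placeholder for a real edit-distance test.
from itertools import combinations

def dedup_and_link(records, similar):
    unique = sorted(set(records))        # sorting collapses exact duplicates
    parent = list(range(len(unique)))
    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i, j in combinations(range(len(unique)), 2):
        if similar(unique[i], unique[j]):    # add an edge, merge components
            union(i, j)
    clusters = {}
    for i, rec in enumerate(unique):
        clusters.setdefault(find(i), []).append(rec)
    return list(clusters.values())

recs = [("jon", "smith"), ("john", "smith"), ("jon", "smith"), ("ann", "lee")]
same = lambda a, b: a[1] == b[1] and abs(len(a[0]) - len(b[0])) <= 1
print(dedup_and_link(recs, same))  # two clusters: the Smiths and Ann Lee
```

The quadratic pairwise loop is the part the paper's algorithms avoid via blocking and parallelism; the cluster extraction itself is the connected-components step.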
Integration, Reflection, Interpretation: Realizing the Goals of a General Education Capstone Course
ERIC Educational Resources Information Center
Fernandez, Nancy Page
2006-01-01
For the past 23 years, students at California State Polytechnic University at Pomona have benefited from its Interdisciplinary General Education Program (IGE)--a sequential, interdisciplinary general education program that culminates in a capstone course. IGE's history and structure support a strong culture of assessment. The program, founded in…
The Implications of Learners' Goal Orientation in a Prior Learning Assessment Program
ERIC Educational Resources Information Center
McClintock, Patricia
2013-01-01
This mixed methods sequential explanatory study was designed to investigate students' persistence in an online Prior Learning Assessment (PLA) Program by researching the implications of goal orientation and other academic, institutional, and student-related factors of non-traditional students enrolled in such a program at the University of St.…
State Skill Standards: Digital Video & Broadcast Production
ERIC Educational Resources Information Center
Bullard, Susan; Tanner, Robin; Reedy, Brian; Grabavoi, Daphne; Ertman, James; Olson, Mark; Vaughan, Karen; Espinola, Ron
2007-01-01
The standards in this document are for digital video and broadcast production programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school program. Digital Video and Broadcast Production is a program that consists of the initial fundamentals and sequential courses that prepare…
Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi
2016-02-01
The three-dimensional tertiary structure of a protein at near atomic level resolution provides insight into its function and evolution. As protein structure decides its functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean squared distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at: http://sparks-lab.org yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L/sub 1/ estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Optimal Linear Responses for Markov Chains and Stochastically Perturbed Dynamical Systems
NASA Astrophysics Data System (ADS)
Antown, Fadi; Dragičević, Davor; Froyland, Gary
2018-03-01
The linear response of a dynamical system refers to changes to properties of the system when small external perturbations are applied. We consider the little-studied question of selecting an optimal perturbation so as to (i) maximise the linear response of the equilibrium distribution of the system, (ii) maximise the linear response of the expectation of a specified observable, and (iii) maximise the linear response of the rate of convergence of the system to the equilibrium distribution. We also consider the inhomogeneous, sequential, or time-dependent situation where the governing dynamics is not stationary and one wishes to select a sequence of small perturbations so as to maximise the overall linear response at some terminal time. We develop the theory for finite-state Markov chains, provide explicit solutions for some illustrative examples, and numerically apply our theory to stochastically perturbed dynamical systems, where the Markov chain is replaced by a matrix representation of an approximate annealed transfer operator for the random dynamical system.
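For item (i), the linear response of the equilibrium (stationary) distribution of a finite-state chain can be sketched with the standard fundamental-matrix formula dπ = πQZ, where Z = (I − P + 1π)⁻¹ and the perturbation direction Q has zero row sums. The code below is our own illustration of computing a single response, not the paper's procedure for selecting the optimal Q:

```python
# Linear response of the stationary distribution pi of a Markov chain P
# to a perturbation direction Q (rows summing to zero), via the
# fundamental matrix Z = inv(I - P + 1 pi). Row-vector convention.
import numpy as np

def stationary(P):
    n = len(P)
    # solve pi (I - P) = 0 together with the normalization sum(pi) = 1
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def linear_response(P, Q):
    pi = stationary(P)
    n = len(P)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return pi @ Q @ Z          # d pi / d eps at eps = 0

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[-0.1, 0.1], [0.1, -0.1]])   # a valid perturbation direction
dpi = linear_response(P, Q)

# finite-difference check of the analytic formula
eps = 1e-6
fd = (stationary(P + eps * Q) - stationary(P)) / eps
print(dpi, fd)  # the two derivatives agree
```

Since Z·1 = 1 and Q·1 = 0, the response dπ always sums to zero, as it must for a perturbed probability vector.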
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relation constraints, and solving the constrained indicator kriging system by sequential quadratic programming. A computer code written in the Matlab environment implements the developed algorithm, and the method is applied to the thickness data.
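A hedged sketch of the order-relation idea: rather than the paper's full formulation (minimizing summed kriging variances per cutoff), we simply project raw per-cutoff indicator estimates onto a nondecreasing sequence bounded in [0, 1], using an SQP-type solver (SciPy's SLSQP). The numbers are invented for illustration:

```python
# Restore CCDF order relations by constrained quadratic programming:
# find the closest nondecreasing sequence in [0, 1] to the raw estimates.
import numpy as np
from scipy.optimize import minimize

raw = np.array([0.15, 0.40, 0.35, 0.80, 1.05])   # violates order and bounds

def objective(f):
    return np.sum((f - raw) ** 2)

cons = [{"type": "ineq", "fun": lambda f, i=i: f[i + 1] - f[i]}
        for i in range(len(raw) - 1)]            # f must be nondecreasing
res = minimize(objective, np.clip(raw, 0, 1), method="SLSQP",
               bounds=[(0, 1)] * len(raw), constraints=cons)
print(np.round(res.x, 3))  # a valid, ordered CCDF estimate
```

For this quadratic objective the projection pools adjacent violators (0.40 and 0.35 both become 0.375) and clips the out-of-range value to 1.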
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
Psychosocial Development from College through Midlife: A 34-Year Sequential Study
ERIC Educational Resources Information Center
Whitbourne, Susan Krauss; Sneed, Joel R.; Sayer, Aline
2009-01-01
Two cohorts of alumni, leading-edge and trailing-edge baby boomers, first tested in their college years, were followed to ages 43 (N = 136) and 54 (N = 182) on a measure of Erikson's theory of psychosocial development. Hierarchical linear modeling was used to model the trajectory of growth for each psychosocial issue across middle adulthood. As…
Linear Algebra and Sequential Importance Sampling for Network Reliability
2011-12-01
The first test case is an Erdős-Rényi graph with 100 vertices and 150 edges. Figure 1 depicts the relative variance of the three algorithms. [Figure 1: Relative variance of various algorithms on an Erdős-Rényi graph, 100 vertices, 250 edges. Key: Solid = TOP-DOWN algorithm]
NASA Astrophysics Data System (ADS)
Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.
2012-11-01
Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, with the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model was obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model which accounts for both types of observations, we performed a sequential inversion of seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.
Oka, Megan; Whiting, Jason
2013-01-01
In Marriage and Family Therapy (MFT), as in many clinical disciplines, concern surfaces about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized control trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We will review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
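To make the sequential Monte Carlo machinery concrete, here is a minimal bootstrap particle filter on a toy AR(1) model with Gaussian observations. Its cost per step is linear in the number of particles, but it is only a baseline sketch: the paper's smoother additionally handles unknown transition densities via unbiased estimators, which this toy does not attempt.

```python
# Bootstrap particle filter on a toy linear-Gaussian model (our example):
# state  x[t] = 0.9 x[t-1] + N(0, 0.7^2),  observation  y[t] = x[t] + N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 500
sig_x, sig_y = 0.7, 1.0

# simulate a trajectory and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, sig_x)
y = x + rng.normal(0, sig_y, T)

# propagate, weight by the observation likelihood, estimate, resample
particles = rng.normal(0, 1, N)
means = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(0, sig_x, N)
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = np.sum(w * particles)                 # filtered mean estimate
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling
```

The filtered means should track the latent state more closely than the raw observations do, since the filter fuses the dynamics with the measurements.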
ERIC Educational Resources Information Center
Moody, John Charles
Assessed were the effects of linear and modified linear programed materials on the achievement of slow learners in tenth grade Biological Sciences Curriculum Study (BSCS) Special Materials biology. Two hundred and six students were randomly placed into four programed materials formats: linear programed materials, modified linear program with…
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
Charge tuning of nonresonant magnetoexciton phonon interactions in graphene.
Rémi, Sebastian; Goldberg, Bennett B; Swan, Anna K
2014-02-07
Far from resonance, the coupling of the G-band phonon to magnetoexcitons in single layer graphene displays kinks and splittings versus filling factor that are well described by Pauli blocking and unblocking of inter- and intra-Landau level transitions. We explore the nonresonant electron-phonon coupling by high-magnetic field Raman scattering while electrostatic tuning of the carrier density controls the filling factor. We show qualitative and quantitative agreement between spectra and a linearized model of electron-phonon interactions in magnetic fields. The splitting is caused by dichroism of left- and right-handed circular polarized light due to lifting of the G-band phonon degeneracy, and the piecewise linear slopes are caused by the linear occupancy of sequential Landau levels versus ν.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
NASA Astrophysics Data System (ADS)
Mineo, Hirobumi; Fujimura, Yuichi
2015-06-01
We propose an ultrafast quantum switching method for π-electron rotations, switched among four rotational patterns in the nonplanar chiral aromatic molecule (P)-2,2'-biphenol, and perform the sequential switching among the four patterns using overlapped pump-dump laser pulses. Coherent π-electron dynamics are generated by applying a linearly polarized UV laser pulse to create a pair of coherent quasi-degenerate excited states. We also plot the time-dependent π-electron ring current, and discuss ring current transfer between the two aromatic rings.
Paralex: An Environment for Parallel Programming in Distributed Systems
1991-12-07
distributed systems is comparable to assembly language programming for traditional sequential systems - the user must resort to low-level primitives to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance
ERIC Educational Resources Information Center
Eeds, Angela; Vanags, Chris; Creamer, Jonathan; Loveless, Mary; Dixon, Amanda; Sperling, Harvey; McCombs, Glenn; Robinson, Doug; Shepherd, Virginia L.
2014-01-01
The School for Science and Math at Vanderbilt (SSMV) is an innovative partnership program between a Research I private university and a large urban public school system. The SSMV was started in 2007 and currently has 101 students enrolled in the program, with a total of 60 students who have completed the 4-yr sequential program. Students attend…
Critical Television Viewing Skills: Fitting Them into the Curriculum.
ERIC Educational Resources Information Center
Carpenter, Lee
1982-01-01
The need for teaching critical television viewing skills is seen as part of a greater need for a sequential media skills program and continued support for reactive art, music, and physical education programs in the schools. Twenty-eight references are listed. (Author/LLS)
ERIC Educational Resources Information Center
Siegenthaler, David
For 37 states in the United States, Project Wild has become an officially sanctioned, distributed and funded "environmental and conservation education program." For those who are striving to implement focused, sequential, learning programs, as well as those who wish to promote harmony through a non-anthropocentric world view, Project…
Cochran Q test with Turbo BASIC.
Seuc, A H
1995-01-01
A microcomputer program written in Turbo BASIC for the sequential application of the Cochran Q test is given. A clinical application where the test is used in order to explore the structure of the agreement between observers is also presented. A program listing is available on request.
ERIC Educational Resources Information Center
LEBEDEV, P.D.
On the premises that the development of programed learning by research teams of subject and technique specialists is indisputable, and that the experienced teacher in the role of individual tutor is indispensable, the technology to support programed instruction must be advanced. Automated devices employing sequential and branching techniques for…
DI: An interactive debugging interpreter for applicative languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skedzielewski, S.K.; Yates, R.K.; Oldehoeft, R.R.
1987-03-12
The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and the copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.
Sequential and simultaneous choices: testing the diet selection and sequential choice models.
Freidin, Esteban; Aw, Justine; Kacelnik, Alex
2009-03-01
We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, among them the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. It searches the solution space by maintaining a population of potential solutions and then, using evolving operations such as recombination, mutation and selection, creates successive generations of solutions that evolve and take on the positive characteristics of their parents, thus gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses.
A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
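The penalty-function conversion described above can be sketched as follows: a quadratic exterior penalty turns a constrained problem into an unconstrained one, with the penalty weight increased across stages. We use a plain SciPy minimizer in place of a GA purely for illustration, and the example problem is our own:

```python
# Exterior quadratic penalty: min f(x) s.t. g(x) <= 0 becomes
# min f(x) + r * max(0, g(x))^2, with r increased across stages.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2   # objective
g = lambda x: x[0] + x[1] - 4                      # constraint g(x) <= 0

def penalized(x, r):
    return f(x) + r * max(0.0, g(x)) ** 2          # infeasibility is penalized

x0 = np.zeros(2)
for r in [1.0, 10.0, 100.0, 1000.0]:               # increasing penalty weight
    x0 = minimize(lambda x: penalized(x, r), x0).x
print(np.round(x0, 3))  # approaches the constrained optimum (2.5, 1.5)
```

The unconstrained minimum (3, 2) violates the constraint; as r grows, the penalized solutions converge to the constrained optimum on the boundary x + y = 4.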
NASA Astrophysics Data System (ADS)
Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric
2017-08-01
The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which it has an asymptotically equivalent convergence point of estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
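For discrete hidden Markov models, the dynamic-programming likelihood computation mentioned above is the classic forward algorithm. A minimal version (parameter values are ours), checked against brute-force enumeration over all hidden state paths:

```python
# Forward algorithm: O(T K^2) likelihood for a K-state HMM over T observations,
# versus O(K^T) brute-force summation over hidden paths.
import numpy as np
from itertools import product

def forward_likelihood(pi0, A, B, obs):
    """p(obs) for initial distribution pi0, transition A, emission B."""
    alpha = pi0 * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # one O(K^2) recursion step per symbol
    return alpha.sum()

pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 1, 0]

# brute-force check over all hidden paths (exponential; validation only)
brute = sum(
    pi0[s[0]] * B[s[0], obs[0]]
    * np.prod([A[s[t - 1], s[t]] * B[s[t], obs[t]] for t in range(1, len(obs))])
    for s in product(range(2), repeat=len(obs))
)
print(forward_likelihood(pi0, A, B, obs), brute)  # the two values agree
```

The paper's feature-space simplification targets exactly this kind of repeated likelihood evaluation, e.g. shortening the observation sequences over which the recursion runs.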
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
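One of the geometric conditions described above can be sketched for a circular no-fly zone: a straight path segment intersects the zone exactly when the minimum distance from the zone's center to the segment is less than the radius. This is a standard point-to-segment test, not the authors' code:

```python
# Does the segment p -> q pass through a circular no-fly zone?
# Clamp the projection parameter to [0, 1] so endpoints are handled correctly.
import numpy as np

def segment_hits_circle(p, q, center, radius):
    p, q, c = map(np.asarray, (p, q, center))
    d = q - p
    t = np.clip(np.dot(c - p, d) / np.dot(d, d), 0.0, 1.0)  # closest point param
    return np.linalg.norm(p + t * d - c) < radius

print(segment_hits_circle((0, 0), (10, 0), (5, 2), 3))   # True: path enters zone
print(segment_hits_circle((0, 0), (10, 0), (5, 5), 3))   # False: path clears zone
```

Such tests, together with analogous ones for polygons, supply the inequality constraints that the SQP solver (SNOPT in the paper) enforces on the waypoints.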
ERIC Educational Resources Information Center
Tomblin, Elizabeth A.; And Others
In response to a 1982 Superior Court order, a centrally developed, sequential program for improving race/human relations in the San Diego City Schools was developed and field tested or implemented during the 1982-83 school year. This systematic evaluation reports on the student program, "Conflict"; the staff program; and baseline data…
Characterization of Non-Linearized Spacecraft Relative Motion using Nonlinear Normal Modes
2016-04-20
…Results for Four Models with Different Nonlinearities… Effects of… Force Research Laboratory or the U.S. Government. 1.0 SUMMARY In this report, the effects of incorporating nonlinearities in sequential relative orbit… exactly. Huxel and Bishop [1] discussed the effects of using both inertial range measurements from tracking stations and relative range measurements
Optimal Achievable Encoding for Brain Machine Interface
2017-12-22
…dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that… networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy…
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditionally linear dynamic systems. It recursively generates samples of some indicator variables based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Because of this marginalization, the complexity of the mixture Kalman filter is quite high when the dimension of the indicator sampling space is large. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to exploit the multilevel or hierarchical structure of the space from which the indicator variables take values: we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, demonstrate the performance of the proposed multilevel mixture Kalman filter.
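The hierarchical draw at the heart of the multilevel scheme can be illustrated with a toy example. The sketch below is not the authors' implementation: the weights are random and the 4x4 grouping merely stands in for the quadrant structure of a 16-QAM constellation. It samples an indicator first at the group level and then within the chosen group, which is distributed identically to a direct draw from the full space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-symbol indicator space (e.g. 16-QAM), partitioned into
# 4 high-level groups ("quadrants") of 4 symbols each.
weights = rng.random(16)
probs = weights / weights.sum()      # target sampling distribution
groups = probs.reshape(4, 4)         # group g holds symbols 4g .. 4g+3

def multilevel_sample(rng):
    # Level 1: draw a group with probability equal to its total mass.
    group_mass = groups.sum(axis=1)
    g = rng.choice(4, p=group_mass)
    # Level 2: draw a symbol inside the chosen group.
    within = groups[g] / groups[g].sum()
    s = rng.choice(4, p=within)
    return 4 * g + s
```

The two-stage draw matches a direct categorical draw because P(group) x P(symbol | group) = probs[symbol]; the delayed-sample variant would defer the level-2 decision until later observations arrive.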
Amorim, C M P G; Albert-García, J R; Montenegro, M C B S; Araújo, A N; Calatayud, J Martínez
2007-01-17
The present paper deals with the chemiluminescent determination of the herbicide Karbutilate on the basis of its prior photodegradation, using a low-pressure Hg lamp as the UV source in a continuous-flow multicommutation assembly (a set of solenoid valves). The pesticide solution was segmented by a solenoid valve and sequentially alternated with segments of 0.001 mol l(-1) NaOH solution, the suitable medium for the formation of photo-fragments; it then passed through the photo-reactor and was led to the flow-cell after being divided into small segments sequentially alternated with the oxidizing system, 2 x 10(-5) mol l(-1) potassium permanganate in 0.2% pyrophosphoric acid. The studied calibration range, from 0.1 microg l(-1) to 65 mg l(-1), showed linear behaviour over the range 20 microg l(-1) to 20 mg l(-1), fitting the linear equation I=(1180+/-30)C+(15+/-5) with a correlation coefficient of 0.9998. The limit of detection was 10 microg l(-1) and the sample throughput 17 h(-1). After testing the influence of a large series of potentially interfering species, the method was applied to water and human urine samples.
Sequential injection redox or acid-base titration for determination of ascorbic acid or acetic acid.
Lenghor, Narong; Jakmunee, Jaroon; Vilen, Michael; Sara, Rolf; Christian, Gary D; Grudpan, Kate
2002-12-06
Two sequential injection titration systems with spectrophotometric detection have been developed. The first system, for determination of ascorbic acid, was based on the redox reaction between ascorbic acid and permanganate in an acidic medium, which led to a decrease in the color intensity of permanganate, monitored at 525 nm. Peak area depended linearly on ascorbic acid concentration up to 1200 mg l(-1). The relative standard deviation for 11 replicate determinations of 400 mg l(-1) ascorbic acid was 2.9%. The second system, for acetic acid determination, was based on acid-base titration of acetic acid with sodium hydroxide using phenolphthalein as an indicator. The decrease in color intensity of the indicator was proportional to the acid content. A linear calibration graph in the range of 2-8% w v(-1) acetic acid with a relative standard deviation of 4.8% (5.0% w v(-1) acetic acid, n=11) was obtained. Sample throughputs of 60 h(-1) were achieved for both systems. The systems were successfully applied to the assay of ascorbic acid in vitamin C tablets and of acetic acid in vinegars, respectively.
The Effects of a Dyslexia-Centred Teaching Programme.
ERIC Educational Resources Information Center
Hornsby, Beve; Miles, T. R.
1980-01-01
Results are presented for 107 dyslexic children who received instruction through a "dyslexia-centered" (structured, sequential, cumulative, and thorough) program at a hospital clinic, a unit attached to a university department, or a private center. This teaching program proved very successful at all three settings. (Author/SJL)
Social Studies Course of Study.
ERIC Educational Resources Information Center
Miller, John E.; Murphy, Terrence A.
This K-12 sequential course of study is the result of one school district's efforts to improve continuity in the social studies curriculum. Following an introduction and statement of philosophy, the program is organized around four basic educational areas--knowledge, application, valuing, and participation. Specific program goals include promoting…
Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.
Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai
2017-08-02
Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluid mosaic membrane of living cells. DNA toeholds of different lengths could tunably alter the release kinetics of cells, with release within minutes using a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, including multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for the self-organization of biological systems.
High-contrast imaging with an arbitrary aperture: active correction of aperture discontinuities
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; Norman, Colin; Soummer, Rémi; Perrin, Marshall; N'Diaye, Mamadou; Choquet, Elodie
2013-09-01
We present a new method to achieve high-contrast images using segmented and/or on-axis telescopes. Our approach relies on two sequential Deformable Mirrors to compensate for the large amplitude excursions in the telescope aperture due to secondary support structures and/or segment gaps. In this configuration the parameter landscape of Deformable Mirror surfaces that yield high-contrast Point Spread Functions is not linear, and non-linear methods are needed to find the true minimum in the optimization topology. We solve the highly non-linear Monge-Ampere equation, the fundamental equation describing the physics of phase-induced amplitude modulation. We determine the optimal configuration for our two sequential Deformable Mirror system and show that high-throughput and high-contrast solutions can be achieved using realistic surface deformations that are accessible with existing technologies. We name this process Active Compensation of Aperture Discontinuities (ACAD). We show that for geometries similar to JWST, ACAD can attain at least 10^-7 in contrast, and an order of magnitude better for future Extremely Large Telescopes, even when the pupil features a missing segment. We show that the converging non-linear mappings resulting from our Deformable Mirror shapes actually damp near-field diffraction artifacts in the vicinity of the discontinuities; thus ACAD lowers the chromatic ringing due to diffraction by segment gaps and struts while not amplifying the diffraction at the aperture edges beyond the Fresnel regime. We illustrate the broadband properties of ACAD for the pupil configuration corresponding to the Astrophysics Focused Telescope Assets. Since details about these telescopes are not yet available to the broader astronomical community, our test case is based on a geometry mimicking the actual one, to the best of our knowledge.
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
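The two key ideas, up-front elimination of identical records and linkage via connected components of a similarity graph, can be sketched as follows. The toy records, hash-based deduplication, and crude similarity test below are illustrative stand-ins for the paper's radix sort and edit-distance computations on large health datasets.

```python
from collections import defaultdict

# Toy records: (first name, last name, birth year).
records = [
    ("ann", "lee", 1980),
    ("ann", "lee", 1980),    # exact duplicate, removed before linkage
    ("anne", "lee", 1980),   # near-duplicate, linked by a similarity edge
    ("bob", "roy", 1975),
]

# Step 1: eliminate identical records up front (the paper uses radix sort
# on selected attributes; hash-based grouping has the same effect here).
seen, unique = set(), []
for r in records:
    if r not in seen:
        seen.add(r)
        unique.append(r)

# Step 2: link similar records in a graph and take connected components;
# union-find with path compression gives near-linear-time components.
parent = list(range(len(unique)))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

def similar(r1, r2):
    # Crude stand-in for an edit-distance test on the name fields.
    return r1[1] == r2[1] and r1[2] == r2[2]

for i in range(len(unique)):
    for j in range(i + 1, len(unique)):
        if similar(unique[i], unique[j]):
            union(i, j)

clusters = defaultdict(list)
for i, r in enumerate(unique):
    clusters[find(i)].append(r)
```

Each cluster then represents one linked entity; the all-pairs similarity loop here is quadratic and would be replaced by blocking in a scalable implementation.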
Pamnani, Shitaldas J.; Nyitray, Alan G.; Abrahamsen, Martha; Rollison, Dana E.; Villa, Luisa L.; Lazcano-Ponce, Eduardo; Huang, Yangxin; Borenstein, Amy; Giuliano, Anna R.
2016-01-01
Background. The purpose of this study was to assess the risk of sequential acquisition of anal human papillomavirus (HPV) infection following a type-specific genital HPV infection for the 9-valent vaccine HPV types and investigate factors associated with sequential infection among men who have sex with women (MSW). Methods. Genital and anal specimens were available for 1348 MSW participants, and HPV genotypes were detected using the Roche Linear Array assay. Sequential risk of anal HPV infection was assessed using hazard ratios (HRs) among men with prior genital infection, compared with men with no prior genital infection, in individual HPV type and grouped HPV analyses. Results. In individual analyses, men with prior HPV 16 genital infections had a significantly higher risk of subsequent anal HPV 16 infections (HR, 4.63; 95% confidence interval [CI], 1.41–15.23). In grouped analyses, a significantly higher risk of sequential type-specific anal HPV infections was observed for any of the 9 types (adjusted HR, 2.80; 95% CI, 1.32–5.99), high-risk types (adjusted HR, 2.65; 95% CI, 1.26–5.55), and low-risk types (adjusted HR, 5.89; 95% CI, 1.29–27.01). Conclusions. MSW with prior genital HPV infections had a higher risk of a subsequent type-specific anal infection. The higher risk was not explained by sexual intercourse with female partners. Autoinoculation is a possible mechanism for the observed association. PMID:27489298
An Instructional Note on Linear Programming--A Pedagogically Sound Approach.
ERIC Educational Resources Information Center
Mitchell, Richard
1998-01-01
Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)
Hardway, D; Weatherly, K S; Bonheur, B
1993-01-01
Diabetes education programs remain underdeveloped in the pediatric setting, resulting in increased consumer complaints and financial liability for hospitals. The Diabetes Education on Wheels program was designed to provide comprehensive, outcome-oriented education for patients with juvenile diabetes. The primary goal of the program was to enhance patients' and family members' ability to achieve self-care in the home setting. The program facilitated sequential learning, improved consumer satisfaction, and promoted financial viability for the hospital.
ERIC Educational Resources Information Center
Smock, Charles D., Ed.; And Others
This set of four research reports is a product of the Mathemagenic Activities Program (MAP) for early childhood education of the University of Georgia Follow Through Program. Based on Piagetian theory, the MAP provides sequentially structured sets of curriculum materials and processes that are designed to continually challenge children in…
ERIC Educational Resources Information Center
La Marca, Marilyn Tierney
A study was conducted to determine the effects of the "Cherry Hill Study Skills Program" on eighth grade students' reading comprehension and study skills. The "Cherry Hill Study Skills Program" is a process oriented course dealing with the sequential development of nine specific skills deemed essential to the retrieval and retention of information…
Briggs, Marilyn; Safaii, SeAnne; Beall, Deborah Lane
2003-04-01
It is the position of the American Dietetic Association (ADA), the Society for Nutrition Education (SNE), and the American School Food Service Association (ASFSA) that comprehensive nutrition services must be provided to all of the nation's preschool through grade twelve students. These nutrition services shall be integrated with a coordinated, comprehensive school health program and implemented through a school nutrition policy. The policy should link comprehensive, sequential nutrition education; access to and promotion of child nutrition programs providing nutritious meals and snacks in the school environment; and family, community, and health services' partnerships supporting positive health outcomes for all children. Childhood obesity has reached epidemic proportions and is directly attributed to physical inactivity and diet. Schools can play a key role in reversing this trend through coordinated nutrition services that promote policies linking comprehensive, sequential nutrition education programs, access to and marketing of child nutrition programs, a school environment that models healthy food choices, and community partnerships. This position paper provides information and resources for nutrition professionals to use in developing and supporting comprehensive school health programs. J Am Diet Assoc. 2003;103:505-514.
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and the image lines, from which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle to let the model candidates compete and to select the optimal model, considering the balanced trade-off between model closeness and model complexity. Our preliminary results on the Vaihingen data provided by ISPRS WG III/4 demonstrate that the image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.
Abelianization and sequential confinement in 2 + 1 dimensions
NASA Astrophysics Data System (ADS)
Benvenuti, Sergio; Giacomelli, Simone
2017-10-01
We consider the Lagrangian description of Argyres-Douglas theories of type A_{2N-1}, which is an SU(N) gauge theory with an adjoint and one fundamental flavor. An appropriate reformulation allows us to map the moduli space of vacua across the duality, and to dimensionally reduce. Going down to three dimensions, we find that the adjoint SQCD "abelianizes": in the infrared it is equivalent to an N=4 linear quiver theory. Moreover, we study the mirror dual: using a monopole duality to "sequentially confine" quiver tails with balanced nodes, we show that the mirror RG flow lands on N=4 SQED with N flavors. These results make the supersymmetry enhancement explicit and provide a physical derivation of previous proposals for the three-dimensional mirror of AD theories.
Production of single heavy charged leptons at a linear collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Pree, Erin; Sher, Marc; Turan, Ismail
2008-05-01
A sequential fourth generation of quarks and leptons is allowed by precision electroweak constraints if the mass splitting between the heavy quarks is between 50 and 80 GeV. Although heavy quarks can be easily detected at the LHC, it is very difficult to detect a sequential heavy charged lepton, L, due to large backgrounds. Should the L mass be above 250 GeV, it cannot be pair-produced at a 500 GeV ILC. We calculate the cross section for the one-loop process e{sup +}e{sup -}{yields}L{tau}. Although the cross section is small, it may be detectable. We also consider contributions from the two-Higgs doublet model and the Randall-Sundrum model, in which case the cross section can be substantially higher.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques are widely used to improve the predictability of hydrologic models. Among them, sequential Monte Carlo (SMC) filters, known as "particle filters", can handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both the state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated by implementing the storage function model on a middle-sized Japanese catchment. We also compare the performance of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
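The core sampling-importance-resampling (SIR) loop underlying such particle filters can be sketched for a toy one-dimensional model. The model, noise levels, and particle count below are illustrative choices; the paper applies SMC to a storage function rainfall-runoff model with dual state-parameter updating, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space model:
#   x_t = 0.9 x_{t-1} + process noise (std 1.0)
#   y_t = x_t + observation noise (std 0.5)
T, N = 50, 500
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, 1.0)
    y[t] = x_true[t] + rng.normal(0.0, 0.5)

# Bootstrap (SIR) particle filter.
particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0.0, 1.0, N)  # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)     # weight by likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                            # resample
    particles = particles[idx]
    estimates[t] = particles.mean()                        # posterior mean
```

ASIR and RPF differ in how the proposal is chosen and how particle diversity is maintained after resampling; the propagate-weight-resample skeleton is the same.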
PHAM, GIANG
2018-01-01
This study examines the strength and direction of lexical-grammatical associations within and between first and second languages (L1 and L2) in a longitudinal sample of sequential bilinguals. Thirty-three children who spoke Vietnamese (L1) and English (L2) completed picture-naming and story-telling tasks in each language at four yearly intervals. Hierarchical linear modeling across Years 1–4 revealed bidirectional within-language associations and a unidirectional cross-language association from the L1 to L2. Results suggest a conditional relationship between languages in which the L1 supports L2 growth, but not vice versa. Findings contribute to defining pathways for L1 and L2 learning across domains and languages. PMID:29670455
DOT National Transportation Integrated Search
1995-10-01
This investigation was completed as part of the ITS-IDEA program, one of three IDEA programs managed by the Transportation Research Board (TRB) to foster innovations in surface transportation. It focuses on products and results for the develop…
Systematic Approach to Food Safety Education on the Farm
ERIC Educational Resources Information Center
Shaw, Angela; Strohbehn, Catherine; Naeve, Linda; Domoto, Paul; Wilson, Lester
2015-01-01
Food safety education from farm to end user is essential in the mitigation of food safety concerns associated with fresh produce. Iowa State University developed a multi-disciplinary three-level sequential program ("Know," "Show," "Go") to provide a holistic approach to food safety education. This program provides…
The Listening and Reading Comprehension (LARC) Program....Experiential Based Sequential Training.
ERIC Educational Resources Information Center
Blumenstyk, Holly; And Others
The LARC (Listening and Reading Comprehension) Program, an experiential based story grammar approach to listening and reading comprehension is described, and a pilot study of its effectiveness with communication handicapped children is reviewed. The LARC framework translates children's own recent experiences into sequenced story episodes which are…
Making Health Communication Programs Work. A Planner's Guide.
ERIC Educational Resources Information Center
Arkin, Elaine Bratic
This manual, designed to assist professionals in health and health-related agencies, offers guidance for planning a health communication program about cancer based on social marketing and other principles as well as the experiences of National Cancer Institute staff and other practitioners. The six chapters are arranged by sequentially ordered…
Pre-testing Orientation for the Disadvantaged.
ERIC Educational Resources Information Center
Mihalka, Joseph A.
A pre-testing orientation was incorporated into the Work Incentives Program, a pre-vocational program for disadvantaged youth. Test-taking skills were taught in seven and one half hours of instruction and a variety of methods were used to provide a sequential experience with distributed learning, positive reinforcement, and immediate feedback of…
Leadership Self-Efficacy in University Co-Curricular Programs
ERIC Educational Resources Information Center
Fields, Andrew R.
2010-01-01
University educators are concerned with student leadership development in order to generate much-needed leaders in every aspect of society. This sequential mixed methods study found that students who participate in a university co-curricular outdoor education leadership training program, combined with the experience of leading a wilderness…
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
Mahoney, J. Matthew; Titiz, Ali S.; Hernan, Amanda E.; Scott, Rod C.
2016-01-01
Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation. PMID:26866597
Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo
2010-01-01
Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables which impact whether a patient of a public healthcare center will revisit it or not are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model is dependent on the dataset. 3) Based on average of classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
Toward intelligent information system
NASA Astrophysics Data System (ADS)
Onodera, Natsuo
"Hypertext" denotes a novel computer-assisted tool for the storage and retrieval of text information based on human association. The structure of knowledge in our idea processing is generally complicated and networked, but traditional paper documents express it only in essentially linear and sequential forms. Recent advances in workstation technology, however, have allowed us to easily process electronic documents containing non-linear structure such as references or hierarchies. This paper describes the concept, history and basic organization of hypertext, and outlines the features of the main existing hypertext systems. In particular, use of a hypertext database is illustrated by the example of Intermedia, developed by Brown University.
Simultaneous determination of rutin and ascorbic acid in a sequential injection lab-at-valve system.
Al-Shwaiyat, Mohammed Khair E A; Miekh, Yuliia V; Denisenko, Tatyana A; Vishnikin, Andriy B; Andruch, Vasil; Bazel, Yaroslav R
2018-02-05
A green, simple, accurate and highly sensitive sequential injection lab-at-valve procedure has been developed for the simultaneous determination of ascorbic acid (Asc) and rutin using the 18-molybdo-2-phosphate Wells-Dawson heteropoly anion (18-MPA). The method is based on the dependence of the reaction rate between 18-MPA and reducing agents on the solution pH. Only Asc is capable of interacting with 18-MPA at pH 4.7, while at pH 7.4 the reaction proceeds with both Asc and rutin simultaneously. To improve the precision and sensitivity of the analysis, minimize reagent consumption and remove the Schlieren effect, the manifold for the sequential injection analysis was supplemented with an external reaction chamber, and the reaction mixture was segmented. Reduction of 18-MPA by reducing agents forms one- and two-electron heteropoly blues, with the fraction of one-electron heteropoly blue increasing at low reducer concentrations. Measuring the absorbance at a wavelength corresponding to the isosbestic point yields strictly linear calibration graphs. The calibration curves were linear in the concentration ranges of 0.3-24 mg L(-1) and 0.2-14 mg L(-1), with detection limits of 0.13 mg L(-1) and 0.09 mg L(-1) for rutin and Asc, respectively. The determination of rutin was possible in the presence of up to a 20-fold molar excess of Asc. The method was applied to the determination of Asc and rutin in ascorutin tablets with acceptable accuracy and precision (1-2%). Copyright © 2017 Elsevier B.V. All rights reserved.
Reading Different Orthographies: An fMRI Study of Phrase Reading in Hindi-English Bilinguals
ERIC Educational Resources Information Center
Kumar, Uttam; Das, Tanusree; Bapi, Raju S.; Padakannaya, Prakash; Joshi, R. Malatesha; Singh, Nandini C.
2010-01-01
The aim of the present study was to use functional imaging to compare cortical activations involved in reading Hindi and English that differ markedly in terms of their orthographies by a group of late bilinguals, more fluent in Hindi (L1) than English (L2). English is alphabetic and linear, in that vowels and consonants are arranged sequentially.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top-performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
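A single test-problem run of the kind used in such a benchmark can be sketched with SciPy's `linprog`, whose HiGHS backend here stands in for the surveyed solvers (CLP, GLPK, lp_solve, MINOS each have their own APIs). The toy LP is illustrative, not one of the study's test problems.

```python
from scipy.optimize import linprog

# Toy LP: maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 0.0]]
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
# Optimal value 8 at (x, y) = (0, 4): with y <= 4 - x, the objective
# x + 2y = 8 - x is maximized at x = 0.
```

A benchmark of the study's kind would time this call over a problem suite per solver and compare wall-clock times and solution status against CPLEX.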
What we know now: the Evanston Illinois field lineups.
Steblay, Nancy K
2011-02-01
A Freedom of Information Act lawsuit secured 100 eyewitness identification reports from Evanston, Illinois, one of three cities of the Illinois Pilot Program. The files provide empirical evidence regarding three methodological aspects of the Program's comparison of non-blind simultaneous to double-blind sequential lineups. (1) A-priori differences existed between lineup conditions. For example, the simultaneous non-blind lineup condition was more likely to involve witnesses who had already identified the suspect in a previous lineup or who knew the offender (non-stranger identifications), and this condition also entailed shorter delays between event and lineup. (2) Verbatim eyewitness comments were recorded more often in double-blind sequential than in non-blind simultaneous lineup reports (83% vs. 39%). (3) Effective lineup structure was used equally in the two lineup conditions.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
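The node-by-node structure that makes these algorithms parallelization candidates can be sketched in a bare-bones 1-D sequential Gaussian simulation, assuming simple kriging with an exponential covariance along a random path; grid size, range and seed are invented, and this is nowhere near the modified-GSLIB implementation the paper benchmarks.

```python
# Bare-bones 1-D sequential Gaussian simulation: simple kriging with an
# exponential covariance along a random path. A sketch of the sequential
# structure only -- grid size, range and seed are invented.
import numpy as np

def sgs_1d(n=50, corr_range=10.0, seed=0):
    rng = np.random.default_rng(seed)
    cov = lambda h: np.exp(-np.abs(h) / corr_range)  # unit-sill exponential model
    values = np.full(n, np.nan)
    for idx in rng.permutation(n):                   # random visiting order
        known = np.flatnonzero(~np.isnan(values))
        if known.size == 0:
            mean, var = 0.0, 1.0                     # unconditional first node
        else:
            C = cov(known[:, None] - known[None, :]) # data-to-data covariances
            c0 = cov(known - idx)                    # data-to-node covariances
            w = np.linalg.solve(C, c0)               # simple-kriging weights
            mean = w @ values[known]
            var = max(1.0 - w @ c0, 1e-12)           # kriging variance
        values[idx] = rng.normal(mean, np.sqrt(var)) # draw, then condition on it
    return values

sim = sgs_1d()
```

Each node's draw conditions every later node, which is exactly the data dependency a parallel version has to manage.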
A system for the input and storage of data in the BESM-6 digital computer
NASA Technical Reports Server (NTRS)
Schmidt, K.; Blenke, L.
1975-01-01
Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
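Of the solution methods listed, implicit enumeration for binary programs is the easiest to sketch compactly. The bounding and feasibility rules below are a generic textbook version, not ALPS's actual code; it assumes non-negative constraint coefficients (so zeroing undecided variables is the laxest completion), and the knapsack-style data are invented.

```python
# Generic implicit-enumeration sketch for a 0-1 program (the method the
# abstract names for binary problems). Assumes non-negative constraint
# coefficients; the example data are invented.
def implicit_enum(c, A, b):
    """Maximize c.x subject to A x <= b, x in {0,1}^n, with bound-based pruning."""
    n = len(c)
    best_val, best_x = float("-inf"), None

    def bound(i, val):
        # optimistic bound: add every remaining positive objective coefficient
        return val + sum(cj for cj in c[i:] if cj > 0)

    def feasible(prefix):
        x = prefix + [0] * (n - len(prefix))   # undecided variables set to 0
        return all(sum(a * xi for a, xi in zip(row, x)) <= bi
                   for row, bi in zip(A, b))

    def dfs(i, prefix, val):
        nonlocal best_val, best_x
        if bound(i, val) <= best_val:
            return                              # prune: cannot beat incumbent
        if i == n:
            best_val, best_x = val, prefix[:]
            return
        for xi in (1, 0):                       # branch on variable i
            prefix.append(xi)
            if feasible(prefix):
                dfs(i + 1, prefix, val + c[i] * xi)
            prefix.pop()

    dfs(0, [], 0)
    return best_val, best_x

val, choice = implicit_enum([5, 4, 3], [[2, 3, 1]], [3])
```

For this data the search settles on items 1 and 3, pruning the rest of the tree via the optimistic bound.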
Child Welfare Strategy in the Coming Years.
ERIC Educational Resources Information Center
Kadushin, Alfred; And Others
This collection of policy papers by a dozen national experts in subject areas related to child welfare is designed to assist public and voluntary agency program directors in their efforts to update current programs or to design new ones. Sequentially the chapters: (1) set a framework for the following papers, (2) examine the provision of foster…
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support tools available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.
A Model for the Development of an Upper-Division Marketing Certificate Program: Professional Sales.
ERIC Educational Resources Information Center
Grahn, Joyce L.
The sequential components of a model for the development of an upper-division marketing certificate program in professional sales are described in this report as they were implemented at the University of Minnesota's General College during Fall 1980. After introductory material examining the responsibilities of the professional sales…
Foreign Languages Course of Study, Junior & Senior High Schools. Draft.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL. Div. of Elementary and Secondary Instruction.
The study guide outlining the modern foreign language courses for English speakers in Dade County's secondary schools establishes a uniform sequential program for instruction in French, German, Hebrew, Italian, and Spanish. Program expectancies are described for each level and type of course, to serve as a basis for planning appropriate…
ERIC Educational Resources Information Center
Cheyney, Arnold B.; Capone, Donald L.
This teaching resource is aimed at helping students develop the skills necessary to locate places on the earth. Designed as a collection of map skill exercises rather than a sequential program of study, this program expects that students have access to and some knowledge of how to use globes, maps, atlases, and encyclopedias. The volume contains 6…
Landscape analysis software tools
Don Vandendriesche
2008-01-01
Recently, several new computer programs have been developed to assist in landscape analysis. The "Sequential Processing Routine for Arraying Yields" (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...
RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.
ERIC Educational Resources Information Center
Kershner, Keith M.
Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…
Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.
Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C
2014-03-01
In recent years the number of active controllable joints in electrically powered hand prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree of freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous and proportional myoelectric control of wrist movements with two DoFs. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a non-parametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance to KRR at much lower computational cost. ME in particular, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
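The linear-ridge versus kernel-ridge contrast can be sketched with NumPy alone; the synthetic target, kernel width and regularization below are invented, and the EMG setting itself is not reproduced.

```python
# Numpy-only comparison of linear ridge regression and kernel ridge regression
# (RBF kernel) on synthetic data; target, kernel width and regularization are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

lam = 1e-2

# linear ridge: w = (X^T X + lam I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
mse_lin = float(np.mean((y - X @ w) ** 2))

# kernel ridge: alpha = (K + lam I)^-1 y, prediction K alpha
def rbf(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
mse_krr = float(np.mean((y - K @ alpha) ** 2))
```

The kernel model fits the nonlinear target far more closely, mirroring the paper's finding that KRR outperforms plain linear regression unless the features are first linearized.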
Idilman, Ilkay S; Keskin, Onur; Elhan, Atilla Halil; Idilman, Ramazan; Karcaaltincaba, Musturay
2014-05-01
To determine the utility of sequential MRI-estimated proton density fat fraction (MRI-PDFF) for quantification of the longitudinal changes in liver fat content in individuals with nonalcoholic fatty liver disease (NAFLD). A total of 18 consecutive individuals (M/F: 10/8, mean age: 47.7±9.8 years) diagnosed with NAFLD, who underwent sequential PDFF calculations for the quantification of hepatic steatosis at two different time points, were included in the study. All patients underwent T1-independent volumetric multi-echo gradient-echo imaging with T2* correction and spectral fat modeling. A close correlation for quantification of hepatic steatosis between the initial MRI-PDFF and liver biopsy was observed (rs=0.758, p<0.001). The median interval between two sequential MRI-PDFF measurements was 184 days. From baseline to the end of the follow-up period, serum GGT level and homeostasis model assessment score were significantly improved (p=0.015, p=0.006, respectively), whereas BMI, serum AST, and ALT levels were slightly decreased. MRI-PDFFs were significantly improved (p=0.004). A good correlation between two sequential MRI-PDFF calculations was observed (rs=0.714, p=0.001). With linear regression analyses, only delta serum ALT levels had a significant effect on delta MRI-PDFF calculations (r2=38.6%, p=0.006). At least 5.9% improvement in MRI-PDFF is needed to achieve a normalized abnormal ALT level. The improvement of MRI-PDFF score was associated with the improvement of biochemical parameters in patients who had improvement in delta MRI-PDFF (p<0.05). MRI-PDFF can be used for the quantification of the longitudinal changes of hepatic steatosis. The changes in serum ALT levels significantly reflected changes in MRI-PDFF in patients with NAFLD.
Pi, Fengmei; Zhao, Zhengyi; Chelikani, Venkata; Yoder, Kristine; Kvaratskhelia, Mamuka
2016-01-01
The intracellular parasitic nature of viruses and the emergence of antiviral drug resistance necessitate the development of new, potent antiviral drugs. Recently, a method for developing potent inhibitory drugs by targeting biological machines with high stoichiometry and a sequential-action mechanism was described. Inspired by this finding, we review the development of antiviral drugs targeting viral DNA-packaging motors. Inhibiting a multisubunit target with sequential actions resembles breaking one bulb in a series of Christmas lights, which turns off the entire string. Indeed, studies on viral DNA packaging might lead to the development of new antiviral drugs. The recent elucidation of the mechanism of the viral double-stranded DNA (dsDNA)-packaging motor, with its sequential one-way revolving motion, will promote the development of potent antiviral drugs with high specificity and efficiency. Traditionally, biomotors have been classified into two categories: linear and rotation motors. Recently, a third type of biomotor was discovered that uses a revolving mechanism without rotation; it includes the viral DNA-packaging motor as well as the bacterial DNA translocases. By analogy, rotation resembles the Earth's rotation on its own axis, while revolving resembles the Earth's revolving around the Sun (see animations at http://rnanano.osu.edu/movie.html). Herein, we review the structures of viral dsDNA-packaging motors, the stoichiometries of motor components, and the motion mechanisms of the motors. All viral dsDNA-packaging motors, including those of dsDNA/dsRNA bacteriophages, adenoviruses, poxviruses, herpesviruses, mimiviruses, megaviruses, pandoraviruses, and pithoviruses, contain a high-stoichiometry machine composed of multiple components that work cooperatively and sequentially. Thus, the motor is an ideal target for potent drug development based on the power function of the stoichiometries of target complexes that work sequentially. PMID:27356896
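The "series of Christmas lights" argument can be put in numbers with a simple independence model; the drugged fraction p and the subunit counts below are invented, not taken from the review.

```python
# Independence model behind the "series of Christmas lights" analogy: if all
# Z copies of a motor subunit must stay functional and each is independently
# inactivated with probability p, the motor is blocked with probability
# 1 - (1 - p)^Z. Both p and the subunit counts are hypothetical.
def fraction_inhibited(p, z):
    return 1 - (1 - p) ** z

low = fraction_inhibited(0.20, 1)    # single-subunit target
high = fraction_inhibited(0.20, 12)  # hypothetical 12-subunit sequential motor
```

Under this model a modest per-subunit hit rate (20%) blocks over 90% of high-stoichiometry motors, which is the power-function advantage the abstract describes.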
Pamnani, Shitaldas J; Nyitray, Alan G; Abrahamsen, Martha; Rollison, Dana E; Villa, Luisa L; Lazcano-Ponce, Eduardo; Huang, Yangxin; Borenstein, Amy; Giuliano, Anna R
2016-10-15
The purpose of this study was to assess the risk of sequential acquisition of anal human papillomavirus (HPV) infection following a type-specific genital HPV infection for the 9-valent vaccine HPV types, and to investigate factors associated with sequential infection among men who have sex with women (MSW). Genital and anal specimens were available for 1348 MSW participants, and HPV genotypes were detected using the Roche Linear Array assay. Sequential risk of anal HPV infection was assessed using hazard ratios (HRs) among men with prior genital infection, compared with men with no prior genital infection, in individual HPV type and grouped HPV analyses. In individual analyses, men with prior HPV 16 genital infections had a significantly higher risk of subsequent anal HPV 16 infections (HR, 4.63; 95% confidence interval [CI], 1.41-15.23). In grouped analyses, a significantly higher risk of sequential type-specific anal HPV infections was observed for any of the 9 types (adjusted HR, 2.80; 95% CI, 1.32-5.99), high-risk types (adjusted HR, 2.65; 95% CI, 1.26-5.55), and low-risk types (adjusted HR, 5.89; 95% CI, 1.29-27.01). MSW with prior genital HPV infections had a higher risk of a subsequent type-specific anal infection. The higher risk was not explained by sexual intercourse with female partners. Autoinoculation is a possible mechanism for the observed association.
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Sequential CFAR detectors using a dead-zone limiter
NASA Astrophysics Data System (ADS)
Tantaratana, Sawasd
1990-09-01
The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
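The dead-zone-limiter statistic itself is only a few lines; the sketch below uses constant decision boundaries whose values, like the signal parameters, are invented rather than the paper's optimized design.

```python
# Sketch of the dead-zone-limiter sequential test: each observation maps to
# -1, 0 or +1 around the constant c, the statistic is the running sum, and
# constant decision boundaries are used. Boundary values and the signal
# parameters are invented.
import random

def dead_zone(x, c):
    return -1 if x < -c else (1 if x > c else 0)

def sequential_test(samples, c=0.5, upper=8, lower=-8, max_n=200):
    """Return ('H1', n) if the sum crosses the upper boundary, ('H0', n) on
    the lower one, or ('none', n) if truncated after max_n observations."""
    s, n = 0, 0
    for n, x in enumerate(samples, start=1):
        s += dead_zone(x, c)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
        if n >= max_n:
            break
    return "none", n

random.seed(1)
# signal-present data: a positive mean drives the sum toward the upper boundary
decision, n_used = sequential_test(random.gauss(1.5, 1.0) for _ in range(200))
```

Stopping as soon as a boundary is crossed is what yields the reduced average sample number relative to a fixed-sample-size detector.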
Lenehan, Claire E.; Barnett, Neil W.; Lewis, Simon W.
2002-01-01
LabVIEW®-based software for the automation of a sequential injection analysis instrument for the determination of morphine is presented. Detection was based on its chemiluminescence reaction with acidic potassium permanganate in the presence of sodium polyphosphate. The calibration function approximated linearity (range 5 × 10^-10 to 5 × 10^-6 M) with a line of best fit of y = 1.05x + 8.9164 (R^2 = 0.9959), where y is the log10 signal (mV) and x is the log10 morphine concentration (M). Precision, as measured by relative standard deviation, was 0.7% for five replicate analyses of morphine standard (5 × 10^-8 M). The limit of detection (3σ) was determined as 5 × 10^-11 M morphine. PMID:18924729
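Given the reported log-log fit, back-calculating a concentration from a measured signal is a one-liner; the 1000 mV input below is an invented example that falls inside the stated linear range.

```python
# Back-calculating concentration from the reported log-log calibration
# y = 1.05 x + 8.9164 (y = log10 signal in mV, x = log10 [morphine] in M).
# The 1000 mV input signal is an invented example.
import math

def concentration_from_signal(signal_mv, slope=1.05, intercept=8.9164):
    x = (math.log10(signal_mv) - intercept) / slope
    return 10 ** x                      # molar concentration

c = concentration_from_signal(1000.0)   # roughly 2.3e-6 M
```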
Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Anderson, W. Kyle
2000-01-01
Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.
Experimenters' reference based upon Skylab experiment management
NASA Technical Reports Server (NTRS)
1974-01-01
The methods and techniques for experiment development and integration that evolved during the Skylab Program are described to facilitate transferring this experience to experimenters in future manned space programs. Management responsibilities and the sequential process of experiment evolution from initial concept through definition, development, integration, operation and postflight analysis are outlined and amplified, as appropriate. Emphasis is placed on specific lessons learned on Skylab that are worthy of consideration by future programs.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
ERIC Educational Resources Information Center
Ayala, Erika
2016-01-01
The purpose of this sequential explanatory embedded mixed methods study was to: (a) investigate and describe the academic performance of eighth grade students in the Falcon School District (FSD) who were designated as Long Term English Learners (LTELs) and participants in FSD's reading intervention program during their fourth through eighth grade…
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
NASA Technical Reports Server (NTRS)
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES (implicit pressure, explicit saturation) approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations.
A two-level FAS (full approximation scheme) algorithm has been presented for the black-oil equations, and linear multigrid for two-phase flow problems with strong heterogeneities and anisotropies has been studied. Here we consider both possibilities. Moreover, we present a novel way of constructing the coarse-grid correction operator in linear multigrid algorithms. This approach has the advantage that it preserves the sparsity pattern of the fine-grid matrix and can be extended to systems of equations in a straightforward manner. We compare the linear and nonlinear multigrid algorithms by means of a numerical experiment.
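The coarse-grid-correction idea can be made concrete on the simplest possible model problem. This toy two-grid cycle for 1-D Poisson uses weighted Jacobi smoothing, full-weighting restriction and linear interpolation, which are textbook choices, not the paper's black-oil operators.

```python
# Toy two-grid cycle for the 1-D Poisson problem -u'' = f with zero boundary
# values: pre-smooth, restrict the residual, solve the coarse problem,
# interpolate the correction, post-smooth.
import numpy as np

def two_grid(f, u, n_smooth=3, omega=2/3):
    n = len(f); h = 1.0 / (n + 1)

    def apply_A(v):                       # tridiagonal (-1, 2, -1) / h^2
        Av = 2 * v.copy()
        Av[1:] -= v[:-1]
        Av[:-1] -= v[1:]
        return Av / h**2

    def jacobi(v, rhs, sweeps):           # weighted Jacobi smoother
        for _ in range(sweeps):
            v = v + omega * (h**2 / 2) * (rhs - apply_A(v))
        return v

    u = jacobi(u, f, n_smooth)            # pre-smooth
    r = f - apply_A(u)                    # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    nc = (n - 1) // 2; hc = 2 * h
    Ac = (np.diag(2 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / hc**2
    ec = np.linalg.solve(Ac, rc)          # solve coarse problem exactly
    e = np.zeros(n)                       # interpolate correction to fine grid
    e[1::2] = ec
    e[::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
    return jacobi(u + e, f, n_smooth)     # post-smooth

n = 31
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)          # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(f, u)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

A handful of cycles drives the algebraic error below the discretization error, the behavior that makes multigrid attractive inside a Newton loop.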
Improvement of the Narrowband Linear Predictive Coder. Part 2. Synthesis Improvements.
1984-06-11
…it possible to generate a replica of the voiced excitation signal which can be stored in memory and read out sequentially at every voiced pitch epoch. … Table 8 - DRT scores:
Sustention   74.0  77.1  +3.1
Sibilation   80.2  84.9  +4.7
Graveness    63.5  77.9  +14.4
Compactness  88.5  87.8  -0.7
Overall      81.5  85.1  +3.6
Orbit Determination Using Vinti’s Solution
2016-09-15
…Surveillance Network; STK - Systems Tool Kit; TBP - Two Body Problem; TLE - Two-line Element Set; UKF - Unscented Kalman Filter; WPAFB - Wright… (acronym list). …simplicity, stability, and speed. On the other hand, Kalman filters would be best suited for sequential estimation of stochastic or random components of a… …be likened to how an Unscented Kalman Filter samples a system's nonlinearities directly, avoiding linearizing the dynamics in the partials matrices.
NASA Astrophysics Data System (ADS)
Sarkar, Aritra; Nagesha, A.; Parameswaran, P.; Sandhya, R.; Laha, K.; Okazaki, M.
2017-03-01
Cumulative fatigue damage under sequential low cycle fatigue (LCF) and high cycle fatigue (HCF) cycling was investigated at 923 K (650 °C) by conducting HCF tests on specimens subjected to prior LCF cycling at various strain amplitudes. Remnant HCF lives were found to decrease drastically with increasing prior fatigue exposure, as a result of strong LCF-HCF interactions. The rate of decrease in remnant life varied as a function of the applied strain amplitude. A threshold damage in terms of prior LCF life fraction was found, below which no significant LCF-HCF interaction takes place. Similarly, a critical damage during the LCF pre-cycling, marking the highest degree of LCF-HCF interaction, was identified and found to depend on the applied strain amplitude. In view of the non-linear damage accumulation behavior, Miner's linear damage rule proved to be highly non-conservative. Manson's damage curve approach, suitably modified, was found to be a better alternative for predicting the remnant HCF life. The single constant (β) employed in the model, which reflects the damage accumulation of the material under two- or multi-level loading conditions, is derived from regression analysis of the experimental results and validated further.
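Miner's rule, the baseline the authors show to be non-conservative here, is a one-line sum of life fractions; the cycle counts below are hypothetical, not the paper's 923 K data.

```python
# Miner's linear damage rule: failure is predicted when the summed life
# fractions n_i / N_i reach 1. The cycle counts are hypothetical.
def miner_damage(blocks):
    """blocks: iterable of (cycles_applied, cycles_to_failure) pairs."""
    return sum(n / N for n, N in blocks)

# e.g. LCF pre-cycling to 30% of LCF life, then HCF to 50% of HCF life
d = miner_damage([(3_000, 10_000), (500_000, 1_000_000)])  # 0.8 < 1: predicted safe
```

Under LCF-HCF interaction the real specimen can fail well before the sum reaches 1, which is exactly why the abstract calls the linear rule non-conservative.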
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem for linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form that leads to fast execution and allows the bounds for large parameter values of linear codes to be computed efficiently.
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); translated in Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n × n matrix and b is a real vector, with the usual Euclidean norm. Existence and uniqueness of the solution are assumed, i.e. det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = f(x^(k)), k = 0, 1, 2, …, where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant (various values of this number are investigated) and that the processors perform the elementary binary arithmetic operations of addition and multiplication; the estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps of the IP so as to execute it in a time proportional to a value smaller than k + 1, and the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP with, in the modification sought, a reduced number of steps in a time comparable to the switching time of logic elements. 6 references.
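The process itself is easy to state concretely; the 2×2 matrix below is an invented example with spectral radius below one, so the sequential iteration converges to the fixed point.

```python
# The stationary process x_{k+1} = A x_k + b for the reduced system x = Ax + b;
# the matrix is an invented example with spectral radius < 1, so the iteration
# converges to the fixed point (I - A)^-1 b.
import numpy as np

A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
for _ in range(100):
    x = A @ x + b                  # one sequential step of the process

x_exact = np.linalg.solve(np.eye(2) - A, b)
err = float(np.max(np.abs(x - x_exact)))
```

Each step depends on the previous one, which is the k + 1 sequential-step bottleneck the paper seeks to break.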
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method of mapping forest carbon density. However, the existence of mixed pixels often impedes improvement of forest carbon density mapping, especially when low-spatial-resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared for deriving the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm^-2, ranging from 0.00 to 67.35 t·hm^-2. This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
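A minimal version of constrained linear spectral mixture analysis solves for the fractions under a sum-to-one constraint via the KKT system of the equality-constrained least-squares problem; the three-band endmember spectra and the pixel below are invented.

```python
# Minimal sum-to-one constrained linear spectral unmixing: minimize
# ||E f - p||^2 subject to sum(f) = 1 via the KKT system. Endmember
# spectra and the mixed pixel are invented.
import numpy as np

E = np.array([[0.60, 0.10, 0.30],    # rows: bands, columns: endmembers
              [0.50, 0.08, 0.40],
              [0.45, 0.05, 0.55]])
true_f = np.array([0.7, 0.1, 0.2])   # fractions sum to 1
p = E @ true_f                       # synthetic mixed pixel

n = E.shape[1]
K = np.zeros((n + 1, n + 1))         # KKT matrix [[2 E^T E, 1], [1^T, 0]]
K[:n, :n] = 2 * E.T @ E
K[:n, n] = 1.0
K[n, :n] = 1.0
rhs = np.concatenate((2 * E.T @ p, [1.0]))
f = np.linalg.solve(K, rhs)[:n]      # recovered fractions
```

On noise-free synthetic data the constrained solve recovers the true fractions exactly; with real MODIS pixels a non-negativity constraint is usually added as well.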
Engineering of Machine tool’s High-precision electric drives
NASA Astrophysics Data System (ADS)
Khayatov, E. S.; Korzhavin, M. E.; Naumovich, N. I.
2018-03-01
The article shows that in mechanisms under numerical program control, high process quality can be achieved only in systems that position the working element with high accuracy, which requires expanding the torque regulation range. In particular, the use of synchronous reluctance machines with independent excitation control makes it possible to substantially increase the torque overload capability in the sequential-excitation circuit. Using mathematical and physical modeling methods, it is shown that an electric drive based on a synchronous reluctance machine with independent excitation, connected in a sequential-excitation circuit, can significantly expand the torque regulation range; this is achieved by the sequential-excitation effect, which makes it possible to compensate for the transverse armature reaction.
Protein structural similarity search by Ramachandran codes
Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang
2007-01-01
Background Protein structural data has increased exponentially, so fast and accurate tools are needed for structural similarity searching. To improve search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed, and the speed still cannot match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE), but it works over 243,000 times faster, searching 34,000 proteins in 0.34 s with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program that runs on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated, high-throughput functional annotation and prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
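The linear-encoding idea can be sketched as follows: map each residue's (phi, psi) backbone angles to the nearest of a few Ramachandran cluster centres and emit that cluster's letter, turning a 3D structure into a text string. The cluster centres and letters below are hypothetical placeholders, not SARST's actual clusters:

```python
# Sketch of Ramachandran-based linear encoding. Cluster centres are
# illustrative, not SARST's; a real encoder would learn them by
# nearest-neighbor clustering over a structure database.
import math

CLUSTERS = {  # letter -> (phi, psi) centre in degrees (hypothetical)
    'H': (-60.0, -45.0),   # alpha-helical region
    'E': (-120.0, 130.0),  # beta-sheet region
    'L': (60.0, 45.0),     # left-handed / loop region
}

def angle_diff(a, b):
    """Smallest difference between two angles, handling wrap-around."""
    d = (a - b) % 360.0
    return min(d, 360.0 - d)

def encode(angles):
    """Encode a list of (phi, psi) pairs as a one-letter-per-residue string."""
    out = []
    for phi, psi in angles:
        letter = min(CLUSTERS, key=lambda k: math.hypot(
            angle_diff(phi, CLUSTERS[k][0]), angle_diff(psi, CLUSTERS[k][1])))
        out.append(letter)
    return ''.join(out)

# A helix-like stretch followed by a sheet-like stretch.
print(encode([(-58, -47), (-63, -41), (-118, 125), (-125, 140)]))  # → HHEE
```

Once structures are strings like this, off-the-shelf sequence alignment tools (with a suitable substitution matrix) do the similarity search.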
ERIC Educational Resources Information Center
Quinn, Karen M.; And Others
Designed to provide pre- and inservice administrators with the skills necessary to select appropriate program development and implementation, and monitor and evaluate their success, this competency-based learning module consists of an introduction and four sequential learning experiences. Each learning experience contains an overview, required and…
ERIC Educational Resources Information Center
Calderon, Carlos Trevino
2012-01-01
The purpose of this sequential mixed methods case study was to explore the role of a teacher's attitude towards Sheltered Instruction Observation Protocols (SIOP) and how those attitudes affect the program's effectiveness. SIOP is a program designed to mitigate the effects of limited English proficiency and promote equal access to the curriculum…
ERIC Educational Resources Information Center
Bell Haynes, Janel Elizabeth
2013-01-01
The purpose of this mixed method sequential explanatory case study was to describe the relationship of a student outcomes assessment program, as measured by the Peregrine Academic Leveling Course, (ALC), to the academic performance, determined by scores on the Peregrine Common Professional Component (CPC) examination, of students enrolled during…
ERIC Educational Resources Information Center
ALI KHAN, ANSAR
The author discusses the need for functional, sequential programs of literacy, vocational, liberal, political, and human relations education in rural areas of Pakistan. Problems and challenges are seen in the occupational caste system, family structures, attitudes toward the education of boys and girls, poor means of transportation and…
Getting a Jump on the Future: Everything You'll Ever Need to Know about Multimedia Authoring Tools.
ERIC Educational Resources Information Center
D'Ignazio, Fred
1992-01-01
Discusses issues involved with buying and using multimedia authoring programs. Six programs are compared: (1) MediaText, (2) HyperCard, (3) LinkWay Live!, (4) AmigaVision, (5) Director, and (6) Multimedia Desktop. Highlights include the use of multimedia in education, sequential versus hierarchical organization, price, system requirements, digital…
Transaction costs and sequential bargaining in transferable discharge permit markets.
Netusil, N R; Braden, J B
2001-03-01
Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
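The discrete, bilateral, sequential trading mechanism can be caricatured in a few lines: a trade occurs only when the marginal-cost gap between two randomly met firms exceeds the transaction cost. The costs and allocations below are invented, not from the paper's sediment-abatement market:

```python
# Toy sketch of random sequential bilateral permit trading with a
# per-trade transaction cost. All numbers are invented.
import random

def total_cost(costs, alloc, t_cost, n_rounds=2000, seed=1):
    """Each firm must abate alloc[i] units at per-unit cost costs[i].
    A randomly met pair trades one unit only if the marginal-cost gap
    exceeds the transaction cost. Returns total cost incurred."""
    alloc = list(alloc)
    rng = random.Random(seed)
    spent_on_trades = 0.0
    for _ in range(n_rounds):
        i, j = rng.sample(range(len(costs)), 2)
        if alloc[i] > 0 and costs[i] - costs[j] > t_cost:
            alloc[i] -= 1          # firm i abates one unit less...
            alloc[j] += 1          # ...the cheaper firm j abates it instead
            spent_on_trades += t_cost
    return sum(a * c for a, c in zip(alloc, costs)) + spent_on_trades

costs = [10.0, 6.0, 2.0]   # per-unit abatement cost of each firm (invented)
alloc = [5, 5, 5]          # units each must abate under a uniform standard

free_trading = total_cost(costs, alloc, t_cost=0.0)
no_trading = total_cost(costs, alloc, t_cost=100.0)
print(free_trading, no_trading)  # no trade pays off at t_cost=100, so no_trading == 90.0
```

High transaction costs suppress trades entirely, so total cost stays at the uniform-standard level; with free trading, abatement migrates toward the low-cost firm.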
CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions
Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are efficient tools for source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of uniform circular arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
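A minimal sketch of the LVQ1 training rule (prototypes are attracted to same-class samples and repelled from others), using made-up one-dimensional "angle" data rather than actual UCA array snapshots:

```python
# Minimal LVQ1 sketch. The 1-D "elevation angle" data, prototypes, and
# learning rate are illustrative, not the paper's setup.

def train_lvq(samples, labels, prototypes, proto_labels, lr=0.3, epochs=20):
    protos = list(prototypes)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # find the nearest prototype to this sample
            k = min(range(len(protos)), key=lambda i: abs(protos[i] - x))
            if proto_labels[k] == y:
                protos[k] += lr * (x - protos[k])   # attract
            else:
                protos[k] -= lr * (x - protos[k])   # repel
    return protos

def classify(x, protos, proto_labels):
    k = min(range(len(protos)), key=lambda i: abs(protos[i] - x))
    return proto_labels[k]

# Two well-separated "angle" classes (degrees, invented).
xs = [10, 12, 14, 70, 72, 74]
ys = ['low', 'low', 'low', 'high', 'high', 'high']
protos = train_lvq(xs, ys, [20.0, 60.0], ['low', 'high'])
print(classify(13, protos, ['low', 'high']))   # → low
print(classify(71, protos, ['low', 'high']))   # → high
```

The paper's approach trains one such network per angle (elevation, then azimuth), which is what keeps the per-network training sets small.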
Mehdid, Mohammed Amine; Djafri, Ayada; Roussel, Christian; Andreoli, Federico
2009-11-12
A new process is described for preparing very pure linear alkanethiols and linear alpha,omega-alkanedithiols using a sequential alkylation of the title compound, followed by a ring closure to quantitatively give the corresponding 3-methyl[1,3]thiazolo[3,2-a]-[3,1]benzimidazol-9-ium salt and the alkanethiol derivative under mild conditions. The alkanethiol and the heteroaromatic salt are easily separated by a simple extraction process. The intermediate thiazolium quaternary salts resulting from the first reaction step can be isolated in quantitative yields, affording an odourless protected form of the thiols.
Linear array optical edge sensor
NASA Technical Reports Server (NTRS)
Bejczy, Antal K. (Inventor); Primus, Howard C. (Inventor)
1987-01-01
A series of independent parallel pairs of light emitting and detecting diodes for a linear pixel array, which is laterally positioned over an edge-like discontinuity in a workpiece to be scanned, is disclosed. These independent pairs of light emitters and detectors sense along intersecting pairs of separate optical axes. A discontinuity, such as an edge in the sensed workpiece, reflects a detectable difference in the amount of light from that discontinuity in comparison to the amount of light that is reflected on either side of the discontinuity. A sequentially synchronized clamping and sampling circuit detects that difference as an electrical signal which is recovered by circuitry that exhibits an improved signal-to-noise capability for the system.
Exploiting fast detectors to enter a new dimension in room-temperature crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, Robin L., E-mail: robin.owen@diamond.ac.uk; Paterson, Neil; Axford, Danny
2014-05-01
A departure from a linear or an exponential decay in the diffracting power of macromolecular crystals is observed and accounted for through consideration of a multi-state sequential model. A departure from a linear or an exponential intensity decay in the diffracting power of protein crystals as a function of absorbed dose is reported. The observation of a lag phase raises the possibility of collecting significantly more data from crystals held at room temperature before an intolerable intensity decay is reached. A simple model accounting for the form of the intensity decay is reintroduced and is applied for the first time to high frame-rate room-temperature data collection.
Keenan, Jeffrey E; Speicher, Paul J; Nussbaum, Daniel P; Adam, Mohamed Abdelgadir; Miller, Timothy E; Mantyh, Christopher R; Thacker, Julie K M
2015-08-01
The purpose of this study was to examine the impact of the sequential implementation of the enhanced recovery program (ERP) and surgical site infection bundle (SSIB) on short-term outcomes in colorectal surgery (CRS) to determine if the presence of multiple standardized care programs provides additive benefit. Institutional ACS-NSQIP data were used to identify patients who underwent elective CRS from September 2006 to March 2013. The cohort was stratified into 3 groups relative to implementation of the ERP (February 1, 2010) and SSIB (July 1, 2011). Unadjusted characteristics and 30-day outcomes were assessed, and inverse proportional weighting was then used to determine the adjusted effect of these programs. There were 787 patients included: 337, 165, and 285 in the pre-ERP/SSIB, post-ERP/pre-SSIB, and post-ERP/SSIB periods, respectively. After inverse probability weighting (IPW) adjustment, groups were balanced with respect to patient and procedural characteristics considered. Compared with the pre-ERP/SSIB group, the post-ERP/pre-SSIB group had significantly reduced length of hospitalization (8.3 vs 6.6 days, p = 0.01) but did not differ with respect to postoperative wound complications and sepsis. Subsequent introduction of the SSIB then resulted in a significant decrease in superficial SSI (16.1% vs 6.3%, p < 0.01) and postoperative sepsis (11.2% vs 1.8%, p < 0.01). Finally, inflation-adjusted mean hospital cost for a CRS admission fell from $31,926 in 2008 to $22,044 in 2013 (p < 0.01). Sequential implementation of the ERP and SSIB provided incremental improvements in CRS outcomes while controlling hospital costs, supporting their combined use as an effective strategy toward improving the quality of patient care. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
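The LP at the heart of such a mill model can be illustrated with a toy two-product mix. Since a two-variable LP attains its optimum at a corner of the feasible region, brute-force vertex enumeration suffices here; the coefficients are invented, and a real mill model would use an LP solver:

```python
# Toy product-mix LP solved by vertex enumeration (2 variables only).
# maximize 5x + 4y  subject to  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0
# Profit/resource coefficients are invented placeholders.
from itertools import combinations

cons = [(6, 4, 24), (1, 2, 6), (-1, 0, 0), (0, -1, 0)]  # a*x + b*y <= c

def intersect(c1, c2):
    """Intersection point of the two constraint boundary lines, or None."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 5 * p[0] + 4 * p[1])
print(best, 5 * best[0] + 4 * best[1])  # → (3.0, 1.5) 21.0
```

VENVAL's role in the abstract is precisely to assemble the `cons`-style coefficient data from mill records for an external LP solver.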
Chakraborty, Bibhas; Davidson, Karina W.
2015-01-01
Summary Implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about effectiveness of the treatments and improving the quality of care for patients enrolled into the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression post acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance measured by the expected outcomes of the enrollees, but also show that the adaptive randomization scheme is able to compensate poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
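The adaptive-randomization idea, tilting randomization probabilities toward treatment sequences with larger estimated Q-functions, can be sketched with a softmax weighting. The Q-values, arm names, and temperature below are illustrative placeholders, not the paper's scheme:

```python
# Sketch of Q-value-driven adaptive randomization via softmax weighting.
# Arm names, Q-values, and the temperature are invented for illustration.
import math

def randomization_probs(q_values, temperature=1.0):
    """Boltzmann/softmax weighting of treatment options by Q-value:
    higher Q-values receive proportionally higher randomization mass."""
    ws = [math.exp(q / temperature) for q in q_values]
    total = sum(ws)
    return [w / total for w in ws]

q = {'med-only': 1.0, 'med+therapy': 2.0, 'therapy-only': 0.5}
probs = randomization_probs(list(q.values()))
for arm, p in zip(q, probs):
    print(f"{arm}: {p:.3f}")
```

In an implementation study, the Q-values would be re-estimated as enrollee outcomes accrue, so the randomization probabilities drift toward better-performing regimens while still exploring the others.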
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
2011-01-01
Background The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be overlooked. One example of a coordinated approach is an adaptive "sequential strategy": first, all annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is re-focussed on eradicating a second disease, etc. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Methods Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. Results For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy.
We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Conclusions Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts. PMID:21955853
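A deliberately crude toy model (not the paper's transmission model) shows the qualitative effect: if eradication requires annual spending above a herd-immunity threshold, a sequential strategy can clear diseases one by one, while an equal split may never cross any threshold. All numbers are invented:

```python
# Toy illustration of sequential vs simultaneous budget allocation.
# A disease is "eradicated" in a year when the budget directed at it
# meets its herd-immunity threshold. Thresholds/budget are invented.

def years_to_eradicate_all(thresholds, budget, sequential):
    """Return the year by which all diseases are gone, or None if the
    strategy never eradicates them within 100 years."""
    remaining = list(thresholds)
    for year in range(1, 101):
        if sequential:
            # entire budget on the first remaining disease
            if remaining and budget >= remaining[0]:
                remaining.pop(0)
        else:
            # split the budget equally among remaining diseases
            share = budget / len(remaining) if remaining else 0.0
            remaining = [t for t in remaining if share < t]
        if not remaining:
            return year
    return None

thr = [6.0, 5.0, 4.0]   # herd-immunity "price" of each disease (invented)
print(years_to_eradicate_all(thr, budget=7.0, sequential=True))   # → 3
print(years_to_eradicate_all(thr, budget=7.0, sequential=False))  # → None
```

This mirrors the abstract's headline result: for some budgets the sequential strategy eradicates everything quickly while the simultaneous split never does, because the split never pushes any one disease past its threshold.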
Fitzpatrick, Tiffany; Bauch, Chris T
2011-09-28
The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be overlooked. One example of a coordinated approach is an adaptive "sequential strategy": first, all annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is re-focussed on eradicating a second disease, etc. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries.
We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts.
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
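The unconditional core of the Fourier integral method can be sketched in a few lines: draw random phases, set Fourier amplitudes from a target covariance spectrum, and inverse-transform to obtain a real-valued random field with that spectrum. The paper's conditioning step (iterative phase identification against local data) is omitted, and the spectrum below is an arbitrary smooth placeholder:

```python
# 1-D sketch of spectral simulation: amplitudes from a target spectrum,
# random phases, Hermitian symmetry to keep the field real. Pure-Python
# O(N^2) inverse DFT; the Gaussian-shaped spectrum is a placeholder.
import cmath, math, random

def spectral_field(n=32, seed=7):
    rng = random.Random(seed)
    spec = [math.exp(-0.5 * (k / 4.0) ** 2) for k in range(n)]  # target spectrum
    coeff = [0j] * n
    for k in range(1, n // 2):
        phase = rng.uniform(0.0, 2.0 * math.pi)
        coeff[k] = math.sqrt(spec[k]) * cmath.exp(1j * phase)
        coeff[n - k] = coeff[k].conjugate()   # Hermitian symmetry -> real field
    field = []
    for x in range(n):
        s = sum(coeff[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        field.append(s.real / n)
    return field

z = spectral_field()
print(len(z))  # → 32
```

Because the zero-frequency coefficient is left at zero, the simulated field has exactly zero mean; conditioning to data, as in specsim, then adjusts the phases iteratively rather than the amplitudes.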
MSFC Skylab experimenter's reference
NASA Technical Reports Server (NTRS)
1974-01-01
The methods and techniques for experiment development and integration that evolved during the Skylab Program are described to facilitate transferring this experience to experimenters in future manned space programs. Management responsibilities and the sequential process of experiment evolution from initial concept through definition, development, integration, operation and postflight analysis are outlined in the main text and amplified, as appropriate, in appendixes. Emphasis is placed on specific lessons learned on Skylab that are worthy of consideration by future programs.
Yun, Lifen; Wang, Xifu; Fan, Hongqiang; Li, Xiaopeng
2017-01-01
This paper proposes a reliable facility location design model under imperfect information with site-dependent disruptions; i.e., each facility is subject to a unique disruption probability that varies across the space. In the imperfect information contexts, customers adopt a realistic “trial-and-error” strategy to visit facilities; i.e., they visit a number of pre-assigned facilities sequentially until they arrive at the first operational facility or give up looking for the service. This proposed model aims to balance initial facility investment and expected long-term operational cost by finding the optimal facility locations. A nonlinear integer programming model is proposed to describe this problem. We apply a linearization technique to reduce the difficulty of solving the proposed model. A number of problem instances are studied to illustrate the performance of the proposed model. The results indicate that our proposed model can reveal a number of interesting insights into the facility location design with site-dependent disruptions, including the benefit of backup facilities and system robustness against variation of the loss-of-service penalty. PMID:28486564
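The kind of linearization technique mentioned above can be illustrated generically with the textbook trick for a product of two binary variables: replace z = x*y with the linear inequalities z <= x, z <= y, z >= x + y - 1. This is a standard reformulation, not necessarily the paper's exact one:

```python
# Brute-force check that the standard linearization of a binary product
# reproduces z = x*y on all inputs.
from itertools import product

def linearized_z(x, y):
    """Smallest z in {0, 1} satisfying z <= x, z <= y, z >= x + y - 1."""
    for z in (0, 1):
        if z <= x and z <= y and z >= x + y - 1:
            return z
    return None

for x, y in product((0, 1), repeat=2):
    assert linearized_z(x, y) == x * y
print("linearization matches x*y on all binary inputs")
```

In a minimization context the solver drives z down, so the three inequalities pin z to exactly x*y at any optimal solution; this is how nonlinear terms in integer programs like the one above become solvable by linear MIP machinery.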
An evaluation of exact methods for the multiple subset maximum cardinality selection problem.
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2016-05-01
The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
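The sequential idea can be sketched with a brute-force stand-in for the binary ILP at each step: repeatedly extract the largest subset whose members are pairwise "compatible" (here, similarity above a cutoff), remove it, and repeat. The similarity matrix is invented; a real implementation would solve a binary integer linear program per step:

```python
# Brute-force sketch of sequential maximum-cardinality subset extraction.
# The pairwise similarities and cutoff are invented for illustration.
from itertools import combinations

def largest_valid_subset(items, sim, cutoff):
    """Largest subset whose members are pairwise similar above cutoff."""
    for size in range(len(items), 0, -1):
        for sub in combinations(items, size):
            if all(sim[frozenset(p)] >= cutoff for p in combinations(sub, 2)):
                return set(sub)
    return set()

items = ['a', 'b', 'c', 'd']
sim = {frozenset(p): s for p, s in {
    ('a', 'b'): 0.9, ('a', 'c'): 0.8, ('b', 'c'): 0.85,
    ('a', 'd'): 0.2, ('b', 'd'): 0.1, ('c', 'd'): 0.3}.items()}

subsets = []
remaining = set(items)
while remaining:
    sub = largest_valid_subset(sorted(remaining), sim, cutoff=0.7)
    subsets.append(sub)
    remaining -= sub
print([sorted(s) for s in subsets])  # → [['a', 'b', 'c'], ['d']]
```

The brute force is exponential in the subset size; the paper's contribution is replacing it with a series of binary ILPs that scale to realistic item counts.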
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
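The core allocation problem, matching a commanded force/torque subject to actuator limits, can be sketched as bound-constrained least squares. Here it is solved with simple projected gradient descent rather than the report's semidefinite programming, and the actuator matrix, bounds, and command are invented:

```python
# Sketch of actuation allocation: choose actuator efforts u within bounds
# so that A @ u best matches the commanded wrench c, by minimizing
# 0.5 * ||A u - c||^2 with projected gradient descent. Numbers invented.

def allocate(A, c, lo, hi, steps=2000, lr=0.05):
    m, n = len(A), len(A[0])
    u = [0.0] * n
    for _ in range(steps):
        # residual r = A u - c
        r = [sum(A[i][j] * u[j] for j in range(n)) - c[i] for i in range(m)]
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))     # gradient component
            u[j] = min(hi[j], max(lo[j], u[j] - lr * g))  # step, then project
    return u

A = [[1.0, 0.0, 1.0],    # force contribution of each actuator (invented)
     [0.0, 1.0, -1.0]]   # torque contribution of each actuator
c = [2.0, 0.5]           # commanded force and torque
u = allocate(A, c, lo=[0.0, 0.0, 0.0], hi=[1.5, 1.5, 1.5])
print([round(x, 2) for x in u])
```

The report's formulation additionally splits the problem into two semidefinite programs and minimizes fuel use among the allocations achieving best tracking; the sketch above captures only the bounded tracking stage.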
Systems identification and the adaptive management of waterfowl in the United States
Williams, B.K.; Nichols, J.D.
2001-01-01
Waterfowl management in the United States is one of the more visible conservation success stories in the United States. It is authorized and supported by appropriate legislative authorities, based on large-scale monitoring programs, and widely accepted by the public. The process is one of only a limited number of large-scale examples of effective collaboration between research and management, integrating scientific information with management in a coherent framework for regulatory decision-making. However, harvest management continues to face some serious technical problems, many of which focus on sequential identification of the resource system in a context of optimal decision-making. The objective of this paper is to provide a theoretical foundation of adaptive harvest management, the approach currently in use in the United States for regulatory decision-making. We lay out the legal and institutional framework for adaptive harvest management and provide a formal description of regulatory decision-making in terms of adaptive optimization. We discuss some technical and institutional challenges in applying adaptive harvest management and focus specifically on methods of estimating resource states for linear resource systems.
Santos, LL; Hughes, SC; Pereira, AI; Young, GC; Hussey, E; Charlton, P; Baptiste‐Brown, S; Stuart, JS; Vincent, V; van Marle, SP; Schmith, VD
2016-01-01
Umeclidinium (UMEC), a long‐acting muscarinic antagonist approved for chronic obstructive pulmonary disease (COPD), was investigated for primary hyperhidrosis as topical therapy. This study evaluated the pharmacokinetics, safety, and tolerability of a single dose of [14C]UMEC applied to either unoccluded axilla (UA), occluded axilla (OA), or occluded palm (OP) of healthy males. After 8 h the formulation was removed. [14C]UMEC plasma concentrations (Cp) were quantified by accelerator mass spectrometry. Occlusion increased systemic exposure by 3.8‐fold. Due to UMEC absorption‐limited pharmacokinetics, Cp data from the OA were combined with intravenous data from a phase I study. The data were described by a two‐compartment population model with sequential zero and first‐order absorption and linear elimination. Simulated systemic exposure following q.d. doses to axilla was similar to the exposure from the inhaled therapy, suggesting that systemic safety following dermal administration can be bridged to the inhaled program, and offering the potential for a reduced number of studies and/or subjects. PMID:27304394
Optical and probe determination of soot concentrations in a model gas turbine combustor
NASA Technical Reports Server (NTRS)
Eckerle, W. A.; Rosfjord, T. J.
1986-01-01
An experimental program was conducted to track the variation in soot loading in a generic gas turbine combustor. The burner is a 12.7-cm dia cylindrical device consisting of six sheet-metal louvers. Determination of soot loading along the burner length is achieved by measurement at the exit of the combustor and then at upstream stations by sequential removal of liner louvers to shorten burner length. Alteration of the flow field approaching and within the shortened burners is minimized by bypassing flow in order to maintain a constant linear pressure drop. The burner exhaust flow is sampled at the burner centerline to determine soot mass concentration and smoke number. Characteristic particle size and number density, transmissivity of the exhaust flow, and local radiation from luminous soot particles in the exhaust are determined by optical techniques. Four test fuels are burned at three fuel-air ratios to determine fuel chemical property and flow temperature influences. Particulate concentration data indicate a strong oxidation mechanism in the combustor secondary zone, though the oxidation is significantly affected by flow temperature. Soot production is directly related to fuel smoke point.
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
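The stochastic estimate described above can be sketched with a tiny Monte Carlo simulation under "gradient" (constant peak width) conditions: place peak centres uniformly at random and call a run successful when every adjacent pair is at least one peak width apart. The component counts and peak capacity are illustrative:

```python
# Monte Carlo sketch of the probability of a fully resolved separation
# under constant-peak-width ("gradient") conditions. Parameters invented.
import random

def p_success(m, n_c, trials=5000, seed=42):
    width = 1.0 / n_c          # peak width implied by the peak capacity
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        peaks = sorted(rng.random() for _ in range(m))
        if all(b - a >= width for a, b in zip(peaks, peaks[1:])):
            hits += 1
    return hits / trials

print(round(p_success(m=5, n_c=100), 3))   # few components: high probability
print(round(p_success(m=20, n_c=100), 3))  # crowded chromatogram: much lower
```

Even this crude simulation reproduces the qualitative point of the statistical-overlap literature: success probability collapses rapidly as the number of components approaches a modest fraction of the peak capacity.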
Gonzales, Lucia K; Glaser, Dale; Howland, Lois; Clark, Mary Jo; Hutchins, Susie; Macauley, Karen; Close, Jacqueline F; Leveque, Noelle Lipkin; Failla, Kim Reina; Brooks, Raelene; Ward, Jillian
2017-01-01
A number of studies across different disciplines have investigated students' learning styles. Differences are known to exist between graduate and baccalaureate nursing students. However, few studies have investigated the learning styles of students in graduate entry nursing programs. The study objective was to describe graduate entry nursing students' learning styles. A descriptive design was used for this study. The Index of Learning Styles (ILS) was administered to 202 graduate entry nursing student volunteers at a southwestern university. Descriptive statistics, tests of association, reliability, and validity were performed. Graduate nursing students and faculty participated in data collection, analysis, and dissemination of the results. Predominant learning styles were: sensing, 82.7%; visual, 78.7%; sequential, 65.8%; and active, 59.9%. Inter-item reliabilities for the postulated subscales were: sensing/intuitive (α = 0.70), visual/verbal (α = 0.694), sequential/global (α = 0.599), and active/reflective (α = 0.572). Confirmatory factor analysis results for validity were: χ2(896) = 1110.25, p < 0.001, CFI = 0.779, TLI = 0.766, WRMR = 1.14, and RMSEA = 0.034. The predominant learning styles described students as concrete thinkers oriented toward facts (sensing); preferring pictures, diagrams, flow charts, and demonstrations (visual); being linear thinkers (sequential); and enjoying working in groups and trying things out (active). The predominant learning styles suggest educators teach concepts through simulation, discussion, and application of knowledge. Multiple studies, including this one, provided similar psychometric results. Similar reliability and validity results for the ILS have been noted in previous studies and therefore provide sufficient evidence to use the ILS with graduate entry nursing students. This study provided faculty with numerous opportunities for actively engaging students in data collection, analysis, and dissemination of results.
Copyright © 2016 Elsevier Ltd. All rights reserved.
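The subscale reliabilities reported above are Cronbach's α values. As a minimal illustration of how such inter-item reliabilities are computed — using made-up item scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students x 4 dichotomously scored items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # → 0.667
```

Values near 0.7, like the sensing/intuitive subscale's α=0.70, are conventionally read as acceptable internal consistency.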
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Spin effects in transport through single-molecule magnets in the sequential and cotunneling regimes
NASA Astrophysics Data System (ADS)
Misiorny, Maciej; Weymann, Ireneusz; Barnaś, Józef
2009-06-01
We analyze the stationary spin-dependent transport through a single-molecule magnet weakly coupled to external ferromagnetic leads. Using the real-time diagrammatic technique, we calculate the sequential and cotunneling contributions to current, tunnel magnetoresistance, and Fano factor in both linear and nonlinear response regimes. We show that the effects of cotunneling are predominantly visible in the blockade regime and lead to enhancement of tunnel magnetoresistance (TMR) above the Julliere value, which is accompanied by super-Poissonian shot noise due to bunching of inelastic cotunneling processes through different virtual spin states of the molecule. The effects of external magnetic field and the role of type and strength of exchange interaction between the LUMO level and the molecule's spin are also considered. When the exchange coupling is ferromagnetic, we find an enhanced TMR, while in the case of antiferromagnetic coupling we predict a large negative TMR effect.
Microcomputer Software Engineering, Documentation and Evaluation
1981-03-31
To proceed step by step, we need to know where we are going and a... normal sequence that should be preserved in the documentation. For example, you... with linear, sequential logic (like a computer). It is also the verbal side and controls language. The right side specializes in images, music, pictures
2008-07-01
operators in Hilbert spaces. The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example of...is intended to be essentially self-contained. The mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995...literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975
Zhang, Liangliang; Yuan, Shuai; Feng, Liang; Guo, Bingbing; Qin, Jun-Sheng; Xu, Ben; Lollar, Christina; Sun, Daofeng; Zhou, Hong-Cai
2018-04-23
Multi-component metal-organic frameworks (MOFs) with precisely controlled pore environments are highly desired owing to their potential applications in gas adsorption, separation, cooperative catalysis, and biomimetics. A series of multi-component MOFs, namely PCN-900(RE), were constructed from a combination of tetratopic porphyrinic linkers, linear linkers, and rare-earth hexanuclear clusters (RE₆) under the guidance of thermodynamics. These MOFs exhibit high surface areas (up to 2523 m² g⁻¹) and unlimited tunability by modification of metal nodes and/or linker components. Post-synthetic exchange of linear linkers and metalation of two organic linkers were realized, allowing the incorporation of a wide range of functional moieties. Two different metal sites were sequentially placed on the linear linker and the tetratopic porphyrinic linker, respectively, giving rise to an ideal platform for heterogeneous catalysis. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹).
These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
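The "simple, sequential and unoptimized program of O(N²) calculation cost" that FDPS takes as its user-facing starting point can be sketched as a direct-sum gravity kernel. The following NumPy version is illustrative only and is not FDPS's actual API; the softening parameter and data are invented:

```python
import numpy as np

def gravity_accels(pos, mass, eps=1e-3, G=1.0):
    """Direct-sum O(N^2) gravitational accelerations with Plummer softening."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                       # vectors from particle i to all others
        r2 = (dr * dr).sum(axis=1) + eps * eps  # softened squared distances
        r2[i] = 1.0                             # avoid division by zero on the self-term
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                         # zero out the self-interaction
        acc[i] = G * (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.standard_normal((8, 3))
mass = np.full(8, 1.0 / 8)
acc = gravity_accels(pos, mass)
print(acc.shape)  # → (8, 3)
```

A framework like FDPS supplies the domain decomposition, particle exchange, and tree construction around a user-written pairwise kernel of essentially this shape.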
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors.
Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
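The decoupled approach described above — using the Schur complement to solve sequentially for pressure and then velocity — can be illustrated on a toy dense saddle-point system. This is a sketch only: real geodynamic solvers use iterative sub-solves and preconditioners rather than dense factorizations, and the matrices below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD "viscous" velocity block
B = rng.standard_normal((m, n))      # discrete divergence operator (full rank)
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Schur complement method: eliminate velocity u, solve for pressure p first.
# From A u + B^T p = f and B u = g:  S p = B A^{-1} f - g,  with S = B A^{-1} B^T.
Ainv_BT = np.linalg.solve(A, B.T)
Ainv_f = np.linalg.solve(A, f)
S = B @ Ainv_BT
p = np.linalg.solve(S, B @ Ainv_f - g)
u = np.linalg.solve(A, f - B.T @ p)

# Cross-check against the monolithic saddle-point solve.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
xu = np.linalg.solve(K, np.concatenate([f, g]))
print(np.allclose(np.concatenate([u, p]), xu))  # → True
```

The "sub-solves resembling inexact versions of the sequential solve" mentioned above correspond to replacing the exact `np.linalg.solve` calls on A and S with inner Krylov iterations.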
The Impact of an Informal Science Program on Students' Science Knowledge and Interest
ERIC Educational Resources Information Center
Zandstra, Anne Maria
2012-01-01
In this sequential explanatory mixed methods study, quantitative and qualitative data were used to measure the impact of an informal science program on eleventh grade students' science knowledge and interest. The local GEAR UP project has been working for six years with a cohort of students who were in eleventh and twelfth grade during the time of…
ERIC Educational Resources Information Center
Abildso, Christiaan; Zizzi, Sam; Gilleland, Diana; Thomas, James; Bonner, Daniel
2010-01-01
Physical activity is critical in healthy weight loss, yet there is still much to be learned about psychosocial mechanisms of physical activity behavior change in weight loss. A sequential mixed methods approach was used to assess the physical and psychosocial impact of a 12-week cognitive-behavioral weight management program and explore factors…
ERIC Educational Resources Information Center
Adams-Budde, Melissa; Howard, Christy; Jolliff, Grant; Myers, Joy
2014-01-01
The purpose of this mixed methods sequential explanatory study was to explain the relationship between literacy experiences over time and the literacy identities of the doctoral students in a teacher education and higher education program. The quantitative phase, surveying 36 participants, revealed a positive correlation between participant's…
NASA Astrophysics Data System (ADS)
Kunze, Herb; La Torre, Davide; Lin, Jianyi
2017-01-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the 1-norm instead of the 0-norm. In fact, optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria, i.e., collage error, entropy, and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
Dumbauld, Jill; Black, Michelle; Depp, Colin A; Daly, Rebecca; Curran, Maureen A; Winegarden, Babbi; Jeste, Dilip V
2014-12-01
With a growing need for developing future physician scientists, identifying characteristics of medical students who are likely to benefit from research training programs is important. This study assessed if specific learning styles of medical students, participating in federally funded short-term research training programs, were associated with research self-efficacy, a potential predictor of research career success. Seventy-five first-year medical students from 28 medical schools, selected to participate in two competitive NIH-supported summer programs for research training in aging, completed rating scales to evaluate learning styles at baseline, and research self-efficacy before and after training. We examined associations of individual learning styles (visual-verbal, sequential-global, sensing-intuitive, and active-reflective) with students' gender, ranking of medical school, and research self-efficacy. Research self-efficacy improved significantly following the training programs. Students with a verbal learning style reported significantly greater research self-efficacy at baseline, while visual, sequential, and intuitive learners demonstrated significantly greater increases in research self-efficacy from baseline to posttraining. No significant relationships were found between learning styles and students' gender or ranking of their medical school. Assessments of learning styles may provide useful information to guide future training endeavors aimed at developing the next generation of physician-scientists. © 2014 Wiley Periodicals, Inc.
Wasser, Tobias; Pollard, Jessica; Fisk, Deborah; Srihari, Vinod
2017-10-01
In first-episode psychosis there is a heightened risk of aggression and subsequent criminal justice involvement. This column reviews the evidence pointing to these heightened risks and highlights opportunities, using a sequential intercept model, for collaboration between mental health services and existing diversionary programs, particularly for patients whose behavior has already brought them to the attention of the criminal justice system. Coordinating efforts in these areas across criminal justice and clinical spheres can decrease the caseload burden on the criminal justice system and optimize clinical and legal outcomes for this population.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time to event endpoint. In fact, it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time to event endpoints, but few R functions are implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, which is an add-on function to the package gsDesign and permits, in one run of the program, calculation of the number of events and required sample size, as well as boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
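The kind of numerical linearization LINEAR performs — extracting state and control matrices from nonlinear equations of motion at an operating point — can be sketched with central finite differences. This is an illustrative reconstruction, not the actual LINEAR algorithm, and the toy dynamics function is invented for the example:

```python
import numpy as np

def linearize(f, x0, u0, h=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at a trim point (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = h
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * h)
    for j in range(m):
        du = np.zeros(m); du[j] = h
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * h)
    return A, B

# Toy nonlinear short-period-like dynamics, x = (alpha, q), u = (elevator,)
def f(x, u):
    alpha, q = x
    return np.array([q - 0.5 * np.sin(alpha) + 0.1 * u[0],
                     -2.0 * alpha - 0.3 * q + 1.5 * u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
print(np.round(A, 3))  # A ≈ [[-0.5, 1.0], [-2.0, -0.3]]
```

The resulting A and B form the state equation of the linear system model; an observation matrix is obtained the same way by differentiating the output function.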
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearizing the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
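A toy simulation consistent with the abstract's description, using the HP-style linear ion-drift model with invented parameters (not the paper's values): a single memristor under constant voltage changes its memristance nonlinearly in time, because the current grows as the resistance falls; in an anti-serial pair the state changes of the two devices cancel, the pair's total resistance stays constant, and each device's memristance therefore changes linearly:

```python
import numpy as np

# Linear ion-drift memristor model, illustrative parameters only.
R_on, R_off, D, mu = 100.0, 16e3, 1e-8, 1e-14
k = mu * R_on / D**2          # state-velocity coefficient: dw/dt = k * i
V, dt, steps = 1.0, 1e-4, 5000

def memristance(w):
    return R_on * w + R_off * (1 - w)

# Single memristor under constant voltage: i rises as M falls, so M(t) is
# nonlinear in time.
w = 0.1
single = []
for _ in range(steps):
    i = V / memristance(w)
    w = np.clip(w + k * i * dt, 0, 1)
    single.append(memristance(w))

# Anti-serial pair: same current, opposite polarities, so w1 rises while w2
# falls; the pair's total resistance is constant, hence constant current and
# linear-in-time memristance for each device.
w1, w2 = 0.1, 0.9
paired = []
for _ in range(steps):
    i = V / (memristance(w1) + memristance(w2))
    w1 = np.clip(w1 + k * i * dt, 0, 1)
    w2 = np.clip(w2 - k * i * dt, 0, 1)
    paired.append(memristance(w1))
```

Plotting `single` against `paired` shows the accelerating curve of the lone device versus the straight line of the anti-serial configuration.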
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines a large number of complex mathematical programming types, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
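A minimal sketch of a generation-investment LP of the kind described, using SciPy rather than GAMS; the plant types, costs, demand, and quota are invented for illustration:

```python
from scipy.optimize import linprog

# Hypothetical data: annualized cost per MW of three plant types.
cost = [60.0, 45.0, 80.0]          # coal, gas, wind ($k per MW-year)
# Installed capacity x0 + x1 + x2 must cover a 1000 MW peak demand:
#   x0 + x1 + x2 >= 1000   ->  -x0 - x1 - x2 <= -1000
# Wind must supply at least 20% of total capacity:
#   x2 >= 0.2*(x0 + x1 + x2)  ->  0.2*x0 + 0.2*x1 - 0.8*x2 <= 0
A_ub = [[-1, -1, -1], [0.2, 0.2, -0.8]]
b_ub = [-1000.0, 0.0]
# Each site is capped at 600 MW.
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 600)] * 3)
print(res.x.round(1), round(res.fun, 1))  # optimal capacities and total cost
```

With these numbers the solver fills the cheap gas capacity to its cap and splits the remainder between coal and wind at the quota boundary, yielding capacities (200, 600, 200) MW at a total cost of 55,000.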
Positive feedback : exploring current approaches in iterative travel demand model implementation.
DOT National Transportation Integrated Search
2012-01-01
Currently, the models that TxDOT's Transportation Planning and Programming Division (TPP) developed are traditional three-step models (i.e., trip generation, trip distribution, and traffic assignment) that are sequentially applied. A limitation...
Teimoury, Ebrahim; Jabbarzadeh, Armin; Babaei, Mohammadhosein
2017-01-01
Inventory management has frequently been targeted by researchers as one of the most pivotal problems in supply chain management. With the expansion of research studies on inventory management in supply chains, perishable inventory has been introduced and its fundamental differences from non-perishable inventory have been emphasized. This article presents livestock as a type of inventory that has been less studied in the literature. Differences between inventory types affect various levels of strategic, tactical, and operational decision-making. In most articles, different levels of decision-making are discussed independently and sequentially. In this paper, not only is livestock inventory introduced, but a model is also developed to integrate decisions across different levels of decision-making using bi-level programming. Computational results indicate that the proposed bi-level approach is more efficient than the sequential decision-making approach.
Kidwell, Kelley M; Hyde, Luke W
2016-09-01
Heterogeneity between and within people necessitates the need for sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and the success of a subsequent intervention may depend on individual characteristics or response. AIs may be applied in medical settings and used to investigate best prevention, education, and community-based practices. AIs can begin with low-cost or low-burden interventions, followed by intensified or alternative interventions for those who need them most. AIs that guide practice over the course of a disease, program, or school year can be investigated through sequential multiple assignment randomized trials (SMARTs). To promote the use of SMARTs, we provide a hypothetical SMART in a Head Start program to address child behavior problems. We describe the advantages and limitations of SMARTs, particularly as they may be applied to the field of evaluation.
The PlusCal Algorithm Language
NASA Astrophysics Data System (ADS)
Lamport, Leslie
Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the issue is formulated as a linear programming model. Furthermore, the optimum solution of the linear program is determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining optimal investment portfolios, particularly with linear programming models.
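A rough sketch of the approach: a genetic algorithm searching portfolio weights to maximize expected return penalized by mean absolute deviation (the absolute-deviation risk measure mentioned above). The return data, operators, and parameters are invented for illustration and do not reproduce the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)
R = rng.normal(0.001, 0.02, size=(250, 4))   # hypothetical daily returns, 4 stocks
lam = 2.0                                    # risk-tolerance trade-off weight

def fitness(w):
    port = R @ w
    # mean return penalized by mean absolute deviation of the portfolio
    return port.mean() - lam * np.abs(port - port.mean()).mean()

def project(w):
    """Keep weights non-negative and summing to 1 (no short selling)."""
    w = np.clip(w, 0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1 / len(w))

pop = np.array([project(rng.random(4)) for _ in range(40)])
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]                  # selection: top half
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        kids.append(project(0.5 * (a + b) + rng.normal(0, 0.05, 4)))  # crossover + mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(w) for w in pop])]
print(best.round(3))
```

The projection step keeps every candidate inside the feasible region of the LP formulation, so the GA explores only valid portfolios.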
Engine With Regression and Neural Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2001-01-01
At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
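The branch-and-bound pattern — lower bounds from a relaxation, upper bounds from feasible evaluations, pruning boxes that cannot improve the incumbent — can be shown on a one-dimensional bilinear toy problem. A simple interval relaxation of the product stands in here for the paper's two-phase linear relaxation:

```python
import heapq

def minimize_bilinear(f1, f2, lo, hi, tol=1e-6):
    """Branch and bound for min f1(x)*f2(x) with f1, f2 affine, x in [lo, hi]."""
    def lower(a, b):
        # Interval relaxation: the product lies within the hull of endpoint
        # products, giving a valid lower bound over the box [a, b].
        p = [f1(a) * f2(a), f1(a) * f2(b), f1(b) * f2(a), f1(b) * f2(b)]
        return min(p)

    best = min(f1(lo) * f2(lo), f1(hi) * f2(hi))    # incumbent upper bound
    heap = [(lower(lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best - tol:
            continue                                 # box cannot hold the optimum: prune
        m = 0.5 * (a + b)
        best = min(best, f1(m) * f2(m))              # tighten the upper bound
        heapq.heappush(heap, (lower(a, m), a, m))    # branch into halves
        heapq.heappush(heap, (lower(m, b), m, b))
    return best

# min of (x-1)(x+2) on [-3, 2]; true optimum -2.25 at x = -0.5
val = minimize_bilinear(lambda x: x - 1, lambda x: x + 2, -3.0, 2.0)
print(round(val, 4))  # → -2.25
```

As the boxes shrink, the relaxation gap goes to zero, so the lower and upper bounds converge and the search terminates — the same convergence mechanism the paper proves for its linear relaxations.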
Estimation of Cadmium uptake by tobacco plants from laboratory leaching tests.
Marković, Jelena P; Jović, Mihajlo D; Smičiklas, Ivana D; Šljivić-Ivanović, Marija Z; Smiljanić, Slavko N; Onjia, Antonije E; Popović, Aleksandar R
2018-03-21
The objective of the present study was to determine the impact of cadmium (Cd) concentration in the soil on its uptake by tobacco plants, and to compare the ability of diverse extraction procedures for determining Cd bioavailability and predicting soil-to-plant transfer and Cd plant concentrations. The pseudo-total digestion procedure, modified Tessier sequential extraction and six standard single-extraction tests for estimation of metal mobility and bioavailability were used for the leaching of Cd from a native soil, as well as samples artificially contaminated over a wide range of Cd concentrations. The results of various leaching tests were compared between each other, as well as with the amounts of Cd taken up by tobacco plants in pot experiments. In the native soil sample, most of the Cd was found in fractions not readily available under natural conditions, but with increasing pollution level, Cd amounts in readily available forms increased. With increasing concentrations of Cd in the soil, the quantity of pollutant taken up in tobacco also increased, while the transfer factor (TF) decreased. Linear and non-linear empirical models were developed for predicting the uptake of Cd by tobacco plants based on the results of selected leaching tests. The non-linear equations for ISO 14870 (diethylenetriaminepentaacetic acid extraction - DTPA), ISO/TS 21268-2 (CaCl₂ leaching procedure), US EPA 1311 (toxicity characteristic leaching procedure - TCLP) single step extractions, and the sum of the first two fractions of the sequential extraction, exhibited the best correlation with the experimentally determined concentrations of Cd in plants over the entire range of pollutant concentrations. This approach can improve and facilitate the assessment of human exposure to Cd by tobacco smoking, but may also have wider applicability in predicting soil-to-plant transfer.
Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.
ERIC Educational Resources Information Center
Foytik, Jerry
1981-01-01
Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
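The diet plan described above is the classic linear program: minimize food cost subject to nutrient floors. A tiny made-up instance (invented foods, prices, and requirements, not USDA data) shows the structure:

```python
from scipy.optimize import linprog

# Hypothetical mini diet problem: minimize cost subject to nutrient floors.
foods = ["bread", "milk", "beans"]
cost = [0.30, 0.60, 0.50]                 # $ per serving
# Rows: calories and protein (g) per serving of each food.
nutrients = [[80, 120, 110],              # calories
             [3, 8, 7]]                   # protein
floors = [2000, 55]                       # daily requirements
# linprog minimizes with <= constraints, so negate the "at least" floors.
A_ub = [[-v for v in row] for row in nutrients]
b_ub = [-f for f in floors]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 3)
print(res.success, round(res.fun, 2))  # → True 8.5
```

The full USDA formulation differs only in scale: dozens of foods, many nutrient constraints, and palatability bounds on servings, but the same minimize-cost-subject-to-floors structure.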
Sleep to the beat: A nap favours consolidation of timing.
Verweij, Ilse M; Onuki, Yoshiyuki; Van Someren, Eus J W; Van der Werf, Ysbrand D
2016-06-01
Growing evidence suggests that sleep is important for procedural learning, but few studies have investigated the effect of sleep on the temporal aspects of motor skill learning. We assessed the effect of a 90-min daytime nap on learning a motor timing task, using 2 adaptations of a serial interception sequence learning (SISL) task. Forty-two right-handed participants performed the task before and after a 90-min period of sleep or wake. Electroencephalography (EEG) was recorded throughout. The motor task consisted of a sequential spatial pattern and was performed according to 2 different timing conditions, that is, either following a sequential or a random temporal pattern. The increase in accuracy was compared between groups using a mixed linear regression model. Within the sleep group, performance improvement was modeled based on sleep characteristics, including spindle and slow-wave density. The sleep group, but not the wake group, showed improvement in the random temporal condition, and significantly more strongly in the sequential temporal condition. None of the sleep characteristics predicted improvement in either of the timing conditions. In conclusion, a daytime nap improves performance on a timing task. We show that performance on the task with a sequential temporal pattern benefits more from sleep than performance with random timing. More importantly, the temporal sequence did not benefit initial learning, because differences arose only after an offline period and specifically when this period contained sleep. Sleep appears to aid in the extraction of regularities for optimal subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Kusumawati, Rosita; Subekti, Retno
2017-04-01
Fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over a fuzzy number set, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem, generating an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.
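Once the two objectives are reduced to crisp linear coefficients, the problem becomes an ordinary LP over the portfolio simplex. A minimal sketch with made-up return and risk coefficients (not values from the paper), using a simple weighted-sum scalarization rather than the magnitude ranking function, and a brute-force simplex search in place of an LP solver:

```python
import numpy as np

# Illustrative expected returns and risk coefficients for three assets
# (made-up numbers, not from the paper); lam sets the return/risk trade-off.
returns = np.array([0.08, 0.12, 0.05])
risks = np.array([0.10, 0.20, 0.04])
lam = 1.0

best_w, best_val = None, -np.inf
# brute-force search over the portfolio simplex w1 + w2 + w3 = 1, w >= 0
for w1 in np.linspace(0, 1, 101):
    for w2 in np.linspace(0, 1 - w1, 101):
        w = np.array([w1, w2, 1 - w1 - w2])
        val = returns @ w - lam * (risks @ w)
        if val > best_val:
            best_val, best_w = val, w

print(best_w, round(float(best_val), 4))  # a linear objective peaks at a vertex
```

Because the scalarized objective is linear, the optimum lands on a vertex of the simplex; a real solver would exploit this directly instead of searching a grid.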
Brown, Louise A.
2016-01-01
Working memory is vulnerable to age-related decline, but there is debate regarding the age-sensitivity of different forms of spatial-sequential working memory task, depending on their passive or active nature. The functional architecture of spatial working memory was therefore explored in younger (18–40 years) and older (64–85 years) adults, using passive and active recall tasks. Spatial working memory was assessed using a modified version of the Spatial Span subtest of the Wechsler Memory Scale – Third Edition (WMS-III; Wechsler, 1998). Across both age groups, the effects of interference (control, visual, or spatial), and recall type (forward and backward), were investigated. There was a clear effect of age group, with younger adults demonstrating a larger spatial working memory capacity than the older adults overall. There was also a specific effect of interference, with the spatial interference task (spatial tapping) reliably reducing performance relative to both the control and visual interference (dynamic visual noise) conditions in both age groups and both recall types. This suggests that younger and older adults have similar dependence upon active spatial rehearsal, and that both forward and backward recall require this processing capacity. Linear regression analyses were then carried out within each age group, to assess the predictors of performance in each recall format (forward and backward). Specifically the backward recall task was significantly predicted by age, within both the younger and older adult groups. This finding supports previous literature showing lifespan linear declines in spatial-sequential working memory, and in working memory tasks from other domains, but contrasts with previous evidence that backward spatial span is no more sensitive to aging than forward span. 
The study suggests that backward spatial span is indeed more processing-intensive than forward span, even when both tasks include a retention period, and that age predicts backward spatial span performance across the adult lifespan, within both younger and older adulthood. PMID:27757096
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx across the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. By using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. 
Further work is required to validate these models.
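For context, PRx-style indices are conventionally computed as a moving correlation coefficient between slow waves of arterial and intracranial pressure. A minimal numpy sketch of such a moving-correlation index on synthetic signals (illustrative window length and data; this is not the authors' LME/ARIMA estimation pipeline):

```python
import numpy as np

def moving_corr(a, b, win=30):
    """Pearson correlation over consecutive non-overlapping windows."""
    m = len(a) // win
    return np.array([np.corrcoef(a[i*win:(i+1)*win],
                                 b[i*win:(i+1)*win])[0, 1]
                     for i in range(m)])

rng = np.random.default_rng(0)
abp = rng.normal(90, 5, 300)               # synthetic mean arterial pressure
icp = 0.5 * abp + rng.normal(0, 2, 300)    # pressure-passive (impaired) ICP
prx_like = moving_corr(abp, icp)
print(round(float(prx_like.mean()), 2))    # strongly positive index
```

With a pressure-passive ICP the windowed correlations are strongly positive, the pattern such indices flag as impaired reactivity; an intact-autoregulation simulation would yield values near or below zero.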
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J
2015-01-27
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take a sample from the bioreactor and analyze it off-line at the laboratory. The disadvantage of this approach is the long time required to recover information from the process, hindering the development of control systems. New strategies to monitor lipase/esterase activity are therefore necessary. In this context, as a first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure hydrolytic activity from an undiluted sample over a linear range of 0.05-1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, a kinetic reaction time of 5 min and a relative standard deviation of 8.75%. The results are promising for monitoring lipase/esterase activity in real time, enabling the design of optimization and control strategies.
A proposed method to detect kinematic differences between and within individuals.
Frost, David M; Beach, Tyson A C; McGill, Stuart M; Callaghan, Jack P
2015-06-01
The primary objective was to examine the utility of a novel method of detecting "actual" kinematic changes using the within-subject variation. Twenty firefighters were assigned to one of two groups (lifting or firefighting). Participants performed 25 repetitions of two lifting or firefighting tasks, in three sessions. The magnitude and within-subject variation of several discrete kinematic measures were computed. Sequential averages of each variable were used to derive cubic, quadratic and linear regression equations. The efficacy of each equation was examined by contrasting participants' sequential means to their 25-trial mean±1SD and 2SD. The magnitude and within-subject variation of each dependent measure were repeatable for all tasks; however, each participant did not exhibit the same movement patterns as the group. The number of instances across all variables, tasks and testing sessions in which the 25-trial mean±1SD was contained within the boundaries established by the regression equations increased as the aggregate scores included more trials. Each equation achieved success in at least 88% of all instances when three trials were included in the sequential mean (95% with five trials). The within-subject variation may offer a means to examine participant-specific changes without having to collect a large number of trials.
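The sequential-averaging idea can be sketched directly: compute cumulative means over successive trials and test their containment in the 25-trial mean±1SD band. A toy numpy illustration on synthetic trial data (the study itself fits regression equations to these sequential averages):

```python
import numpy as np

rng = np.random.default_rng(42)
trials = rng.normal(100.0, 5.0, 25)        # synthetic kinematic measure, 25 trials

grand_mean = trials.mean()
sd = trials.std(ddof=1)
seq_means = np.cumsum(trials) / np.arange(1, 26)   # sequential (cumulative) means

# does the k-trial sequential mean fall inside the 25-trial mean +/- 1 SD band?
inside = np.abs(seq_means - grand_mean) <= sd
print(bool(inside[2]), bool(inside[4]), bool(inside[-1]))  # 3-trial, 5-trial, full mean
```

The full 25-trial sequential mean coincides with the grand mean by construction, so containment is guaranteed there; the practical question, as in the study, is how few trials suffice.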
Takita, Eiji; Kohda, Katsunori; Tomatsu, Hajime; Hanano, Shigeru; Moriya, Kanami; Hosouchi, Tsutomu; Sakurai, Nozomu; Suzuki, Hideyuki; Shinmyo, Atsuhiko; Shibata, Daisuke
2013-01-01
Ligation, the joining of DNA fragments, is a fundamental procedure in molecular cloning and is indispensable to the production of genetically modified organisms that can be used for basic research, the applied biosciences, or both. Given that many genes cooperate in various pathways, incorporating multiple gene cassettes in tandem in a transgenic DNA construct for the purpose of genetic modification is often necessary when generating organisms that produce multiple foreign gene products. Here, we describe a novel method, designated PRESSO (precise sequential DNA ligation on a solid substrate), for the tandem ligation of multiple DNA fragments. We amplified donor DNA fragments with non-palindromic ends, and ligated the fragment to acceptor DNA fragments on solid beads. After the final donor DNA fragments, which included vector sequences, were joined to the construct that contained the array of fragments, the ligation product (the construct) was thereby released from the beads via digestion with a rare-cut meganuclease; the freed linear construct was circularized via an intra-molecular ligation. PRESSO allowed us to rapidly and efficiently join multiple genes in an optimized order and orientation. This method can overcome many technical challenges in functional genomics during the post-sequencing generation. PMID:23897972
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra-high-performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra-high-performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection at a wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra-high-performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is assessed relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
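The two iterations differ only in whether a sweep consumes old or freshly updated neighbour values, which is exactly what makes Jacobi parallelizable and Gauss-Seidel faster per sweep. A 1D analogue (the paper treats the 3D equation) comparing sweep counts to a fixed tolerance:

```python
import numpy as np

# Solve -u'' = f with zero boundary values by Jacobi and Gauss-Seidel
# sweeps and count iterations to a fixed tolerance. Jacobi updates every
# cell from old values (hence is parallelizable), while Gauss-Seidel
# consumes freshly updated neighbours (hence is inherently sequential).
n = 50
h = 1.0 / (n + 1)
f = np.ones(n)
tol = 1e-8

def jacobi_sweep(u):
    un = np.zeros_like(u)
    un[1:] += u[:-1]          # left neighbours (old values)
    un[:-1] += u[1:]          # right neighbours (old values)
    return 0.5 * (un + h*h*f)

def gauss_seidel_sweep(u):
    u = u.copy()
    for i in range(n):        # left neighbour is already updated
        left = u[i-1] if i > 0 else 0.0
        right = u[i+1] if i < n-1 else 0.0
        u[i] = 0.5 * (left + right + h*h*f[i])
    return u

def solve(sweep):
    u, k = np.zeros(n), 0
    while True:
        un = sweep(u)
        k += 1
        if np.max(np.abs(un - u)) < tol:
            return un, k
        u = un

u_j, k_j = solve(jacobi_sweep)
u_gs, k_gs = solve(gauss_seidel_sweep)
print(k_j, k_gs)              # Gauss-Seidel needs roughly half the sweeps
```

For this model problem the Gauss-Seidel spectral radius is the square of Jacobi's, so it converges in roughly half the sweeps; the trade-off the paper explores is that the Jacobi sweep, being embarrassingly parallel, can win back that factor on parallel hardware.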
NASA Astrophysics Data System (ADS)
Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli
2018-04-01
Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating posterior parameter distributions with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capability and efficiency of the sampler depend strongly on the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the SMC framework. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model, first considering parameter uncertainty alone and then considering parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models within the SMC framework in future studies.
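The reweight-resample-move skeleton such samplers share can be sketched on a bimodal toy target. This is a generic tempered SMC sketch with a plain random-walk Metropolis move, not the PEM-SMC algorithm (which adds genetic and differential-evolution moves):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # bimodal target: equal mixture of N(-3, 1) and N(3, 1), unnormalized
    return np.logaddexp(-0.5*(x + 3)**2, -0.5*(x - 3)**2)

def log_prior(x):
    return -0.5 * (x / 10)**2              # broad N(0, 10) prior, unnormalized

N = 2000
x = rng.normal(0, 10, N)                   # particles drawn from the prior
betas = np.linspace(0, 1, 21)              # tempering schedule

for b0, b1 in zip(betas[:-1], betas[1:]):
    # reweight: incremental weights for the tempered bridge
    logw = (b1 - b0) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, N, p=w)]           # multinomial resampling
    # move: one random-walk Metropolis step targeting the tempered density
    logp = lambda z: b1*log_target(z) + (1 - b1)*log_prior(z)
    prop = x + rng.normal(0, 1, N)
    accept = np.log(rng.uniform(size=N)) < logp(prop) - logp(x)
    x = np.where(accept, prop, x)

print(round(float((x > 0).mean()), 2))     # both modes retain particles
```

Because tempering flattens the target early on, both modes keep particle mass through the resampling steps; a single MCMC chain started in one mode would typically never cross. The PEM-SMC contribution is precisely a richer move step than the single random-walk kernel used here.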
Update on Bayesian Blocks: Segmented Models for Sequential Data
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2017-01-01
The Bayesian Blocks algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized beyond a constant signal rate to include linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that the basic algorithm is O(N) rather than O(N**2) in most cases. Other improvements in the theory and application of segmented representations will be described.
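For reference, the classic form of the algorithm is a dynamic program over change points. A minimal sketch of that O(N**2) dynamic program for binned counts with constant-rate blocks (the generalized block shapes and O(N) behaviour described above are not reproduced here, and the prior constant is illustrative):

```python
import numpy as np

def bayesian_blocks_binned(counts, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of binned counts via the
    classic Bayesian Blocks dynamic program: best[r] is the top fitness
    of cells 0..r-1, built by scanning all possible last change points."""
    n = len(counts)
    best = np.zeros(n + 1)
    last = np.zeros(n + 1, dtype=int)
    csum = np.concatenate(([0.0], np.cumsum(counts)))
    for r in range(1, n + 1):
        # fitness of a constant-rate block over cells l..r-1:
        # Poisson log-likelihood N*log(N/T), T = block length in cells
        ns = csum[r] - csum[:r]
        ts = r - np.arange(r)
        fit = np.where(ns > 0, ns * np.log(np.maximum(ns, 1) / ts), 0.0)
        total = best[:r] + fit - ncp_prior   # prior penalizes each new block
        last[r] = np.argmax(total)
        best[r] = total[last[r]]
    edges, r = [], n                          # backtrack the change points
    while r > 0:
        edges.append(int(last[r]))
        r = last[r]
    return sorted(edges) + [n]

counts = np.array([2]*40 + [10]*40)           # one obvious rate change
edges = bayesian_blocks_binned(counts)
print(edges)                                  # edges at [0, 40, 80]
```

On this noiseless example the dynamic program recovers the single change point exactly; the prior term is what stops it from splitting constant-rate regions, since such a split gains no likelihood but pays the penalty.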
NASA Astrophysics Data System (ADS)
Rosas, Alexandre; Van den Broeck, Christian; Lindenberg, Katja
2018-06-01
The stochastic thermodynamic analysis of a time-periodic single particle pump sequentially exposed to three thermochemical reservoirs is presented. The analysis provides explicit results for flux, thermodynamic force, entropy production, work, and heat. These results apply near equilibrium as well as far from equilibrium. In the linear response regime, a different type of Onsager-Casimir symmetry is uncovered. The Onsager matrix becomes symmetric in the limit of zero dissipation.
Attractors in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Alexandre A. P.
2017-10-01
In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).
Freak-Poli, Rosanne L A; Wolfe, Rory; Walls, Helen; Backholer, Kathryn; Peeters, Anna
2011-10-25
Workplace health programs have demonstrated improvements in a number of risk factors for chronic disease. However, there has been little investigation of participant characteristics that may be associated with change in risk factors during such programs. The aim of this paper is to identify participant characteristics associated with improved waist circumference (WC) following participation in a four-month, pedometer-based, physical activity, workplace health program. 762 adults employed in primarily sedentary occupations and voluntarily enrolled in a four-month workplace program aimed at increasing physical activity were recruited from ten Australian worksites in 2008. Seventy-nine percent returned at the end of the health program. Data included demographic, behavioural, anthropometric and biomedical measurements. WC change (before versus after) was assessed by multivariable linear and logistic regression analyses. Seven groupings of potential associated variables from baseline were sequentially added to build progressively larger regression models. Greater improvement in WC during the program was associated with having completed tertiary education, consuming two or less standard alcoholic beverages in one occasion in the twelve months prior to baseline, undertaking less baseline weekend sitting time and lower baseline total cholesterol. A greater WC at baseline was strongly associated with a greater improvement in WC. A sub-analysis in participants with a 'high-risk' baseline WC revealed that younger age, enrolling for reasons other than appearance, undertaking less weekend sitting time at baseline, eating two or more pieces of fruit per day at baseline, higher baseline physical functioning and lower baseline body mass index were associated with greater odds of moving to 'low risk' WC at the end of the program. 
While employees with 'high-risk' WC at baseline experienced the greatest improvements in WC, the other variables associated with greater WC improvement were generally indicators of better baseline health. These results indicate that employees who started with better health, potentially due to lifestyle or recent behavioural changes, were more likely to respond positively to the program. Future health program initiators should think innovatively to encourage all enrolees along the health spectrum to achieve a successful outcome.
ERIC Educational Resources Information Center
Longbotham, Pamela J.
2012-01-01
The study examined the impact of participation in an optional flexible year program (OFYP) on academic achievement. The ex post facto study employed an explanatory sequential mixed methods design. The non-probability sample consisted of 163 fifth grade students in an OFYP district and 137 5th graders in a 180-day instructional year school…
ERIC Educational Resources Information Center
Eden, S.; Bezer, M.
2011-01-01
The research examined the effect of an intervention program employing 3D immersive virtual reality (IVR), which focused on the perception of sequential time, on the mediation level and behavioural aspects of children with intellectual disability (ID). The intervention is based on the mediated learning experience (MLE) theory, which refers the…
Detecting Potential Synchronization Constraint Deadlocks from Formal System Specifications
1992-03-01
family of languages, consisting of the Larch Shared Language and a series of Larch interface languages, specific to particular programming languages...specify sequential (non-concurrent) programs, and explicitly does not include the ability to specify atomic actions (Guttag, 1985). Larch is therefore...synchronized communication between two such agents is considered as a single action. The transitions in CCS trees are labelled to show how they are
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear increase in runtime as the problem size grows. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. 
This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
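The essence of the dual decomposition above is that capacity constraints couple otherwise independent per-flight subproblems, and Lagrange multipliers (prices) on those constraints restore separability. A toy sketch on an invented three-flight, one-slot-capacity instance (not the paper's LTM formulation), with a subgradient price update and a simple greedy primal recovery:

```python
import numpy as np

# Toy dual decomposition: three flights each pick a departure slot, each
# slot holds one flight, and delay costs are weighted per flight. Slot
# "prices" (Lagrange multipliers) are raised by subgradient steps wherever
# demand exceeds capacity, so the independent subproblems spread out.
T, weights, cap = 6, [3.0, 2.0, 1.0], 1
lam = np.zeros(T)
for _ in range(200):
    # each flight's subproblem is solved independently, given prices lam
    slots = [int(np.argmin(w * np.arange(T) + lam)) for w in weights]
    counts = np.bincount(slots, minlength=T)
    lam = np.maximum(0.0, lam + 0.1 * (counts - cap))   # subgradient step

# recover a feasible schedule at the final prices: serve flights in
# weight order, each taking its cheapest still-free slot
free, assign = set(range(T)), []
for w in sorted(weights, reverse=True):
    s = min(free, key=lambda t: w * t + lam[t])
    assign.append(s)
    free.remove(s)
print(assign, np.round(lam, 2))
```

The prices on the contested early slots rise until each flight prefers a distinct slot, which is the mechanism that lets the full-scale problem be split into solvable, independently (and hence concurrently) executable subproblems.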
a method of gravity and seismic sequential inversion and its GPU implementation
NASA Astrophysics Data System (ADS)
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion proceeds as follows: first, we compute the correlation imaging of the observed gravity anomaly, which yields values between -1 and +1; multiplying these values by a small density increment gives the initial density model. We then compute the forward response of this initial model, calculate the correlation imaging of the misfit between the observed and forward data, multiply that result by a small density increment, and add it to the current model. Repeating this procedure yields the final inverted density model. For the seismic inversion we use a method based on the linearized acoustic wave equation written in the frequency domain; starting from an initial velocity model, a good velocity result can be obtained. The sequential inversion of gravity and seismic data requires a formula to convert between density and velocity; in our method we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenges of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard languages such as C. 
In our inversion processing, we use the GPU to accelerate the gravity and seismic inversion. Taking the gravity inversion as an example, its kernels are the gravity forward simulation and the correlation imaging. After parallelization on the GPU in the 3D case, the original five CPU loops are reduced to three in the inversion module and to two in the forward module. Acknowledgments: we acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
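The correlation-imaging iteration described in the abstract can be sketched in 1D: correlate the data residual with each cell's forward response (a value in [-1, +1]), scale by a small density increment, and accumulate into the model. A toy numpy version with an invented smooth kernel (not the authors' 3D gravity operator or GPU code):

```python
import numpy as np

n = 60
x = np.arange(n)
# toy forward operator: each cell's response is a smooth bump over the data
G = np.exp(-0.1 * (x[:, None] - x[None, :])**2)

true = np.zeros(n); true[25:35] = 1.0       # true density anomaly
d_obs = G @ true
res0 = np.linalg.norm(d_obs)

model, step = np.zeros(n), 0.05
for _ in range(300):
    r = d_obs - G @ model                   # misfit of observed vs forward data
    # correlation of the residual with each cell's response, in [-1, +1]
    corr = (G.T @ r) / (np.linalg.norm(G, axis=0) * np.linalg.norm(r) + 1e-12)
    model += step * corr                    # add a small density increment
res_final = np.linalg.norm(d_obs - G @ model)
print(round(float(res_final / res0), 3))    # residual shrinks markedly
```

Structurally this is a normalized gradient-descent loop on the data misfit; the Cauchy-Schwarz bound is what keeps each cell's correlation value in [-1, +1], matching the description in the abstract.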
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
Mining reflective continuing medical education data for family physician learning needs.
Lewis, Denice Colleen; Pluye, Pierre; Rodriguez, Charo; Grad, Roland
2016-04-06
A mixed-methods research project (sequential explanatory design) studied the potential of mining data from the consumers of continuing medical education (CME) programs for the developers of CME programs. The quantitative data generated by family physicians, through applying the information assessment method to CME content, were presented to key informants from the CME planning community through a qualitative description study. The data were revealed to have many potential applications, including supporting the creation of CME content, CME program planning and personal learning portfolios.
2012-12-01
Glossary: SIMD - single instruction, multiple data stream parallel computing; Scala - a byte-compiled programming language featuring dynamic type...Specific Languages. Report metadata: contract number FA8750-10-1-0191; grant number N/A; program element number 61101E; author: Armando Fox. ...application performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it. Since such efficiency
Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J
2012-11-21
The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(t(i)) and the projected marker positions p(x(p), y(p); t(i)) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(x(p), y(p); t(i)) - P(θ(i)) · (aR(t(i)) + bR(t(i) - τ) + c)‖(2) with the projection operator P(θ(i)). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. 
In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.
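The least-squares fit of the state-augmented model above can be sketched in a few lines. This is a simplified, hypothetical 1-D illustration: the paper fits projected marker positions through the projection operator P(θ_i), whereas this sketch assumes the target position is observed directly and uses a fixed sample lag in place of the delay τ.

```python
import numpy as np

def fit_state_augmented_model(R, T, lag):
    """Least-squares estimate of (a, b, c) in T(t) = a*R(t) + b*R(t - tau) + c,
    with a fixed sample lag standing in for tau (simplifying assumption)."""
    A = np.column_stack([R[lag:], R[:-lag], np.ones(len(R) - lag)])
    coeffs, *_ = np.linalg.lstsq(A, T[lag:], rcond=None)
    return coeffs

# Synthetic respiratory trace sampled at 30 Hz (the RPM rate in the paper)
# and target motion generated from known, invented model parameters.
t = np.linspace(0.0, 30.0, 901)
R = np.sin(2.0 * np.pi * t / 4.0)          # ~4 s breathing period
lag = 6                                     # 0.2 s phase delay at 30 Hz
a_true, b_true, c_true = 5.0, 2.0, 1.0
T = np.zeros_like(R)
T[lag:] = a_true * R[lag:] + b_true * R[:-lag] + c_true

a_est, b_est, c_est = fit_state_augmented_model(R, T, lag)   # recovers (5, 2, 1)
```

Because the synthetic motion lies exactly in the model's span, the least-squares solve recovers the generating parameters; with real kV/RPM data the residual of this fit is what the paper minimizes.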
Raghunandhan, S; Ravikumar, A; Kameswaran, Mohan; Mandke, Kalyani; Ranjith, R
2014-05-01
Indications for cochlear implantation have expanded today to include very young children and those with syndromes/multiple handicaps. Programming the implant based on behavioural responses may be tedious for audiologists in such cases, wherein establishing an effective and appropriate Measurable Auditory Percept (MAP) becomes the key issue in the habilitation program. In 'Difficult to MAP' scenarios, objective measures become paramount to predict optimal current levels to be set in the MAP. We aimed to (a) study the trends in multi-modal electrophysiological tests and behavioural responses sequentially over the first year of implant use; (b) generate normative data from the above; (c) correlate the multi-modal electrophysiological threshold levels with behavioural comfort levels; and (d) create predictive formulae for deriving optimal comfort levels (if unknown), using linear and multiple regression analysis. This prospective study included 10 profoundly hearing impaired children aged between 2 and 7 years with normal inner ear anatomy and no additional handicaps. They received the Advanced Bionics HiRes 90 K Implant with Harmony Speech processor and used HiRes-P with Fidelity 120 strategy. They underwent impedance telemetry, neural response imaging, electrically evoked stapedial response telemetry (ESRT), and electrically evoked auditory brainstem response (EABR) tests at 1, 4, 8, and 12 months of implant use, in conjunction with behavioural mapping. Trends in electrophysiological and behavioural responses were analyzed using paired t-test. By Karl Pearson's correlation method, electrode-wise correlations were derived for neural response imaging (NRI) thresholds versus most comfortable levels (M-levels), and offset-based (apical, mid-array, and basal array) correlations for EABR and ESRT thresholds versus M-levels were calculated over time. These were used to derive predictive formulae by linear and multiple regression analysis.
Such statistically predicted M-levels were compared with the behaviourally recorded M-levels among the cohort, using Cronbach's alpha reliability testing to confirm the efficacy of this approach. NRI, ESRT, and EABR thresholds showed statistically significant positive correlations with behavioural M-levels, which improved with implant use over time. These correlations were used to derive predicted M-levels using regression analysis. On average, predicted M-levels were found to be statistically reliable and they were a fair match to the actual behavioural M-levels. When applied in clinical practice, the predicted values were found to be useful for programming members of the study group. However, individuals showed considerable deviations in behavioural M-levels, above and below the electrophysiologically predicted values, due to various factors. While the current method appears helpful as a reference to predict initial maps in 'Difficult to MAP' subjects, behavioural measures remain mandatory to further optimize the maps for these individuals. The study explores the trends, correlations, and individual variabilities that occur between electrophysiological tests and behavioural responses, recorded over time among a cohort of cochlear implantees. The statistical method shown may be used as a guideline to predict optimal behavioural levels in difficult situations among future implantees, bearing in mind that optimal M-levels for individuals can vary from predicted values. In 'Difficult to MAP' scenarios, following a protocol of sequential behavioural programming in conjunction with electrophysiological correlates will provide the best outcomes.
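The per-electrode prediction step the study describes amounts to a simple linear regression of M-levels on an electrophysiological threshold. The sketch below uses invented placeholder numbers (not the study's data) purely to show the mechanics.

```python
import numpy as np

# Hypothetical NRI thresholds and behavioural M-levels (clinical units);
# the data are exactly linear here so the fitted line is recovered exactly.
nri = np.array([180.0, 200.0, 220.0, 240.0, 260.0])
m_level = 1.2 * nri + 15.0

slope, intercept = np.polyfit(nri, m_level, 1)   # ordinary least squares

def predict_m(threshold):
    """Predicted M-level for a new electrode's NRI threshold."""
    return slope * threshold + intercept
```

In practice the study derives one such formula per correlate (NRI, ESRT, EABR) and per array region, then checks the predictions against behaviourally measured M-levels.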
Logistics planning for phased programs.
NASA Technical Reports Server (NTRS)
Cook, W. H.
1973-01-01
It is pointed out that the proper and early integration of logistics planning into the phased program planning process will drastically reduce logistics costs. Phased project planning is a phased approach to the planning, approval, and conduct of major research and development activity. A progressive build-up of knowledge of all aspects of the program is provided. Elements of logistics are discussed together with aspects of integrated logistics support, logistics program planning, and logistics activities for phased programs. Continuing logistics support can only be assured if there is a comprehensive sequential listing of all logistics activities tied to the program schedule and a real-time inventory of assets.
Casado, Pilar; Martín-Loeches, Manuel; León, Inmaculada; Hernández-Gutiérrez, David; Espuny, Javier; Muñoz, Francisco; Jiménez-Ortega, Laura; Fondevila, Sabela; de Vega, Manuel
2018-03-01
This study aims to extend the embodied cognition approach to syntactic processing. The hypothesis is that the brain resources to plan and perform motor sequences are also involved in syntactic processing. To test this hypothesis, Event-Related brain Potentials (ERPs) were recorded while participants read sentences with embedded relative clauses, judging their acceptability (half of the sentences contained a subject-verb morphosyntactic disagreement). The sentences, previously divided into three segments, were self-administered segment-by-segment in two different sequential manners: linear or non-linear. Linear self-administration consisted of successively pressing three buttons with three consecutive fingers in the right hand, while non-linear self-administration implied the substitution of the finger in the middle position by the right foot. Our aim was to test whether syntactic processing could be affected by the manner in which the sentences were self-administered. Main results revealed that the LAN component of the ERPs vanished whereas the P600 component increased in response to incorrect verbs, for non-linear relative to linear self-administration. The LAN and P600 components reflect early and late syntactic processing, respectively. Our results provide evidence that language syntactic processing and performing non-linguistic motor sequences may share resources in the human brain. Copyright © 2017 Elsevier Ltd. All rights reserved.
Roadmap for Navy Civilian Personnel Research
1984-05-10
Productivity and Equal Employment Opportunity objectives for Navy civilian personnel programs. Each research array is broken down into sequential phases; for Equal Employment Opportunity these include an overview, Phase I (establish baseline measures), and Phase II (analyze issues affecting Equal Employment Opportunity).
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of the density function. Repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
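One cycle of the sequential idea can be sketched with a Gaussian radial basis function interpolant: fit the metamodel to the current samples, then add the point where the metamodel is smallest. This is a hedged stand-in for the paper's extrema/density-function criteria, with an invented 1-D test function in place of an expensive simulation.

```python
import numpy as np

def rbf_fit(x, y, width=1.0):
    # Interpolation weights w solve Phi w = y, Phi_ij = exp(-((x_i-x_j)/width)^2)
    phi = np.exp(-((x[:, None] - x[None, :]) / width) ** 2)
    return np.linalg.solve(phi, y)

def rbf_eval(xq, x, w, width=1.0):
    phi = np.exp(-((xq[:, None] - x[None, :]) / width) ** 2)
    return phi @ w

f = lambda s: (s - 0.3) ** 2              # cheap stand-in for the simulation
x = np.array([0.0, 0.5, 1.0])             # initial design
y = f(x)

w = rbf_fit(x, y)                         # current metamodel
grid = np.linspace(0.0, 1.0, 201)
x_new = grid[np.argmin(rbf_eval(grid, x, w))]     # infill point: metamodel minimum
x = np.append(x, x_new)
y = np.append(y, f(x_new))                # evaluate simulation, refine metamodel
```

Each pass adds one informative sample and refits, which is the essence of sequential sampling for metamodels.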
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
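A toy of the kind of product-mix LP such an analysis rests on can be stated and solved directly. The products, prices, and capacities below are invented for illustration and are not the mill's actual figures.

```python
from scipy.optimize import linprog

# Maximize revenue 40x + 30y for two hypothetical products, subject to
# mill-hours (x + y <= 100) and log supply (2x + y <= 160), x, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-40.0, -30.0],
              A_ub=[[1.0, 1.0], [2.0, 1.0]],
              b_ub=[100.0, 160.0],
              bounds=[(0, None), (0, None)],
              method="highs")
best_revenue = -res.fun        # optimal mix is x = 60, y = 40, revenue 3600
```

The binding constraints at the optimum (both capacities fully used) are exactly the kind of guideline an LP study gives a mill operator: they show which resource to expand.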
ERIC Educational Resources Information Center
Mueller, Kristin A.
2013-01-01
The purpose of this quantitative study was to examine the effects of attending a state-funded 4K program located in a large southern Wisconsin suburban school district. Reading gains were measured as results of the Creative Curriculum Developmental Continuum assessment given in the fall and spring of 4K, and then sequentially, kindergarten-reading…
Evans, Robert J.; Chum, Helena L.
1994-01-01
A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents, selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.
Evans, Robert J.; Chum, Helena L.
1993-01-01
A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.
Timetabling an Academic Department with Linear Programming.
ERIC Educational Resources Information Center
Bezeau, Lawrence M.
This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
A field trial of ethyl hexanediol against Aedes dorsalis in Sonoma County, California.
Rutledge, L C; Hooper, R L; Wirtz, R A; Gupta, R K
1989-09-01
The repellent ethyl hexanediol (2-ethyl-1,3-hexanediol) was tested against the mosquito Aedes dorsalis in a coastal salt marsh in California. The experimental design incorporated a linear regression model, sequential treatments and a proportional end point (95%) for protection time. The protection time of 0.10 mg/cm2 ethyl hexanediol was estimated at 0.8 h. This time is shorter than that obtained previously for deet (N,N-diethyl-3-methylbenzamide) against Ae. dorsalis (4.4 h).
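The proportional end point idea is just a regression solved in reverse: fit protection versus time, then invert the fitted line at the 95% level. The numbers below are invented to illustrate the calculation (they are chosen so the toy answer happens to match the reported 0.8 h, not taken from the trial's data).

```python
import numpy as np

# Hypothetical proportions of bites prevented at times after application.
t = np.array([0.0, 0.4, 0.8, 1.2, 1.6])   # hours post-application
p = 1.0 - 0.0625 * t                       # exactly linear toy data

slope, intercept = np.polyfit(t, p, 1)     # linear regression of p on t
protection_time = (0.95 - intercept) / slope   # time to the 95% end point
```

Inverting the regression rather than reading off a raw observation is what makes the end point "proportional": it uses all the sequential observations, not just the first failure.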
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, Adel K. (Inventor)
2000-01-01
The invention provides a method of triple-beam and triple-sensor in a laser speckle strain/deformation measurement system. The triple-beam/triple-camera configuration combined with sequential timing of laser beam shutters is capable of providing indications of surface strain and structure deformations. The strain and deformation quantities, the four variables of surface strain, in-plane displacement, out-of-plane displacement and tilt, are determined in closed form solutions.
Architecture for one-shot compressive imaging using computer-generated holograms.
Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D
2016-09-10
We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
Applications of Goal Programming to Education.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph A.; And Others
This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines provided for common scalar, vector, matrix, and quaternion operations. Computer program extends Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for Space Station.
ERIC Educational Resources Information Center
O'Donnell, John F.
1968-01-01
Traditional English curriculums are giving way to new English programs built on the foundations of research and scholarship. The "new" English, being developed by the Project English Centers throughout the country, attempts to utilize the characteristic structure of the subject to plan sequential and spiral curriculums replacing outdated…
Application and Design Characteristics of Generalized Training Devices.
ERIC Educational Resources Information Center
Parker, Edward L.
This program identified applications and developed design characteristics for generalized training devices. The first of three sequential phases reviewed in detail new developments in Naval equipment technology that influence the design of maintenance training devices: solid-state circuitry, modularization, digital technology, standardization,…
Computer-Based Career Interventions.
ERIC Educational Resources Information Center
Mau, Wei-Cheng
The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
Alternative Certification Programs & Pre-Service Teacher Preparedness
ERIC Educational Resources Information Center
Koehler, Adrie; Feldhaus, Charles Robert; Fernandez, Eugenia; Hundley, Stephen
2013-01-01
This explanatory sequential mixed methods research study investigated motives and purpose exhibited by professionals transitioning from careers in science, technology, engineering and math (STEM) to secondary education. The study also analyzed personal perceptions of teaching preparedness, and explored barriers to successful teaching. STEM career…
Multithreaded transactions in scientific computing. The Growth06_v2 program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-07-01
Writing a concurrent program can be more difficult than writing a sequential one. The programmer needs to think about synchronization, race conditions, and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTHGr and GROWTH06 programs.
New version program summary
Program title: GROWTH06_v2
Catalogue identifier: ADVL_v2_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 65 255
No. of bytes in distributed program, including test data, etc.: 865 985
Distribution format: tar.gz
Programming language: Object Pascal
Computer: Pentium-based PC
Operating system: Windows 9x, XP, NT, Vista
RAM: more than 1 MB
Classification: 4.3, 7.2, 6.2, 8, 14
Catalogue identifier of previous version: ADVL_v2_0
Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678
Does the new version supersede the previous version?: Yes
Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on kinematical diffraction theory.
Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used to solve the initial value problem for the non-linear differential equations [2].
Reasons for new version: Following users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make handling of the program easier and more efficient than before [3].
Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. The graphical user interface (GUI) of the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: the figures mentioned above are contained in the program distribution file.
Unusual features: The program is distributed in the form of the source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi).
Additional comments: Two figures are included in the program distribution file, captioned "Static classes model for Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases."
Running time: The typical running time is machine and user-parameter dependent.
References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
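The solution method named in the summary, Runge-Kutta integration with adaptive step-size control, can be illustrated independently of the RHEED model. This hedged sketch applies an adaptive RK45 integrator to the logistic equation as a stand-in for the paper's growth equations.

```python
from scipy.integrate import solve_ivp

# dy/dt = y(1 - y), y(0) = 0.1: a simple non-linear ODE with the known
# analytic solution y(t) = 1 / (1 + 9 exp(-t)). The adaptive RK45 stepper
# chooses its own step sizes to meet the requested tolerances.
sol = solve_ivp(lambda t, y: y * (1.0 - y),
                t_span=(0.0, 5.0), y0=[0.1],
                method="RK45", rtol=1e-8, atol=1e-10)
y_final = sol.y[0, -1]
```

Adaptive control is what makes such integrators practical for stiff-ish growth models: steps shrink automatically where the solution changes quickly and stretch where it is smooth.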
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Evolving binary classifiers through parallel computation of multiple fitness cases.
Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni
2005-06-01
This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
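The "intrinsic parallelism of bitwise operators" the abstract mentions can be made concrete: each bit position of a machine word holds one fitness case, so one bitwise expression evaluates a candidate classifier on all cases at once. The formula and truth table below are arbitrary examples, not the paper's evolved classifiers.

```python
# Each integer packs one boolean input across all fitness cases (LSB = case 0).
def fitness(x1, x2, x3, target, n_cases):
    mask = (1 << n_cases) - 1
    out = (x1 & x2) | (~x3 & mask)            # evaluate classifier on all cases
    errors = (out ^ target) & mask            # bit set where prediction is wrong
    return n_cases - bin(errors).count("1")   # number of correct cases

# Four fitness cases packed into 4-bit words:
x1, x2, x3 = 0b1010, 0b1100, 0b0110
target = 0b1001
print(fitness(x1, x2, x3, target, 4))         # → 4 (all cases correct)
```

On a 64-bit word this evaluates 64 fitness cases per bitwise expression, which is the runtime speedup the genetic programming variant exploits on sequential hardware.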
Domingues, Carla Magda Allan S.; de Fátima Pereira, Sirlene; Marreiros, Ana Carolina Cunha; Menezes, Nair; Flannery, Brendan
2015-01-01
In August 2012, the Brazilian Ministry of Health introduced inactivated polio vaccine (IPV) as part of a sequential polio vaccination schedule for all infants beginning their primary vaccination series. The revised childhood immunization schedule included 2 doses of IPV at 2 and 4 months of age followed by 2 doses of oral polio vaccine (OPV) at 6 and 15 months of age. One annual national polio immunization day was maintained to provide OPV to all children aged 6 to 59 months. The decision to introduce IPV was based on preventing rare cases of vaccine-associated paralytic polio, financially sustaining IPV introduction, ensuring equitable access to IPV, and preparing for future OPV cessation following global eradication. Introducing IPV during a national multivaccination campaign led to rapid uptake, despite challenges with local vaccine supply due to high wastage rates. Continuous monitoring is required to achieve high coverage with the sequential polio vaccine schedule. PMID:25316829
Novel 2D Triple-Resonance NMR Experiments for Sequential Resonance Assignments of Proteins
NASA Astrophysics Data System (ADS)
Ding, Keyang; Gronenborn, Angela M.
2002-06-01
We present 2D versions of the popular triple-resonance HN(CO)CACB, HN(COCA)CACB, HN(CO)CAHA, and HN(COCA)CAHA experiments, commonly used for sequential resonance assignments of proteins. These experiments provide information about correlations between amide proton and nitrogen chemical shifts and the α- and β-carbon and α-proton chemical shifts within and between amino acid residues. Using these 2D spectra, sequential resonance assignments of HN, N, Cα, Cβ, and Hα nuclei are easily achieved. The resolution of these spectra is identical to the well-resolved 2D ¹⁵N-¹H HSQC and H(NCO)CA spectra, with slightly reduced sensitivity compared to their 3D and 4D versions. These types of spectra are ideally suited for exploitation in automated assignment procedures and thereby constitute a fast and efficient means for NMR structural determination of small and medium-sized proteins in solution in structural genomics programs.
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
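The linear feasibility problem behind ART3/ART3+ can be illustrated with a basic cyclic projection scheme: sweep the constraints in order and project onto each violated half-space. This is a hedged, simplified relative of ART3, without the reflection and constraint-skipping logic that gives ART3+ its speed.

```python
import numpy as np

def cyclic_projection(A, b, x0, sweeps=100, tol=1e-10):
    """Find x with A @ x <= b by sequentially projecting onto each
    violated half-space a_i @ x <= b_i, cycling until all are satisfied."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        done = True
        for a_i, b_i in zip(A, b):
            viol = a_i @ x - b_i
            if viol > tol:                       # orthogonal projection step
                x = x - viol * a_i / (a_i @ a_i)
                done = False
        if done:
            return x                             # feasible point found
    return x

# Toy constraints (not an IMRT model): x <= 1, y <= 1, x + y >= 0.5.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -0.5])
x = cyclic_projection(A, b, [5.0, 5.0])          # converges to (1, 1)
```

ART3+ improves on this pattern by skipping constraints that are likely already satisfied, which is where its finite-step speedup over plain sequential sweeps comes from.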
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
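The key idea, a point written as an affine combination of its neighbours with weights from least squares, fits in a few lines. This is a hedged sketch with made-up coordinates; the invariance check at the end shows why the same weights survive any affine transform of all the points.

```python
import numpy as np

def affine_weights(point, neighbors):
    """Weights w with sum(w) = 1 such that sum_j w_j * neighbor_j ~= point,
    solved by least squares (the extra row of ones enforces affinity)."""
    A = np.vstack([np.asarray(neighbors, dtype=float).T,
                   np.ones(len(neighbors))])
    rhs = np.append(np.asarray(point, dtype=float), 1.0)
    w, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return w

p = np.array([0.25, 0.25])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = affine_weights(p, nbrs)          # here: exact barycentric weights

# Affine invariance: transform every point with an arbitrary affine map;
# the same weights still reconstruct the transformed point exactly.
M = np.array([[2.0, 1.0], [0.5, 3.0]])
t = np.array([4.0, -1.0])
p2, nbrs2 = p @ M.T + t, nbrs @ M.T + t
```

The reconstruction error ‖p − Σ w_j n_j‖ under candidate matches is exactly the quantity the paper linearizes and penalizes in its LP objective.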
XANES Spectroscopic Analysis of Phosphorus Speciation in Alum-Amended Poultry Litter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiter,J.; Staats-Borda, K.; Ginder-Vogel, M.
2008-01-01
Aluminum sulfate (alum; Al₂(SO₄)₃·14H₂O) is used as a chemical treatment of poultry litter to reduce the solubility and release of phosphate, thereby minimizing the impacts on adjacent aquatic ecosystems when poultry litter is land applied as a crop fertilizer. The objective of this study was to determine, through the use of X-ray absorption near edge structure (XANES) spectroscopy and sequential extraction, how alum amendments alter P distribution and solid-state speciation within the poultry litter system. Our results indicate that traditional sequential fractionation procedures may not account for variability in P speciation in heterogeneous animal manures. Analysis shows that NaOH-extracted P in alum-amended litters is predominantly organic (~80%), whereas in the control samples, >60% of NaOH-extracted P was inorganic P. Linear least squares fitting (LLSF) analysis of spectra collected from sequentially extracted litters showed that the P is present in inorganic (P sorbed on Al oxides, calcium phosphates) and organic forms (phytic acid, polyphosphates, and monoesters) in alum- and non-alum-amended poultry litter. When determining land application rates of poultry litter, all of these compounds must be considered, especially organic P. Results of the sequential extractions in conjunction with LLSF suggest that no P species is completely removed by a single extractant. Rather, there is a continuum of removal as extractant strength increases. Overall, alum-amended litters exhibited higher proportions of Al-bound P species and phytic acid, whereas untreated samples contained Ca-P minerals and organic P compounds. This study provides in situ information about P speciation in the poultry litter solid and about P availability in alum- and non-alum-treated poultry litter that will dictate P losses to ground and surface water systems.
dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C
2010-03-01
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution toward the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h⁻¹. Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s⁻¹. For a concentration range between 0.010 and 0.25 mg L⁻¹, the current (i_p, µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L⁻¹): i_p = (-20.5 ± 0.3)C(paraquat) - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L⁻¹, respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
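The linear calibration above lends itself to a short worked example. The sketch below is illustrative only: the slope and intercept are taken from the abstract, while the blank noise level in the detection-limit helper is a hypothetical value, not a figure from the study.

```python
# Hypothetical worked example of the paraquat calibration reported above.
SLOPE = -20.5      # µA per mg/L, from the fitted calibration
INTERCEPT = -0.02  # µA

def concentration_from_current(i_p_uA):
    """Invert the linear calibration to recover paraquat concentration (mg/L)."""
    return (i_p_uA - INTERCEPT) / SLOPE

def detection_limit(sigma_blank_uA):
    """Common 3-sigma estimate of the limit of detection (mg/L)."""
    return 3.0 * sigma_blank_uA / abs(SLOPE)
```

With this calibration, a peak current of -2.07 µA corresponds to about 0.1 mg/L, and a blank noise of ~0.014 µA reproduces the reported 2 µg/L detection limit.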
The PMHT: solutions for some of its problems
NASA Astrophysics Data System (ADS)
Wieneke, Monika; Koch, Wolfgang
2007-09-01
Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for further development and extension of this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-loss statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism and Local Maxima [1, 2]. In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte-Carlo simulations. A sequential Likelihood-Ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking [3]. As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is a by-product of the PMHT iteration process. Therefore the resulting update formula for the sequential LR test enables the development of Track-Before-Detect algorithms for PMHT. The approach is illustrated by a simple example.
Fu, Yongshuo H; Campioli, Matteo; Deckmyn, Gaby; Janssens, Ivan A
2012-01-01
Budburst phenology is a key driver of ecosystem structure and functioning, and it is sensitive to global change. Both cold winter temperatures (chilling) and spring warming (forcing) are important for budburst. Future climate warming is expected to have a contrasting effect on chilling and forcing, and subsequently to have a non-linear effect on budburst timing. To clarify the different effects of warming during chilling and forcing phases of budburst phenology in deciduous trees, (i) we conducted a temperature manipulation experiment, with separate winter and spring warming treatments on well irrigated and fertilized saplings of beech, birch and oak, and (ii) we analyzed the observations with five temperature-based budburst models (Thermal Time model, Parallel model, Sequential model, Alternating model, and Unified model). The results show that both winter warming and spring warming significantly advanced budburst date, with the combination of winter plus spring warming accelerating budburst most. As expected, all three species were more sensitive to spring warming than to winter warming. Despite their different chilling requirements, warming sensitivity did not differ significantly among the studied species. Model evaluation showed that both one- and two-phase models (without and with chilling, respectively) are able to accurately predict budburst. For beech, the Sequential model reproduced budburst dates best. For oak and birch, both the Sequential model and the Thermal Time model yielded a good fit to the data, but the latter was slightly better in the case of high parameter uncertainty. However, for late-flushing species, the Sequential model is likely to be the most appropriate for predicting budburst dates in a future warmer climate.
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hinders direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
User's Guide for a Modular Flutter Analysis Software System (Fast Version 1.0)
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Bennett, R. M.
1978-01-01
The use and operation of a group of computer programs to perform a flutter analysis of a single planar wing are described. This system of programs is called FAST for Flutter Analysis System, and consists of five programs. Each program performs certain portions of a flutter analysis and can be run sequentially as a job step or individually. FAST uses natural vibration modes as input data and performs a conventional V-g type of solution. The unsteady aerodynamics programs in FAST are based on the subsonic kernel function lifting-surface theory although other aerodynamic programs can be used. Application of the programs is illustrated by a sample case of a complete flutter calculation that exercises each program.
A recursive Bayesian updating model of haptic stiffness perception.
Wu, Bing; Klatzky, Roberta L
2018-06-01
Stiffness of many materials follows Hooke's Law, but the mechanism underlying the haptic perception of stiffness is not as simple as it seems in the physical definition. The present experiments support a model by which stiffness perception is adaptively updated during dynamic interaction. Participants actively explored virtual springs and estimated their stiffness relative to a reference. The stimuli were simulations of linear springs or nonlinear springs created by modulating a linear counterpart with low-amplitude, half-cycle (Experiment 1) or full-cycle (Experiment 2) sinusoidal force. Experiment 1 showed that subjective stiffness increased (decreased) as a linear spring was positively (negatively) modulated by a half-sinewave force. In Experiment 2, an opposite pattern was observed for full-sinewave modulations. Modeling showed that the results were best described by an adaptive process that sequentially and recursively updated an estimate of stiffness using the force and displacement information sampled over trajectory and time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
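The adaptive updating mechanism described above can be caricatured as a conjugate-Gaussian recursion. The sketch below is a minimal illustration under assumed prior and noise values, not the authors' fitted model: each force/displacement sample yields a noisy stiffness measurement k_i = F_i / x_i that refines the running estimate.

```python
# Illustrative recursive (Kalman-style) update of a scalar stiffness belief.
# Prior and measurement-noise values here are assumptions for the sketch.

def recursive_update(prior_mean, prior_var, measurement, meas_var):
    """One conjugate-Gaussian update of the stiffness belief."""
    gain = prior_var / (prior_var + meas_var)
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

def estimate_stiffness(forces, displacements, prior=(100.0, 1e4), meas_var=25.0):
    """Sequentially update the stiffness estimate from (F, x) samples."""
    mean, var = prior
    for f, x in zip(forces, displacements):
        mean, var = recursive_update(mean, var, f / x, meas_var)
    return mean, var
```

The posterior mean converges toward the true stiffness while the posterior variance shrinks with each sample, mirroring the sequential, recursive estimation the modeling results support.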
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetics of Li-Zn ferrite synthesis were studied by thermogravimetry (TG) through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step reaction scheme consisting of sequential reaction steps. It was established that the best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were found and discussed.
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Clement, T. P.
2008-02-01
Multi-species reactive transport equations coupled through sorption and sequential first-order reactions are commonly used to model sites contaminated with radioactive wastes, chlorinated solvents and nitrogenous species. Although researchers have been attempting to solve various forms of these reactive transport equations for over 50 years, a general closed-form analytical solution to this problem is not available in the published literature. In Part I of this two-part article, we derive a closed-form analytical solution to this problem for spatially-varying initial conditions. The proposed solution procedure employs a combination of Laplace and linear transform methods to uncouple and solve the system of partial differential equations. Two distinct solutions are derived for Dirichlet and Cauchy boundary conditions each with Bateman-type source terms. We organize and present the final solutions in a common format that represents the solutions to both boundary conditions. In addition, we provide the mathematical concepts for deriving the solution within a generic framework that can be used for solving similar transport problems.
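The sequential first-order reaction part of such systems is governed by the Bateman equations. The sketch below gives the classical closed form for a well-mixed batch system with distinct rate constants; it illustrates only the kinetic building block, not the coupled transport solution derived in the paper.

```python
import math

def bateman(c0, ks, t):
    """Concentration of the last species of a sequential first-order chain
    A1 -> A2 -> ... -> An at time t, starting from c0 of A1 only.
    ks: distinct rate constants k1..kn (kn may be 0 for a stable end product)."""
    n = len(ks)
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (ks[j] - ks[i])  # requires distinct rate constants
        total += math.exp(-ks[i] * t) / denom
    prefactor = c0
    for k in ks[:-1]:
        prefactor *= k  # product of production rates along the chain
    return prefactor * total
```

For a chain ending in a stable product (last rate constant 0), the final species accumulates the full initial mass at long times, a useful sanity check on the formula.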
NASA Astrophysics Data System (ADS)
Vimmrová, Alena; Kočí, Václav; Krejsová, Jitka; Černý, Robert
2016-06-01
A method for lightweight-gypsum material design using waste stone dust as the foaming agent is described. The main objective is to reach several physical properties which are inversely related in a certain way. Therefore, a linear optimization method is applied to handle this task systematically. The optimization process is based on sequential measurement of physical properties. The results are subsequently point-awarded according to a complex point criterion and a new composition is proposed. After 17 trials the final mixture is obtained, having a bulk density of (586 ± 19) kg/m³ and compressive strength of (1.10 ± 0.07) MPa. According to a detailed comparative analysis with reference gypsum, the newly developed material can be used as an excellent thermally insulating interior plaster with a thermal conductivity of (0.082 ± 0.005) W/(m·K). In addition, its practical application can bring substantial economic and environmental benefits as the material contains 25% waste stone dust.
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.
2017-12-01
In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
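A minimal sketch of the OSELM recursion (a fixed random hidden layer with output weights updated by recursive least squares, following Liang et al.'s standard formulation) might look like the following; the network sizes and toy target used here are illustrative assumptions, not the streamflow model of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELM:
    """Sketch of an online sequential extreme learning machine:
    fixed random hidden layer, RLS update of the output weights."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random input weights
        self.b = rng.standard_normal(n_hidden)          # fixed random biases
        self.beta = None                                # learned output weights
        self.P = None                                   # RLS inverse-covariance matrix

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_batch(self, X, y):
        """Initial batch solve by (lightly regularized) least squares."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        """Online update for a new chunk of data (standard OSELM recursion)."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

The VC-OSELM extension described above would additionally add or remove hidden nodes as learning proceeds, which this fixed-size sketch does not attempt.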
A renewed perspective on agroforestry concepts and classification.
Torquebiau, E F
2000-11-01
Agroforestry, the association of trees with farming practices, is progressively becoming a recognized land-use discipline. However, it is still perceived by some scientists, technicians and farmers as a sort of environmental fashion which does not deserve credit. The peculiar history of agroforestry and the complex relationships between agriculture and forestry explain some misunderstandings about the concepts and classification of agroforestry and reveal that, contrary to common perception, agroforestry is closer to agriculture than to forestry. Based on field experience from several countries, a structural classification of agroforestry into six simple categories is proposed: crops under tree cover, agroforests, agroforestry in a linear arrangement, animal agroforestry, sequential agroforestry and minor agroforestry techniques. It is argued that this pragmatic classification encompasses all major agroforestry associations and allows simultaneous agroforestry to be clearly differentiated from sequential agroforestry, two categories showing contrasting ecological tree-crop interactions. It can also improve the image of agroforestry and lead to a simplification of its definition.
Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations
NASA Astrophysics Data System (ADS)
Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.
2018-03-01
We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.
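The core limiting step can be caricatured in scalar form: the antidiffusive correction to a low-order value is scaled back just enough that the corrected value respects local bounds, and in the sequential strategy each conserved quantity is limited only after the density. The sketch below is an illustrative scalar helper, not the authors' element-based algorithm.

```python
# Illustrative scalar limiter: scale a high-order correction so the
# corrected value stays within local bounds (sequential limiting applies
# such a step to density first, then to momentum and total energy).

def limit_correction(u_low, du_high, u_min, u_max):
    """Largest alpha in [0, 1] such that u_low + alpha*du_high
    lies within [u_min, u_max]. Assumes u_min <= u_low <= u_max."""
    if du_high > 0:
        alpha = (u_max - u_low) / du_high
    elif du_high < 0:
        alpha = (u_min - u_low) / du_high
    else:
        return 1.0
    return max(0.0, min(1.0, alpha))
```

In the scheme described above, additional inequality constraints (e.g. on specific kinetic energy) and a smoothness indicator further adjust these correction factors.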
Thaithet, Sujitra; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai
2017-01-01
A low-pressure separation procedure for α-tocopherol and γ-oryzanol was developed based on a sequential injection chromatography (SIC) system coupled with an ultra-short (5 mm) C-18 monolithic column, as a lower cost and more compact alternative to the HPLC system. A green sample preparation, dilution with a small amount of hexane followed by liquid-liquid extraction with 80% ethanol, was proposed. Very good separation resolution (Rs = 3.26), a satisfactory separation time (10 min) and a total run time including column equilibration (16 min) were achieved. The linear working range was found to be 0.4-40 μg with R² greater than 0.99. The detection limits of both analytes were 0.28 μg with repeatability within 5% RSD (n = 7). Quantitative analyses of the two analytes in vegetable oil and nutrition supplement samples, using the proposed SIC method, agree well with the results from HPLC.
Teacher's Guide: Social Studies, 5.
ERIC Educational Resources Information Center
Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.
Part of a sequential K-12 program, this teacher's guide provides objectives and activities for students in grade 5. Five major sections correspond to learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills include listening, speaking, viewing, reading, writing, map, and statistical abilities. Students…
ARITHMETIC PROGRAM FOURTH YEAR.
ERIC Educational Resources Information Center
GARBER, CLAIRE N.
THE 4TH YEAR SHOULD CONTINUE THE SEQUENTIAL PRESENTATION OF MATHEMATICAL UNDERSTANDINGS AND RELATIONSHIPS. NEW LEARNINGS SHOULD BE PRESENTED CONCRETELY IN SOCIAL SETTINGS WITHIN THE CHILDREN'S FRAMEWORK OF UNDERSTANDING. GRAPHIC MATERIALS MAY BE USED TO BRIDGE THE UNDERSTANDINGS FROM THE CONCRETE TO THE ABSTRACT LEVEL. THE NUMBER SYSTEM UNIT SHOULD…
Certification of Physical Education Teachers.
ERIC Educational Resources Information Center
Bentz, Susan K.
The author discusses various trends in the preparation of physical education teachers, including emphasis on Title IX requirements and handicapped child needs. Future directions in teacher certification are surveyed, and it is urged that certification be based upon sequential training programs rather than course accumulation-credit hour…
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweet, D.L.; Golomb, H.M.; Ultmann, J.E.
A program of combination sequential chemotherapy using cyclophosphamide, vincristine, methotrexate with leucovorin rescue, and cytarabine (COMLA) was administered to 42 previously untreated patients with advanced diffuse histiocytic lymphoma. Twenty-three patients achieved a complete remission as determined by strict clinical restaging criteria. The observed median duration of survival for the complete responders is longer than 33 months. Eight patients achieved a partial response, with a median survival longer than 21 months. Eleven patients showed no response, with a median survival of 5 months. Toxicity was acceptable. None of the responders have shown central nervous system relapse. There was no difference in response rates between patients with stage III and stage IV lymphoma or between asymptomatic and symptomatic patients. The COMLA program produces a high rate of complete and durable remissions and should be considered as an initial form of management of patients with advanced diffuse histiocytic lymphoma.
Harada, Ryuhei; Mashiko, Takako; Tachikawa, Masanori; Hiraoka, Shuichi; Shigeta, Yasuteru
2018-04-04
Self-organization processes of a gear-shaped amphiphile molecule (1) to form a hexameric structure (nanocube, 1₆) were inferred from sequential dissociation processes by using molecular dynamics (MD) simulations. Our MD study unveiled that a programmed dynamic ordering exists in the dissociation processes of 1₆. According to the dissociation processes, it is proposed that triple π-stacking among three 3-pyridyl groups and other weak molecular interactions such as CH-π and van der Waals interactions, some of which arise from the solvophobic effect, were sequentially formed in stable and transient oligomeric states in the self-organization processes, i.e., 1₂, 1₃, 1₄, and 1₅. By subsequent analyses of structural stabilities, it was found that 1₃ and 1₄ are stable intermediate oligomers, whereas 1₂ and 1₅ are transient ones. Thus, the formation of 1₃ from three monomers and of 1₆ from 1₄ and two monomers via the corresponding transients is time consuming in the self-assembly process.
Approximate dynamic programming approaches for appointment scheduling with patient preferences.
Li, Xin; Wang, Jin; Fung, Richard Y K
2018-04-01
During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
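The dynamic-programming backbone of such a model can be illustrated with generic value iteration on a small finite MDP. The sketch below is a toy stand-in for exposition, not the authors' appointment model or their approximate algorithms (which are designed precisely to avoid enumerating the full state space).

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-9):
    """Generic finite-MDP value iteration.
    P[a]: |S| x |S| transition matrix for action a; R[a]: expected-reward vector."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[a, s] = R[a][s] + gamma * sum_s' P[a][s, s'] V[s']
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values and greedy policy
        V = V_new
```

Approximate dynamic programming replaces the exact backup above with sampled or parameterized approximations when the state space (here, all booking configurations and preference profiles) is too large to enumerate.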
Kidwell, Kelley M.; Hyde, Luke W.
2016-01-01
Heterogeneity between and within people necessitates sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on individual characteristics or response. AIs may be applied in medical settings and to investigate best prevention, education, and community-based practices. AIs can begin with low-cost or low-burden interventions, followed by intensified or alternative interventions for those who need them most. AIs that guide practice over the course of a disease, program, or school year can be investigated through sequential multiple assignment randomized trials (SMARTs). To promote the use of SMARTs, we provide a hypothetical SMART in a Head Start program to address child behavior problems. We describe the advantages and limitations of SMARTs, particularly as they may be applied to the field of evaluation. PMID:28239254
Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J
2016-10-01
Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on trauma-related symptom severity and criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.
Sequential avulsions of the tibial tubercle in an adolescent basketball player.
Huang, Ying Chieh; Chao, Ying-Hao; Lien, Fang-Chieh
2010-05-01
Tibial tubercle avulsion is an uncommon fracture in physically active adolescents. Sequential avulsion of tibial tubercles is extremely rare. We report a healthy, active 15-year-old boy who suffered a left tibial tubercle avulsion fracture during a basketball game. He received open reduction and internal fixation with two smooth Kirschner wires and a cannulated screw, with every effort made to minimize injury to the growth plate. A long-leg splint was used for protection, followed by programmed rehabilitation. He recovered uneventfully and soon returned to his previous level of activity. Another avulsion fracture occurred at the right tibial tubercle 3.5 months later while he was playing basketball. Encouraged by the previous successful treatment, we performed open reduction and fixation with two small-caliber screws. Again he recovered uneventfully and soon returned to his previous level of activity. No genu recurvatum or other deformity had occurred in our case at the end of the 2-year follow-up. No evidence of Osgood-Schlatter disease or osteogenesis imperfecta was found. Sequential avulsion fractures of tibial tubercles are rare. Good functional recovery can often be obtained, as in our case, with proper treatment. For a physically active adolescent, the risk of sequential avulsion of the other leg should not be overstated as a reason to postpone the return to an active, functional life.
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on biofuel supply chain system design and develops an advanced modeling framework and corresponding solution methods for tackling challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, which also involve multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed an efficient ND-Max method that is faster than CPLEX and the Nested Decomposition method. Through result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits and reduce risks due to system uncertainties for biofuel supply chain systems.
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
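The risk-allocation idea rests on Boole's inequality: if each stage i is constrained to fail with probability at most δ_i and Σδ_i ≤ Δ, then the joint failure probability is bounded by Δ. A minimal sketch (uniform allocation is simply the most basic choice; the cited work optimizes the allocation):

```python
# Sketch of risk allocation via the union (Boole) bound: enforcing per-stage
# risk bounds that sum to Delta guarantees the joint chance constraint.

def allocate_risk_uniform(total_risk, n_stages):
    """Simplest allocation: split the mission-level risk bound evenly."""
    return [total_risk / n_stages] * n_stages

def joint_failure_bound(stage_failure_probs):
    """Union-bound estimate of the joint failure probability."""
    return min(1.0, sum(stage_failure_probs))
```

Non-uniform allocations can spend more risk on stages where it buys the most expected performance, which is what the dualized DP formulation optimizes.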
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: a fermentor, a cell-retention system (tangential microfiltration), and a vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally; in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. Although both strategies produced very similar solutions, the deterministic-model strategy suffered from lack of convergence and high computational time, whereas the statistical-model strategy proved robust and fast. The latter is therefore more suitable for the flash fermentation process and is recommended for real-time applications coupling optimization and control.
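As an illustration of the solution technique (not the fermentation model itself), a toy nonlinear program can be solved with a sequential-quadratic-programming method such as SciPy's SLSQP; the objective and equality constraint below are hypothetical stand-ins for productivity and conversion.

```python
from scipy.optimize import minimize

# Toy NLP solved by SQP (SLSQP): maximize a stand-in "productivity" x0*x1
# subject to an equality "conversion" constraint x0 + x1 = 1 and bounds.
result = minimize(
    lambda x: -x[0] * x[1],            # maximize by minimizing the negative
    x0=[0.2, 0.8],
    method="SLSQP",
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    constraints=[{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}],
)
```

The optimum of this toy problem is x = (0.5, 0.5); in the study, the same class of solver is applied to the far larger deterministic and statistical process models.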
Modern Gemini-Approach to Technology Development for Human Space Exploration
NASA Technical Reports Server (NTRS)
White, Harold
2010-01-01
In NASA's plan to put men on the moon, there were three sequential programs: Mercury, Gemini, and Apollo. The Gemini program was used to develop and integrate the technologies that would be necessary for the Apollo program to successfully put men on the moon. We would like to present an analogous modern approach that leverages legacy ISS hardware designs and integrates newly developed technologies into a flexible architecture. This new architecture is scalable, sustainable, and can be used to establish human exploration infrastructure beyond low Earth orbit and into deep space.
ERIC Educational Resources Information Center
Diggs, Gwendolyn Smith
2013-01-01
In Texas, there is an increase in the enrollment of men of various ethnicities in nursing schools, especially Associate Degree Nursing (ADN) programs. As these men strive to complete the nursing education, they face many concerns that center on barriers that are encountered in what is still a predominately Caucasian and female environment. In…
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
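The standard linear programming bound mentioned in the last sentence can be sketched for binary codes: maximize the code size subject to nonnegativity of the Krawtchouk transform of the distance distribution (Delsarte's conditions). This is the classical pair-distribution bound, not the triple-distribution or semidefinite refinement discussed above:

```python
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, i):
    # Krawtchouk polynomial K_k(i) for the binary Hamming scheme
    return sum((-1)**j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def lp_bound(n, d):
    """Delsarte LP upper bound on the size of a binary (n, d) code."""
    idx = list(range(d, n + 1))        # variables A_d .. A_n
    c = [-1.0] * len(idx)              # linprog minimizes, so negate
    # Delsarte conditions: sum_i A_i K_k(i) >= -K_k(0) for k = 1..n
    A_ub = [[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)]
    b_ub = [krawtchouk(n, k, 0) for k in range(1, n + 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx))
    return 1.0 - res.fun               # |C| <= 1 + sum_i A_i

print(lp_bound(5, 3))
```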
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g., in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e., linearized or extended) consider-parameterized Kalman filters (c.f. [5]).
While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e., in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
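A minimal sketch of the sigma-point machinery underlying such filters (the plain unscented transform, without the consider-parameter extension; the scaling constants are common textbook choices, not those of [4]):

```python
import numpy as np

def sigma_points(mu, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard scaled sigma points and weights for mean mu, covariance P."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    Wm = np.full(2*n + 1, 1.0 / (2 * (n + lam)))
    Wm[0] = lam / (n + lam)
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha**2 + beta
    return np.array(pts), Wm, Wc

def unscented_transform(f, mu, P):
    """Propagate (mu, P) through f without any Jacobian."""
    pts, Wm, Wc = sigma_points(mu, P)
    Y = np.array([f(p) for p in pts])
    my = Wm @ Y
    Py = sum(w * np.outer(y - my, y - my) for w, y in zip(Wc, Y))
    return my, Py

# demo: propagate polar (range, bearing) uncertainty into Cartesian coordinates
mu = np.array([1.0, np.pi / 4])
P = np.diag([0.01, 0.01])
my, Py = unscented_transform(lambda x: np.array([x[0]*np.cos(x[1]),
                                                 x[0]*np.sin(x[1])]), mu, P)
print(my, Py)
```

For a linear map the transform is exact, which makes a convenient sanity check when implementing onboard filter code.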
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g., Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple-binding-site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g., up to nine parameters in the three-binding-site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
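As a toy illustration of the non-linear regression idea (not the authors' MATLAB program), the simplest member of this model family, a single-site isotherm, can be fitted with SciPy. The model below assumes free ligand ≈ total ligand, and all data points are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def bound_fraction(L_total, Keq, n):
    # single-site Langmuir isotherm; free ligand approximated by total ligand
    return n * Keq * L_total / (1.0 + Keq * L_total)

# synthetic "titration" data with known parameters plus small noise
L = np.linspace(0.0, 50e-6, 25)            # ligand concentration, M
true_K, true_n = 2.0e5, 1.0
rng = np.random.default_rng(3)
y = bound_fraction(L, true_K, true_n) + rng.normal(0.0, 0.005, L.size)

# non-linear least squares recovers Keq and the stoichiometry n
popt, pcov = curve_fit(bound_fraction, L, y, p0=[1e5, 0.8])
print(popt)    # estimated [Keq, n]
```

Multi-site models add one such equilibrium per site plus shared mass-balance terms, which is where the overlapping-equilibria fitting discussed above becomes nontrivial.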
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, M.R.; Cerda, J.
1998-06-01
A mathematical representation of a heat-exchanger network structure that explicitly accounts for the relative location of heat-transfer units, splitters, and mixers is presented. It is the basis of a mixed-integer linear programming sequential approach to the synthesis of heat-exchanger networks that allows the designer to specify beforehand some desired topology features as further design targets. Such structural information stands for additional problem data to be considered in the problem formulation, thus enhancing the involvement of the design engineer in the synthesis task. The topology constraints are expressed in terms of (1) the equipment items (heat exchangers, splitters, and mixers) that could be incorporated into the network, (2) the feasible neighbors for every potential unit, and (3) the heat matches, if any, with which a heat exchanger can be accomplished in parallel over any process stream. Moreover, the number and types of splitters being arranged over either a particular stream or the whole network can also be restrained. The new approach has been successfully applied to the solution of five example problems, in each of which a wide variety of structural design restrictions were specified.
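The match-selection core of such a synthesis step, with a designer-imposed topology restriction, can be sketched by brute-force enumeration on a tiny instance (a stand-in for the paper's MILP formulation; all matches, duties, and costs below are illustrative):

```python
from itertools import combinations

# candidate hot-cold stream matches: name -> (cost, recoverable duty in kW)
matches = {'H1C1': (10.0, 300.0), 'H1C2': (8.0, 200.0),
           'H2C1': (12.0, 400.0), 'H2C2': (6.0, 150.0)}
REQUIRED = 600.0                              # total duty to recover
# a designer-imposed topology restriction: these two matches may not coexist
forbidden = {frozenset({'H1C2', 'H2C2'})}

best, best_cost = None, float('inf')
names = list(matches)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        if any(f <= set(combo) for f in forbidden):
            continue                           # violates the topology constraint
        cost = sum(matches[m][0] for m in combo)
        duty = sum(matches[m][1] for m in combo)
        if duty >= REQUIRED and cost < best_cost:
            best, best_cost = combo, cost
print(best, best_cost)
```

In the real method the same feasibility logic appears as binary variables and linear inequalities inside the MILP, so the solver scales far beyond what enumeration allows.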
NASA Astrophysics Data System (ADS)
Verma, Arjun; Privman, Vladimir
2018-02-01
We study the approach to the large-time jammed state of deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from the argument of Pomeau, which includes the assumption of the dominance, at large enough times, of small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles, and which, after a certain time, are no longer created by depositions in larger gaps. The second assumption has been that the size distribution of gaps open for particle-center landing in this large-time small-gaps regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates that suggest that the second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually vanishes linearly at zero gap size. In another region, the distribution develops a threshold property, i.e., there are no small gaps below a certain gap size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
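A minimal Monte Carlo sketch of one-dimensional random sequential adsorption (the classical unpatterned "car parking" case, not the patterned-substrate model studied in the paper): unit rods land at uniformly random positions and are accepted only if they overlap no earlier rod, so coverage approaches the Rényi jamming density ≈ 0.7476 from below.

```python
import random
from bisect import bisect_left, insort

random.seed(7)
L = 500.0                 # substrate length; rods have unit length
placed = []               # sorted left endpoints of accepted rods
attempts, accepted = 200000, 0
for _ in range(attempts):
    x = random.uniform(0.0, L - 1.0)
    i = bisect_left(placed, x)
    # reject if the previous rod ends after x, or the next rod starts before x+1
    if (i > 0 and placed[i - 1] + 1.0 > x) or (i < len(placed) and placed[i] < x + 1.0):
        continue
    insort(placed, x)
    accepted += 1
coverage = accepted * 1.0 / L
print(coverage)
```

Tracking acceptance as a function of attempt number in this simulation exposes exactly the late-time gap statistics that the Pomeau argument reasons about.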
Sequential establishment of stripe patterns in an expanding cell population.
Liu, Chenli; Fu, Xiongfei; Liu, Lizhong; Ren, Xiaojing; Chau, Carlos K L; Li, Sihong; Xiang, Lu; Zeng, Hualing; Chen, Guanhua; Tang, Lei-Han; Lenz, Peter; Cui, Xiaodong; Huang, Wei; Hwa, Terence; Huang, Jian-Dong
2011-10-14
Periodic stripe patterns are ubiquitous in living organisms, yet the underlying developmental processes are complex and difficult to disentangle. We describe a synthetic genetic circuit that couples cell density and motility. This system enabled programmed Escherichia coli cells to form periodic stripes of high and low cell densities sequentially and autonomously. Theoretical and experimental analyses reveal that the spatial structure arises from a recurrent aggregation process at the front of the continuously expanding cell population. The number of stripes formed could be tuned by modulating the basal expression of a single gene. The results establish motility control as a simple route to establishing recurrent structures without requiring an extrinsic pacemaker.
Green digital signage using nanoparticle embedded narrow-gap field sequential TN-LCDs
NASA Astrophysics Data System (ADS)
Kobayashi, Shunsuke; Shiraishi, Yukihide; Sawai, Hiroya; Toshima, Naoki; Okita, Masaya; Takeuchi, Kiyofumi; Takatsu, Haruyoshi
2012-03-01
We have fabricated field sequential color (FSC) LCDs using cells and modules of narrow-gap TN-LCDs with and without doping with the nanoparticles of PCyD-ZrO2 and AF-SiO2. It is shown that the FSC-LCD exhibits a high optical efficiency of OE = 4.5, where OE is defined as OE = [luminance]/[input power density] (cd/W). This figure may provide a good reference for clearing the Energy Star Program Version 5.3 guideline, under which an LCD of 50 inches on the diagonal consumes 108 W. Through this research it is claimed that our FSC-LCD may be a novel green digital signage.
Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis
NASA Astrophysics Data System (ADS)
Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.
As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
Albert-García, J R; Calatayud, J Martínez
2008-05-15
The present paper deals with an analytical strategy based on coupling photo-induced chemiluminescence in a multicommutation continuous-flow methodology for the determination of the herbicide benfuresate. The solenoid valve inserted as small segments of the analyte solution was sequentially alternated with segments of the NaOH solution for adjusting the medium for the photodegradation. Both flow rates (sample and medium) were adjusted to required time for photodegradation, 90 s; and then, the resulting solution was also sequentially inserted as segments alternated with segments of the oxidizing solution system, hexacyanoferrate (III) in alkaline medium. The calibration range from 1 microg L(-1) to 95 mg L(-1), resulted in a linear behaviour over the range 1 microg L(-1) to 4 mg L(-1) and fitting the linear equation: I=4555.7x+284.2, correlation coefficient 0.9999. The limit of detection was 0.1 microg L(-1) (n=5, criteria 3 sigma) and the sample throughput was 22 h(-1). The consumption of solutions was very small; per peak were 0.66 mL, 0.16 mL and 0. 32 mL sample, medium and oxidant, respectively. Inter- and intra-day reproducibility resulted in a R.S.D. of 3.9% and 3.4%, respectively. After testing the influence of a large series of potential interferents the method is applied to water samples obtained from different places, human urine and to one formulation.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case.
The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
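The linearization step LINEAR performs can be sketched numerically: build A = ∂f/∂x and B = ∂f/∂u by central differences around an analysis point. The pendulum model below is a simple illustrative system, not LINEAR's six-degree-of-freedom aircraft equations:

```python
import numpy as np

def f(x, u):
    # nonlinear dynamics: state x = [theta, omega], control u = [torque]
    g_over_l, damping = 9.81, 0.1
    return np.array([x[1], -g_over_l * np.sin(x[0]) - damping * x[1] + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.zeros(2), np.zeros(1))
print(A)   # analytic A at the origin: [[0, 1], [-9.81, -0.1]]
print(B)   # analytic B at the origin: [[0], [1]]
```

The state/observation matrices LINEAR emits are exactly such Jacobians, evaluated at a trimmed flight condition instead of at the origin.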
Management of mendelian traits in breeding programs by gene editing
USDA-ARS?s Scientific Manuscript database
High-density single nucleotide polymorphism genotypes have recently been used to identify a number of novel recessive mutations that adversely affect fertility in dairy cattle, as well as to track conditions such as polled. Recent findings suggest that the use of sequential mate allocation strategie...
A Guide to Curriculum Planning in Foreign Language.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
A guide designed to help local curriculum planners develop and implement curriculums to provide all students with equal access to foreign languages provides an overview of current philosophies, objectives, methods, materials, and equipment and a guide to sequential program development, articulation, and evaluation. An introductory section…
Developing Latent Mathematics Abilities in Economically Disadvantaged Students
ERIC Educational Resources Information Center
McKenna, Michele A.; Hollingsworth, Patricia L.; Barnes, Laura L. B.
2005-01-01
The current study was undertaken as an effort to attend to the potential giftedness of economically disadvantaged students, to give opportunities for mathematics acceleration, and to provide a sequential, individualized mathematics program for students of high mobility. The authors evaluated the Project SAIL (Students' Active Interdisciplinary…
Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F
2016-08-01
Changes in left ventricle function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal and classified according to the left ventricular ejection fraction (LVEF) into low-risk (LR: LVEF > 35%, 17 patients) and high-risk (HR: LVEF ≤ 35%, 32 patients) groups. We propose to characterize these patients using a linear and a nonlinear method, based on spectral estimation and the recurrence plot (RP), respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Then, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.
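A minimal sketch of sequential (forward) feature selection with a simple class-separation criterion; the data and the criterion are synthetic stand-ins for the paper's BP-derived features and classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: 60 samples, 5 features; only features 0 and 3 carry signal
X = rng.normal(size=(60, 5))
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=60) > 0).astype(int)

def score(cols):
    # toy separation criterion: distance between class means of the projection
    z = X[:, cols].sum(axis=1)
    return abs(z[y == 1].mean() - z[y == 0].mean())

# greedy forward SFS: at each step add the feature improving the criterion most
selected, remaining = [], list(range(5))
for _ in range(2):
    best = max(remaining, key=lambda j: score(selected + [j]))
    selected.append(best)
    remaining.remove(best)
print(selected)
```

In practice the criterion would be cross-validated classifier performance (e.g., AUC) rather than a mean-difference statistic.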
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents requires a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations. PMID:29861701
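A minimal hash-based inverted index, i.e., the sequential baseline that the GPU implementation parallelizes (document texts below are illustrative):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Hash-based inverted index: term -> {doc_id: [positions]}."""
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

docs = ["GPU computing accelerates document inversion",
        "inverted index structures support document retrieval",
        "parallel GPU kernels build the index faster"]
idx = build_inverted_index(docs)
print(sorted(idx["gpu"]))    # doc ids containing "gpu"
print(idx["document"])       # doc id -> positions of "document"
```

In the SPMD version, documents are partitioned across threads, each thread builds a partial index with the same hash-based layout, and the partial postings lists are merged.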
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management were conducted under uncertain conditions where fuzzy, stochastic, and interval uncertainties coexist, conventional linear programming approaches that integrate the fuzzy method with the other two were inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, achieving solutions with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
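One ingredient of such conversions, replacing triangular fuzzy cost coefficients by crisp values before solving the LP, can be sketched with centroid defuzzification. This is a generic technique used here only for illustration (it is not Nguyen's attainment-value method), and all stream names and numbers are made up:

```python
from scipy.optimize import linprog

# triangular fuzzy cost coefficients (low, mode, high) for two waste flows
fuzzy_costs = [(40.0, 50.0, 65.0), (70.0, 80.0, 85.0)]
crisp_c = [sum(t) / 3.0 for t in fuzzy_costs]    # centroid defuzzification

# illustrative constraints: total demand x1 + x2 >= 100, landfill cap x1 <= 70
res = linprog(crisp_c,
              A_ub=[[-1.0, -1.0],     # -(x1 + x2) <= -100
                    [1.0, 0.0]],      # x1 <= 70
              b_ub=[-100.0, 70.0],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)                 # send as much as possible to the cheaper route
```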
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem establishing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
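The weighting step can be sketched with crisp stand-in coefficients (all numbers below are illustrative, and the final fuzzy-simplex step is not reproduced; an ordinary LP solver stands in for it):

```python
import numpy as np
from scipy.optimize import linprog

# two objectives with (already defuzzified) coefficient vectors
c1 = np.array([3.0, 1.0])    # e.g., maximize profit
c2 = np.array([1.0, 2.0])    # e.g., maximize service level
w = (0.6, 0.4)               # decision-maker weights, summing to 1

# weighted-sum scalarization, negated because linprog minimizes
c = -(w[0] * c1 + w[1] * c2)
res = linprog(c,
              A_ub=[[1.0, 1.0], [2.0, 1.0]],
              b_ub=[10.0, 15.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # one Pareto-optimal solution of the MOLP
```

Sweeping the weight vector w traces out different Pareto-optimal solutions of the multi-objective problem.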
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
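For contrast, Michalewicz-style penalty handling in a plain genetic algorithm (the baseline the paper compares against, not the grammar-based genetic programming method) can be sketched on a tiny knapsack instance; the penalty weight and GA settings are illustrative:

```python
import random

random.seed(1)
values  = [60, 100, 120, 80, 30]
weights = [10, 20, 30, 25, 15]
CAP = 50

def fitness(bits):
    v = sum(b * x for b, x in zip(bits, values))
    w = sum(b * x for b, x in zip(bits, weights))
    return v - 1000 * max(0, w - CAP)     # heavy penalty for overweight solutions

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

# elitist GA: keep the best 10, refill the population with their mutants
pop = [[random.randint(0, 1) for _ in values] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(random.choice(elite)) for _ in range(20)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

The grammar-based approach replaces the penalty with a context-free language whose derivations generate only feasible solutions, which is where the reported convergence improvement comes from.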
Chum, H.L.; Evans, R.J.
1992-08-04
A process is described for using fast pyrolysis in a carrier gas to convert a waste phenolic resin containing feedstreams in a manner such that pyrolysis of said resins and a given high value monomeric constituent occurs prior to pyrolyses of the resins in other monomeric components therein comprising: selecting a first temperature program range to cause pyrolysis of said resin and a given high value monomeric constituent prior to a temperature range that causes pyrolysis of other monomeric components; selecting, if desired, a catalyst and a support and treating said feedstreams with said catalyst to effect acid or basic catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said first temperature program range to utilize reactive gases such as oxygen and steam in the pyrolysis process to drive the production of specific products; differentially heating said feedstreams at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantity of said high value monomeric constituent prior to pyrolysis of other monomeric components therein; separating said high value monomeric constituent; selecting a second higher temperature program range to cause pyrolysis of a different high value monomeric constituent of said phenolic resins waste and differentially heating said feedstreams at said higher temperature program range to cause pyrolysis of said different high value monomeric constituent; and separating said different high value monomeric constituent. 11 figs.
Chum, Helena L.; Evans, Robert J.
1992-01-01
A process of using fast pyrolysis in a carrier gas to convert a waste phenolic resin containing feedstreams in a manner such that pyrolysis of said resins and a given high value monomeric constituent occurs prior to pyrolyses of the resins in other monomeric components therein comprising: selecting a first temperature program range to cause pyrolysis of said resin and a given high value monomeric constituent prior to a temperature range that causes pyrolysis of other monomeric components; selecting, if desired, a catalyst and a support and treating said feedstreams with said catalyst to effect acid or basic catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said first temperature program range to utilize reactive gases such as oxygen and steam in the pyrolysis process to drive the production of specific products; differentially heating said feedstreams at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantity of said high value monomeric constituent prior to pyrolysis of other monomeric components therein; separating said high value monomeric constituent; selecting a second higher temperature program range to cause pyrolysis of a different high value monomeric constituent of said phenolic resins waste and differentially heating said feedstreams at said higher temperature program range to cause pyrolysis of said different high value monomeric constituent; and separating said different high value monomeric constituent.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
The Application of Finite Element Solution Techniques in Structural Analysis on a Microcomputer.
1981-12-01
...my wife for her support of this research project and the amount of time she spent helping me in preparation. Thanks go to the personnel at Computer... ...questions which had to be answered concerning the microcomputer in relation to a sequentially programmed finite element program. The first was how big... ...central site, then usefulness of the microcomputer is limited. The first series of problems consisted of a simple truss structure, which was expanded...
Should bilingual children learn reading in two languages at the same time or in sequence?
Berens, Melody S.; Kovelman, Ioulia; Petitto, Laura-Ann
2013-01-01
Is it best to learn reading in two languages simultaneously or sequentially? We observed 2nd and 3rd grade children in two-way dual-language learning contexts: (i) 50:50 or Simultaneous dual-language (two languages within the same developmental period) and (ii) 90:10 or Sequential dual-language (one language, followed gradually by the other). They were compared to matched monolingual English-only children in single-language English schools. Bilinguals (home language was Spanish only, English only, or Spanish and English in dual-language schools) were tested in both languages, and monolingual children were tested in English, using standardized reading and language tasks. Bilinguals in 50:50 programs performed better than bilinguals in 90:10 programs on English Irregular Words and Passage Comprehension tasks, suggesting language and reading facilitation for underlying grammatical class and linguistic structure analyses. By contrast, bilinguals in 90:10 programs performed better than bilinguals in the 50:50 programs on English Phonological Awareness and Reading Decoding tasks, suggesting language and reading facilitation for surface phonological regularity analysis. Notably, children from English-only homes in dual-language learning contexts performed equally well as, or better than, children from monolingual English-only homes in single-language learning contexts. Overall, the findings provide tantalizing evidence that dual-language learning during the same developmental period may provide bilingual reading advantages. PMID:23794952
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool, but it is also a very data-intensive task. To accelerate watershed algorithms and obtain real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
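The core parallelization strategy discussed in such surveys is domain decomposition: split the image into stripes, process each stripe on its own core, then merge. A minimal sketch of that pattern follows, not taken from the survey; a toy per-pixel horizontal gradient stands in for the far more involved watershed flooding steps, and Python's `multiprocessing` stands in for OpenMP/Pthreads.

```python
# Minimal sketch (illustrative only, not from the survey): stripe-based domain
# decomposition, the basic shared-memory parallelization strategy for image
# operators. A toy per-pixel operation replaces the actual watershed steps.
from multiprocessing import Pool

def process_stripe(stripe):
    # Placeholder for per-stripe work (e.g., local gradient or flooding);
    # here: absolute horizontal finite difference within each row.
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in stripe]

def parallel_apply(image, n_workers=4):
    # Split rows into contiguous stripes, one per worker.
    h = len(image)
    step = (h + n_workers - 1) // n_workers
    stripes = [image[i:i + step] for i in range(0, h, step)]
    with Pool(n_workers) as pool:
        parts = pool.map(process_stripe, stripes)
    # Merge: concatenate stripe results. A real watershed must additionally
    # resolve basin labels across stripe borders, which is where most of the
    # synchronization overhead analyzed in the survey comes from.
    return [row for part in parts for row in part]

if __name__ == "__main__":
    img = [[0, 2, 2, 5], [1, 1, 4, 4], [3, 3, 3, 0], [7, 0, 7, 0]]
    print(parallel_apply(img, n_workers=2))
    # -> [[2, 0, 3], [0, 3, 0], [0, 0, 3], [7, 7, 7]]
```

The choice between OpenMP and Pthreads in the survey corresponds, in this sketch, to how the stripe-to-worker mapping and the border merge are scheduled and synchronized.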
Evans, R.J.; Chum, H.L.
1994-10-25
A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent. 83 figs.
Evans, Robert J.; Chum, Helena L.
1994-01-01
A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.
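The staged temperature-program idea in the claim can be sketched as a toy simulation: each polymer pyrolyzes to its monomer over a characteristic temperature range, so ramping through ranges in sequence recovers monomers one at a time. The polymer names and temperature values below are placeholders for illustration, not figures from the patent.

```python
# Toy sketch of sequential differential pyrolysis (illustrative only).
# Temperature ranges are hypothetical placeholders, NOT values from the patent.
PYROLYSIS_RANGE = {               # polymer -> (T_low, T_high) in deg C (assumed)
    "PMMA":         (300, 400),   # -> methyl methacrylate
    "polystyrene":  (400, 500),   # -> styrene
    "polyethylene": (500, 650),   # -> mixed olefins
}

def staged_recovery(feed, program):
    """Run each temperature-program stage in order; a polymer is recovered in
    the stage whose window fully contains its pyrolysis range."""
    recovered = []
    for t_lo, t_hi in program:
        stage = [p for p in feed
                 if p not in recovered
                 and t_lo <= PYROLYSIS_RANGE[p][0]
                 and PYROLYSIS_RANGE[p][1] <= t_hi]
        recovered.extend(sorted(stage))
    return recovered

# First program range targets PMMA; the second, higher range targets styrene,
# mirroring the claim's "first temperature program range" / "second higher
# temperature range" sequence.
print(staged_recovery(["polystyrene", "PMMA"], [(250, 420), (380, 520)]))
# -> ['PMMA', 'polystyrene']
```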
Evans, R.J.; Chum, H.L.
1994-04-05
A process is described for using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent. 87 figures.
Teacher Variation in Concept Presentation in BSCS Curriculum Program
ERIC Educational Resources Information Center
Gallagher, James J.
2015-01-01
The classroom, with its complex social structure and kaleidoscope of cognitive and psycho-sociological variables, has not often been the object of serious research. Content area specialists have concentrated on the sequential organization of materials and have left the direct applications of these materials, either to the intuitive strategies of…
Portfolio Development as a Three-Semester Process: The Value of Sequential Experience.
ERIC Educational Resources Information Center
Senne, Terry A.
This study examined how nine cohort teacher candidates from each of two physical education teacher education (PETE) programs developed teaching portfolios across three consecutive semesters of comparable courses: (1) elementary methods; (2) secondary methods; and (3) the student teaching internship. Studied were changes over time in teacher candidate…
Academy of READING®. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2014
2014-01-01
"Academy of READING"® is an online program that aims to improve students' reading skills using a structured and sequential approach to learning in five core areas--phonemic awareness, phonics, fluency, vocabulary, and comprehension. The What Works Clearinghouse (WWC) identified 38 studies of "Academy of READING"® for adolescent…
Trainees' Perceptions on Supervisor Factors That Influence Transfer of Training
ERIC Educational Resources Information Center
Fagan, Sharon Lee
2017-01-01
A midsize nonprofit blood bank organization is experiencing a high percentage of supervisors and managers not transferring skills taught in leadership development training programs back to the workplace. The purpose of this mixed methods, sequential, explanatory study was to understand the relationship between supervisor support or opposition and…
Project Gearing Academics to Individual Needs: Grade Eight.
ERIC Educational Resources Information Center
Bohler, Ann; And Others
This curriculum guide for an eighth grade civics course in a county in Florida was developed to provide a sequential program geared toward development of a positive self concept, "wholesome attitudes," functional citizenship, and educational enrichment. The guide presents five units--family and community, religion and education,…
Descriptive Analyses of Pediatric Food Refusal: The Structure of Parental Attention
ERIC Educational Resources Information Center
Woods, Julia N.; Borrero, John C.; Laud, Rinita B.; Borrero, Carrie S. W.
2010-01-01
Mealtime observations were conducted and occurrences of appropriate and inappropriate mealtime behavior and various forms of parental attention (e.g., coaxing, reprimands) were recorded for 25 children admitted to an intensive feeding program and their parents. Using the data from the observations, lag sequential analyses were conducted to…
Humanities II: Man and Revolution.
ERIC Educational Resources Information Center
Stanton School District, Wilmington, DE.
"Man and Revolution," the second syllabus in a sequential program, provides 11th grade students with a humanities course that deals heavily in political theory. The rationale, objectives, guidelines, methods, and arrangement are the same as those described in SO 004 030. The introductory unit, followed by further units, helps students define and…
Machine Shop: Scope and Sequence.
ERIC Educational Resources Information Center
Nashville - Davidson County Metropolitan Public Schools, TN.
Intended for use by all machine shop instructors in the Metropolitan Nashville Public Schools, this guide provides a sequential listing of course content and scope. A course description provides a brief overview of the content of the courses offered in the machine shop program. General course objectives are then listed. Outlines of the course…
A Sequential Quadratic Programming Algorithm Using an Incomplete Solution of the Subproblem
1990-09-01
Electrónica e Informática Industrial, E.T.S. Ingenieros Industriales, Universidad Politécnica, Madrid. Technical Report SOL 90-12, September 1990. ...MURRAY* and FRANCISCO J. PRIETO†. *Systems Optimization Laboratory, Department of Operations Research, Stanford University; †Dept. de Automática, Ingeniería
Trowel Trades: Scope and Sequence.
ERIC Educational Resources Information Center
Nashville - Davidson County Metropolitan Public Schools, TN.
Intended for use by all trowel trade instructors in the Metropolitan Nashville Public Schools, this guide provides a sequential listing of course content and scope. A course description provides a brief overview of the content of the courses offered in the trowel trades (masonry) program. General course objectives are then listed. Outlines of the…
The TABA Social Studies Curriculum. Product Development Report 19.
ERIC Educational Resources Information Center
Sanderson, Barbara A.; Crawford, Jack J.
This program description is one of twenty-one reports dealing with the developmental history of recent educational products. Objectives of the Taba project are to help elementary grade students develop thinking skills, key concept understandings, desired attitudes and values, and cognitive abilities in a sequential manner through process…
Printing (Graphic Arts): Scope and Sequence.
ERIC Educational Resources Information Center
Nashville - Davidson County Metropolitan Public Schools, TN.
Intended for use by all printing (graphic arts) instructors in the Metropolitan Nashville Public Schools, this guide provides a sequential listing of course content and scope. A course description provides a brief overview of the content of the courses offered in the printing (graphic arts) program. General course objectives are then listed.…
PHYSICAL EDUCATION FOR BOYS, GRADES 7-12.
ERIC Educational Resources Information Center
LEBOWITZ, GORDON; AND OTHERS
Teachers in the junior and senior high schools are provided with teaching outlines, teaching devices, and other materials to develop pupils' skills, aptitudes, and proficiency in physical activities and sports. A graded and sequential development of activities in a unified program, based upon the concept of unit teaching in seasonal activities, is…
The Use of Tailored Testing with Instructional Programs. Final Report.
ERIC Educational Resources Information Center
Reckase, Mark D.
A computerized testing system was implemented in conjunction with the Radar Technician Training Course at the Naval Training Center, Great Lakes, Illinois. The feasibility of the system and students' attitudes toward it were examined. The system, a multilevel, microprocessor-based computer network, administered tests in a sequential, fixed length…
Evaluation of Educational Administration: A Decade Review of Research (2001-2010)
ERIC Educational Resources Information Center
Parylo, Oksana
2012-01-01
This sequential mixed methods study analyzed how program evaluation was used to assess educational administration and examined thematic trends in educational evaluation published over 10 years (2001-2010). First, qualitative content analysis examined the articles in eight peer-reviewed evaluation journals. This analysis revealed that numerous…
Foreign Languages: A Guide to Curriculum Development [Revision].
ERIC Educational Resources Information Center
Connecticut State Board of Education, Hartford.
The guide is designed to help school district planners develop and implement suitable foreign language curricula. Focusing on programs in grades K-12, it provides an overview of current philosophies, objectives, methods, and materials in foreign language education; illustrates how these may be implemented in a sequential foreign language program…
An Inquiry into Workplace Incivility: Perceptions of Working Graduate Students
ERIC Educational Resources Information Center
Greene, Ashley E.
2012-01-01
The purpose of this sequential mixed methods study was to examine and determine the level of incivility in the workplace as a growing problem from the perceptional views of graduate students enrolled in accelerated degree programs for graduate studies in Business Administration, Criminal Justice Administration, Gerontology, Health Management, and…
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
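For context on the problem these integer programming models encode: the farthest string problem asks for a string maximizing the minimum Hamming distance to a given set of strings. The brute-force reference below (not from the paper) makes the objective concrete; it is viable only for tiny instances, which is precisely why compact IP formulations matter.

```python
# Brute-force reference for the farthest string problem (illustrative only):
# find t maximizing min_i HammingDistance(t, s_i) over all strings t of the
# same length. Exponential in the string length; the IP models in the paper
# are for sizes where this enumeration is hopeless.
from itertools import product

def hamming(a, b):
    # Number of positions where the two equal-length strings differ.
    return sum(x != y for x, y in zip(a, b))

def farthest_string(strings, alphabet="01"):
    n = len(strings[0])
    best_t, best_d = None, -1
    for cand in product(alphabet, repeat=n):
        t = "".join(cand)
        d = min(hamming(t, s) for s in strings)   # worst-case closeness of t
        if d > best_d:
            best_t, best_d = t, d
    return best_t, best_d

print(farthest_string(["000", "011"]))
# -> ('101', 2)
```

In the IP formulation, the enumeration is replaced by binary variables selecting one alphabet symbol per position, with the minimum distance modeled through linear inequality constraints.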
NASA Astrophysics Data System (ADS)
Perino, E. J.; Matoz-Fernandez, D. A.; Pasinetti, P. M.; Ramirez-Pastor, A. J.
2017-07-01
Monte Carlo simulations and finite-size scaling analysis have been performed to study the jamming and percolation behavior of linear k-mers (also known as rods or needles) on a two-dimensional triangular lattice of linear dimension L, considering an isotropic random sequential adsorption (RSA) process and periodic boundary conditions. Extensive numerical work has been done to extend previous studies to larger system sizes and longer k-mers, which enables the confirmation of a nonmonotonic size dependence of the percolation threshold and the estimation of a maximum value of k beyond which percolation would no longer occur. Finally, a complete analysis of critical exponents and universality has been done, showing that the percolation phase transition involved in the system is not affected, having the same universality class as ordinary random percolation.
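The RSA process the abstract refers to is simple to state: k-mers are deposited at random positions and orientations, a deposition is rejected if any of its sites is occupied, and the lattice jams when no further k-mer fits. The sketch below illustrates this on a square lattice (the paper uses a triangular lattice) with periodic boundaries, approximating the jammed state by stopping after a fixed number of consecutive failed attempts.

```python
# Minimal RSA sketch (illustrative only; square lattice, not the paper's
# triangular lattice): deposit linear k-mers at random positions/orientations
# with periodic boundaries until many consecutive rejections suggest jamming,
# then report the covered fraction of sites.
import random

def rsa_kmers(L, k, max_failures=10000, seed=0):
    rng = random.Random(seed)
    occupied = [[False] * L for _ in range(L)]
    placed = failures = 0
    while failures < max_failures:
        x, y = rng.randrange(L), rng.randrange(L)
        horizontal = rng.random() < 0.5
        # Sites the k-mer would cover, with periodic wrap-around.
        cells = [((x + i) % L, y) if horizontal else (x, (y + i) % L)
                 for i in range(k)]
        if any(occupied[cx][cy] for cx, cy in cells):
            failures += 1          # rejected attempt: overlap with a deposited k-mer
            continue
        for cx, cy in cells:
            occupied[cx][cy] = True
        placed += 1
        failures = 0
    return placed * k / (L * L)    # coverage fraction near the jammed state

theta = rsa_kmers(L=32, k=2, seed=1)
print(theta)   # for dimers on the square lattice, typically close to 0.9
```

A finite-size scaling study like the paper's would repeat this over many seeds and lattice sizes L, tracking percolation of the deposited clusters rather than just coverage.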