A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available, so that a total of nearly 100 possible combinations can be created. An example of an available combination is the Augmented Lagrange Multiplier method, using the BFGS variable metric method for unconstrained minimization together with polynomial interpolation for the one-dimensional search.
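As a concrete illustration of one such combination, the sketch below wraps an Augmented Lagrange Multiplier outer loop (the strategy) around a BFGS minimizer (the optimizer), with SciPy's internal line search standing in for ADS's polynomial-interpolation one-dimensional search. This is a minimal Python toy with an invented objective and constraint, not ADS's FORTRAN code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative, not from ADS): minimize f subject to h(x) = 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # objective
h = lambda x: x[0] + x[1] - 2.0                        # equality constraint

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):                       # strategy level: ALM multiplier updates
    L = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
    # optimizer level: BFGS; SciPy's line search plays the role of the
    # one-dimensional search option in ADS
    x = minimize(L, x, method="BFGS").x
    lam += rho * h(x)                     # first-order multiplier update
    if abs(h(x)) < 1e-8:
        break
print(x, h(x))                            # expect roughly (2.5, -0.5), h ~ 0
```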
General purpose optimization software for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1990-01-01
The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Optimization-based interactive segmentation interface for multiregion problems
Baxter, John S. H.; Rajchl, Martin; Peters, Terry M.; Chen, Elvis C. S.
2016-01-01
Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality. PMID:27335892
Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
Improving vaccination cold chain in the general practice setting.
Page, Sue L; Earnest, Arul; Birden, Hudson; Deaker, Rachelle; Clark, Chris
2008-10-01
This study compared temperature control in different types of vaccine storing refrigerators in general practice and tested the knowledge of general practice staff in vaccine storage requirements. Temperature data loggers were set to serially record the temperature within vaccine refrigerators in 28 general practices, recording at 12 minute intervals over a period of 10 days on each occasion. A survey of vaccine storage knowledge and records of divisions of general practice immunisation contacts were also obtained. There was a significant relationship between type of refrigerator and optimal temperature, with the odds ratio for bar-style refrigerators being 0.005 (95% CI: 0.001-0.044) compared to purpose-built vaccine refrigerators. Score on a survey of vaccine storage was also positively associated with optimal storage temperature. General practices that invest in purpose-built vaccine refrigerators will achieve standards of vaccine cold chain maintenance significantly more reliably than can be achieved through regular cold chain monitoring and practice supports.
Combining analysis with optimization at Langley Research Center. An evolutionary process
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1982-01-01
The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified for adding new programs and capabilities. A 337 degree-of-freedom finite element model is used in verifying the accuracy of this system.
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
NASA Astrophysics Data System (ADS)
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed; this is investigated here. A statistical optimization approach is developed for this purpose that requires little knowledge of the physical configuration of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
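For background, the core pulse-compression operation that such CUDA implementations accelerate is frequency-domain matched filtering. The sketch below uses NumPy as a stand-in for cuFFT, with invented waveform parameters; it illustrates the conventional (non-adaptive) algorithm, not the paper's code.

```python
import numpy as np

# Frequency-domain pulse compression (matched filtering): correlate the
# received echo with the transmit waveform via FFTs, the operation mapped
# onto cuFFT/cuBLAS in the paper. All parameters are illustrative.
fs, T, BW = 1e6, 100e-6, 0.5e6                # sample rate, pulse width, sweep
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (BW / T) * t**2)  # LFM transmit pulse

echo = np.zeros(4096, dtype=complex)
echo[1000:1000 + chirp.size] = 0.3 * chirp    # point target at range bin 1000

n = echo.size + chirp.size - 1
compressed = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))
print(np.argmax(np.abs(compressed)))          # peak lands at the target delay
```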
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
Literature Review: Weldability of Iridium DOP-26 Alloy for General Purpose Heat Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgardt, Paul; Pierce, Stanley W.
The basic purpose of this paper is to provide a literature review relative to fabrication of the General Purpose Heat Source (GPHS) that is used to provide electrical power for deep space missions of NASA. The particular fabrication operation addressed here is arc welding of the GPHS encapsulation. A considerable effort was made to optimize the fabrication of the fuel pellets and of other elements of the encapsulation; that work will not be directly addressed in this paper. This report consists of three basic sections: 1) a brief description of the GPHS is provided as background information for the reader; 2) mechanical properties and the optimization thereof as relevant to welding are discussed; 3) a review of the arc welding process development and optimization is presented. Since the welding equipment must be upgraded for future production, the historical establishment of relevant welding variables and possible changes thereto are also discussed.
Quantum cryptography: Security criteria reexamined
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaszlikowski, Dagomir; Liang, Y.C.; Englert, Berthold-Georg
2004-09-01
We find that the generally accepted security criteria are flawed for a whole class of protocols for quantum cryptography. This is so because a standard assumption of the security analysis, namely that the so-called square-root measurement is optimal for eavesdropping purposes, is not true in general. There are rather large parameter regimes in which the optimal measurement extracts substantially more information than the square-root measurement.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical models, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
Floating-Point Modules Targeted for Use with RC Compilation Tools
NASA Technical Reports Server (NTRS)
Sahin, Ibrahin; Gloster, Clay S.
2000-01-01
Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems and, in most cases, they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time. This increased development time motivates the development of an optimized module library, with an assembly-language-style instruction interface, for use with future RC systems; such a library would reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating-point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experience show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
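For context, the building block that this brief generalizes is Platt's analytic two-variable update for an ordinary SVM. The textbook-style sketch below shows that update for a linear kernel; the helper name and signature are hypothetical, and this is not the authors' SVM+MTL code.

```python
import numpy as np

def smo_pair_update(alpha, i, j, X, y, C, b):
    """One analytic two-variable update from standard SMO (Platt-style)
    for a linear-kernel soft-margin SVM. Illustrative helper only."""
    K = X @ X.T                               # linear kernel matrix
    f = (alpha * y) @ K + b                   # current decision values
    E_i, E_j = f[i] - y[i], f[j] - y[j]       # prediction errors
    eta = K[i, i] + K[j, j] - 2 * K[i, j]     # curvature of the 2-var subproblem
    if eta <= 0:
        return alpha                          # skip a degenerate pair
    # box limits L..H keep the pair on the equality constraint sum(alpha*y)=0
    if y[i] != y[j]:
        L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    a_j = np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H)
    alpha[i] += y[i] * y[j] * (alpha[j] - a_j)  # preserve the linear constraint
    alpha[j] = a_j
    return alpha
```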
NASA Technical Reports Server (NTRS)
1972-01-01
The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.
Computer-aided linear-circuit design.
NASA Technical Reports Server (NTRS)
Penfield, P.
1971-01-01
Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
Design of optical-electronic devices and systems involves selecting technical solutions that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an optical-electronic system (OES) for any purpose is its threshold detection ability, on which the required functional quality of the device or system depends. The optimization criteria and methods must therefore be subordinated to the goal of the best detectability, which generally reduces to the problem of optimally selecting the expected (predetermined) signals under the predetermined observation conditions. Thus the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1992-01-01
A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.
Optimization of atmospheric transport models on HPC platforms
NASA Astrophysics Data System (ADS)
de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María
2016-12-01
The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work unbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software designed from scratch to solve in a parallel and efficient way different geoscience problems on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPUs (CUDA) accelerator-based architectures and compare performance, cost and power consumption on all the architectures. Implications on time-constrained operational model configurations are discussed.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
Design of optical-electronic devices and systems involves selecting technical solutions that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an optical-electronic system (OES) for any purpose is its threshold detection ability, on which the required functional quality of the device or system depends. The optimization criteria and methods must therefore be subordinated to the goal of the best detectability, which generally reduces to the problem of optimally selecting the expected (predetermined) signals under the predetermined observation conditions. Thus the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad
2016-02-01
Computation of the muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. This problem is underdetermined, so a proper optimization is required to calculate the muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of a multibody system representing the skeletal system of the human body model are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model, and the forward dynamics of an arbitrary musculoskeletal system is obtained. For optimization purposes, the obtained model is used in the computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated using this algorithm, and the validity of the obtained results is evaluated via EMG signals.
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Barthelemy, J.-F. M.
1986-01-01
An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose, and it is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP; the system contains about 200 rules and executes on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
NASA Astrophysics Data System (ADS)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked by computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (General Purpose GPU) processing of the array radar data is possible with relatively low-cost commercial GPUs.
Stored program concept for analog computers
NASA Technical Reports Server (NTRS)
Hannauer, G., III; Patmore, J. R.
1971-01-01
Optimization of three-stage matrices, modularization, and black-box design techniques provide for automatically interconnecting computing component inputs and outputs in a general purpose analog computer. The design also produces a relatively inexpensive and less complex automatic patching system.
Non linear predictive control of a LEGO mobile robot
NASA Astrophysics Data System (ADS)
Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.
2014-10-01
Metaheuristics are general purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply one such metaheuristic, particle swarm optimization (PSO), to the solution of the optimization problem arising in NLMPC. This algorithm is easy to code and may be considered an alternative to the more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
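A minimal global-best PSO of the kind applied here might look as follows. The cost function stands in for the receding-horizon NMPC objective evaluated at each control step, and all coefficients are common textbook defaults rather than the paper's settings.

```python
import numpy as np

# Minimal global-best particle swarm optimization: each particle tracks its
# own best position, all particles are pulled toward the swarm's best.
def pso(cost, dim, n=30, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest, pcost = x.copy(), np.apply_along_axis(cost, 1, x)
    for _ in range(iters):
        g = pbest[pcost.argmin()]                # current global best
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
    return pbest[pcost.argmin()]

# toy stand-in for a 3-input receding-horizon cost
print(pso(lambda u: np.sum((u - 1.2) ** 2), dim=3))
```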
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
The integrated manual and automatic control of complex flight systems
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1991-01-01
Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high-authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques, experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.
CIRCAL-2 - General-purpose on-line circuit design.
NASA Technical Reports Server (NTRS)
Dertouzos, M. L.; Jessel, G. P.; Stinger, J. R.
1972-01-01
CIRCAL-2 is a second-generation general-purpose on-line circuit-design program with the following main features: (1) multiple-analysis capability; (2) uniform and general data structures for handling text editing, network representations, and output results, regardless of analysis; (3) special techniques and structures for minimizing and controlling user-program interaction; (4) use of functionals for the description of hysteresis and heat effects; and (5) ability to define optimization procedures that 'replace' the user. The paper discusses the organization of CIRCAL-2, the aforementioned main features, and their consequences, such as a set of network elements and models general enough for most analyses and a set of functions tailored to circuit-design requirements. The presentation is descriptive, concentrating on conceptual rather than on program implementation details.
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled-diffusion stator blade row. A general purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
Strong scaling of general-purpose molecular dynamics simulations on GPUs
NASA Astrophysics Data System (ADS)
Glaser, Jens; Nguyen, Trung Dac; Anderson, Joshua A.; Lui, Pak; Spiga, Filippo; Millan, Jaime A.; Morse, David C.; Glotzer, Sharon C.
2015-07-01
We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
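The chance-constraint idea can be sketched in a few lines: a candidate design counts as feasible only if the probability, estimated from independent replications, that the stochastic response meets its limit exceeds a prescribed level. The toy model and numbers below are illustrative and are not the paper's launch-vehicle simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(x):                        # one terminating replication: a
    return 10.0 / x + rng.normal(0, 1)  # made-up stochastic turnaround time

def prob_feasible(x, limit=6.0, reps=200):
    y = np.array([simulate(x) for _ in range(reps)])
    return np.mean(y <= limit)          # estimated P(response <= limit)

# crude search over resource levels: cheapest x meeting the chance
# constraint P(time <= limit) >= 0.95
candidates = np.linspace(1.0, 5.0, 9)
feasible = [x for x in candidates if prob_feasible(x) >= 0.95]
print(min(feasible))
```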
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation
NASA Technical Reports Server (NTRS)
1972-01-01
The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D.; Baltas, D.
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D_10 or/and final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
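For reference, the gEUD referred to throughout is the standard generalized equivalent uniform dose; this is the textbook definition, not a formula quoted from the paper:

```latex
\mathrm{gEUD} \;=\; \Bigl( \sum_i v_i \, D_i^{\,a} \Bigr)^{1/a}
```

where $v_i$ is the fractional volume receiving dose $D_i$ and $a$ is an organ-specific parameter: $a = 1$ gives the mean dose, while large positive $a$ approaches the maximum dose, which is appropriate for serial organs at risk such as the urethra and rectum.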
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation
Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
NASA Astrophysics Data System (ADS)
Golikov, S. Yu; Petukhov, V. I.; Maiorov, I. S.
2017-11-01
The paper discusses the features of spatial systems of artificial landscapes created to optimize industrial facilities and settlements, as well as forest and agricultural crops, taking into account the optimality of the existing microclimate and the possibilities for its improvement. Such landscapes improve population health (through optimization of the environment of settlements), the health of farm animals, and the state of crops (through optimization of the microclimate), and they protect watercourses and forests, reducing catastrophes on shores and slopes through the proper selection and placement of species for artificial planting. These effects are achieved by revegetation, greening, and garden and landscape design. In general, their purpose is to optimize the human environment.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.
No reason to expect "reading universals".
Levy, Yonata
2012-10-01
Writing systems encode linguistic information in diverse ways, relying on cognitive procedures that are likely to be general purpose rather than specific to reading. Optimality in reading for meaning is achieved via the entire communicative act, involving, when the need arises, syntax, nonlinguistic context, and selective attention.
Deterministic Reconfigurable Control Design for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.
1998-01-01
In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown: a right outboard elevon jam at 25 degrees at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note, however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and the reconfigurable controllers show the benefits of reconfiguration. Single-jam aerosurface failures were considered, and failure detection and identification are considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
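The Redistributed Pseudo-Inverse method named first can be sketched compactly: commanded moments are mapped to surface deflections through the pseudo-inverse of the control-effectiveness matrix, and after a jam the failed surface's fixed contribution is subtracted before re-solving with the remaining columns. The matrix and failure values below are invented for illustration and are not X-33 data.

```python
import numpy as np

# Hypothetical control-effectiveness matrix B: moments produced per unit
# deflection of each of four surfaces.
B = np.array([[ 2.0, -2.0, 0.0,  0.0],    # roll
              [ 1.0,  1.0, 0.5,  0.5],    # pitch
              [ 0.0,  0.0, 1.5, -1.5]])   # yaw
v = np.array([0.3, 0.8, 0.1])             # demanded (roll, pitch, yaw)

u = np.linalg.pinv(B) @ v                 # nominal pseudo-inverse allocation
print("nominal:", u)

jammed, u_jam = 0, 0.44                   # surface 0 stuck at 0.44 rad
residual = v - B[:, jammed] * u_jam       # moments the jam already produces
B_red = np.delete(B, jammed, axis=1)      # remaining working surfaces
u_red = np.linalg.pinv(B_red) @ residual  # redistribute the residual demand
print("reconfigured:", u_red)
```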
Support Vector Machine algorithm for regression and classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Chenggang; Zavaljevski, Nela
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to the specific constraints generated in SVM learning; thus, it is more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing large data sets: the size of the learning data is virtually unlimited by the capacity of the computer physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate the SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
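The "two upper bounds" amount to separate box constraints on the dual variables of the two classes. Modern libraries expose the same idea as per-class penalty weights; the scikit-learn sketch below is a present-day stand-in for illustration, not the reported software.

```python
import numpy as np
from sklearn.svm import SVC

# Two Gaussian blobs as synthetic training data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Separate effective upper bounds per class: errors on class 1 (false
# negatives) are penalized five times harder than errors on class 0,
# shifting the false-positive/false-negative trade-off.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 5.0}).fit(X, y)
print((clf.predict(X) == y).mean())
```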
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Habeger, A. R.; Stevenson, R.
1974-01-01
The basic equations and models used in a computer program (6D POST) to optimize simulated trajectories with six degrees of freedom were documented. The 6D POST program was conceived as a direct extension of the program POST, which dealt with point masses, and considers the general motion of a rigid body with six degrees of freedom. It may be used to solve a wide variety of atmospheric flight mechanics and orbital transfer problems for powered or unpowered vehicles operating near a rotating oblate planet. Its principal features are: an easy to use NAMELIST type input procedure, an integrated set of Flight Control System (FCS) modules, and a general-purpose discrete parameter targeting and optimization capability. It was written in FORTRAN 4 for the CDC 6000 series computers.
Conceptual Comparison of Population Based Metaheuristics for Engineering Problems
Adekanmbi, Oluwole; Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265
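The DE/rand/1/bin strategy that GDE3 extends is compact enough to state directly: mutate by adding one scaled difference of two random population members to a third, cross over gene-wise with probability CR, and keep the trial only if it does not worsen the cost. The single-objective sketch below uses typical constants and a toy cost; it is the base strategy, not GDE3 itself.

```python
import numpy as np

def de_rand_1_bin(cost, dim, n=40, iters=200, lo=-5.0, hi=5.0, F=0.5, CR=0.9):
    rng = np.random.default_rng(0)
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.apply_along_axis(cost, 1, pop)
    for _ in range(iters):
        for i in range(n):
            r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3,
                                    replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # force one gene over
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            if (c := cost(trial)) <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, c
    return pop[fit.argmin()]

print(de_rand_1_bin(lambda x: np.sum(x ** 2), dim=5))
```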
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
Affordable CZT SPECT with dose-time minimization (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hugg, James W.; Harris, Brian W.; Radley, Ian
2017-03-01
PURPOSE: Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content.
METHODS: Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved the efficiency of CZT-based molecular imaging systems, and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance.
RESULTS: Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4% at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window, image contrast is improved, and the optimal FOV is increased to the entire camera area.
CONCLUSION: Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than ''breeding'' better components. In contrast to Genetic Algorithms which operate on an entire ''gene-pool'' of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
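A tau-EO sketch on a toy unconstrained bipartitioning (max-cut) problem illustrates the mechanics: assign each component a fitness, rank components worst to best, and always mutate one chosen with power-law probability over the ranks, controlled by the single parameter tau. The graph and constants below are invented for illustration and are not the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 60, 1.4
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.triu(A, 1); A = A + A.T                      # random undirected graph
s = rng.integers(0, 2, n)                           # current partition labels
best_s, best_cut = s.copy(), -1

for _ in range(5000):
    deg = A.sum(1)
    cut_edges = (A * (s[:, None] != s[None, :])).sum(1)
    fit = np.where(deg > 0, cut_edges / np.maximum(deg, 1), 1.0)
    order = np.argsort(fit)                         # worst vertex first
    ranks = np.arange(1, n + 1, dtype=float)
    p = ranks ** -tau; p /= p.sum()                 # power-law rank selection
    v = order[rng.choice(n, p=p)]                   # pick a near-worst vertex
    s[v] ^= 1                                       # mutate it unconditionally
    cut = (A * (s[:, None] != s[None, :])).sum() // 2
    if cut > best_cut:                              # remember the best seen
        best_cut, best_s = cut, s.copy()
print(best_cut)
```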
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1999-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing a subset of structural problems efficiently. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1998-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing a subset of structural problems efficiently. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
Mathematical modeling of a Ti:sapphire solid-state laser
NASA Technical Reports Server (NTRS)
Swetits, John J.
1987-01-01
The project initiated a study of a mathematical model of a tunable Ti:sapphire solid-state laser. A general mathematical model was developed for the purpose of identifying design parameters which will optimize the system, and serve as a useful predictor of the system's behavior.
Characterisation of Sleep Problems in Children with Williams Syndrome
ERIC Educational Resources Information Center
Annaz, Dagmara; Hill, Catherine M.; Ashworth, Anna; Holley, Simone; Karmiloff-Smith, Annette
2011-01-01
Sleep is critical to optimal daytime functioning, learning and general health. In children with established developmental disorders sleep difficulties may compound existing learning difficulties. The purpose of the present study was to evaluate the prevalence and syndrome specificity of sleep problems in Williams syndrome (WS), a…
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples.

In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. The CAD technology has already undergone a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long row of computer integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system can be regarded as simply a database for geometrical information equipped with a number of tools whose purpose is to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as present standard CAD features like drawing, modeling, and visualization tools.

The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of single problems. By implementing collections of the available techniques into general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.
Designing electronic properties of two-dimensional crystals through optimization of deformations
NASA Astrophysics Data System (ADS)
Jones, Gareth W.; Pereira, Vitor M.
2014-09-01
One of the enticing features common to most of the two-dimensional (2D) electronic systems that, in the wake of (and in parallel with) graphene, are currently at the forefront of materials science research is the ability to easily introduce a combination of planar deformations and bending in the system. Since the electronic properties are ultimately determined by the details of atomic orbital overlap, such mechanical manipulations translate into modified (or, at least, perturbed) electronic properties. Here, we present a general-purpose optimization framework for tailoring physical properties of 2D electronic systems by manipulating the state of local strain, allowing a one-step route from their design to experimental implementation. A definite example, chosen for its relevance in light of current experiments in graphene nanostructures, is the optimization of the experimental parameters that generate a prescribed spatial profile of pseudomagnetic fields (PMFs) in graphene. But the method is general enough to accommodate a multitude of possible experimental parameters and conditions whereby deformations can be imparted to the graphene lattice, and complies, by design, with graphene's elastic equilibrium and elastic compatibility constraints. As a result, it efficiently answers the inverse problem of determining the optimal values of a set of external or control parameters (such as substrate topography, sample shape, load distribution, etc) that result in a graphene deformation whose associated PMF profile best matches a prescribed target. The ability to address this inverse problem in an expedited way is one key step for practical implementations of the concept of 2D systems with electronic properties strain-engineered to order. The general-purpose nature of this calculation strategy means that it can be easily applied to the optimization of other relevant physical quantities which directly depend on the local strain field, not just in graphene but in other 2D electronic membranes.
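The inverse problem described above can be illustrated schematically as nonlinear least squares: find control parameters whose realized field profile best matches a prescribed target. The toy forward model below (a parametric Gaussian profile) is purely an assumption standing in for the elasticity-based strain-to-PMF map; only the profile-matching formulation is the point.

import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the forward model: control parameters -> field profile.
# The real problem maps substrate/load parameters to a pseudomagnetic
# field through elasticity; a smooth parametric profile suffices here.
grid = np.linspace(-1, 1, 200)

def forward(params):
    a, w, x0 = params
    return a * np.exp(-((grid - x0) / w) ** 2)

target = 0.8 * np.exp(-((grid - 0.2) / 0.3) ** 2)   # prescribed profile

# least-squares mismatch between realized and target profiles
res = least_squares(lambda p: forward(p) - target, x0=[1.0, 0.5, 0.0])
print("recovered parameters:", res.x)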
Population-based metaheuristic optimization in neutron optics and shielding design
NASA Astrophysics Data System (ADS)
DiJulio, D. D.; Björgvinsdóttir, H.; Zendler, C.; Bentley, P. M.
2016-11-01
Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments, and the use of these types of algorithms for this purpose is becoming more and more commonplace. Today there exists a wide range of algorithms to choose from when designing an instrument, and it is not always initially clear which may provide the best performance. Furthermore, due to the nature of these types of algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits of and differences between the varieties of these algorithms available when applied to such design scenarios, we have carried out a detailed study of some commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. Generally, the performance of the algorithms depended on the specific scenario; however, it was found that DE provided the best average solutions in all scenarios investigated in this work.
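As an illustration of how DE operates on such design problems, the sketch below minimizes the negative of a toy instrument figure of merit over two hypothetical guide parameters using SciPy's differential_evolution; the objective is an assumption standing in for a Monte-Carlo-evaluated metric, not the software package described in the paper.

import numpy as np
from scipy.optimize import differential_evolution

def neg_figure_of_merit(x):
    # toy stand-in for a Monte-Carlo-evaluated instrument figure of merit
    # (e.g., neutron flux on sample); DE only needs a scalar to minimize
    guide_width, curvature = x
    return -(np.sinc(guide_width - 3.0) + 0.5 * np.exp(-(curvature - 1.0) ** 2))

bounds = [(0.5, 6.0), (0.0, 3.0)]
result = differential_evolution(neg_figure_of_merit, bounds, seed=1,
                                maxiter=200, tol=1e-8)
print(result.x, -result.fun)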
Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware
NASA Astrophysics Data System (ADS)
Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc
2007-02-01
Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed, we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In addition, we implemented an optimized perspective forward projection on the CBE, which allows us to perform statistical image reconstructions like the ordered subset convex (OSC) algorithm [4]. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
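For readers unfamiliar with the backprojection step being accelerated, a schematic (unfiltered) parallel-beam backprojection in NumPy is sketched below; the real implementations discussed above are heavily vectorized for the SPE hardware, so this is only the mathematical skeleton.

import numpy as np

def backproject(sinogram, angles, size):
    # sinogram: (n_angles, n_det) array of projections; angles in radians
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for p, theta in zip(sinogram, angles):
        # detector coordinate of each pixel for this view
        t = xs * np.cos(theta) + ys * np.sin(theta) + (len(p) - 1) / 2.0
        t0 = np.clip(np.floor(t).astype(int), 0, len(p) - 2)  # clamp at edges
        w = t - t0
        img += (1 - w) * p[t0] + w * p[t0 + 1]   # linear interpolation
    return img * np.pi / len(angles)

sino = np.ones((180, 512))
img = backproject(sino, np.deg2rad(np.arange(180)), size=128)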
Intelligence in the brain: a theory of how it works and how to build it.
Werbos, Paul J
2009-04-01
This paper presents a theory of how general-purpose learning-based intelligence is achieved in the mammal brain, and how we can replicate it. It reviews four generations of ever more powerful general-purpose learning designs in Adaptive, Approximate Dynamic Programming (ADP), which includes reinforcement learning as a special case. It reviews empirical results which fit the theory, and suggests important new directions for research, within the scope of NSF's recent initiative on Cognitive Optimization and Prediction. The appendices suggest possible connections to the realms of human subjective experience, comparative cognitive neuroscience, and new challenges in electric power. The major challenge before us today in mathematical neural networks is to replicate the "mouse level", but the paper does contain a few thoughts about building, understanding and nourishing levels of general intelligence beyond the mouse.
Partnerships for optimizing organizational flexibility
Louis Poliquin
1999-01-01
For the purpose of this conference, I was asked to discuss partnerships in general. We will first review the reasons that bring organizations to enter into a collaborative agreement, then provide examples of different types of partnerships, discuss some factors that seem to explain the success of partnerships, and review important points to consider before preparing...
The Future of General Surgery: Evolving to Meet a Changing Practice.
Webber, Eric M; Ronson, Ashley R; Gorman, Lisa J; Taber, Sarah A; Harris, Kenneth A
2016-01-01
Similar to other countries, the practice of General Surgery in Canada has undergone significant evolution over the past 30 years without major changes to the training model. There is growing concern that current General Surgery residency training does not provide the skills required to practice the breadth of General Surgery in all Canadian communities and practice settings. Led by a national Task Force on the Future of General Surgery, this project aimed to develop recommendations on the optimal configuration of General Surgery training in Canada. A series of 4 evidence-based sub-studies and a national survey were launched to inform these recommendations. Generalized findings from the multiple methods of the project speak to the complexity of the current practice of General Surgery: (1) General surgeons have very different practice patterns depending on the location of practice; (2) General Surgery training offers strong preparation for overall clinical competence; (3) Subspecialized training is a new reality for today's general surgeons; and (4) Generation of the report and recommendations for the future of General Surgery. A total of 4 key recommendations were developed to optimize General Surgery for the 21st century. This project demonstrated that a high variability of practice dependent on location contrasts with the principles of implementing the same objectives of training for all General Surgery graduates. The overall results of the project have prompted the Royal College to review the training requirements and consider a more "fit for purpose" training scheme, thus ensuring that General Surgery residency training programs would optimally prepare residents for a broad range of practice settings and locations across Canada. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Zheng, Xiaoming
2017-12-01
The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
Optimization of rotor shaft shrink fit method for motor using "Robust design"
NASA Astrophysics Data System (ADS)
Toma, Eiji
2018-01-01
This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering and approached the optimization of the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, which are the main components of the motor, but quality defects such as core shaft deflection occurred at the time of press fitting. In this research, as a result of the optimization design of a shrink fitting method by high-frequency induction heating, devised as a new construction method, the method proved feasible and the optimum processing condition could be extracted.
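As a hedged illustration of the parameter-design (Taguchi) calculation used in such studies, the snippet below computes a larger-is-better signal-to-noise ratio for hypothetical replicated measurements at three levels of one control factor; the data and factor names are invented.

import numpy as np

# replicated quality measurements (e.g., fit strength) for three levels
# of one control factor, from a hypothetical parameter-design experiment
levels = {
    "low heat":  [52.1, 50.8, 51.5],
    "mid heat":  [58.4, 59.0, 57.7],
    "high heat": [55.2, 54.1, 56.0],
}

def sn_larger_is_better(y):
    # Taguchi larger-is-better S/N ratio: -10 log10(mean(1/y^2))
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

for name, y in levels.items():
    print(f"{name}: S/N = {sn_larger_is_better(y):.2f} dB")
# the level with the highest S/N ratio is taken as the robust optimum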
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Bhat, R. B.
1979-01-01
A finite element program is linked with a general-purpose optimization program in a 'programming system' which includes user-supplied codes that contain problem-dependent formulations of the design variables, objective function, and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.
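A minimal modern analogue of such a "programming system" is sketched below: the user supplies problem-dependent objective and constraint callables, and a general-purpose optimizer consumes them without knowing the formulation. The toy mass and stress expressions are assumptions for illustration.

from scipy.optimize import minimize

# user-supplied code: problem-dependent objective and constraints
def mass(x):                 # objective: structural mass
    return 2.0 * x[0] + 3.0 * x[1]

def stress_margin(x):        # constraint: stress limit, >= 0 when feasible
    return 1.0 - 4.0 / (x[0] * x[1])

# general-purpose optimizer: only sees callables, not the formulation
res = minimize(mass, x0=[2.0, 2.0],
               bounds=[(0.1, 10.0)] * 2,
               constraints=[{"type": "ineq", "fun": stress_margin}],
               method="SLSQP")
print(res.x, res.fun)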
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of local quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
A New Framework for Textual Information Mining over Parse Trees. CRESST Report 805
ERIC Educational Resources Information Center
Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.
2011-01-01
Textual information mining is a challenging problem that has resulted in the creation of many different rule-based linguistic query languages. However, these languages generally are not optimized for the purpose of text mining. In other words, they usually consider queries as individuals and only return raw results for each query. Moreover they…
Strategies for Helping Parents of Young Children Address Challenging Behaviors in the Home
ERIC Educational Resources Information Center
Chai, Zhen; Lieberman-Betz, Rebecca
2016-01-01
Challenging behavior can be defined as any repeated pattern of behavior, or perception of behavior, that interferes with or is at risk of interfering with optimal learning or engagement in prosocial interactions with peers and adults. It is generally accepted in young children that challenging behaviors serve some sort of communicative purpose--to…
1993-09-01
goal (Heizer, Render, and Stair, 1993:94). Integer Programming. Integer programming is a general purpose approach used to optimally solve job shop...Scheduling," Operations Research Journal. 29, No 4: 646-667 (July-August 1981). Heizer, Jay, Barry Render and Ralph M. Stair, Jr. Production and Operations
ERIC Educational Resources Information Center
Dudin, Mikhail N.; Ivashchenko, Natalia P.; Frolova, Evgenia E.; Abashidze, Aslan H.
2017-01-01
The purpose of the present article is to generalize and unify the approaches to improvement of the institutional environment that ensures optimal functioning and sustainable development of the Russian academic sphere. The following conclusions and results have been obtained through presentation of the materials in the article: (1) Improvement of…
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses, consisting of derivatives of the objective function and constraints, are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
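The forward-finite-difference sensitivity computation mentioned above is simple enough to sketch; each design variable costs one extra analysis per gradient evaluation. The toy objective is an assumption.

import numpy as np

def forward_diff_gradient(f, x, h=1e-6):
    # forward finite differences: one extra analysis per design variable
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

hover_hp = lambda x: x[0] ** 2 + 0.5 * x[0] * x[1]   # toy objective
print(forward_diff_gradient(hover_hp, [1.0, 2.0]))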
MIDACO on MINLP space applications
NASA Astrophysics Data System (ADS)
Schlueter, Martin; Erb, Sven O.; Gerdts, Matthias; Kemble, Stephen; Rückmann, Jan-J.
2013-04-01
A numerical study on two challenging mixed-integer non-linear programming (MINLP) space applications and their optimization with MIDACO, a recently developed general purpose optimization software, is presented. These applications are the optimal control of the ascent of a multiple-stage space launch vehicle and the space mission trajectory design from Earth to Jupiter using multiple gravity assists. Additionally, an NLP aerospace application, the optimal control of an F8 aircraft manoeuvre, is discussed and solved. In order to enhance the optimization performance of MIDACO, a hybridization technique coupling MIDACO with an SQP algorithm is presented for two of these three applications. The numerical results show that the applications can be solved to their best known solution (or even a new best solution) in a reasonable time by the considered approach. Since the concept of MINLP is still a novelty in the field of (aero)space engineering, the demonstrated capabilities are seen as very promising.
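To make the MINLP setting concrete, the sketch below solves a tiny mixed-integer problem by enumerating the single integer variable and solving a continuous NLP subproblem for each value. This brute-force decomposition is only a didactic device, not MIDACO's algorithm.

from scipy.optimize import minimize

# tiny MINLP: minimize (x - 2.3)^2 + (n - x)^2 with integer n in {0..5}
# and continuous x in [0, 5]; enumerate n, solve each NLP subproblem
best = None
for n in range(6):
    sub = minimize(lambda x, n=n: (x[0] - 2.3) ** 2 + (n - x[0]) ** 2,
                   x0=[2.0], bounds=[(0.0, 5.0)])
    if best is None or sub.fun < best[0]:
        best = (sub.fun, n, sub.x[0])
print("f* = %.4f at n = %d, x = %.3f" % best)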
NASA Astrophysics Data System (ADS)
Ai, Xueshan; Dong, Zuo; Mo, Mingzhu
2017-04-01
Optimal reservoir operation is generally a multi-objective problem. In real life, most reservoir operation optimization problems involve conflicting objectives, for which there is no single optimal solution that can simultaneously achieve an optimal result for all purposes; rather, a set of well-distributed non-inferior solutions, the Pareto frontier, exists. On the other hand, most reservoir operating rules gain greater social and economic benefits at the expense of the ecological environment, resulting in the destruction of riverine ecology and a reduction in aquatic biodiversity. To overcome these drawbacks, this study developed a multi-objective model for reservoir operation with the conflicting functions of hydroelectric energy generation, irrigation, and ecological protection. To solve the model with the objectives of maximizing energy production and maximizing the water demand satisfaction rate of irrigation and ecology, we propose a multi-objective optimization method of variable penalty coefficients (VPC), which integrates dynamic programming (DP) with discrete differential dynamic programming (DDDP), to generate well-distributed non-inferior solutions along the Pareto front by changing the penalty coefficients of the different objectives. This method was applied over the course of a year to an existing Chinese reservoir named Donggu, a multi-annual storage reservoir serving multiple purposes. The case study results showed a clear relationship between any two of the objectives and a good set of Pareto-optimal solutions, which provide a reference for reservoir decision makers.
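The effect of varying penalty coefficients can be illustrated on a toy two-objective release decision: sweeping the weight traces out a set of non-inferior solutions. The quadratic objectives below are assumptions, and the sketch does not reproduce the DP/DDDP machinery of the VPC method itself.

import numpy as np
from scipy.optimize import minimize_scalar

# two conflicting objectives of a release decision r (toy forms):
energy_loss = lambda r: (r - 8.0) ** 2      # hydropower prefers high release
eco_deficit = lambda r: (r - 3.0) ** 2      # ecology prefers low release

pareto = []
for w in np.linspace(0.05, 0.95, 10):       # vary the penalty coefficient
    res = minimize_scalar(lambda r: w * energy_loss(r) + (1 - w) * eco_deficit(r),
                          bounds=(0.0, 10.0), method="bounded")
    pareto.append((energy_loss(res.x), eco_deficit(res.x)))
for f1, f2 in pareto:                       # one non-inferior point per weight
    print(f"{f1:8.3f} {f2:8.3f}")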
Application of optimal data assimilation techniques in oceanography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, R.N.
Application of optimal data assimilation methods in oceanography is, if anything, more important than it is in numerical weather prediction, due to the sparsity of data. Here, a general framework is presented and practical examples taken from the author's work are described, with the purpose of conveying to the reader some idea of the state of the art of data assimilation in oceanography. While no attempt is made to be exhaustive, references to other lines of research are included. Major challenges to the community include the design of statistical error models and the handling of strong nonlinearity.
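As a minimal illustration of the optimal assimilation step underlying much of this work, the sketch below applies one Kalman (optimal interpolation) analysis update, in which a single sparse observation corrects a three-point forecast in proportion to the assumed forecast and observation error covariances; all numbers are invented.

import numpy as np

# one optimal-interpolation (Kalman) analysis step: a sparse observation y
# corrects a model forecast xf, weighted by forecast (P) and obs (R) errors
xf = np.array([14.0, 15.5, 16.2])            # forecast state (e.g., SST)
P = np.array([[1.0, 0.6, 0.2],               # correlated forecast errors let
              [0.6, 1.0, 0.6],               # one observation correct its
              [0.2, 0.6, 1.0]])              # unobserved neighbors too
H = np.array([[1.0, 0.0, 0.0]])              # observe only the first point
R = np.array([[0.25]])                       # observation error covariance
y = np.array([13.2])

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
xa = xf + K @ (y - H @ xf)                     # analysis state
print(xa)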
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations of existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.
Expert System for Automated Design Synthesis
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Barthelemy, Jean-Francois M.
1987-01-01
Expert-system computer program EXADS was developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. EXADS aids the engineer in determining the best combination of ADS options based on knowledge of the specific problem and expert knowledge stored in the knowledge base. Available in two interactive machine versions. IBM PC version (LAR-13687) written in IQ-LISP. DEC VAX version (LAR-13688) written in Franz-LISP.
A genetic algorithm solution to the unit commitment problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.
1996-02-01
This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions, but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
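A hedged sketch of the Varying Quality Function idea, as commonly realized by a generation-dependent penalty: early generations tolerate constraint violations so the GA can explore, while late generations are forced toward feasibility. The specific weights and schedule below are assumptions, not the authors' exact formulation.

def penalized_cost(schedule_cost, constraint_violation, generation, n_gen):
    # penalty weight grows with generation: early on, infeasible schedules
    # may survive and explore; late on, feasibility dominates the fitness
    penalty_weight = 10.0 + 1000.0 * generation / n_gen
    return schedule_cost + penalty_weight * constraint_violation

print(penalized_cost(1000.0, 2.5, generation=0, n_gen=100))    # lenient
print(penalized_cost(1000.0, 2.5, generation=99, n_gen=100))   # strict

# inside a GA loop this would be evaluated per candidate schedule:
# fitness = [-penalized_cost(cost(s), violation(s), g, n_gen)
#            for s in population]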
Gradient-Based Optimization of Wind Farms with Different Turbine Heights: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanley, Andrew P. J.; Thomas, Jared; Ning, Andrew
Turbine wakes reduce power production in a wind farm. Current wind farms are generally built with turbines that are all the same height, but if wind farms included turbines with different tower heights, the cost of energy (COE) may be reduced. We used gradient-based optimization to demonstrate a method to optimize wind farms with varied hub heights. Our study includes a modified version of the FLORIS wake model that accommodates three-dimensional wakes integrated with a tower structural model. Our purpose was to design a process to minimize the COE of a wind farm through layout optimization and varying turbine hub heights. Results indicate that when a farm is optimized for layout and height with two separate height groups, COE can be lowered by as much as 5%-9%, compared to a similar layout and height optimization where all the towers are the same. The COE has the best improvement in farms with high turbine density and a low wind shear exponent.
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
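A compact sketch of (tau-)extremal optimization on a toy max-cut instance follows: a node with power-law-selected low local fitness (fraction of its edges cut) is flipped unconditionally, producing the avalanche-like dynamics described above. The parameter values and the rank-selection shortcut via a Pareto draw are assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 40
A = (rng.random((n, n)) < 0.15).astype(int)
A = np.triu(A, 1); A = A + A.T                 # random undirected graph
s = rng.integers(0, 2, n)                      # two partitions, labels 0/1

tau = 1.4                                      # the single EO parameter
for _ in range(5000):
    # local fitness of each node: fraction of its edges that are cut
    deg = np.maximum(A.sum(1), 1)
    cut = (A * (s[:, None] != s[None, :])).sum(1)
    fit = cut / deg
    order = np.argsort(fit)                    # worst (least cut) first
    # approximately power-law-distributed rank, as in tau-EO selection
    k = min(int(rng.pareto(tau - 1.0)), n - 1)
    s[order[k]] ^= 1                           # flip it unconditionally
cut_size = (A * (s[:, None] != s[None, :])).sum() // 2
print("cut edges:", cut_size)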
Monte Carlo simulations in Nuclear Medicine
NASA Astrophysics Data System (ADS)
Loudos, George K.
2007-11-01
Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition, they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) and dedicated (SimSET, etc.) codes have been developed with the aim of providing accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges, including the simulation of clinical studies and dosimetry applications.
A generalized concept of power helped to choose optimal endpoints in clinical trials.
Borm, George F; van der Wilt, Gert J; Kremer, Jan A M; Zielhuis, Gerhard A
2007-04-01
A clinical trial may have multiple objectives. Sometimes the results for several parameters may need to be significant or meet certain other criteria. In such cases, it is important to evaluate the probability that all these objectives will be met, rather than the probability that each will be met. The purpose of this article is to introduce a definition of power that is tailored to handle this situation and that is helpful for the design of such trials. We introduce a generalized concept of power. It can handle complex situations, for example, in which there is a logical combination of partial objectives. These may be formulated not only in terms of statistical tests and confidence intervals, but also in nonstatistical terms, such as "selecting the optimal dose." The power of a trial was calculated for various objectives and combinations of objectives. The generalized concept of power may lead to power calculations that closely match the objectives of the trial and contribute to choosing more efficient endpoints and designs.
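The generalized power of a trial with two endpoints that must both be significant can be estimated by simulation, as sketched below for two independent endpoints with assumed standardized effects; the joint power is the fraction of simulated trials in which every objective is met.

import numpy as np

rng = np.random.default_rng(3)
n_per_arm, n_sim = 50, 20000
effect1, effect2, sd = 0.5, 0.4, 1.0          # assumed standardized effects

hits = 0
for _ in range(n_sim):
    # two endpoints measured on treatment vs control (independent here)
    t = rng.normal([effect1, effect2], sd, (n_per_arm, 2))
    c = rng.normal(0.0, sd, (n_per_arm, 2))
    se = np.sqrt(t.var(0, ddof=1) / n_per_arm + c.var(0, ddof=1) / n_per_arm)
    z = (t.mean(0) - c.mean(0)) / se
    hits += np.all(z > 1.96)                   # both endpoints significant
print("joint power:", hits / n_sim)            # < power of either endpoint alone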
The importance of emotional intelligence and meaning in life in psycho-oncology.
Teques, Andreia Pereira; Carrera, Glória Bueno; Ribeiro, José Pais; Teques, Pedro; Ramón, Ginés Llorca
2016-03-01
Cancer was considered the disease of the 20th century, and the management, treatment, and adaptation of patients to general wellbeing were worldwide concerns. Emotional intelligence has frequently been associated with wellbeing and considered one important factor to optimal human functioning. The purpose of the present study was to test the differences regarding the relationship between emotional intelligence, purpose in life, and satisfaction with life between cancer and healthy people. This model was tested using structural path analysis in two independent samples. First, in a general Portuguese population without chronic disease, 214 participants (n_male = 41, n_female = 173; M_age = 53). Second, in 202 patients with cancer (n_male = 40, n_female = 162; M_age = 58.65). A two-step methodology was used to test the research hypothesis. First, a confirmatory factor analysis supported the measurement model. All factors also show reliability, convergent, and discriminate validity. Second, the path coefficients for each model indicate that the proposed relationships differ significantly according to the groups. The perception capacities of emotional intelligence were more related to satisfaction with life and purpose in life in oncologic patients than in the general population without chronic disease, specifically emotional understanding and regulation. Likewise, the relationship between purpose in life and satisfaction with life in oncologic patients was significantly higher than for the general population. The current findings thus suggest that emotional intelligence and purpose in life are potential components to promoting satisfaction in life in healthy people and more so in oncologic patients. Copyright © 2015 John Wiley & Sons, Ltd.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2015-04-01
Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer, and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. Its size makes it possible to use either the SDP or the SDDP method. The independent use of surface water and groundwater can be examined with and without the aquifer. The ESPAT_DET, ESPATR, and ESPAT_SDP modules were executed for the surface system, while the ESPAT_RA and ESPAT_DET modules were run for the surface-groundwater system. The surface system's results show a similar performance for the ESPAT_SDP and ESPATR modules, which outperform the current policies while being outperformed by the ESPAT_DET results, which have the advantage of perfect foresight. The surface-groundwater system's results show a robust situation in which the differences between the modules' results and the current policies are smaller, due to the use of pumped groundwater for the XX century crops when surface water is scarce. The results are realistic, with the deterministic optimization outperforming the stochastic one, which in turn outperforms the current policies, showing that the tool is able to stochastically optimize river-aquifer water resource systems. We are currently working on the application of these tools to the analysis of changes in system operation under global change conditions. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates
Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung
2015-01-01
Natural products have made significant contribution to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems with multiple mines or several work sites (entities) using fuzzy linear programming (LP) was studied. LP is one of the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. The comparison of the advantages and deficiencies of the deterministic and fuzzy LP models leads to the conclusion that the fuzzy LP model is beneficial, while also indicating that seeking the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
NASA Astrophysics Data System (ADS)
Alegria Mira, Lara; Thrall, Ashley P.; De Temmerman, Niels
2016-02-01
Deployable scissor structures are well equipped for temporary and mobile applications since they are able to change their form and functionality. They are structural mechanisms that transform from a compact state to an expanded, fully deployed configuration. A barrier to the current design and reuse of scissor structures, however, is that they are traditionally designed for a single purpose. Alternatively, a universal scissor component (USC), a generalized element which can achieve all traditional scissor types, introduces an opportunity for reuse in which the same component can be utilized for different configurations and spans. In this article, the USC is optimized for structural performance. First, an optimized length for the USC is determined based on a trade-off between component weight and structural performance (measured by deflections). Then, topology optimization, using the simulated annealing algorithm, is implemented to determine a minimum-weight layout of beams within a single USC component.
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and easily coupling to, any scientific software application. GLO runs the optimization module and the scientific application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
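The iterative coupling loop described above is easy to sketch in modern terms: substitute parameters into an input template, run the application, parse the objective from its output, and let a general-purpose optimizer drive the loop. The file names, token names, and simulator command below are hypothetical, not GLO's actual interfaces.

import subprocess, re
from scipy.optimize import minimize

TEMPLATE = open("input.template").read()        # hypothetical: holds {yield_stress} etc.

def run_simulation(params):
    # GLO-PUT role: substitute parameter values into the input deck
    deck = TEMPLATE.format(yield_stress=params[0], hardening=params[1])
    with open("run.inp", "w") as f:
        f.write(deck)
    # run the (hypothetical) analysis code on the new input
    subprocess.run(["./simulator", "run.inp"], check=True)
    # GLO-GET role: extract the computed result from the output file
    text = open("run.out").read()
    return float(re.search(r"final_length\s*=\s*(\S+)", text).group(1))

measured_length = 47.3                          # experimental target
objective = lambda p: (run_simulation(p) - measured_length) ** 2
res = minimize(objective, x0=[300.0, 0.1], method="Nelder-Mead")
print(res.x)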
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
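One simple way to realize such a merit function, under the assumption that approximation quality improves with distance from already-sampled designs, is sketched below; the specific form and the rho weight are illustrative, not the paper's exact definition.

import numpy as np

def merit(x, surrogate, X_sampled, rho=0.5):
    # surrogate value minus a reward for moving away from sampled points;
    # minimizing this trades off exploiting the approximation against
    # sampling where it is least informed (and so most improvable)
    d_min = np.min(np.linalg.norm(X_sampled - x, axis=1))
    return surrogate(x) - rho * d_min

X = np.array([[0.0, 0.0], [1.0, 1.0]])          # designs already analyzed
surrogate = lambda x: float(np.sum((x - 0.5) ** 2))
print(merit(np.array([0.2, 0.8]), surrogate, X))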
Combining local search with co-evolution in a remarkably simple way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.
2000-05-01
The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. This problem consists in designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The problem is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
KARMA: the observation preparation tool for KMOS
NASA Astrophysics Data System (ADS)
Wegner, Michael; Muschielok, Bernard
2008-08-01
KMOS is a multi-object integral field spectrometer working in the near infrared, currently being built for the ESO VLT by a consortium of UK and German institutes. It is capable of selecting up to 24 target fields for integral field spectroscopy simultaneously by means of 24 robotic pick-off arms. For the preparation of observations with KMOS, a dedicated preparation tool, KARMA ("KMOS Arm Allocator"), will be provided, which optimizes the assignment of targets to these arms automatically, taking target priorities and several mechanical and optical constraints into account. For this purpose, two efficient algorithms were developed, each able to cope with the underlying optimization problem in a different way. We present the concept and architecture of KARMA in general and the optimization algorithms in detail.
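Although KARMA's own algorithms are not reproduced here, the core target-to-arm assignment can be illustrated with the Hungarian method: priorities are folded into an assignment cost matrix, and mechanically unreachable pairs are given a prohibitive cost. All numbers below are invented.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
n_arms, n_targets = 24, 30
# cost of assigning arm i to target j: high-priority targets get lower
# cost; infeasible pairs get a huge cost (priorities are illustrative)
cost = rng.random((n_arms, n_targets))
priority = rng.integers(1, 4, n_targets)        # 1 = highest priority
cost = cost + priority[None, :]                 # prefer priority-1 targets
reachable = rng.random((n_arms, n_targets)) < 0.7
cost[~reachable] = 1e6                          # mechanical constraints

rows, cols = linear_sum_assignment(cost)        # optimal 24-target choice
for arm, tgt in zip(rows, cols):
    if cost[arm, tgt] < 1e6:
        print(f"arm {arm:2d} -> target {tgt}")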
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1993-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures which consequently necessitates the need for thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.
An intelligent agent for optimal river-reservoir system management
NASA Astrophysics Data System (ADS)
Rieker, Jeffrey D.; Labadie, John W.
2012-09-01
A generalized software package is presented for developing an intelligent agent for stochastic optimization of complex river-reservoir system management and operations. Reinforcement learning is an approach to artificial intelligence for developing a decision-making agent that learns the best operational policies without the need for explicit probabilistic models of hydrologic system behavior. The agent learns these strategies experientially in a Markov decision process through observational interaction with the environment and simulation of the river-reservoir system using well-calibrated models. The graphical user interface for the reinforcement learning process controller includes numerous learning method options and dynamic displays for visualizing the adaptive behavior of the agent. As a case study, the generalized reinforcement learning software is applied to developing an intelligent agent for optimal management of water stored in the Truckee river-reservoir system of California and Nevada for the purpose of streamflow augmentation for water quality enhancement. The intelligent agent successfully learns long-term reservoir operational policies that specifically focus on mitigating water temperature extremes during persistent drought periods that jeopardize the survival of threatened and endangered fish species.
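A minimal tabular Q-learning sketch in the spirit of such an agent is given below, with a coarse storage state, a small release action set, and an invented reward; the actual software uses calibrated river-reservoir simulation models rather than this toy transition function.

import numpy as np

rng = np.random.default_rng(5)
n_storage, n_actions = 10, 3                   # coarse storage levels, releases
Q = np.zeros((n_storage, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    # toy environment: random inflow, release a, bounded storage
    inflow = rng.integers(0, 3)
    s2 = int(np.clip(s + inflow - a, 0, n_storage - 1))
    # invented reward: favor moderate releases, penalize an empty reservoir
    reward = -abs(a - 1) - (2.0 if s2 == 0 else 0.0)
    return s2, reward

s = n_storage // 2
for _ in range(50000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q update
    s = s2
print(Q.argmax(axis=1))                        # learned release per level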
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
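A bare-bones diffusion-map construction is sketched below: Gaussian affinities between configuration snapshots are normalized into a Markov matrix whose leading nontrivial eigenvectors serve as the coarse-grained variables. The parameter choices (kernel scale eps, alpha = 1 normalization) are assumptions.

import numpy as np

def diffusion_map(X, eps, n_coords=2):
    # X: (n_snapshots, n_features) many-body configurations
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)                      # Gaussian affinities
    q = K.sum(1)
    K = K / np.outer(q, q)                     # alpha = 1 normalization
    P = K / K.sum(1, keepdims=True)            # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector; return coarse coordinates
    return vecs.real[:, order[1:n_coords + 1]]

X = np.random.default_rng(6).normal(size=(100, 8))
print(diffusion_map(X, eps=10.0).shape)        # (100, 2)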
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay
2018-03-01
Sparsity has recently become an increasingly important topic in the areas of machine learning and signal processing. One big family of sparse measures in the current literature is the generalized lp/lq norm, which is scale invariant and is widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm are still little discussed, and its application to the condition monitoring of rotating devices has remained unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a method of sparse filtering with the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and implemented on a bearing and a gearbox for the purpose of condition monitoring. Based on the results from the industrial implementations in this paper, the proposed method is found to be a promising tool for impulsive feature enhancement, and its superiority over previous methods is also demonstrated.
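For concreteness, the generalized lp/lq ratio can be computed directly, as sketched below for p = 1, q = 2: a sparse, impulsive signal yields a smaller l1/l2 value than a smooth one, and rescaling the signal leaves the measure unchanged (scale invariance). The test signals are invented.

import numpy as np

def lp_lq_ratio(x, p=1, q=2):
    # generalized l_p/l_q norm ratio; for p < q, smaller values indicate
    # a sparser (more impulsive) signal, and the measure is invariant to
    # rescaling of x
    x = np.abs(np.asarray(x, dtype=float))
    return (x ** p).sum() ** (1.0 / p) / (x ** q).sum() ** (1.0 / q)

t = np.linspace(0, 1, 1000)
smooth = np.sin(2 * np.pi * 50 * t)
impulsive = smooth * (np.random.default_rng(7).random(1000) < 0.02)

print(lp_lq_ratio(smooth), lp_lq_ratio(impulsive))           # impulsive smaller
print(lp_lq_ratio(impulsive), lp_lq_ratio(10 * impulsive))   # scale invariant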
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
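To make the inverse problem concrete, here is a minimal sketch of parameter identification by nonlinear least squares, using a Thiem-type steady drawdown expression as the forward model (the model choice, pumping rate, and radii are illustrative assumptions, not taken from the review):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model (Thiem steady-state drawdown near a pumping well):
# s(r) = Q / (2*pi*T) * ln(R / r), with unknown transmissivity T.
Q, R = 0.01, 500.0                              # pumping rate (m^3/s), radius of influence (m)
r_obs = np.array([10.0, 50.0, 100.0, 200.0])    # observation-well radii (m)

def forward(T):
    return Q / (2.0 * np.pi * T) * np.log(R / r_obs)

T_true = 5e-4
s_obs = forward(T_true) + 0.005 * np.random.default_rng(1).normal(size=r_obs.size)

# Identify T by minimizing the misfit to the observed water levels.
fit = least_squares(lambda T: forward(T[0]) - s_obs, x0=[1e-3], bounds=(1e-6, 1e-1))
print(fit.x)   # approximately T_true
```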
Synthetic Aperture Radar (SAR) data processing
NASA Technical Reports Server (NTRS)
Beckner, F. L.; Ahr, H. A.; Ausherman, D. A.; Cutrona, L. J.; Francisco, S.; Harrison, R. E.; Heuser, J. S.; Jordan, R. L.; Justus, J.; Manning, B.
1978-01-01
The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general purpose and dedicated imaging systems was addressed.
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high-level general purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
Feng, Jun; Li, Shusheng; Chen, Huawen
2015-01-01
Background: The high incidence of pesticide ingestion as a means to commit suicide is a critical public health problem. An important predictor of suicidal behavior is suicide ideation, which is related to stress. However, studies on how to defend against stress-induced suicidal thoughts are limited. Objective: This study explores the impact of stress on suicidal ideation by investigating the mediating effect of self-efficacy and dispositional optimism. Methods: Direct and indirect (via self-efficacy and dispositional optimism) effects of stress on suicidal ideation were investigated among 296 patients with acute pesticide poisoning from four general hospitals. For this purpose, structural equation modeling (SEM) and the bootstrap method were used. Results: Results obtained using SEM and the bootstrap method show that stress has a direct effect on suicidal ideation. Furthermore, self-efficacy and dispositional optimism partially weakened the relationship between stress and suicidal ideation. Conclusion: The final model shows a significant relationship between stress and suicidal ideation through self-efficacy or dispositional optimism. The findings extend prior studies and shed light on how self-efficacy and optimism prevent stress-induced suicidal thoughts. PMID:25679994
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a case with a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
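The paper's projection solver is specialized; as a generic sketch of the projection idea for a convex, box-constrained fluence-style problem (the problem form and all names here are illustrative, not the authors' algorithm):

```python
import numpy as np

def projected_gradient(A, b, lb, ub, steps=500):
    """Minimize ||A x - b||^2 subject to lb <= x <= ub by gradient steps
    followed by projection (clipping) onto the feasible box."""
    x = np.clip(np.zeros(A.shape[1]), lb, ub)
    L = 2.0 * np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.clip(x - grad / L, lb, ub)      # step, then project
    return x

# Toy usage: nonnegative, bounded "intensities" fitted to a target.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(40, 10)), rng.normal(size=40)
print(projected_gradient(A, b, lb=0.0, ub=1.0))
```

Projection methods of this flavor carry almost no memory beyond the iterate itself, which is consistent with the low overhead the abstract reports.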
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; Russell, Samuel S.
2012-01-01
Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; McDougal, Matthew; Russell, Sam
2013-01-01
Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
Digital optical computers at the optoelectronic computing systems center
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-02
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at a finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
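For orientation (textbook background, not a result of the paper): the Carnot bound for a cycle operating between hot and cold baths at temperatures T_h and T_c is

```latex
\eta_C \;=\; 1 - \frac{T_c}{T_h},
\qquad
P \;=\; \frac{W}{\tau} \;\longrightarrow\; 0
\quad \text{as } \tau \to \infty \ \text{(quasi-static limit)}.
```

Conventionally this efficiency is attained only in the quasi-static limit, where the cycle time diverges and the delivered power vanishes; the paper asks how closely the bound can be approached while keeping the power finite.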
Haanstra, Tsjitske M.; Tilbury, Claire; Kamper, Steven J.; Tordoir, Rutger L.; Vliet Vlieland, Thea P. M.; Nelissen, Rob G. H. H.; Cuijpers, Pim; de Vet, Henrica C. W.; Dekker, Joost; Knol, Dirk L.; Ostelo, Raymond W.
2015-01-01
Objectives: The constructs optimism, pessimism, hope, treatment credibility and treatment expectancy are associated with outcomes of medical treatment. While these constructs are grounded in different theoretical models, they nonetheless show some conceptual overlap. The purpose of this study was to examine whether currently available measurement instruments for these constructs capture the conceptual differences between these constructs within a treatment setting. Methods: Patients undergoing Total Hip and Total Knee Arthroplasty (THA and TKA) (total N = 361; 182 THA; 179 TKA) completed the Life Orientation Test-Revised for optimism and pessimism, the Hope Scale, and the Credibility Expectancy Questionnaire for treatment credibility and treatment expectancy. Confirmatory factor analysis was used to examine whether the instruments measure distinct constructs. Four theory-driven models with one, two, four and five latent factors were evaluated using multiple fit indices and Δχ2 tests, followed by some post hoc models. Results: The theory-driven confirmatory factor analysis showed that a five-factor model in which all constructs loaded on separate factors yielded the best and most satisfactory fit. Post hoc, a bifactor model in which (besides the five separate factors) a general factor is hypothesized to account for the commonality of the items showed a significantly better fit than the five-factor model. All specific factors, except the hope factor, were shown to explain a substantial amount of variance beyond the general factor. Conclusion: Based on our primary analyses we conclude that optimism, pessimism, hope, treatment credibility and treatment expectancy are distinguishable in THA and TKA patients. Post hoc, we determined that all constructs, except hope, showed substantial specific variance, while also sharing some general variance. PMID:26214176
NASA Astrophysics Data System (ADS)
Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young
2015-12-01
This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which is then employed as an initial guess to determine it precisely. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that the satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time using a pseudospectral method alone. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed, with the initial guess obtained through the pseudospectral method. This hybrid process allows the optimal fuel consumption for LEO spacecraft and the corresponding thrust profiles to be determined efficiently and precisely.
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.
2009-01-01
A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
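pvsolve itself is not reproduced here; as a minimal stand-in for the Choleski-based step it performs, a symmetric positive-definite system K u = f of the kind produced by finite-element assembly can be factored once and solved cheaply (the matrix below is synthetic, for illustration):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Stiffness matrices from finite-element assembly are symmetric positive
# definite, so a Choleski factorization K = L L^T solves K u = f stably.
rng = np.random.default_rng(2)
M = rng.normal(size=(100, 100))
K = M @ M.T + 100.0 * np.eye(100)   # synthetic SPD stand-in for a stiffness matrix
f = rng.normal(size=100)

c, low = cho_factor(K)              # factor once ...
u = cho_solve((c, low), f)          # ... then solve cheaply for each load case
print(np.allclose(K @ u, f))        # True
```

Reusing the factorization across repeated solves is what makes a fast direct solver pay off inside an optimization loop, where the analysis is called many times.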
NASA Astrophysics Data System (ADS)
Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto
2017-06-01
Buildings have a large impact on environmental development. There are three general motives in building, namely economic, social, and environmental. Total completed building construction in Indonesia increased by 116% during 2009 to 2011, and energy consumption increased by 11% within the last three years. In fact, 70% of energy consumption is used for electricity in commercial buildings, which has led to a 25% increase in greenhouse gas emissions. The life-cycle cost of green buildings in Indonesia is known for its high upfront component. The purpose of the optimization in this research is to improve building performance with several green concept alternatives. The research methodology is a mixed method of qualitative and quantitative approaches through questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed in the operation and maintenance phase with the Life Cycle Assessment method. Optimization results were chosen based on the largest building life-cycle efficiency and the most cost-effective payback.
Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J
2004-05-01
An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian
1988-01-01
The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple reduction optimization capability in the future.
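A present-day analogue of this coupling, with a general-purpose optimizer minimizing weight subject to a life limit, might look as follows; the objective, constraint, bounds, and variable choices are placeholders for illustration, not the report's gear models:

```python
from scipy.optimize import minimize

# Placeholder design variables: x = [face_width, diametral_pitch].
def weight(x):                    # hypothetical proxy for gear weight
    face_width, pitch = x
    return face_width / pitch

def life(x):                      # hypothetical proxy for fatigue life
    face_width, pitch = x
    return face_width * pitch

# Minimize weight subject to life >= 10 and side bounds on both variables.
res = minimize(weight, x0=[1.0, 8.0],
               bounds=[(0.5, 3.0), (4.0, 16.0)],
               constraints=[{"type": "ineq", "fun": lambda x: life(x) - 10.0}])
print(res.x)
```

Swapping the objective and constraints (e.g., maximizing life with a weight limit) is a one-line change, which mirrors the flexibility the abstract attributes to COPES/ADS.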
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
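As a brief illustration of the criterion itself (standard theory, not the paper's proposed designs), D-optimality ranks designs by det(X'X) for the model matrix X; for a three-component Scheffé linear model the minimal design is the three simplex vertices:

```python
import numpy as np

# Scheffé linear model in three components: E[y] = b1*x1 + b2*x2 + b3*x3.
# The minimally supported D-optimal design is one run at each simplex vertex.
vertices = np.eye(3)                        # rows are design points = model matrix X
centroid = np.full((1, 3), 1.0 / 3.0)       # an added interior point

def d_criterion(X):
    """D-optimality criterion: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

print(d_criterion(vertices))                          # 1.0 for the vertex design
print(d_criterion(np.vstack([vertices, centroid])))   # ~1.33 with the interior point
```

Adding interior points, as the paper proposes, buys degrees of freedom for Lack of Fit testing and interior prediction at a quantifiable cost or gain in the D-criterion.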
Lutz, David A; Burakowski, Elizabeth A; Murphy, Mackenzie B; Borsuk, Mark E; Niemiec, Rebecca M; Howarth, Richard B
2016-01-01
Forests are more frequently being managed to store and sequester carbon for the purposes of climate change mitigation. Generally, this practice involves long-term conservation of intact mature forests and/or reductions in the frequency and intensity of timber harvests. However, incorporating the influence of forest surface albedo often suggests that long rotation lengths may not always be optimal in mitigating climate change in forests characterized by frequent snowfall. To address this, we investigated trade-offs between three ecosystem services: carbon storage, albedo-related radiative forcing, and timber provisioning. We calculated optimal rotation length at 498 diverse Forest Inventory and Analysis forest sites in the state of New Hampshire, USA. We found that the mean optimal rotation length across all sites was 94 yr (standard deviation of sample means = 44 yr), with a large cluster of short optimal rotation lengths that were calculated at high elevations in the White Mountain National Forest. Using a regression tree approach, we found that timber growth, annual storage of carbon, and the difference between annual albedo in mature forest vs. a post-harvest landscape were the most important variables that influenced optimal rotation. Additionally, we found that the choice of a baseline albedo value for each site significantly altered the optimal rotation lengths across all sites, lowering the mean rotation to 59 yr with a high albedo baseline, and increasing the mean rotation to 112 yr given a low albedo baseline. Given these results, we suggest that utilizing temperate forests in New Hampshire for climate mitigation purposes through carbon storage and the cessation of harvest is appropriate at a site-dependent level that varies significantly across the state.
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
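A minimal sketch of the idea of optimizing groups of variables separately (plain block-coordinate descent on a toy quadratic; the paper's hierarchical scheme is more elaborate than this):

```python
import numpy as np

def block_descent(grad, x, blocks, lr=0.1, sweeps=100):
    """Optimize one group of variables at a time: within each sweep, take a
    gradient step restricted to a single block while the rest stay frozen."""
    for _ in range(sweeps):
        for idx in blocks:
            g = grad(x)
            x[idx] -= lr * g[idx]     # update only this group
    return x

# Toy usage: minimize ||x - t||^2 over six variables split into two groups.
t = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])
grad = lambda x: 2.0 * (x - t)
print(block_descent(grad, np.zeros(6), blocks=[np.arange(3), np.arange(3, 6)]))
```

Grouping variables and optimizing each group with any conventional inner solver is what lets such a scheme wrap around existing algorithms or special-purpose hardware, as the abstract suggests.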
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.
1991-01-01
A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
Programs To Optimize Spacecraft And Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.
1994-01-01
POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm with the ability to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: There are three main components in this proposed multi-objective optimization framework: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values, until every single endpoint has no room for further improvement. Lastly, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning these voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The voxel weighting factor iteration count varied from 10 to 30 under an updated constraint, and the constraint tuning count varied from 20 to 30 for every case until no stricter constraint was allowed. The average total time for the whole optimization procedure was approximately 30 min. Comparing the DVHs, better OAR dose sparing could be observed in the automatically generated plans for 13 of the 20 cases, while the others gave competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization framework for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No: 81571771)
GAUSSIAN 76: An ab initio Molecular Orbital Program
DOE R&D Accomplishments Database
Binkley, J. S.; Whiteside, R.; Hariharan, P. C.; Seeger, R.; Hehre, W. J.; Lathan, W. A.; Newton, M. D.; Ditchfield, R.; Pople, J. A.
1978-01-01
Gaussian 76 is a general-purpose computer program for ab initio Hartree-Fock molecular orbital calculations. It can handle basis sets involving s, p and d-type Gaussian functions. Certain standard sets (STO-3G, 4-31G, 6-31G*, etc.) are stored internally for easy use. Closed shell (RHF) or unrestricted open shell (UHF) wave functions can be obtained. Facilities are provided for geometry optimization to potential minima and for limited potential surface scans.
Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors
NASA Astrophysics Data System (ADS)
Mehanna Ismail, Mohammed Ali
The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the implementation of time splitting, variable stochastic fluid particle mass control, and a second order time accurate (predictor-corrector) scheme used for solving the stochastic differential equations governing the particles evolution. The model compared well against experimental data found in the literature for two different configurations: bluff body and swirl stabilized combustors. The generalized stochastic reactor is a newly developed model. This model relies on the generalization of the concept of the classical stochastic reactor theory in the sense that it accounts for both finite micro- and macro-mixing processes. (Abstract shortened by UMI.)
Multidisciplinary Environments: A History of Engineering Framework Development
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Gillian, Ronnie E.
2006-01-01
This paper traces the history of engineering frameworks and their use by Multidisciplinary Design Optimization (MDO) practitioners. The approach is to reference papers that have been presented at one of the ten previous Multidisciplinary Analysis and Optimization (MA&O) conferences. By limiting the search to MA&O papers, the authors can (1) identify the key ideas that led to general purpose MDO frameworks and (2) uncover roadblocks that delayed the development of these ideas. The authors make no attempt to assign credit for revolutionary ideas or to assign blame for missed opportunities. Rather, the goal is to trace the various threads of computer architecture and software framework research and to observe how these threads contributed to the commercial framework products available today.
'Extremotaxis': computing with a bacterial-inspired algorithm.
Nicolau, Dan V; Burrage, Kevin; Nicolau, Dan V; Maini, Philip K
2008-01-01
We present a general-purpose optimization algorithm inspired by "run-and-tumble", the biased random walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
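A compact sketch of the run-and-tumble strategy as described (the parameters and the simple tumbling rule here are assumptions for illustration, not the authors' exact scheme):

```python
import numpy as np

def run_and_tumble(f, dim=2, n_particles=20, steps=2000, step_size=0.05, seed=0):
    """Minimize f with particles that keep swimming in their current
    direction while f improves ("run") and pick a fresh random direction
    when it does not ("tumble"), mimicking E. coli chemotaxis."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    d = rng.normal(size=(n_particles, dim))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    best = np.apply_along_axis(f, 1, x)          # f at each particle's position
    for _ in range(steps):
        trial = x + step_size * d
        val = np.apply_along_axis(f, 1, trial)
        worse = val >= best                      # temporal comparison per particle
        # Tumble: redraw a random unit direction for non-improving particles.
        d[worse] = rng.normal(size=(int(worse.sum()), dim))
        d[worse] /= np.linalg.norm(d[worse], axis=1, keepdims=True)
        # Run: improving particles accept the step.
        x[~worse], best[~worse] = trial[~worse], val[~worse]
    return x[np.argmin(best)]

# Toy usage: the particles drift toward the minimum of a smooth bowl.
print(run_and_tumble(lambda p: float(np.sum(p ** 2))))
```

Because the rule relies only on temporal comparisons of function values, it needs no gradients, which is what makes it attractive for noisy landscapes with local minima.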
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; McDougal, Matthew; Russell, Sam
2012-01-01
Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
NASA Technical Reports Server (NTRS)
Junge, M. K.; Giacomi, M. J.
1981-01-01
The results of a human factors test to assay the suitability of a prototype general purpose work station (GPWS) for biosciences experiments on the fourth Spacelab mission are reported. The evaluation was performed to verify that users of the GPWS would optimally interact with the GPWS configuration and instrumentation. Six male subjects sat on stools positioned to allow assimilation of the zero-g body posture. Trials were run concerning the operator viewing angles facing the console, the console color, procedures for injecting rats with dye, a rat blood cell count, mouse dissection, squirrel monkey transfer, and plant fixation. The trials were run for several days in order to gauge improvement or poor performance conditions. Better access to the work surface was found necessary, together with more distinct and better located LEDs, better access window latches, clearer sequences on control buttons, color-coded sequential buttons, and provisions made for an intercom system when operators of the GPWS work in tandem.
Intramedullary Fixation of Midshaft Clavicle Fractures.
Fritz, Erik M; van der Meijden, Olivier A; Hussain, Zaamin B; Pogorzelski, Jonas; Millett, Peter J
2017-08-01
Clavicle fractures are among the most common fractures occurring in the general population, and the vast majority are localized in the midshaft portion of the bone. Management of midshaft clavicle fractures remains controversial. Although many can be managed nonoperatively, certain patient populations and fracture patterns, such as completely displaced and shortened fractures, are at risk of less optimal outcomes with nonoperative management; surgical intervention should be considered in such cases. The purpose of this article is to demonstrate our technique of midshaft clavicle fixation using minimally invasive intramedullary fixation.
[When hair starts to fall out].
de Lorenzi, Caroline; Quenan, Sandrine
2018-03-28
Hair loss causes physical and psychological distress and is a common reason for consultation in both general practice and dermatology. Causes of hair loss are highly diverse and can make diagnosis challenging, which can delay management. Knowledge of the main causes and their different mechanisms is thus necessary in order to optimize both diagnosis and treatment. The purpose of this paper is to describe the main causes of hair loss in order to improve its diagnosis and management.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
Chen, Brian R; Poon, Emily; Alam, Murad
2018-01-01
Lighting is an important component of consistent, high-quality dermatologic photography. There are different types of lighting solutions available. To evaluate currently available lighting equipment and methods suitable for procedural dermatology. Overhead lighting, built-in camera flashes, external flash units, studio strobes, and light-emitting diode (LED) light panels were evaluated with regard to their utility for dermatologic surgeons. A set of ideal lighting characteristics was used to examine the capabilities and limitations of each type of lighting solution. Recommendations regarding lighting solutions and optimal usage configurations were made in terms of the context of the clinical environment and the purpose of the image. Overhead lighting may be a convenient option for general documentation. An on-camera lighting solution using a built-in camera flash or a camera-mounted external flash unit provides portability and consistent lighting with minimal training. An off-camera lighting solution with studio strobes, external flash units, or LED light panels provides versatility and even lighting with minimal shadows and glare. The selection of an optimal lighting solution is contingent on practical considerations and the purpose of the image.
Dispositional Optimism and Therapeutic Expectations in Early Phase Oncology Trials
Jansen, Lynn A.; Mahadevan, Daruka; Appelbaum, Paul S.; Klein, William MP; Weinstein, Neil D.; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P.
2016-01-01
Purpose: Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early phase oncology trials. Optimism, however, is not a unitary construct; it can also be defined as a general disposition, or what is called dispositional optimism. We assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. We also assessed how dispositional optimism related to unrealistic optimism. Methods: Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Results: Dispositional optimism was significantly associated with higher expectations for personal therapeutic benefit (Spearman r=0.333, p<0.0001), but was not associated with the therapeutic misconception (Spearman r=−0.075, p=0.329). Dispositional optimism was weakly associated with unrealistic optimism (Spearman r=0.215, p=0.005). In multivariate analysis, both dispositional optimism (p=0.02) and unrealistic optimism (p<0.0001) were independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (p=0.0001), but not dispositional optimism, was independently associated with the therapeutic misconception. Conclusion: High expectations for therapeutic benefit among patient-subjects in early phase oncology trials should not be assumed to result from misunderstanding of specific information about the trials. Our data reveal that these expectations are associated with either a dispositionally positive outlook on life or biased expectations about specific aspects of trial participation. Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early phase oncology research. PMID:26882017
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose.
EXADS - EXPERT SYSTEM FOR AUTOMATED DESIGN SYNTHESIS
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1994-01-01
The expert system called EXADS was developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. Because of the general purpose nature of ADS, it is difficult for a nonexpert to select the best choice of strategy, optimizer, and one-dimensional search options from the one hundred or so combinations that are available. EXADS aids engineers in determining the best combination based on their knowledge of the problem and the expert knowledge previously stored by experts who developed ADS. EXADS is a customized application of the AESOP artificial intelligence program (the general version of AESOP is available separately from COSMIC. The ADS program is also available from COSMIC.) The expert system consists of two main components. The knowledge base contains about 200 rules and is divided into three categories: constrained, unconstrained, and constrained treated as unconstrained. The EXADS inference engine is rule-based and makes decisions about a particular situation using hypotheses (potential solutions), rules, and answers to questions drawn from the rule base. EXADS is backward-chaining, that is, it works from hypothesis to facts. The rule base was compiled from sources such as literature searches, ADS documentation, and engineer surveys. EXADS will accept answers such as yes, no, maybe, likely, and don't know, or a certainty factor ranging from 0 to 10. When any hypothesis reaches a confidence level of 90% or more, it is deemed as the best choice and displayed to the user. If no hypothesis is confirmed, the user can examine explanations of why the hypotheses failed to reach the 90% level. The IBM PC version of EXADS is written in IQ-LISP for execution under DOS 2.0 or higher with a central memory requirement of approximately 512K of 8 bit bytes. This program was developed in 1986.
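As a toy illustration of the confirmation mechanism described above (the rules, weights, and answers below are hypothetical, not EXADS's actual knowledge base), a hypothesis accumulates confidence from weighted answers and is confirmed once it reaches the 90% threshold:

```python
# Toy confirmation mechanism: each hypothesis accumulates confidence from
# weighted user answers; >= 90% confirms it, echoing the threshold above.
answers = {"is_constrained": 1.0, "has_gradients": 0.9, "many_variables": 0.3}

hypotheses = {                       # hypothetical rules, not EXADS's rule base
    "strategy A": [("is_constrained", 0.6), ("has_gradients", 0.4)],
    "strategy B": [("is_constrained", 0.3), ("has_gradients", 0.6)],
}

for name, evidence in hypotheses.items():
    confidence = sum(w * answers.get(q, 0.0) for q, w in evidence)
    verdict = "confirmed" if confidence >= 0.9 else "not confirmed"
    print(f"{name}: {confidence:.2f} ({verdict})")
```

A backward-chaining engine like the one described would additionally work from each hypothesis back to the questions needed to score it, asking only those that are still unresolved.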
Snowflake: A Lightweight Portable Stencil DSL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Nathan; Driscoll, Michael; Markley, Charles
Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
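For context, the kind of computation such a DSL targets can be sketched in plain numpy: a 2-D 5-point Laplacian applied to the interior of a grid (illustrative only; this is not Snowflake syntax):

```python
import numpy as np

def laplacian_5pt(u):
    """Apply a 2-D 5-point Laplacian stencil to the grid interior,
    leaving boundary values untouched (a simple boundary treatment)."""
    out = u.copy()
    out[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                       u[1:-1, :-2] + u[1:-1, 2:] - 4.0 * u[1:-1, 1:-1])
    return out

u = np.random.default_rng(3).normal(size=(64, 64))
print(laplacian_5pt(u).shape)   # (64, 64)
```

A stencil micro-compiler takes a declarative description of exactly this kind of neighbor-weighted update, plus its boundary handling, and emits tuned OpenMP or OpenCL for it.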
CORSS: Cylinder Optimization of Rings, Skin, and Stringers
NASA Technical Reports Server (NTRS)
Finckenor, J.; Rogers, P.; Otte, N.
1994-01-01
Launch vehicle designs typically make extensive use of cylindrical skin stringer construction. Structural analysis methods are well developed for preliminary design of this type of construction. This report describes an automated, iterative method to obtain a minimum weight preliminary design. Structural optimization has been researched extensively, and various programs have been written for this purpose. Their complexity and ease of use depends on their generality, the failure modes considered, the methodology used, and the rigor of the analysis performed. This computer program employs closed-form solutions from a variety of well-known structural analysis references and joins them with a commercially available numerical optimizer called the 'Design Optimization Tool' (DOT). Any ring and stringer stiffened shell structure of isotropic materials that has beam type loading can be analyzed. Plasticity effects are not included. It performs a more limited analysis than programs such as PANDA, but it provides an easy and useful preliminary design tool for a large class of structures. This report briefly describes the optimization theory, outlines the development and use of the program, and describes the analysis techniques that are used. Examples of program input and output, as well as the listing of the analysis routines, are included.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
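As background, the exterior penalty idea the abstract refers to can be sketched in a few lines: the constrained problem is replaced by a sequence of unconstrained minimizations with a growing penalty on constraint violation. The sketch below uses SciPy with toy parameters; BIGDOT's actual algorithm, memory handling, and tuning are far more sophisticated.

```python
# Exterior penalty sketch: minimize f(x) s.t. g_i(x) <= 0 by minimizing
# f(x) + r * sum(max(0, g_i(x))^2) for an increasing penalty parameter r.
import numpy as np
from scipy.optimize import minimize

def penalized(x, f, gs, r):
    viol = np.array([max(0.0, g(x)) for g in gs])
    return f(x) + r * np.sum(viol ** 2)

def exterior_penalty(f, gs, x0, r0=1.0, growth=10.0, iters=6):
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(iters):
        x = minimize(lambda y: penalized(y, f, gs, r), x).x
        r *= growth  # tighten the penalty each outer iteration
    return x

# Example: minimize (x-2)^2 subject to x <= 1 -> optimum near x = 1
sol = exterior_penalty(lambda x: (x[0] - 2.0) ** 2,
                       [lambda x: x[0] - 1.0], x0=[0.0])
```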
Are Optimism and Cynical Hostility Associated with Smoking Cessation in Older Women?
Progovac, Ana M; Chang, Yue-Fang; Chang, Chung-Chou H; Matthews, Karen A; Donohue, Julie M; Scheier, Michael F; Habermann, Elizabeth B; Kuller, Lewis H; Goveas, Joseph S; Chapman, Benjamin P; Duberstein, Paul R; Messina, Catherine R; Weaver, Kathryn E; Saquib, Nazmus; Wallace, Robert B; Kaplan, Robert C; Calhoun, Darren; Smith, J Carson; Tindle, Hilary A
2017-08-01
Optimism and cynical hostility independently predict morbidity and mortality in Women's Health Initiative (WHI) participants and are associated with current smoking. However, their association with smoking cessation in older women is unknown. The purpose of this study is to test whether optimism (positive future expectations) or cynical hostility (mistrust of others) predicts smoking cessation in older women. Self-reported smoking status was assessed at years 1, 3, and 6 after study entry for WHI baseline smokers who were not missing optimism or cynical hostility scores (n = 10,242). Questionnaires at study entry assessed optimism (Life Orientation Test-Revised) and cynical hostility (Cook-Medley, cynical hostility subscale). Generalized linear mixed models adjusted for sociodemographics, lifestyle factors, and medical and psychosocial characteristics including depressive symptoms. After full covariate adjustment, optimism was not related to smoking cessation. Each 1-point increase in baseline cynical hostility score was associated with 5% lower odds of cessation over 6 years (OR = 0.95, CI = 0.92-0.98, p = 0.0017). In aging postmenopausal women, greater cynical hostility predicts lower smoking cessation over time. Future studies should examine whether individuals with this trait may benefit from more intensive cessation resources or whether attempting to mitigate cynical hostility itself may aid smoking cessation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on the images with a first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. An alternating optimization algorithm was also developed, and the merits of the approach were demonstrated numerically. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
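As a rough illustration of the regularized-reconstruction idea, the sketch below performs gradient descent on a data-fidelity term plus smoothed total-variation penalties on the image and on its difference from a prior image. This is first-order TV only; the paper's method uses total *generalized* variation with a proper alternating minimization, so treat this purely as a conceptual stand-in.

```python
# Toy prior-image-regularized reconstruction: minimize, roughly,
#   ||x - y||^2 + lam * TV(x - prior) + mu * TV(x)
# with a smoothed TV gradient and plain gradient descent.
import numpy as np

def smoothed_tv_grad(x, eps=1e-3):
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    # Negative divergence of the normalized gradient approximates dTV/dx.
    div = (np.diff(gx / mag, axis=0, prepend=0) +
           np.diff(gy / mag, axis=1, prepend=0))
    return -div

def reconstruct(y, prior, lam=0.1, mu=0.05, step=0.2, iters=200):
    x = y.copy()
    for _ in range(iters):
        grad = (2.0 * (x - y)
                + lam * smoothed_tv_grad(x - prior)
                + mu * smoothed_tv_grad(x))
        x -= step * grad
    return x
```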
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teo, Troy; Alayoubi, Nadia; Bruce, Neil
Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) is presented, using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data obtained offline. Methods: The average error surface obtained from seven patients was used to determine the input data size and number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data points (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
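The weight-inheritance trick (method 2) can be sketched with scikit-learn's warm_start option, which resumes training from the previous weights instead of re-initializing for each sliding window. The window size and hidden-node count follow the abstract; everything else (model class, helper names) is illustrative, not the authors' implementation.

```python
# Warm-started sliding-window predictor: each call to fit() resumes from
# the weights learned on earlier windows rather than starting fresh.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW, HIDDEN = 35, 20  # ~4.66 s of samples, 20 hidden nodes (per abstract)
model = MLPRegressor(hidden_layer_sizes=(HIDDEN,), warm_start=True,
                     max_iter=50)

def predict_next(trajectory, horizon):
    """Train on sliding windows of the observed trajectory, predict ahead."""
    n = len(trajectory) - WINDOW - horizon
    X = np.array([trajectory[i:i + WINDOW] for i in range(n)])
    y = np.array([trajectory[i + WINDOW + horizon] for i in range(n)])
    model.fit(X, y)  # warm_start=True resumes from the previous weights
    latest = np.asarray(trajectory[-WINDOW:]).reshape(1, -1)
    return model.predict(latest)[0]
```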
Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter
2010-12-01
Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive and underinvestigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray-based data characteristics on the predictive performance of various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis, and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change, and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package, optBiomarker, implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
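A condensed Python sketch of the simulation idea follows, varying one data characteristic (training-set size) and recording the average generalization error of two classifiers. The actual study varies many more factors and ships as the R package optBiomarker; this version is illustrative only.

```python
# Estimate average generalization error as a function of training-set size
# for two classifiers on simulated high-dimensional data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def avg_error(clf, n_train, n_features=50, reps=10):
    errs = []
    for seed in range(reps):
        X, y = make_classification(n_samples=n_train + 500,
                                   n_features=n_features, random_state=seed)
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=n_train,
                                              random_state=seed)
        errs.append(1.0 - clf.fit(Xtr, ytr).score(Xte, yte))
    return np.mean(errs)

for n in (20, 50, 100, 200):
    print(n, avg_error(RandomForestClassifier(), n), avg_error(SVC(), n))
```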
Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B
2011-07-01
Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy to establish the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulations recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation of the biometeorological forecast model in predicting the optimal clothing insulation value was observed and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.
Furbish, Shannon M L; Kroehl, Miranda E; Loeb, Danielle F; Lam, Huong Mindy; Lewis, Carmen L; Nelson, Jennifer; Chow, Zeta; Trinkley, Katy E
2017-08-01
Benzodiazepines are prescribed inappropriately in up to 40% of outpatients. The purpose of this study is to describe a collaborative team-based care model in which clinical pharmacists work with primary care providers (PCPs) to improve the safe use of benzodiazepines for anxiety and sleep disorders, and to assess the preliminary results of the impact of the clinical service on patient outcomes. Adult patients were eligible if they received care from the academic primary care clinic, were prescribed a benzodiazepine chronically, and were not pregnant or managed by psychiatry. Outcomes included baseline PCP confidence and knowledge of appropriate benzodiazepine use, patient symptom severity, and medication changes. Twenty-five of 57 PCPs responded to the survey. PCPs reported greater confidence in diagnosing and treating generalized anxiety and panic disorders than sleep disorder and had variable knowledge of appropriate benzodiazepine prescribing. Twenty-nine patients had at least 1 visit. Across 44 total patient visits, 59% resulted in the addition or optimization of a nonbenzodiazepine medication and 46% resulted in the discontinuation or optimization of a benzodiazepine. Generalized anxiety symptom severity scores significantly improved (-2.0; 95% confidence interval (CI): -3.57 to -0.43). Collaborative team-based models that include clinical pharmacists in primary care can assist in optimizing high-risk benzodiazepine use. Although these findings suggest improvements in safe medication use and symptoms, additional studies are needed to confirm these preliminary results.
Shoemaker, W C; Patil, R; Appel, P L; Kram, H B
1992-11-01
A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.
A method for the dynamic management of genetic variability in dairy cattle
Colleau, Jean-Jacques; Moureaux, Sophie; Briend, Michèle; Bechu, Jérôme
2004-01-01
According to the general approach developed in this paper, dynamic management of genetic variability in selected populations of dairy cattle is carried out for three simultaneous purposes: procreation of young bulls to be further progeny-tested, use of service bulls already selected, and approval of recently progeny-tested bulls for use. At each step, the objective is to minimize the average pairwise relationship coefficient between the future population born from programmed matings and the existing population. As a common constraint, the average estimated breeding value of the new population, for a selection goal including many important traits, is set to a desired value. For the procreation of young bulls, breeding costs are additionally constrained. Optimization is fully analytical and directly considers matings. The corresponding algorithms are presented in detail. The efficiency of these procedures was tested on the current Norman population. Comparisons between optimized and real matings clearly showed that optimization would have saved substantial genetic variability without reducing short-term genetic gains. PMID:15231230
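In stylized form, the problem described can be written as minimizing the average relationship c'Ac of contribution weights c subject to a fixed mean breeding value. The sketch below uses SciPy with toy data; the paper's procedure is fully analytical and operates directly on matings, so this is only a conceptual analogue.

```python
# Minimize average relationship c'Ac subject to a fixed mean breeding
# value c'v and contributions summing to one (toy data throughout).
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2 + n * np.eye(n)      # toy relationship matrix (PSD)
v = rng.normal(size=n)                 # estimated breeding values
target = np.quantile(v, 0.7)           # desired genetic level

cons = [LinearConstraint(np.ones(n), 1.0, 1.0),   # contributions sum to 1
        LinearConstraint(v, target, target)]      # fix mean breeding value
res = minimize(lambda c: c @ A @ c, np.full(n, 1.0 / n),
               method="trust-constr", constraints=cons,
               bounds=[(0.0, 1.0)] * n)
contributions = res.x
```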
Preventive dental health care experiences of preschool-age children with special health care needs
Huebner, Colleen E.; Chi, Donald L.; Masterson, Erin; Milgrom, Peter
2014-01-01
Purpose: This study examined the preventive dental health care experiences of young children with special needs and determined the feasibility of conducting clinical dental examinations at a community-based early intervention services center. Methods: Study methods included 90 parent interviews and dental examinations of their preschool-age children. Results: Thirteen percent of the children received optimal preventive care, defined as twice daily tooth brushing with fluoridated toothpaste and two preventive dental visits in the prior 12 months; 37 percent experienced care that fell short in both areas. Optimal care was more common among children of parents who reported tooth brushing was not a struggle and those with a personal dentist. Parents' opinion of the study experience was generally positive. Conclusions: Few children with special needs receive effective preventive care early, when primary prevention could be achieved. Barriers to optimal care could be readily addressed by the dental community in coordination with early intervention providers. PMID:25082666
Optimization of inclusive fitness.
Grafen, Alan
2006-02-07
The first fully explicit argument is given that broadly supports a widespread belief among whole-organism biologists that natural selection tends to lead to organisms acting as if maximizing their inclusive fitness. The use of optimization programs permits a clear statement of what this belief should be understood to mean, in contradistinction to the common mathematical presumption that it should be formalized as some kind of Lyapunov or even potential function. The argument reveals new details and uncovers latent assumptions. A very general genetic architecture is allowed, and there is arbitrary uncertainty. However, frequency dependence of fitnesses is not permitted. The logic of inclusive fitness immediately draws together various kinds of intra-genomic conflict, and the concept of 'p-family' is introduced. Inclusive fitness is thus incorporated into the formal Darwinism project, which aims to link the mathematics of motion (difference and differential equations) used to describe gene frequency trajectories with the mathematics of optimization used to describe purpose and design. Important questions remain to be answered in the fundamental theory of inclusive fitness.
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power-law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and for its applications to NP-hard problems, e.g., graph partitioning, graph coloring, and spin glasses. In this study, we discover that the exponential distributions or hybrid ones (e.g., power-laws with exponential cutoff) popularly used in network science research may replace the original power-laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO, and SOA, based on experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power-law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
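The distributions under comparison enter EO through its rank-based selection step. The sketch below, with illustrative parameter values, shows how the power-law, exponential, and cutoff variants assign selection probabilities over fitness ranks.

```python
# Rank-based selection step of EO-like methods: components are ranked from
# worst to best fitness and one is picked with probability given by a
# power-law, an exponential, or a power-law with exponential cutoff.
import numpy as np

def rank_probs(n, kind="power", tau=1.4, mu=0.3):
    k = np.arange(1, n + 1, dtype=float)
    if kind == "power":          # classic tau-EO
        p = k ** (-tau)
    elif kind == "exponential":
        p = np.exp(-mu * k)
    else:                        # power law with exponential cutoff
        p = k ** (-tau) * np.exp(-mu * k)
    return p / p.sum()

def pick_component(fitnesses, kind="power"):
    order = np.argsort(fitnesses)   # worst (lowest fitness) first
    k = np.random.choice(len(order), p=rank_probs(len(order), kind))
    return order[k]                 # index of the component to mutate
```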
Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.
2013-12-01
In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters, namely the relative exit flow angle β6, the exit tip radius R6t, and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. The method above is based on experience and test results obtained with air as the working fluid, so it neither provides a mathematically optimal solution to guide the optimization of the geometric parameters nor considers the real-gas effects of the organic working fluid, which must be taken into account in the ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established for the purpose of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters for a small-scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution that minimizes the exit kinetic energy is deduced, and it can be used to design and optimize the exit shroud/hub radius and exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates the effectiveness of the method. However, the internal passage loss increases from 7.9% to 9.4%, so the influence of the geometric parameters on internal passage loss can only be captured through empirical ranges for these parameters, such as the recommended ranges of 0.3 to 0.4 for γ and 0.5 to 0.6 for τ.
Boy, Sonja; Crossley, David; Steenkamp, Gerhard
2016-01-01
Developmental tooth abnormalities in dogs are uncommon in general veterinary practice but understanding thereof is important for optimal management in order to maintain masticatory function through preservation of the dentition. The purpose of this review is to discuss clinical abnormalities of the enamel and general anatomy of dog teeth encountered in veterinary dental referral practice and described in the literature. More than 900 referral cases are seen annually between the two referral practices. The basis of the pathogenesis, resultant clinical appearance, and the principles of management for each anomaly will be described. Future research should be aimed toward a more detailed analysis of these conditions so rarely described in the literature. PMID:26904551
Restoring Natural Streamflow Variability by Modifying Multi-purpose Reservoir Operation
NASA Astrophysics Data System (ADS)
Shiau, J.
2010-12-01
Multi-purpose reservoirs typically provide benefits of water supply, hydroelectric power, and flood mitigation. Hydroelectric power generation generally does not consume water. However, the temporal distribution of downstream flows is highly altered due to hydro-peaking effects. Combined with offstream diversion of water supplies for municipal, industrial, and agricultural requirements, the natural streamflow characteristics of magnitude, duration, frequency, timing, and rate of change are significantly altered by multi-purpose reservoir operation. The natural flow regime has long been recognized as a master factor for ecosystem health and biodiversity. Restoration of the flow regime altered by multi-purpose reservoir operation is the main objective of this study. This study presents an optimization framework that modifies reservoir operation to seek a balance between human and environmental needs. The methodology is applied to the Feitsui Reservoir, located in northern Taiwan, whose main purpose is to provide a stable water supply, with auxiliary purposes of electricity generation and flood-peak attenuation. Reservoir releases are governed by two decision variables, i.e., the duration of water releases for each day and the percentage of daily required releases within that duration. The current release policy of the Feitsui Reservoir delivers water for water-supply and hydropower purposes between 8:00 and 16:00 each day, with no environmental flow releases. Although greater power generation is obtained when 100% of releases are concentrated within the 8-hour period, severe temporal alteration of streamflow is observed downstream of the reservoir. Reservoir operation is modified by relaxing these two variables and reserving a certain ratio of streamflow as environmental flow to maintain downstream natural variability. The optimal release policy is sought with a multi-criterion decision-making technique that considers reservoir performance in terms of shortage ratio and power generation, and downstream hydrologic alteration in terms of ecologically relevant indicators. The results show that the proposed methodology can mitigate hydro-peaking effects on natural variability while maintaining efficient reservoir operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosu, K; Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and percentage depth dose (PDD) of GATE and PHITS codes have not been reported which are studied for PDD and proton range compared to the FLUKA code and the experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blue print and validated with the commissioning data. Three parameters evaluated are the maximummore » step size, cut off energy and physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated range of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the results for PDDs obtained with GATE and PHITS Monte Carlo generalpurpose codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometrybased descriptions need accurate customization in three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarmand, H; Winey, B; Craft, D
2014-06-15
Purpose: To efficiently find quality-guaranteed treatment plans with the minimum number of beams for stereotactic body radiation therapy using RayStation. Methods: For a pre-specified pool of candidate beams, we use RayStation (a treatment planning system in clinical use) to identify the deliverable plan that uses all the beams, with the minimum dose to organs at risk (OARs) and doses to the tumor and other structures within specified ranges. We then use the dose matrix information for the apertures generated by RayStation to solve a linear program that finds the ideal plan with the same objective and constraints, allowing use of all beams. Finally, we solve a mixed integer programming formulation of the beam angle optimization (BAO) problem with the objective of minimizing the number of beams while remaining within a predetermined epsilon-optimality of the ideal plan with respect to the dose to OARs. Since treatment plan optimization is a multicriteria optimization problem, the planner can exploit the multicriteria optimization capability of RayStation to navigate the ideal dose distribution Pareto surface and select a plan with the desired trade-off between target coverage and OAR sparing, and then use the proposed technique to reduce the number of beams while guaranteeing quality. For the numerical experiments, two liver cases and one lung case with 33 non-coplanar beams are considered. Results: The ideal plan uses an impractically large number of beams. The proposed technique reduces the number of beams to the range of practical application (5 to 9 beams) while remaining within the epsilon-optimal range of 1% to 5% optimality gap. Conclusion: The proposed method can be integrated into a general algorithm for fast navigation of the ideal dose distribution Pareto surface and for finding the treatment plan with the minimum number of beams, which corresponds to the delivery time, within an epsilon-optimality range of the desired ideal plan. The project was supported by the Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center and partially by RaySearch Laboratories.
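A schematic version of the final step can be posed as a small mixed integer program with SciPy: binary variables switch beams on, big-M rows couple beam weights to those binaries, and the OAR dose is held within (1+epsilon) of the ideal plan's value. All matrices, doses, and bounds here are toy placeholders for the RayStation dose data, not the paper's actual formulation.

```python
# Minimize the number of active beams subject to dose constraints.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n, eps, M = 33, 0.03, 1e3
ideal_oar_dose = 50.0                   # OAR dose of the all-beams plan (toy)
d_oar = np.random.rand(n)               # OAR dose per unit beam weight (toy)
d_tum = np.random.rand(n) + 0.5         # tumor dose per unit beam weight (toy)

# Decision vector z = [x_1..x_n, b_1..b_n]; objective: sum of binaries b.
c = np.concatenate([np.zeros(n), np.ones(n)])
integrality = np.concatenate([np.zeros(n), np.ones(n)])

A_link = np.hstack([np.eye(n), -M * np.eye(n)])        # x_j - M*b_j <= 0
A_tum = np.concatenate([d_tum, np.zeros(n)])[None, :]  # tumor dose row
A_oar = np.concatenate([d_oar, np.zeros(n)])[None, :]  # OAR dose row
cons = [LinearConstraint(A_link, -np.inf, 0.0),
        LinearConstraint(A_tum, 60.0, np.inf),         # prescription (toy)
        LinearConstraint(A_oar, -np.inf, (1 + eps) * ideal_oar_dose)]
bounds = Bounds(np.zeros(2 * n),
                np.concatenate([np.full(n, np.inf), np.ones(n)]))
res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
beams_used = res.x[n:].round().astype(int) if res.success else None
```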
Kinematic Optimization in Birds, Bats and Ornithopters
NASA Astrophysics Data System (ADS)
Reichert, Todd
Birds and bats employ a variety of advanced wing motions in the efficient production of thrust. The purpose of this thesis is to quantify the benefit of these advanced wing motions, determine the optimal theoretical wing kinematics for a given flight condition, and develop a methodology for applying the results in the optimal design of flapping-wing aircraft (ornithopters). To this end, a medium-fidelity, combined aero-structural model has been developed that is capable of simulating the advanced kinematics seen in bird flight, as well as the highly non-linear structural deformations typical of high-aspect-ratio wings. Five unique methods of thrust production observed in natural species have been isolated, quantified, and thoroughly investigated for their dependence on Reynolds number, airfoil selection, frequency, amplitude, and relative phasing. A gradient-based optimization algorithm has been employed to determine the wing kinematics that result in the minimum required power for a generalized aircraft or species in any given flight condition. In addition to the theoretical work, with the help of an extended team, the methodology was applied to the design and construction of the world's first successful human-powered ornithopter. The Snowbird Human-Powered Ornithopter is used as an example aircraft to show how additional design constraints can pose limits on the optimal kinematics. The results show significant trends that give insight into the kinematic operation of natural species. The general result is that additional complexity, whether it be larger twisting deformations or advanced wing-folding mechanisms, allows for the possibility of more efficient flight. At its theoretical optimum, the efficiency of flapping wings exceeds that of current rotors and propellers, although these efficiencies are quite difficult to achieve in practice.
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The input combination comprising the current sea level and the five previous hourly values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian, and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h, and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient, and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error, and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while an adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose, for all prediction intervals.
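The lagged-input construction is simple to reproduce. The sketch below builds the six-value input windows (current level plus five previous hourly values, the optimal combination reported) and fits one model per forecast horizon; a scikit-learn MLP stands in for the paper's ANFIS/ANN models, and the data are synthetic.

```python
# Build lagged inputs and train one model per forecast horizon.
import numpy as np
from sklearn.neural_network import MLPRegressor

LAGS = 6  # current value plus five previous hourly levels

def make_dataset(series, horizon):
    n = len(series) - LAGS - horizon + 1
    X = np.array([series[i:i + LAGS] for i in range(n)])
    y = np.array([series[i + LAGS + horizon - 1] for i in range(n)])
    return X, y

# Synthetic stand-in for hourly sea-level observations.
hourly = np.sin(np.linspace(0, 400, 5000)) + 0.1 * np.random.randn(5000)
models = {}
for horizon in (1, 24, 48, 72):
    X, y = make_dataset(hourly, horizon)
    models[horizon] = MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=500).fit(X, y)
```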
Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture
NASA Technical Reports Server (NTRS)
Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek
2015-01-01
This paper evaluates the potential of the embedded Graphics Processing Unit in Nvidia's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, compared with the performance of a high-end 8-core server Intel Xeon CPU with 13.5 times higher power consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wei, E-mail: Liu.Wei@mayo.edu; Schild, Steven E.; Chang, Joe Y.
Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate the internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed rank test. Results: 4D robust optimization led to a smaller AUC for the CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients. Conclusions: Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets, with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions.
Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications
Khodak, Andrei
2017-08-21
Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state of the art general purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugated heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes an introduction of MHD capability into the general purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation a system of MHD equations become very rigid with very large source terms and very strong variable gradients. To increase system robustness, special measures were introduced during the iterative convergence process, such as linearization using source coefficient for momentum equations. The MHD implementation in general purpose CFD code was tested against benchmarks, specifically selected for liquid-metal blanket applications. Results of numerical simulations using present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in the duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.
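For reference, the equations the abstract alludes to take the following standard textbook form in the magnetic induction approach; the exact formulation implemented in CFX may differ.

```latex
% Induction equation for B, Poisson equation for the electric potential,
% and the Lorentz force source term (standard textbook forms).
\begin{align}
  \frac{\partial \mathbf{B}}{\partial t}
    &= \nabla \times (\mathbf{u} \times \mathbf{B})
     + \frac{1}{\sigma \mu}\, \nabla^{2} \mathbf{B}, \\
  \nabla^{2} \varphi &= \nabla \cdot (\mathbf{u} \times \mathbf{B}), \\
  \mathbf{F}_{L} &= \mathbf{J} \times \mathbf{B},
  \qquad \mathbf{J} = \frac{1}{\mu}\, \nabla \times \mathbf{B}.
\end{align}
```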
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrange multipliers, resizing algorithm, and scaling procedures are all represented as functions of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
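For context, the classical optimality-criteria recursion that this line of work generalizes can be stated as follows; this is the standard textbook form, not the paper's exact generalized expressions.

```latex
% Stationarity of the Lagrangian defines a per-variable ratio e_i that
% equals one at the optimum; the resizing step drives e_i toward one.
\begin{align}
  \frac{\partial f}{\partial x_i}
    + \sum_j \lambda_j \frac{\partial g_j}{\partial x_i} = 0
  \quad\Longrightarrow\quad
  e_i = -\,\frac{\sum_j \lambda_j\, \partial g_j / \partial x_i}
              {\partial f / \partial x_i} = 1
  \ \text{at the optimum}, \\
  x_i^{(\nu+1)} = x_i^{(\nu)} \bigl( e_i^{(\nu)} \bigr)^{1/r}
  \quad \text{(resizing step with relaxation exponent } r\text{)}.
\end{align}
```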
Identifying multiple influential spreaders based on generalized closeness centrality
NASA Astrophysics Data System (ADS)
Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng
2018-02-01
To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersively distributed in networks, which can effectively reduce the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then converts to how to identify multiple spreaders such that an objective function has the minimal value. By comparing with the K-means clustering algorithm, we find that the optimization problem is very similar to the problem of minimizing the objective function in the K-means method. Therefore, finding multiple nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics, the epidemic spreading process and the rumor spreading process, are implemented in real networks to verify the good performance of our proposed method.
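The K-means analogy can be sketched directly on a graph: alternate between assigning every node to its nearest current spreader (by shortest-path distance) and re-centering each cluster at its most central member. The sketch below uses networkx with illustrative parameters; it mirrors the spirit of the method, not the paper's exact GCC algorithm.

```python
# K-means-style selection of k dispersed spreaders on a graph.
import networkx as nx
import numpy as np

def pick_spreaders(G, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    centers = list(rng.choice(nodes, size=k, replace=False))
    for _ in range(iters):
        # Assignment step: attach every node to its closest center.
        clusters = {c: [] for c in centers}
        for v in nodes:
            clusters[min(centers, key=lambda c: dist[c][v])].append(v)
        # Update step: each cluster's new center minimizes summed distance.
        centers = [min(cl, key=lambda u: sum(dist[u][v] for v in cl))
                   for cl in clusters.values() if cl]
    return centers

G = nx.barabasi_albert_graph(200, 3)
spreaders = pick_spreaders(G, k=5)
```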
Diagonally Implicit Runge-Kutta Methods for Ordinary Differential Equations. A Review
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
2016-01-01
A review of diagonally implicit Runge-Kutta (DIRK) methods applied to first-order ordinary differential equations (ODEs) is undertaken. The goal of this review is to summarize the characteristics, assess the potential, and then design several nearly optimal, general purpose, DIRK-type methods. Over 20 important aspects of DIRK-type methods are reviewed. A design study is then conducted on DIRK-type methods having from two to seven implicit stages. From this, 15 schemes are selected for general purpose application. Testing of the 15 chosen methods is done on three singular perturbation problems. Based on the review of method characteristics, these methods focus on having a stage order of two, stiff accuracy, L-stability, high quality embedded and dense-output methods, small magnitudes of the algebraic stability matrix eigenvalues, small values of a_ii, and small or vanishing values of the internal stability function for large eigenvalues of the Jacobian. Among the 15 new methods, ESDIRK4(3)6L[2]SA is recommended as a good default method for solving stiff problems at moderate error tolerances.
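To illustrate what a DIRK step looks like in practice, here is the classical two-stage, second-order, L-stable SDIRK scheme with gamma = 1 - 1/sqrt(2) (a standard tableau, not one of the 15 new schemes from the review) applied to a stiff scalar test problem.

```python
# Two-stage SDIRK step: each stage requires solving a nonlinear equation,
# done here with SciPy's fsolve for a scalar ODE.
import numpy as np
from scipy.optimize import fsolve

GAMMA = 1.0 - 1.0 / np.sqrt(2.0)

def sdirk2_step(f, t, y, h):
    # Stage 1: k1 = f(t + gamma*h, y + h*gamma*k1)
    k1 = fsolve(lambda k: k - f(t + GAMMA * h, y + h * GAMMA * k),
                f(t, y))[0]
    # Stage 2: k2 = f(t + h, y + h*((1-gamma)*k1 + gamma*k2))
    k2 = fsolve(lambda k: k - f(t + h, y + h * ((1 - GAMMA) * k1 + GAMMA * k)),
                k1)[0]
    # The method is stiffly accurate: the update reuses the stage-2 weights.
    return y + h * ((1 - GAMMA) * k1 + GAMMA * k2)

# Stiff test problem: y' = -50 (y - cos t), y(0) = 0.
f = lambda t, y: -50.0 * (y - np.cos(t))
t, y, h = 0.0, 0.0, 0.1
for _ in range(20):
    y = sdirk2_step(f, t, y, h)
    t += h
```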
Transfer of strength and power training to sports performance.
Young, Warren B
2006-06-01
The purposes of this review are to identify the factors that contribute to the transference of strength and power training to sports performance and to provide resistance-training guidelines. Using sprinting performance as an example, exercises involving bilateral contractions of the leg muscles resulting in vertical movement, such as squats and jump squats, have minimal transfer to performance. However, plyometric training, including unilateral exercises and horizontal movement of the whole body, elicits significant increases in sprint acceleration performance, thus highlighting the importance of movement pattern and contraction velocity specificity. Relatively large gains in power output in nonspecific movements (intramuscular coordination) can be accompanied by small changes in sprint performance. Research on neural adaptations to resistance training indicates that intermuscular coordination is an important component in achieving transfer to sports skills. Although the specificity of resistance training is important, general strength training is potentially useful for the purposes of increasing body mass, decreasing the risk of soft-tissue injuries, and developing core stability. Hypertrophy and general power exercises can enhance sports performance, but optimal transfer from training also requires a specific exercise program.
NASA Astrophysics Data System (ADS)
Liu, Guofeng; Li, Chun
2016-08-01
In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a reasonable execution configuration, using the texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored in matrix format as I[nx][ny][nh][nt]. However, this method requires more memory, so the limited imaging space cannot fully exploit the GPU's resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations, without changing the imaging result.
Mandala Networks: ultra-small-world and highly sparse graphs
Sampaio Filho, Cesar I. N.; Moreira, André A.; Andrade, Roberto F. S.; Herrmann, Hans J.; Andrade, José S.
2015-01-01
The increasing demands in security and reliability of infrastructures call for the optimal design of their embedded complex networks topologies. The following question then arises: what is the optimal layout to fulfill best all the demands? Here we present a general solution for this problem with scale-free networks, like the Internet and airline networks. Precisely, we disclose a way to systematically construct networks which are robust against random failures. Furthermore, as the size of the network increases, its shortest path becomes asymptotically invariant and the density of links goes to zero, making it ultra-small world and highly sparse, respectively. The first property is ideal for communication and navigation purposes, while the second is interesting economically. Finally, we show that some simple changes on the original network formulation can lead to an improved topology against malicious attacks. PMID:25765450
Use of optimization to predict the effect of selected parameters on commuter aircraft performance
NASA Technical Reports Server (NTRS)
Wells, V. L.; Shevell, R. S.
1982-01-01
The relationships between field length and cruise speed and aircraft direct operating cost were determined. A gradient optimizing computer program was developed to minimize direct operating cost (DOC) as a function of airplane geometry. In this way, the best airplane operating under one set of constraints can be compared with the best operating under another. A constant 30-passenger fuselage and rubberized engines based on the General Electric CT-7 were used as a baseline. All aircraft had to have a 600 nautical mile maximum range and were designed to FAR part 25 structural integrity and climb gradient regulations. Direct operating cost was minimized for a typical design mission of 150 nautical miles. For purposes of C sub L sub max calculation, all aircraft had double-slotted flaps but with no Fowler action.
A Comparison of Genetic Programming Variants for Hyper-Heuristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Sean
Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, such as routing vehicles over highways with constantly changing traffic flows, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. Hyper-heuristics typically employ Genetic Programming (GP), and this project has investigated the relationship between the choice of GP and performance in Hyper-heuristics. Results are presented demonstrating the existence of problems for which there is a statistically significant performance differential between the use of different types of GP.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
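The SDARE loop is compact enough to sketch: at each sample time, re-linearize the dynamics at the current state, solve the algebraic Riccati equation for that (A, B) pair, and apply the resulting LQR gain. The linearization below is a toy double integrator standing in for the spacecraft model, so this shows only the structure of the controller.

```python
# State-dependent Riccati equation (SDRE) control step.
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_control(x, linearize, Q, R):
    A, B = linearize(x)                    # state-dependent linearization
    P = solve_continuous_are(A, B, Q, R)   # solve the ARE at this sample
    K = np.linalg.solve(R, B.T @ P)        # LQR gain K = R^-1 B' P
    return -K @ x

# Toy double-integrator stand-in for the spacecraft dynamics:
linearize = lambda x: (np.array([[0.0, 1.0], [0.0, 0.0]]),
                       np.array([[0.0], [1.0]]))
u = sdre_control(np.array([1.0, 0.0]), linearize,
                 Q=np.eye(2), R=np.eye(1))
```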
Homeostatic Agent for General Environment
NASA Astrophysics Data System (ADS)
Yoshida, Naoto
2018-03-01
One of the essential aspects of biological agents is dynamic stability. This aspect, called homeostasis, is widely discussed in ethology and neuroscience, and was discussed during the early stages of artificial intelligence. Ashby's homeostats are general-purpose learning machines for stabilizing essential variables of the agent in the face of general environments. However, despite their generality, the original homeostats couldn't be scaled because they searched their parameters randomly. In this paper, we first re-define the objective of homeostats as the maximization of a multi-step survival probability from the viewpoint of sequential decision theory and probability theory. Then we show that this optimization problem can be treated by using reinforcement learning algorithms with special agent architectures and theoretically derived intrinsic reward functions. Finally, we empirically demonstrate that agents with our architecture automatically learn to survive in a given environment, including environments with visual stimuli. Our survival agents can learn to eat food, avoid poison, and stabilize essential variables through theoretically derived single intrinsic reward formulations.
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.
2004-01-01
A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.
The impact of chief executive officer optimism on hospital strategic decision making.
Langabeer, James R; Yao, Emery
2012-01-01
Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of this survey study of 168 hospital CEOs, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance, and that where optimism was highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits in recruitment processes, because CEO optimism influences hospital-level processes. Research implications include incorporating greater use of behavior and cognition constructs to better depict decision-making processes in complex organizations like hospitals.
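For readers unfamiliar with moderated regression, the core model can be sketched as an OLS fit with an interaction term; a negative interaction coefficient corresponds to the reported pattern in which higher optimism makes the rationality-performance slope more negative. The data and variable names below are simulated and illustrative, not the study's instrument.

```python
# Moderated regression: performance ~ rationality + optimism + interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 168  # matches the study's sample size; data themselves are synthetic
df = pd.DataFrame({"rationality": rng.normal(size=n),
                   "optimism": rng.normal(size=n)})
df["performance"] = (0.3 * df.rationality + 0.4 * df.optimism
                     - 0.25 * df.rationality * df.optimism
                     + rng.normal(size=n))

# The formula's '*' expands to both main effects plus their interaction.
model = smf.ols("performance ~ rationality * optimism", data=df).fit()
print(model.summary())
```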
Whole high-quality light environment for humans and plants
NASA Astrophysics Data System (ADS)
Sharakshane, Anton
2017-11-01
Plants sharing a single light environment with a human being on a spaceship and bearing a decorative function should look as natural and attractive as possible; consequently, they can be illuminated only with white light having a high color rendering index. Can lighting optimized for the human eye be effective and appropriate for plants? Spectrum-based effects have been compared under artificial lighting of plants by high-pressure sodium lamps and by general-purpose white LEDs. It has been shown that, for the survey sample, the phytochrome photo-equilibrium does not depend significantly on the parameters of white LED light, while the share of phytoactive blue light grows significantly as the color temperature increases. It has also been shown that yield photon flux is proportional to luminous efficacy and increases as the color temperature decreases and as the general color rendering index Ra and the special color rendering index R14 (green leaf) increase. General-purpose white LED lamps with a color temperature of 2700 K, Ra > 90, and luminous efficacy of 100 lm/W are as efficient as the best high-pressure sodium lamps, and at higher luminous efficacy their yield photon flux per joule is proportionally larger. Here we show that the demand for high color rendering white LED light does not contradict agro-technical objectives.
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic, and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). The method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof-of-concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure of development cost), which is naturally captured in an efficiency score defined in this manuscript. Optimization of the score function leads to the type I error rate and power (and therefore sample size) that make the trial most cost-effective. This in turn leads to cost-effective go/no-go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, readers should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
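As a rough illustration of the idea, the sketch below grid-searches the type I error rate and power using the normal-approximation sample-size formula. The efficiency score here is an assumed stand-in (expected value of a true go, minus a false-go penalty, minus patient exposure), not the score defined in the manuscript; the prior probability, payoff, and cost values are likewise invented.

```python
# Grid search for a cost-effective (alpha, power) pair in a two-arm PoC
# trial. The score is an illustrative stand-in, not the paper's definition.
import numpy as np
from scipy.stats import norm

def n_per_arm(alpha, power, delta=0.4):
    """Per-arm sample size from the normal approximation, effect size delta."""
    return 2 * ((norm.ppf(1 - alpha) + norm.ppf(power)) / delta) ** 2

def score(alpha, power, p=0.3, win=1000.0, fp_cost=2000.0):
    # expected value of a true go, minus false-go cost, minus trial exposure
    return (p * power * win - (1 - p) * alpha * fp_cost
            - 2 * n_per_arm(alpha, power))

alphas = np.arange(0.01, 0.21, 0.005)
powers = np.arange(0.60, 0.95, 0.01)
a, b = max(((a, b) for a in alphas for b in powers), key=lambda ab: score(*ab))
print(f"alpha={a:.3f}, power={b:.2f}, n/arm={n_per_arm(a, b):.0f}")
```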
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
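A minimal sketch of the fully utilized (fully stressed) design idea that OC-style codes generalize is shown below: each member is resized by its worst-case ratio of actual to allowable stress, iterated to convergence. The three-member "analysis" routine is a hypothetical stand-in for a finite element solver.

```python
# Stress-ratio resizing under multiple load conditions; the analysis
# function is an invented statically determinate stand-in.
import numpy as np

def analyze(areas, loads):
    """Hypothetical determinate case: member stress = load * coeff / area."""
    return loads[:, None] * np.array([1.0, 0.8, 1.3])[None, :] / areas

sigma_allow = 250.0                 # MPa, assumed allowable stress
areas = np.full(3, 100.0)           # mm^2, initial member areas
loads = np.array([2.0e4, 1.5e4])    # N, two load conditions

for _ in range(30):
    stress = analyze(areas, loads)
    ratio = np.abs(stress).max(axis=0) / sigma_allow  # worst case per member
    areas *= ratio                                    # stress-ratio update
    if np.allclose(ratio, 1.0, atol=1e-6):
        break
print(areas)
```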
Hall, Aaron Smalter; Shan, Yunfeng; Lushington, Gerald; Visvanathan, Mahesh
2016-01-01
Databases and exchange formats describing biological entities such as chemicals and proteins, along with their relationships, are a critical component of research in life sciences disciplines, including chemical biology, wherein information about small molecule properties converges with cellular and molecular biology. Databases for storing biological entities are growing not only in size, but also in type, with many similarities between them and often subtle differences. The data formats available to describe and exchange these entities are numerous as well. In general, each format is optimized for a particular purpose or database, and hence some understanding of these formats is required when choosing one for research purposes. This paper reviews a selection of different databases and data formats with the goal of summarizing their purposes, features, and limitations. Databases are reviewed under the categories of 1) protein interactions, 2) metabolic pathways, 3) chemical interactions, and 4) drug discovery. Representation formats are discussed according to those describing chemical structures and those describing genomic/proteomic entities. PMID:22934944
Minimizing water consumption when producing hydropower
NASA Astrophysics Data System (ADS)
Leon, A. S.
2015-12-01
In 2007, hydropower accounted for only 16% of world electricity production, with other renewable sources totaling 3%. Thus, it is not surprising that when alternatives are evaluated for new energy developments, there is a strong impulse toward fossil fuel or nuclear energy as opposed to renewable sources. However, as hydropower schemes are often part of a multipurpose water resources development project, they can often help to finance other components of the project. In addition, hydropower systems and their associated dams and reservoirs provide human well-being benefits, such as flood control and irrigation, and societal benefits such as increased recreational activities and improved navigation. Furthermore, hydropower, due to its associated reservoir storage, can provide flexibility and reliability for energy production in integrated energy systems. The storage capability of hydropower systems acts as a regulating mechanism by which other intermittent and variable renewable energy sources (wind, wave, solar) can play a larger role in providing electricity of commercial quality. Minimizing water consumption for producing hydropower is critical, given that overuse of water for energy production may result in a shortage of water for other purposes such as irrigation, navigation, or fish passage. This paper presents a dimensional analysis for finding the optimal flow discharge and optimal penstock diameter when designing impulse and reaction water turbines for hydropower systems. The objective of this analysis is to provide general insights for minimizing water consumption when producing hydropower. The analysis is based on the geometric and hydraulic characteristics of the penstock, the total hydraulic head, and the desired power production. As part of this analysis, various dimensionless relationships between power production, flow discharge, and head losses were derived. These relationships were used to draw general insights on determining the optimal flow discharge and optimal penstock diameter. For instance, it was found that for minimizing water consumption, the ratio of head loss to gross head should not exceed about 15%. Two examples of application are presented to illustrate the procedure for determining the optimal flow discharge and optimal penstock diameter for impulse and reaction turbines.
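The trade-off behind the 15% guideline can be reproduced with a few lines of arithmetic. In the sketch below, the penstock loss model h_f = kQ^2, the efficiency, and the numeric values of H and k are assumptions chosen for illustration; power peaks where h_f = H/3, while the energy extracted per unit volume of water keeps falling as discharge grows.

```python
# Power vs. specific energy for a penstock with quadratic head loss.
# All constants below are illustrative assumptions.
import numpy as np

rho, g, eta = 1000.0, 9.81, 0.9   # water density, gravity, assumed efficiency
H, k = 100.0, 5.0e-3              # m gross head; loss coefficient (s^2/m^5)

Q = np.linspace(1.0, 120.0, 500)          # candidate discharges, m^3/s
hf = k * Q**2                              # friction head loss, m
P = rho * g * Q * (H - hf) * eta           # shaft power, W
E_per_m3 = rho * g * (H - hf) * eta        # J extracted per m^3 of water

i = np.argmax(P)
print(f"max power at Q={Q[i]:.1f} m^3/s, hf/H={hf[i]/H:.2f}")   # about 1/3
j = np.argmax(hf / H >= 0.15)              # first Q where hf/H hits 15%
print(f"hf/H=15% at Q={Q[j]:.1f} m^3/s, energy/volume is "
      f"{E_per_m3[j]/E_per_m3[0]:.2f} of the low-flow value")
```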
SU-E-J-193: Feasibility of MRI-Only Based IMRT Planning for Pancreatic Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prior, P; Botros, M; Chen, X
2014-06-01
Purpose: With the increasing use of MRI simulation and the advent of MRI-guided delivery, it is desirable to use MRI only for treatment planning. In this study, we assess the dosimetric difference between MRI- and CT-based IMRT planning for pancreatic cancer. Methods: Planning CTs and MRIs acquired for a representative pancreatic cancer patient were used. MRI-based planning utilized forced relative electron density (rED) assignment of organ-specific values from ICRU Report 46, where rED = 1.029 for the PTV and rED = 1.036 for non-specified tissue (NST). Six IMRT plans were generated with clinical dose-volume (DV) constraints using a research Monaco planning system employing Monte Carlo dose calculation with an optional perpendicular magnetic field (MF) of 1.5 T. The following five plans were generated and compared with the planning CT: 1.) CT plan with MF and dose recalculation without optimization; 2.) MRI (T2) plan with target and OARs redrawn based on MRI, forced rED, no MF, and recalculation without optimization; 3.) Same as in 2 but with MF; 4.) MRI plan with MF but without optimization; and 5.) Same as in 4 but with optimization. Results: Generally, noticeable differences in PTV point doses and DV parameters (DVPs) between the CT- and MRI-based plans with and without the MF were observed. These differences between the optimized plans were generally small, mostly within 2%. Larger differences were observed in point doses and mean doses for certain OARs between the CT and MRI plans, mostly due to differences between image acquisition times. Conclusion: MRI-only based IMRT planning for pancreatic cancer is feasible. The differences observed between the optimized CT and MRI plans with or without the MF were practically negligible if excluding the differences between MRI- and CT-defined structures.
Ocampo, Cesar
2004-05-01
The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework to facilitate the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position-, velocity-, mass-, and time-dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid method that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.
Atomicrex—a general purpose tool for the construction of atomic interaction models
NASA Astrophysics Data System (ADS)
Stukowski, Alexander; Fransson, Erik; Mock, Markus; Erhart, Paul
2017-07-01
We introduce atomicrex, an open-source code for constructing interatomic potentials as well as more general types of atomic-scale models. Such effective models are required to simulate extended materials structures comprising many thousands of atoms or more, because electronic structure methods become computationally too expensive at this scale. atomicrex covers a wide range of interatomic potential types and fulfills many needs in atomistic model development. As inputs, it supports experimental property values as well as ab initio energies and forces, to which models can be fitted using various optimization algorithms. The open architecture of atomicrex allows it to be used in custom model development scenarios beyond classical interatomic potentials, while thanks to its Python interface it can be readily integrated, e.g., with electronic structure calculations or machine learning algorithms.
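atomicrex's own input format and API are not reproduced here; the sketch below only illustrates the underlying task it automates, fitting an interatomic model's parameters to reference data, using a Lennard-Jones dimer and SciPy's least-squares optimizer on synthetic energies.

```python
# Fit Lennard-Jones parameters to synthetic "ab initio" dimer energies.
# This is an illustration of the fitting task, not the atomicrex tool.
import numpy as np
from scipy.optimize import least_squares

def lj(r, eps, sigma):
    """Lennard-Jones pair energy in reduced units."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r = np.linspace(0.9, 2.5, 40)  # dimer separations (reduced units)
e_ref = lj(r, 1.0, 1.0) + np.random.default_rng(1).normal(0, 0.01, r.size)

fit = least_squares(lambda p: lj(r, *p) - e_ref, x0=[0.5, 1.2])
print("eps, sigma =", fit.x)   # should recover roughly (1.0, 1.0)
```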
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is supplied as an initial value to the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists of the Ministry of Science and Technology of China (Grant No. 2015AA020917).
NASA Astrophysics Data System (ADS)
Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George
2017-04-01
We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
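For the Gaussian case with parameter-independent covariance, the score compression described above reduces to a one-line formula, t = dmu^T C^{-1} (d - mu), evaluated at fiducial parameters. The sketch below applies it to a synthetic linear mean model; the data, covariance, and fiducial values are invented for illustration.

```python
# Score compression of N Gaussian data points to n = 2 summaries
# (slope and intercept of a linear mean model), preserving Fisher info.
import numpy as np

rng = np.random.default_rng(0)
N = 500
x = np.linspace(0.0, 1.0, N)
C = 0.1 * np.eye(N)                  # assumed (diagonal) noise covariance

def mu(theta):                       # mean model: theta[0]*x + theta[1]
    return theta[0] * x + theta[1]

dmu = np.stack([x, np.ones(N)])      # analytic d(mu)/d(theta), shape (2, N)
theta_fid = np.array([1.0, 0.0])     # fiducial parameters

d = mu([1.3, -0.2]) + rng.normal(scale=np.sqrt(0.1), size=N)
Cinv = np.linalg.inv(C)
t = dmu @ Cinv @ (d - mu(theta_fid))   # n compressed statistics (the score)
F = dmu @ Cinv @ dmu.T                 # Fisher matrix of the original data
print("summaries:", t, "\nimplied parameter shift:", np.linalg.solve(F, t))
```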
Optimal weighting in fNL constraints from large scale structure in an idealised case
NASA Astrophysics Data System (ADS)
Slosar, Anže
2009-03-01
We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.
Penningroth, Suzanna L; Scott, Walter D
2012-01-01
Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two theories. Older adults and two groups of younger adults (college students and non-students) listed their current goals, which were then coded by independent raters. Observed age group differences in goals generally supported both theories. Specifically, when compared to younger adults, older adults reported more goals focused on maintenance/loss prevention, the present, emotion-focus and generativity, and social selection, and fewer goals focused on knowledge acquisition and the future. However, contrary to prediction, older adults also showed less goal focusing than younger adults, reporting goals from a broader set of life domains (e.g., health, property/possessions, friendship).
NASA Technical Reports Server (NTRS)
Collins, L.; Saunders, D.
1986-01-01
User information for program PROFILE, an aerodynamics design utility for refining, plotting, and tabulating airfoil profiles, is provided. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. REFINE uses linear techniques to manipulate ordinates via the central difference approximation to second derivatives, while OPTIMIZE works directly with curvature using nonlinear least squares techniques. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions) are produced via the general-purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. The programs were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California; they are written in FORTRAN and run on a VAX-11/780 under VMS.
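A sketch of the quantities REFINE manipulates is given below: second derivatives of the ordinates by central differences on a nonuniform abscissa, and the curvature computed from them. The airfoil-like thickness curve is a stand-in, and the smoothing step itself is omitted.

```python
# Central-difference second derivatives and curvature on a nonuniform grid,
# evaluated on a NACA-0012-like thickness curve for illustration.
import numpy as np

def second_derivative(x, y):
    """Central differences for nonuniform spacing (interior points only)."""
    h1, h2 = np.diff(x)[:-1], np.diff(x)[1:]
    return 2.0 * (h1 * y[2:] - (h1 + h2) * y[1:-1] + h2 * y[:-2]) / (
        h1 * h2 * (h1 + h2))

x = np.linspace(0.0, 1.0, 51) ** 1.5        # clustered abscissae, as on airfoils
y = 0.6 * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
           + 0.2843 * x**3 - 0.1015 * x**4)  # NACA 0012 thickness formula

ypp = second_derivative(x, y)
yp = (y[2:] - y[:-2]) / (x[2:] - x[:-2])     # central first derivatives
curvature = ypp / (1.0 + yp**2) ** 1.5       # kappa = y'' / (1 + y'^2)^(3/2)
print(curvature[:5])
```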
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a range reasonable for engineering prediction purposes, the variational methods achieve a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
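The costate idea can be illustrated with a small discrete analogue. In the sketch below, assuming a state equation K(p)u = f and objective J = g^T u, a single adjoint solve with K^T yields the design sensitivity, which is then checked against a finite difference; the 2x2 system is purely illustrative.

```python
# Discrete adjoint sensitivity: dJ/dp = -lam^T (dK/dp) u with K^T lam = g.
import numpy as np

def K(p):
    return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])

p = 0.5
f, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u = np.linalg.solve(K(p), f)               # state solve
lam = np.linalg.solve(K(p).T, g)           # single adjoint (costate) solve
dK_dp = np.array([[1.0, 0.0], [0.0, 0.0]]) # analytic dK/dp
dJ_dp = -lam @ dK_dp @ u                   # adjoint sensitivity

# verify against a finite difference of J(p) = g^T u(p)
eps = 1e-6
u2 = np.linalg.solve(K(p + eps), f)
print(dJ_dp, (g @ u2 - g @ u) / eps)       # the two values should agree
```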
Coplanar tail-chase aerial combat as a differential game
NASA Technical Reports Server (NTRS)
Merz, A. W.; Hague, D. S.
1977-01-01
A reduced-order version of the one-on-one aerial combat problem is studied as a pursuit-evasion differential game. The coplanar motion takes place at given speeds and given maximum available turn rates, and is described by three state equations which are equivalent to the range, bearing, and heading of one aircraft relative to the other. The purpose of the study is to determine those relative geometries from which either aircraft can be guaranteed a win, regardless of the maneuver strategies of the other. Termination is specified by the tail-chase geometry, at which time the roles of pursuer and evader are known. The roles are found in general, together with the associated optimal turn maneuvers, by solution of the differential game of kind. For the numerical parameters chosen, neither aircraft can win from the majority of possible initial conditions if the other turns optimally in certain critical geometries.
A real-time MPEG software decoder using a portable message-passing library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan
1995-12-31
We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4, and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reis, Chuck; Nelson, Eric; Armer, James
The purpose of this playbook and accompanying spreadsheets is to generalize the detailed CBP analysis and to put tools in the hands of experienced refrigeration designers to evaluate multiple applications of refrigeration waste heat reclaim across the United States. Supermarkets with large portfolios of similar buildings can use these tools to assess the impact of large-scale implementation of heat reclaim systems. In addition, the playbook provides best practices for implementing heat reclaim systems to achieve the best long-term performance possible. It includes guidance on operations and maintenance as well as measurement and verification.
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed, so for the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyze the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of the schemes and optimizations. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of the optimizations. Experimental results also showed that the best-performing CUDA program outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
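For readers who want the algorithmic core without CUDA, a plain NumPy state-vector simulation of Grover's algorithm is sketched below; it is the serial counterpart of what the schemes above parallelize on the GPU. The qubit count and marked index are arbitrary.

```python
# State-vector Grover simulation: oracle phase flip plus inversion about
# the mean (the diffusion operator 2|s><s| - I), iterated ~ (pi/4) sqrt(N).
import numpy as np

n, marked = 10, 613                 # 10 qubits, arbitrary marked item
N = 2 ** n
state = np.full(N, 1.0 / np.sqrt(N))  # uniform superposition |s>

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1.0            # oracle: flip sign of the marked item
    mean = state.mean()
    state = 2.0 * mean - state       # diffusion: inversion about the mean
print(f"P(marked) after {iterations} iterations: {state[marked]**2:.4f}")
```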
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
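The cache-blocking idea at the heart of such optimizations can be illustrated on an ordinary matrix transpose: visiting the array in tiles keeps both source and destination accesses local instead of striding through one of them. The sketch below is a toy Python illustration, not the TAL-SH implementation.

```python
# Tiled (cache-blocked) transpose: process the matrix in tile x tile
# sub-blocks so reads and writes both stay spatially local.
import numpy as np

def transpose_blocked(a, tile=64):
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            # NumPy slicing handles ragged edge tiles automatically
            out[j:j + tile, i:i + tile] = a[i:i + tile, j:j + tile].T
    return out

a = np.arange(1024 * 768, dtype=np.float64).reshape(1024, 768)
assert np.array_equal(transpose_blocked(a), a.T)
```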
Closed loop problems in biomechanics. Part II--an optimization approach.
Vaughan, C L; Hay, J G; Andrews, J G
1982-01-01
A closed loop problem in biomechanics may be defined as a problem in which there are one or more closed loops formed by the human body in contact with itself or with an external system. Under certain conditions the problem is indeterminate--the unknown forces and torques outnumber the equations. Force transducing devices, which would help solve this problem, have serious drawbacks, and existing methods are inaccurate and non-general. The purposes of the present paper are (1) to develop a general procedure for solving closed loop problems; (2) to illustrate the application of the procedure; and (3) to examine the validity of the procedure. A mathematical optimization approach is applied to the solution of three different closed loop problems--walking up stairs, vertical jumping and cartwheeling. The following conclusions are drawn: (1) the method described is reasonably successful for predicting horizontal and vertical reaction forces at the distal segments although problems exist for predicting the points of application of these forces; (2) the results provide some support for the notion that the human neuromuscular mechanism attempts to minimize the joint torques and thus, to a certain degree, the amount of muscular effort; (3) in the validation procedure it is desirable to have a force device for each of the distal segments in contact with a fixed external system; and (4) the method is sufficiently general to be applied to all classes of closed loop problems.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the annual maximum and partial duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia for 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the partial duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
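The two fits described above map directly onto SciPy's built-in distributions, as the sketch below shows with synthetic rainfall; in the study itself the threshold comes from the adapted Hill estimator and bootstrap MSE rather than the fixed quantile assumed here.

```python
# Annual maxima -> GEV fit; threshold exceedances -> generalized Pareto fit.
# The daily rainfall series is synthetic.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(42)
daily = rng.gamma(shape=0.8, scale=12.0, size=(31, 365))  # 31 "years" of rain

annual_max = daily.max(axis=1)
gev_shape, gev_loc, gev_scale = genextreme.fit(annual_max)

threshold = np.quantile(daily, 0.995)        # assumed threshold choice
exceed = daily[daily > threshold] - threshold
gpd_shape, _, gpd_scale = genpareto.fit(exceed, floc=0.0)

# e.g. a 100-year return level from the annual-maximum fit
print(genextreme.ppf(1 - 1 / 100, gev_shape, gev_loc, gev_scale))
```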
Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF)
2017-02-01
ARL-TR-7948, US Army Research Laboratory, February 2017; by James M Maley and Kevin … (Weapons and Materials Research Directorate).
The Effects of Academic Optimism on Elementary Reading Achievement
ERIC Educational Resources Information Center
Bevel, Raymona K.; Mitchell, Roxanne M.
2012-01-01
Purpose: The purpose of this paper is to explore the relationship between academic optimism (AO) and elementary reading achievement (RA). Design/methodology/approach: Using correlation and hierarchical linear regression, the authors examined school-level effects of AO on fifth grade reading achievement in 29 elementary schools in Alabama.…
Academic Optimism, Organizational Citizenship Behaviors, and Student Achievement at Charter Schools
ERIC Educational Resources Information Center
Guvercin, Mustafa
2013-01-01
The purpose of this study was to examine the relationship among academic optimism, Organizational Citizenship Behaviors (OCBs), and student achievement in college preparatory charter schools. A purposeful sample of elementary school teachers from college preparatory charter schools (N = 226) in southeast Texas was solicited to complete the…
Protein dielectric constants determined from NMR chemical shift perturbations.
Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik
2013-11-13
Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε(eff) and ε(p)) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε(eff) and ε(p) are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε(eff) and ε(p) by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε(eff)) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε(p)) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε(p) of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε(p) = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pK(a) values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε(p) common to most folded proteins.
Identifying the optimal segmentors for mass classification in mammograms
NASA Astrophysics Data System (ADS)
Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.
2015-03-01
In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system that classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, in which we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest, ROI) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. After shape features are computed from the segmented contours, the final classification model is built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble mix of weak segmentors. For our purpose, optimal segmentors are those in the ensemble mix that contribute the most to the overall classification, rather than the ones that produce the most precise segmentation. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
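The contribution measure described above can be sketched as follows, with a hypothetical feature layout: fit the logistic model on shape features pooled from all segmentors, then average the absolute coefficients within each segmentor's feature group.

```python
# Average per-segmentor |coefficient| from a pooled logistic model.
# Feature layout, counts, and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cases, n_segmentors, feats_per_seg = 300, 5, 4
X = rng.normal(size=(n_cases, n_segmentors * feats_per_seg))
y = rng.integers(0, 2, n_cases)        # benign/malignant labels (synthetic)

model = LogisticRegression(max_iter=1000).fit(
    StandardScaler().fit_transform(X), y)
w = np.abs(model.coef_.ravel()).reshape(n_segmentors, feats_per_seg)
for s, mean_w in enumerate(w.mean(axis=1)):
    print(f"segmentor {s}: mean |weight| = {mean_w:.3f}")
```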
Novoseltsev, V N; Arking, R; Novoseltseva, J A; Yashin, A I
2002-06-01
The general purpose of the paper is to test evolutionary optimality theories with experimental data on reproduction, energy consumption, and longevity in a particular Drosophila genotype. We describe the resource allocation in Drosophila females in terms of the oxygen consumption rates devoted to reproduction and to maintenance. The maximum ratio of the component spent on reproduction to the total rate of oxygen consumption, which can be realized by the female reproductive machinery, is called metabolic reproductive efficiency (MRE). We regard MRE as an evolutionary constraint. We demonstrate that MRE may be evaluated for a particular Drosophila phenotype given the fecundity pattern, the age-related pattern of oxygen consumption rate, and the longevity. We use a homeostatic model of aging to simulate a life history of a representative female fly, which describes the control strain in the long-term experiments with the Wayne State Drosophila genotype. We evaluate the theoretically optimal trade-offs in this genotype. Then we apply the Van Noordwijk-de Jong resource acquisition and allocation model, Kirkwood's disposable soma theory, and the Partridge-Barton optimality approach to test whether the experimentally observed trade-offs may be regarded as close to the theoretically optimal ones. We demonstrate that the two approaches by Partridge-Barton and Kirkwood allow a positive answer to the question, whereas the Van Noordwijk-de Jong approach may be used to illustrate the optimality. We discuss the prospects of applying the proposed technique to various Drosophila experiments, in particular those including manipulations affecting fecundity.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
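A minimal nested illustration of a bilevel problem is sketched below: the outer variable is chosen to minimize an objective evaluated at the inner problem's solution. The toy objectives are invented; the approaches in the abstract would embed such a formulation in a trust-region globalization.

```python
# Nested bilevel sketch: outer min over x of F(x, y*(x)),
# where y*(x) solves the inner problem for fixed x.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def inner(x):
    # inner problem: min_y (y - x)^2 + y^2, whose solution is y*(x) = x/2
    return minimize_scalar(lambda y: (y - x) ** 2 + y ** 2).x

def outer_obj(x):
    y = inner(x[0])
    return (x[0] - 1.0) ** 2 + (y - 1.0) ** 2   # F(x, y*(x))

res = minimize(outer_obj, x0=np.array([0.0]), method="Nelder-Mead")
print(res.x, inner(res.x[0]))   # analytic optimum: x = 1.2, y = 0.6
```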
Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model is proposed to simultaneously estimate the haze-free image and the depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers are introduced to constrain the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. The results illustrate the superior performance of the proposed method in terms of visual quality evaluation.
Application of NASA General-Purpose Solver to Large-Scale Computations in Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Storaasli, Olaf O.
2004-01-01
Of several iterative and direct equation solvers evaluated previously for computations in aeroacoustics, the most promising was the NASA-developed General-Purpose Solver (winner of NASA's 1999 software of the year award). This paper presents detailed, single-processor statistics of the performance of this solver, which has been tailored and optimized for large-scale aeroacoustic computations. The statistics, compiled using an SGI ORIGIN 2000 computer with 12 GB of available memory (RAM) and eight available processors, are the central processing unit time, RAM requirements, and solution error. The equation solver is capable of solving 10 thousand complex unknowns in as little as 0.01 sec using 0.02 GB of RAM, and 8.4 million complex unknowns in slightly less than 3 hours using all 12 GB. This latter solution is the largest aeroacoustics problem solved to date with this technique. The study was unable to detect any noticeable error in the solution, since noise levels predicted from these solution vectors are in excellent agreement with the noise levels computed from the exact solution. The equation solver provides a means for obtaining numerical solutions to aeroacoustics problems in three dimensions.
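The NASA solver itself is not reproduced here, but the task it performs, a direct factor-and-solve of a large sparse complex system as arises in frequency-domain aeroacoustics, can be sketched with SciPy's sparse LU; the tridiagonal test matrix below is an arbitrary stand-in for a real discretization.

```python
# Direct solution of a large sparse complex linear system via sparse LU.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 10_000                                   # complex unknowns
main = 4.0 + 0.5j + np.zeros(n)              # diagonal entries
A = sp.diags([np.full(n - 1, -1.0 + 0.1j), main, np.full(n - 1, -1.0 - 0.1j)],
             offsets=[-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

x = splu(A).solve(b)                         # factor once, then solve
print("residual:", np.linalg.norm(A @ x - b))
```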
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-06-01
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (the latter is discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general-purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general-purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.
2014-08-01
We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information, results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality to which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use, are provided.
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the USIV optimization problem and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
Ibarra, Ignacio L; Melo, Francisco
2010-07-01
Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied in calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for training and education on understanding the DP algorithm. Here, we introduce an interactive computer application with a graphical interface for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the sequence alignment algorithm version, and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as researchers who implement the DP algorithm for diverse applications. The software is freely available at http://melolab.org/sat. It is written in the Java computer language, so it runs on all major platforms and operating systems, including Windows, Mac OS X, and Linux. All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
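The DP computation such a tool visualizes is the classic Needleman-Wunsch global alignment; a compact version is sketched below, with the match, mismatch, and gap scores exposed as parameters in the same spirit.

```python
# Needleman-Wunsch global alignment: fill the scoring matrix, then trace
# back one optimal alignment. Scores are adjustable parameters.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            S[i][j] = max(d, S[i-1][j] + gap, S[i][j-1] + gap)
    # traceback of one optimal alignment
    out_a, out_b, i, j = [], [], n, m
    while i or j:
        if i and j and S[i][j] == S[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i and S[i][j] == S[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return S[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

print(needleman_wunsch("GATTACA", "GCATGCU"))
```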
Mid-sagittal plane and mid-sagittal surface optimization in brain MRI using a local symmetry measure
NASA Astrophysics Data System (ADS)
Stegmann, Mikkel B.; Skoglund, Karl; Ryberg, Charlotte
2005-04-01
This paper describes methods for automatic localization of the mid-sagittal plane (MSP) and mid-sagittal surface (MSS). The data used is a subset of the Leukoaraiosis And DISability (LADIS) study consisting of three-dimensional magnetic resonance brain data from 62 elderly subjects (age 66 to 84 years). Traditionally, the mid-sagittal plane is localized by global measures. However, this approach fails when the partitioning plane between the brain hemispheres does not coincide with the symmetry plane of the head. We instead propose to use a sparse set of profiles in the plane normal direction and maximize the local symmetry around these using a general-purpose optimizer. The plane is parameterized by azimuth and elevation angles along with the distance to the origin in the normal direction. This approach leads to solutions confirmed as the optimal MSP in 98 percent of the subjects. Despite the name, the mid-sagittal plane is not always planar, but a curved surface resulting in poor partitioning of the brain hemispheres. To account for this, this paper also investigates an optimization strategy which fits a thin-plate spline surface to the brain data using a robust least median of squares estimator. Albeit computationally more expensive, mid-sagittal surface fitting demonstrated convincingly better partitioning of curved brains into cerebral hemispheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray W
This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism and through extensive use of vendor supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using Intel 2013 compiler and ~ 1.5 using Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.
Connolly, Declan A J
2012-09-01
The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has drawn increased attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes, even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercise intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
A new implementation of the programming system for structural synthesis (PROSSS-2)
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1984-01-01
This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The result is flexibility and organization of the optimization procedure, and versatility in the formulation of constraints and design variables. The analysis-optimization process results in a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, need only be performed once, and the results are saved for future use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output from the optimizer into a format suitable for input into the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints and, optionally, their gradients. These quantities are output in a format suitable for input into the optimizer. These user-supplied programs are problem-dependent because they depend primarily upon which finite elements are being used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase the portability of the system, and to take advantage of the data handling features found in the finite element program.
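A minimal, runnable sketch of the looped analysis-optimization structure described above. The two-bar "analyzer", the fully-stressed resizing rule standing in for the optimizer, and all names here are toy stand-ins, not the actual PROSSS-2 or SPAR interfaces.

```python
# Toy constants: applied load, bar length, allowable stress, material density.
LOAD, LENGTH, ALLOWABLE, DENSITY = 1000.0, 10.0, 250.0, 0.1

def front_processor(design_vars):
    # Convert optimizer design variables into the analyzer's input format.
    return {"areas": design_vars}

def analyzer(model_input):
    # Stand-in "finite element analysis": axial stress and mass of two bars.
    areas = model_input["areas"]
    return {"stresses": [LOAD / a for a in areas],
            "mass": sum(DENSITY * LENGTH * a for a in areas)}

def end_processor(out):
    # Evaluate objective (mass) and normalized stress constraints g <= 0.
    return out["mass"], [s / ALLOWABLE - 1.0 for s in out["stresses"]]

design = [1.0, 1.0]                        # initial cross-sectional areas
for _ in range(20):
    mass, g = end_processor(analyzer(front_processor(design)))
    # Fully-stressed resizing step playing the role of the optimizer.
    design = [a * max(gi + 1.0, 0.25) for a, gi in zip(design, g)]
    if all(abs(gi) < 1e-3 for gi in g):    # user-defined termination criterion
        break
print(f"mass={mass:.2f}, areas={design}")
```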
7 CFR 2902.48 - General purpose household cleaners.
Code of Federal Regulations, 2010 CFR
2010-01-01
Excerpt (7 CFR Part 2902, Designated Items, § 2902.48, General purpose household cleaners): the section defines the designated item and establishes a Federal procurement preference for qualifying biobased general purpose household cleaners.
Computer-aided design of antenna structures and components
NASA Technical Reports Server (NTRS)
Levy, R.
1976-01-01
This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center Construction (GPW/IT), Tracy Site; refers to an earlier notice (... FR 65300) announcing the publication of the General Purpose Warehouse and Information Technology Center documentation.
Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.
Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto
2011-01-01
Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% greater than in cardiology and 5 times greater than in psychiatry. Care during family physician visits is more complex per hour than care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.
ERIC Educational Resources Information Center
Kulophas, Dhirapat; Hallinger, Philip; Ruengtrakul, Auyporn; Wongwanich, Suwimon
2018-01-01
Purpose: In the context of Thailand's progress towards education reform, scholars have identified a lack of effective school-level leadership as an impeding factor. The purpose of this paper is to develop and validate a theoretical model of authentic leadership effects on teacher academic optimism and work engagement. Authentic leadership was…
ERIC Educational Resources Information Center
Sutton, Stephen G.; Gyuris, Emma
2015-01-01
Purpose: The purpose of this study was twofold: first, to optimize the Environmental Attitudes Inventory (EAI) and second, to establish a baseline of the difference in environmental attitudes between first and final year students, taken at the start of a university's declaration of commitment to EfS. Design/methodology/approach: The…
Caffe con Troll: Shallow Ideas to Speed Up Deep Learning
Hadjis, Stefan; Abuzaid, Firas; Zhang, Ce; Ré, Christopher
2016-01-01
We present Caffe con Troll (CcT), a fully compatible end-to-end version of the popular framework Caffe with rebuilt internals. We built CcT to examine the performance characteristics of training and deploying general-purpose convolutional neural networks across different hardware architectures. We find that, by employing standard batching optimizations for CPU training, we achieve a 4.5× throughput improvement over Caffe on popular networks like CaffeNet. Moreover, with these improvements, the end-to-end training time for CNNs is directly proportional to the FLOPS delivered by the CPU, which enables us to efficiently train hybrid CPU-GPU systems for CNNs. PMID:27314106
Programs for analysis and resizing of complex structures. [computerized minimum weight design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1978-01-01
The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user-oriented system of programs for the minimum weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR, an efficient and modular general purpose finite element program, and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequence of Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented for demonstration of the system capabilities.
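A minimal sketch of the SUMT idea mentioned above: minimize a pseudo-objective whose extended interior penalty stays defined even for infeasible iterates, then shrink the penalty parameter r and repeat. The toy problem, the transition point eps = -0.5*sqrt(r), and the scalar inner solve are illustrative assumptions, not the PARS implementation.

```python
import math
from scipy.optimize import minimize_scalar

def extended_interior(g, eps):
    # Classical interior barrier -1/g inside the feasible region, switched to
    # a linear extension at g = eps so infeasible iterates remain defined.
    if g <= eps:
        return -1.0 / g
    return -(2.0 * eps - g) / (eps * eps)

def pseudo_objective(x, f, constraints, r):
    eps = -0.5 * math.sqrt(r)   # steepen the extension as r shrinks
    return f(x) + r * sum(extended_interior(g(x), eps) for g in constraints)

f = lambda x: x                      # objective: minimize x
cons = [lambda x: 1.0 - x]           # g(x) = 1 - x <= 0, i.e. x >= 1

x, r = 2.0, 1.0
for _ in range(8):                   # outer SUMT loop: shrink r each pass
    res = minimize_scalar(lambda y: pseudo_objective(y, f, cons, r),
                          bracket=(x, x + 1.0))
    x, r = res.x, 0.1 * r
print(x)                             # approaches the constrained optimum x* = 1
```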
An expert system environment for the Generic VHSIC Spaceborne Computer (GVSC)
NASA Astrophysics Data System (ADS)
Cockerham, Ann; Labhart, Jay; Rowe, Michael; Skinner, James
The authors describe a Phase II Phillips Laboratory Small Business Innovative Research (SBIR) program being performed to implement a flexible and general-purpose inference environment for embedded space and avionics applications. This inference environment is being developed in Ada and takes special advantage of the target architecture, the GVSC. The GVSC implements the MIL-STD-1750A ISA and contains enhancements to allow access of up to 8 MBytes of memory. The inference environment makes use of the Merit Enhanced Traversal Engine (METE) algorithm, which employs the latest inference and knowledge representation strategies to optimize both run-time speed and memory utilization.
INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN
The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...
NASA Astrophysics Data System (ADS)
Schröder, Markus; Brown, Alex
2009-10-01
We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett. 101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
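A minimal sketch of the parameter-calibration step (3) using differential evolution, one of the algorithms the study found effective with appropriate settings. The Michaelis-Menten toy model and the synthetic data are illustrative; the study's networks, rate laws, and settings are far richer.

```python
import numpy as np
from scipy.optimize import differential_evolution

np.random.seed(1)
S = np.linspace(0.1, 10.0, 25)                        # substrate levels
v_obs = 2.0 * S / (0.5 + S) + np.random.normal(0, 0.02, S.size)  # noisy "data"

def sse(theta):
    # Least-squares distance between model prediction and measurements.
    vmax, km = theta
    return np.sum((vmax * S / (km + S) - v_obs) ** 2)

res = differential_evolution(sse, bounds=[(0.1, 10.0), (0.01, 5.0)], seed=1)
print(res.x)   # estimated (vmax, km), close to the true values (2.0, 0.5)
```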
Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian
Modern heterogeneous embedded platforms, composed of several digital signal, application specific and general purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and at what time (scheduling) each task executes, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, the mapping and the linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) with respect to competing solutions.
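A minimal ACO sketch for the mapping decision alone: ants assign tasks to processing elements with probability proportional to pheromone over cost, and trails are evaporated and then reinforced from the best mapping found. The cost table and the single-objective makespan model are toy assumptions, far simpler than the paper's joint scheduling/mapping/placement formulation.

```python
import random

random.seed(0)
TASK_COST = [[3, 5], [4, 2], [6, 3], [2, 4]]   # cost of task t on element e
N_T, N_E, ANTS, ITERS, RHO = 4, 2, 10, 50, 0.1
tau = [[1.0] * N_E for _ in range(N_T)]        # pheromone trails

def construct():
    # One ant: pick an element per task with probability ~ pheromone / cost.
    m = [random.choices(range(N_E),
                        [tau[t][e] / TASK_COST[t][e] for e in range(N_E)])[0]
         for t in range(N_T)]
    load = [sum(TASK_COST[t][e] for t in range(N_T) if m[t] == e)
            for e in range(N_E)]
    return m, max(load)                        # makespan = most loaded element

best_m, best_cost = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        m, cost = construct()
        if cost < best_cost:
            best_m, best_cost = m, cost
    # Evaporate all trails, then reinforce the best mapping found so far.
    tau = [[(1 - RHO) * tau[t][e] for e in range(N_E)] for t in range(N_T)]
    for t, e in enumerate(best_m):
        tau[t][e] += 1.0 / best_cost

print(best_m, best_cost)
```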
Generally astigmatic Gaussian beam representation and optimization using skew rays
NASA Astrophysics Data System (ADS)
Colbourne, Paul D.
2014-12-01
Methods are presented of using skew rays to optimize a generally astigmatic optical system to obtain the desired Gaussian beam focus and minimize aberrations, and to calculate the propagating generally astigmatic Gaussian beam parameters at any point. The optimization method requires very little computation beyond that of a conventional ray optimization, and requires no explicit calculation of the properties of the propagating Gaussian beam. Unlike previous methods, the calculation of beam parameters does not require matrix calculations or the introduction of non-physical concepts such as imaginary rays.
Antecedent and Consequence of School Academic Optimism and Teachers' Academic Optimism Model
ERIC Educational Resources Information Center
Hong, Fu-Yuan
2017-01-01
The main purpose of this research was to examine the relationships among school principals' transformational leadership, school academic optimism, teachers' academic optimism and teachers' professional commitment. This study conducted a questionnaire survey on 367 teachers from 20 high schools in Taiwan by random sampling, using principals'…
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
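A minimal sketch of the static exterior penalty idea such studies analyze: the GA minimizes the objective plus a weighted squared constraint violation, so infeasible candidates survive selection only if their violations are small. The weight R and the toy problem are illustrative; the study compares several penalty formulations statistically.

```python
def penalized_objective(x, f, constraints, R=1e3):
    # Static exterior penalty: add R * sum of squared violations of g(x) <= 0.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + R * violation       # the GA minimizes this value directly

# Example: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
cons = [lambda x: x - 1.0]
print(penalized_objective(0.9, f, cons))   # feasible: plain objective value
print(penalized_objective(1.5, f, cons))   # infeasible: objective + penalty
```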
Seawell, Asani H.; Cutrona, Carolyn E.; Russell, Daniel W.
2012-01-01
The present longitudinal study examined the role of general and tailored social support in mitigating the deleterious impact of racial discrimination on depressive symptoms and optimism in a large sample of African American women. Participants were 590 African American women who completed measures assessing racial discrimination, general social support, tailored social support for racial discrimination, depressive symptoms, and optimism at two time points (2001–2002 and 2003–2004). Our results indicated that higher levels of general and tailored social support predicted optimism one year later; changes in both types of support also predicted changes in optimism over time. Although initial levels of neither measure of social support predicted depressive symptoms over time, changes in tailored support predicted changes in depressive symptoms. We also sought to determine whether general and tailored social support "buffer" or diminish the negative effects of racial discrimination on depressive symptoms and optimism. Our results revealed a classic buffering effect of tailored social support, but not of general support, on depressive symptoms for women experiencing high levels of discrimination. PMID:24443614
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so that errors in solute load calculations can be taken into account by landscape managers and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling technology has been verified, showing its great advantages for this purpose.
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
Characteristics of the General Physics student population.
NASA Astrophysics Data System (ADS)
Hunt, Gary L.
2006-12-01
Are pre-medical students different from the other students in a General Physics class? They often appear to be, based on how often they seek help from the instructor or how nervous they are about 2 points on a lab report. But are these students different in a measurable characteristic? The purpose of this study is to better understand the characteristics of the students in the introductory physics classes. This is the first step toward improving the instruction. By better understanding the students in the classroom, the organization and pedagogy can be adjusted to optimize student learning. The characteristics investigated during this study are: student epistemological structure, student attitudes, science course preparation prior to this course, study techniques used, physics concepts gained during the class, and performance in the class. The data will be analyzed to investigate differences between groups. The groups investigated will be major, gender, and traditional/nontraditional students.
Asteroid models from photometry and complementary data sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaasalainen, Mikko
I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
Lyakh, Dmitry I.
2015-01-05
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
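A minimal sketch of the cache-blocking idea behind such routines, shown for a plain 2-D transpose: copying tile-by-tile keeps both the reads and the writes within cache-sized blocks. The real TAL-SH routines handle arbitrary N-dimensional permutations in optimized native code; the block size here is an illustrative tuning parameter.

```python
import numpy as np

def blocked_transpose(a, block=64):
    # Copy tile-by-tile so reads and writes both stay in cache-sized blocks.
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

a = np.arange(12).reshape(3, 4)
assert (blocked_transpose(a, block=2) == a.T).all()
```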
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Hodges, Dewey H.
1990-01-01
A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.
NASA Astrophysics Data System (ADS)
Huang, Lingzhi; Xiao, Yong; Wen, Jihong; Zhang, Hao; Wen, Xisen
2018-07-01
Acoustic coatings with periodically arranged internal cavities have been successfully applied in submarines for the purpose of decoupling water from vibration of underwater structures, and thus reducing underwater sound radiation. Previous publications on decoupling acoustic coatings with cavities are mainly focused on the case of coatings with specific shaped cavities, including cylindrical and conical cavities. To explore better decoupling performance, an optimal design of acoustic coating with complex shaped cavities is attempted in this paper. An equivalent fluid model is proposed to characterize coatings with general axisymmetrical cavities. By employing the equivalent fluid model, an analytical vibroacoustic model is further developed for the prediction of sound radiation from an infinite plate covered with an equivalent fluid layer (as a replacement of original coating) and immersed in water. Numerical examples are provided to verify the equivalent fluid model. Based on a combining use of the analytical vibroacoustic model and a differential evolution algorithm, optimal designs for acoustic coatings with cavities are conducted. Numerical results demonstrate that the decoupling performance of acoustic coating can be significantly improved by employing special axisymmetrical cavities as compared to traditional cylindrical cavities.
An algorithm for the optimal collection of wet waste.
Laureri, Federica; Minciardi, Riccardo; Robba, Michela
2016-02-01
This work refers to the development of an approach for planning wet waste (food waste and other) collection at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from others. For instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As in classical waste collection planning, even in the case of wet waste one has to deal with difficult combinatorial problems, where the determination of an optimal solution may require a very large computational effort for problem instances of considerable dimensionality. For this reason, in this work, a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performances that can be obtained by applying such a procedure are evaluated by comparison with those obtainable via a general-purpose mathematical programming software package, as well as those obtained by applying very simple decision rules commonly used in practice. The considered case study consists of an area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.
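A minimal sketch of a simple decision rule of the kind such papers benchmark against: nearest-neighbour tour construction with a vehicle capacity limit. The coordinates, weights, and capacity value are made up, and the paper's heuristic is considerably more elaborate.

```python
import math

POINTS = {"A": (0, 1, 40), "B": (2, 2, 70), "C": (3, 0, 30), "D": (1, 3, 60)}
CAPACITY, DEPOT = 120, (0, 0)          # vehicle capacity and depot location

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan_tours():
    todo, tours = dict(POINTS), []
    while todo:
        pos, load, tour = DEPOT, 0, []
        while True:
            # Nearest unvisited point whose weight still fits in the vehicle.
            fits = [k for k, v in todo.items() if load + v[2] <= CAPACITY]
            if not fits:
                break
            k = min(fits, key=lambda k: dist(pos, todo[k][:2]))
            load += todo[k][2]
            pos = todo[k][:2]
            tour.append(k)
            del todo[k]
        tours.append(tour)             # vehicle returns to depot, next tour
    return tours

print(plan_tours())   # e.g. [['A', 'B'], ['C', 'D']] under these toy data
```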
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
ERIC Educational Resources Information Center
Wu, Jason H.; Hoy, Wayne K.; Tarter, C. John
2013-01-01
Purpose: The purpose of this research is twofold: to test a theory of academic optimism in Taiwan elementary schools and to expand the theory by adding new variables, collective responsibility and enabling school structure, to the model. Design/methodology/approach: Structural equation modeling was used to test, refine, and expand an…
EUD-based biological optimization for carbon ion therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brüningk, Sarah C., E-mail: sarah.brueningk@icr.ac.uk; Kamp, Florian; Wilkens, Jan J.
2015-11-15
Purpose: Treatment planning for carbon ion therapy requires an accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable to carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and exemplarily implemented it as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon therapy, the optimization by biological objective functions resulted in slightly superior treatment plans in terms of final EUD for the organs at risk (OARs) compared to voxel-based optimization approaches. This observation was made independent of the underlying objective function metric. An absolute gain in OAR sparing was observed for quadratic objective functions, whereas intersecting DVHs were found for logistic approaches. Even for considerable under- or overestimations of the used effect- or dose–volume parameters during the optimization, treatment plans were obtained that were of similar quality as the results of a voxel-based optimization. Conclusions: EUD-based optimization with either of the presented concepts can successfully be applied to treatment plan optimization. This makes EUE-based optimization for carbon ion therapy a useful tool to optimize more specifically in the sense of biological outcome while voxel-to-voxel variations of the biological effectiveness are still properly accounted for. This may be advantageous in terms of computational cost during treatment plan optimization but also enables a straightforward comparison of different fractionation schemes or treatment modalities.
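A minimal sketch of the generalized-mean construction underlying both concepts, assuming equal-volume voxels: feeding RBE-weighted doses gives a classical EUD, while feeding per-voxel biological effects gives an EUE-style uniform effect. The exponent a (the volume parameter encoding organ architecture) and the sample values are illustrative.

```python
import numpy as np

def generalized_uniform_value(x, a):
    # (mean of x_i^a)^(1/a) over equal-volume voxels; a -> infinity tends
    # toward the maximum (serial organs), a = 1 recovers the plain mean.
    x = np.asarray(x, dtype=float)
    return np.mean(x ** a) ** (1.0 / a)

voxel_values = [1.8, 2.0, 2.2, 0.5, 3.1]   # RBE-weighted doses or effects
for a in (1, 4, 12):
    print(a, round(generalized_uniform_value(voxel_values, a), 3))
```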
Design of freeze-drying processes for pharmaceuticals: practical advice.
Tang, Xiaolin; Pikal, Michael J
2004-02-01
Design of freeze-drying processes is often approached with a "trial and error" experimental plan or, worse yet, the protocol used in the first laboratory run is adopted without further attempts at optimization. Consequently, commercial freeze-drying processes are often neither robust nor efficient. It is our thesis that design of an "optimized" freeze-drying process is not particularly difficult for most products, as long as some simple rules based on well-accepted scientific principles are followed. It is the purpose of this review to discuss the scientific foundations of the freeze-drying process design and then to consolidate these principles into a set of guidelines for rational process design and optimization. General advice is given concerning common stability issues with proteins, but unusual and difficult stability issues are beyond the scope of this review. Control of ice nucleation and crystallization during the freezing step is discussed, and the impact of freezing on the rest of the process and final product quality is reviewed. Representative freezing protocols are presented. The significance of the collapse temperature and the thermal transition, denoted Tg', are discussed, and procedures for the selection of the "target product temperature" for primary drying are presented. Furthermore, guidelines are given for selection of the optimal shelf temperature and chamber pressure settings required to achieve the target product temperature without thermal and/or mass transfer overload of the freeze dryer. Finally, guidelines and "rules" for optimization of secondary drying and representative secondary drying protocols are presented.
NASA Astrophysics Data System (ADS)
Yang, Kai Ke; Zhu, Ji Hong; Wang, Chuang; Jia, Dong Sheng; Song, Long Long; Zhang, Wei Hong
2018-05-01
The purpose of this paper is to investigate the structures achieved by topology optimization and their fabrications by 3D printing considering the particular features of material microstructures and macro mechanical performances. Combining Digital Image Correlation and Optical Microscope, this paper experimentally explored the anisotropies of stiffness and strength existing in the 3D printed polymer material using Stereolithography (SLA) and titanium material using Selective Laser Melting (SLM). The standard specimens and typical structures obtained by topology optimization were fabricated along different building directions. On the one hand, the experimental results of these SLA produced structures showed stable properties and obviously anisotropic rules in stiffness, ultimate strengths and places of fractures. Further structural designs were performed using topology optimization when the particular mechanical behaviors of SLA printed materials were considered, which resulted in better structural performances compared to the optimized designs using `ideal' isotropic material model. On the other hand, this paper tested the mechanical behaviors of SLM printed multiscale lattice structures which were fabricated using the same metal powder and the same machine. The structural stiffness values are generally similar while the strength behaviors show a difference, which are mainly due to the irregular surface quality of the tiny structural branches of the lattice. The above evidences clearly show that the consideration of the particular behaviors of 3D printed materials is therefore indispensable for structural design and optimization in order to improve the structural performance and strengthen their practical significance.
Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.
Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina
2016-08-25
The increasing trend in the recent literature on coarse grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials optimized by targeting the statistical distributions of internal variables, by means of a combination of different algorithms (i.e., relative entropy driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to force-field term correlations. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are currently under development.
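A minimal sketch of one iterative Boltzmann inversion update, one of the algorithms the platform combines: the pair potential is corrected by the damped log-ratio of the currently sampled distribution to the target distribution. The grids, kT value, and damping factor are illustrative placeholders, not AsParaGS internals.

```python
import numpy as np

kT = 2.494  # kJ/mol at 300 K (illustrative thermal energy)

def ibi_update(V, g_current, g_target, damping=0.5, floor=1e-8):
    g_cur = np.clip(g_current, floor, None)   # avoid log(0)
    g_tgt = np.clip(g_target, floor, None)
    # V_{n+1}(r) = V_n(r) + damping * kT * ln(g_n(r) / g_target(r))
    return V + damping * kT * np.log(g_cur / g_tgt)

r = np.linspace(0.3, 1.2, 10)
V = np.zeros_like(r)                          # start from a flat potential
g_tgt = np.exp(-(r - 0.7) ** 2 / 0.02)        # toy target distribution
g_cur = np.exp(-(r - 0.8) ** 2 / 0.03)        # toy currently sampled distribution
print(ibi_update(V, g_cur, g_tgt))
```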
On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal
2011-05-01
In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst inadequate feeding may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of being applied in the determination of adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the production of defect-free components in industry [1].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xinfeng; Prior, Phillip; Chen, Guangpei
Purpose: The purpose of the study is to investigate the dose effects of the electron return effect (ERE) at air-tissue and lung-tissue interfaces under a 1.5T transverse magnetic field (TMF). Methods: IMRT and VMAT plans for representative pancreas, lung, breast and head & neck (H&N) cases were generated following clinical dose volume (DV) criteria. The air-cavity walls, as well as the lung wall, were delineated to examine the ERE. In each case, the original plan generated without TMF is compared with the reconstructed plan (generated by recalculating the original plan with the presence of TMF) and the optimized plan (generated by a full optimization with TMF), using a variety of DV parameters, including V100%, D95% and dose heterogeneity index for PTV, and Dmax and D1cc for OARs (organs at risk) and tissue interfaces. Results: The dose recalculation under TMF showed that the presence of the 1.5T TMF can slightly reduce V100% and D95% for PTV, with the differences being less than 4% for all but the lung case studied. The TMF results in considerable increases in Dmax and D1cc on the skin in all cases, mostly between 10-35%. The changes in Dmax and D1cc on air cavity walls are dependent upon site, geometry, and size, with changes ranging up to 15%. In general, the VMAT plans lead to much smaller dose effects from ERE compared to fixed-beam IMRT. When the TMF is considered in the plan optimization, the dose effects of the TMF at tissue interfaces are significantly reduced in most cases. Conclusion: The doses at tissue interfaces can be significantly changed by the presence of a 1.5T TMF during MR-guided RT when the TMF is not included in plan optimization. These changes can be substantially reduced or even removed during VMAT/IMRT optimization that specifically considers the TMF, without deteriorating overall plan quality.
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations plays a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT 5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a 49,152-process problem size. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a 24,576-process problem size. Cheetah's Barrier performs 10% better than the native MPI implementation at a 12,288-process problem size.
Data mining to support simulation modeling of patient flow in hospitals.
Isken, Mark W; Rajagopalan, Balaji
2002-04-01
Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest this study brings forth several important issues that researchers need to address when applying clustering techniques in general and specifically to hospital data.
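A minimal sketch of the clustering step described above: standardize a few per-patient features and let K-means propose patient-type groups whose labels can seed a simulation model. The synthetic features and the choice k = 2 are illustrative assumptions, not the hospital data set used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic patients: length of stay (days) and number of units visited.
los = np.concatenate([rng.normal(2, 0.5, 100), rng.normal(7, 1.5, 60)])
n_units = np.concatenate([rng.poisson(1, 100), rng.poisson(3, 60)])
X = np.column_stack([los, n_units]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize the features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    # Each cluster becomes a candidate "patient type" for the flow model.
    print(k, round(los[labels == k].mean(), 1), "days average stay")
```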
Geometrical and topological methods in optimal control theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vakhrameev, S.A.
1995-10-05
The present article will appear 30 years after Hermann's report was published; in that report, the foundations of a new direction in optimal control theory, later called geometrical, were laid. The main purpose of this article is to present an overview of some of the basic results obtained in this direction. Each survey is subjective, and our work is no exception: the choice of themes and the degree of detail of their presentation are determined mainly by the author's own interests (and by his own knowledge); the brief exposition, or, in general, the neglect of some aspects of the theory does not reflect their significance. As some compensation for these gaps (which refer mainly to discrete-time systems, to algebraic aspects of the theory, and, partially, to structural theory) there is a rather long reference list presented in the article (it goes up to 1993 and consists, basically, of papers reviewed in the review journal "Matematika" during the last 30 years).
Global gene expression analysis by combinatorial optimization.
Ameur, Adam; Aurell, Erik; Carlsson, Mats; Westholm, Jakub Orzechowski
2004-01-01
Generally, there is a trade-off between methods of gene expression analysis that are precise but labor-intensive, e.g. RT-PCR, and methods that scale up to global coverage but are not quite as quantitative, e.g. microarrays. In the present paper, we show how a known method of gene expression profiling (K. Kato, Nucleic Acids Res. 23, 3685-3690 (1995)), which relies on a fairly small number of steps, can be turned into a global gene expression measurement by advanced data post-processing, with potentially little loss of accuracy. Post-processing here entails solving an ancillary combinatorial optimization problem. Validation is performed on in silico experiments generated from the FANTOM data base of full-length mouse cDNA. We present two variants of the method. One uses state-of-the-art commercial software for solving problems of this kind, the other a code developed by us specifically for this purpose, released in the public domain under the GPL license.
Visualization for Hyper-Heuristics. Front-End Graphical User Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroenung, Lauren
Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.
Visualization for Hyper-Heuristics: Back-End Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Luke
Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyper-heuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.
Horváth, Krisztián; Felinger, Attila
2015-08-14
The applicability of core-shell phases in preparative separations was studied by a modeling approach. The preparative separations were optimized for two compounds having bi-Langmuir isotherms. The differential mass balance equation of chromatography was solved by the Rouchon algorithm. The results show that as the size of the core increases, larger particles can be used in separations, resulting in higher applicable flow rates and shorter cycle times. Due to the decreasing volume of the porous layer, however, the loadability of the column drops significantly, and as a result the productivity and economy of the separation decrease. It is shown that if it is possible to optimize the size of stationary phase particles for the given separation task, the use of core-shell phases is not beneficial. The use of core-shell phases proved to be advantageous when the goal is to build a preparative column for general purposes (e.g., for purification of different products) in small-scale separations. Copyright © 2015 Elsevier B.V. All rights reserved.
Khalique, Omar K; Pulerwitz, Todd C; Halliburton, Sandra S; Kodali, Susheel K; Hahn, Rebecca T; Nazif, Tamim M; Vahl, Torsten P; George, Isaac; Leon, Martin B; D'Souza, Belinda; Einstein, Andrew J
2016-01-01
Transcatheter aortic valve replacement (TAVR) is performed frequently in patients with severe, symptomatic aortic stenosis who are at high risk or inoperable for open surgical aortic valve replacement. Computed tomography angiography (CTA) has become the gold standard imaging modality for pre-TAVR cardiac anatomic and vascular access assessment. Traditionally, cardiac CTA has been most frequently used for assessment of coronary artery stenosis, and scanning protocols have generally been tailored for this purpose. Pre-TAVR CTA has different goals than coronary CTA and the high prevalence of chronic kidney disease in the TAVR patient population creates a particular need to optimize protocols for a reduction in iodinated contrast volume. This document reviews details which allow the physician to tailor CTA examinations to maximize image quality and minimize harm, while factoring in multiple patient and scanner variables which must be considered in customizing a pre-TAVR protocol. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
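As background, the sketch below illustrates the forward-mode idea underlying automatic differentiation tools such as ADIFOR: exact derivatives are propagated alongside values through the same arithmetic the program already performs, so there is no finite-difference step size to tune. The Dual class and example function are purely illustrative; ADIFOR itself works by source transformation of FORTRAN code, not by operator overloading.

```python
# Minimal forward-mode automatic differentiation with dual numbers
# (illustrative sketch only; not ADIFOR's source-transformation approach).

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der  # value and derivative w.r.t. the input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # any composition of overloaded operations

y = f(Dual(2.0, 1.0))             # seed derivative dx/dx = 1
print(y.val, y.der)               # 17.0 and the exact f'(2) = 14.0

h = 1e-6                          # finite differences, by contrast, need a step size
print((f(Dual(2.0 + h)).val - f(Dual(2.0)).val) / h)  # ~14.0, step-size dependent
```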
Khan, M Nisa
2015-07-20
Light-emitting diode (LED) technologies are undergoing very fast developments to enable household lamp products with improved energy efficiency and lighting properties at lower cost. Although many LED replacement lamps are claimed to provide similar or better lighting quality at lower electrical wattage compared with general-purpose incumbent lamps, certain lighting characteristics important to human vision are neglected in this comparison, which include glare-free illumination and omnidirectional or sufficiently broad light distribution with adequate homogeneity. In this paper, we comprehensively investigate the thermal and lighting performance and trade-offs for several commercial LED replacement lamps for the most popular Edison incandescent bulb. We present simulations and analyses for thermal and optical performance trade-offs for various LED lamps at the chip and module granularity levels. In addition, we present a novel, glare-free, and production-friendly LED lamp design optimized to produce very desirable light distribution properties as demonstrated by our simulation results, some of which are verified by experiments.
Mancebo, Camino M; Merino, Cristina; Martínez, Mario M; Gómez, Manuel
2015-10-01
Gluten-free bread production requires gluten-free flours or starches. Rice flour and maize starch are two of the most commonly used raw materials. In recent years, gluten-free wheat starch has become available on the market. The aim of this research was to optimize mixtures of rice flour, maize starch and wheat starch using an experimental mixture design. For this purpose, dough rheology and its fermentation behaviour were studied. Quality bread parameters such as specific volume, texture, cell structure, colour and acceptability were also analysed. Generally, starch incorporation reduced G* and increased the bread specific volume and cell density, but the breads obtained were paler than the rice flour breads. Comparing the starches, wheat starch breads had better overall acceptability and a greater volume than maize starch breads. The highest value for sensorial acceptability corresponded to the bread produced with a mixture of rice flour (59 g/100 g) and wheat starch (41 g/100 g).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Reddy, T. A.; Gurian, Patrick
2007-01-31
A companion paper to Jiang and Reddy that presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.
A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Keller, Dennis J.
2001-01-01
It is the purpose of this study to develop an economical Robust design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate the potential of economical Robust Design methodology, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic half-amplitudes of engine face distortion. Each of the mission strategies was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these mission strategies, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incidence range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence. Thus, the Optimal Adaptive installation would require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive closed loop designs. Secondary flow control in inlets is inherently robust, provided it is optimally designed. Therefore, the new methodology presented in this paper, a combined-array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and economical way of exploring the concept of Robust inlet design, where the mission variables are brought directly into the inlet design process and insensitivity or robustness to the mission variables becomes a design objective.
A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.
Rong, Xing; Frey, Eric C
2013-08-01
Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different from that typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects, including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
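As a rough illustration of the figure of merit described above, the sketch below computes an inverse-mass-weighted RMSE over volume-of-interest (VOI) activity estimates. The function name and array layout are our assumptions, not the authors' code.

```python
import numpy as np

def weighted_rmse_fom(estimates, truth, voi_mass):
    """Inverse-mass-weighted RMSE figure of merit (illustrative sketch).

    estimates: (n_realizations, n_voi) activity estimates over noise realizations
    truth:     (n_voi,) true VOI activities
    voi_mass:  (n_voi,) VOI masses; 1/mass weighting reflects the dosimetry task
    """
    mse = np.mean((estimates - truth) ** 2, axis=0)  # bias^2 + variance per VOI
    w = 1.0 / voi_mass                               # inverse-mass weighting
    return np.sqrt(np.sum(w * mse) / np.sum(w))

# Invented example: 100 noisy realizations of 3 VOI estimates
rng = np.random.default_rng(0)
truth = np.array([10.0, 25.0, 40.0])                 # MBq (illustrative)
est = truth + rng.normal(0, [1.0, 2.0, 3.0], size=(100, 3))
print(weighted_rmse_fom(est, truth, voi_mass=np.array([50.0, 200.0, 800.0])))
```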
Solvent vapour monitoring in work space by solid phase micro extraction.
Li, K; Santilli, A; Goldthorp, M; Whiticar, S; Lambert, P; Fingas, M
2001-05-07
Solid phase micro extraction (SPME) is a fast, solvent-less alternative to conventional charcoal tube sampling/carbon disulfide extraction for volatile organic compounds (VOC). In this work, SPME was compared to the active sampling technique in a typical lab atmosphere. Two different types of fibre coatings were evaluated for solvent vapour at ambient concentration. A general purpose 100 microm film polydimethylsiloxane (PDMS) fibre was found to be unsuitable for VOC work, despite the thick coating. The mixed-phase carboxen/PDMS fibre was found to be suitable. The sensitivity of SPME was far greater than that of the charcoal sorbent tube method. Calibration studies using typical solvents such as dichloromethane (DCM), benzene (B) and toluene (T) showed an optimal exposure time of 5 min, with a repeatability of less than 20% for a broad spectrum of organic vapour. The minimum detectable amount for DCM is in the range of 0.01 microg/l (0.003 ppmv). Variation among different fibres was generally within 30% at a vapour concentration of 1 microg DCM/l, which was more than adequate for field monitoring purposes. Adsorption characteristics and calibration procedures were studied. An actual application of SPME was carried out to measure the background level of solvent vapour at a bench where DCM was used extensively. Agreement between the SPME and the charcoal sampling method was generally within a factor of two. No DCM concentration was found to be above the regulatory limit of 50 ppmv.
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
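A minimal sketch of the fix-and-resample idea, under our own assumptions about the details: run a cheap low-energy sampler several times, then clamp the variables on which the best samples agree, leaving a smaller and less connected residual problem for the next round. The toy objective and single-flip descent sampler below stand in for the solvers named in the abstract (simulated annealing, quantum annealing, etc.).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ising-style objective: minimize E(s) = s^T J s over s in {-1,+1}^n.
n = 40
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
energy = lambda s: s @ J @ s

def sample(n_starts=50, n_sweeps=200):
    """Stand-in low-energy sampler (random single-flip descent)."""
    out = []
    for _ in range(n_starts):
        s = rng.choice([-1, 1], size=n)
        for _ in range(n_sweeps):
            i = rng.integers(n)
            s2 = s.copy(); s2[i] *= -1
            if energy(s2) < energy(s):
                s = s2
        out.append(s)
    return np.array(out)

# Fix step (our reading of the paper's idea): variables that take the same
# value in essentially all elite samples are likely optimal; clamp them and
# re-sample the smaller, less connected residual problem in the next round.
samples = sample()
elite = samples[np.argsort([energy(s) for s in samples])[:10]]
consensus = elite.mean(axis=0)
fixed = np.abs(consensus) > 0.9   # unanimous among the 10 elite samples
print(f"fixing {fixed.sum()}/{n} variables; residual problem shrinks accordingly")
```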
24 CFR 902.1 - Purpose and general description.
Code of Federal Regulations, 2010 CFR
2010-04-01
... URBAN DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM General Provisions § 902.1 Purpose and general description. (a) Purpose. The purpose of the Public Housing Assessment System (PHAS) is to improve the delivery of services in public housing and enhance trust in the public housing system among public housing...
22 CFR 309.1 - General purpose.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-tax debts owed to Peace Corps and to the United States. ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true General purpose. 309.1 Section 309.1 Foreign Relations PEACE CORPS DEBT COLLECTION General Provisions § 309.1 General purpose. This part prescribes the...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2014 CFR
2014-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2011 CFR
2011-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2012 CFR
2012-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2010 CFR
2010-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca
2014-05-01
The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, high dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and, then, the parameters optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings to estimate their scalability and flexibility in dealing with increasingly complex problems. The multi-purpose HoaBinh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
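To make the policy classes concrete, here is a minimal sketch of a Gaussian RBF operating rule of the kind compared above; the parameter names and example numbers are our assumptions. In DPS, the flat vector of centers, radii, and weights is exactly what the MOEA tunes against the simulated objectives.

```python
import numpy as np

def rbf_release_policy(inputs, centers, radii, weights):
    """Gaussian RBF operating rule: maps normalized system state (e.g.,
    storage, season) to a release decision. Generic sketch of the policy
    class discussed above; not the authors' exact parameterization."""
    phi = np.exp(-np.sum(((inputs - centers) / radii) ** 2, axis=1))
    return float(weights @ phi)

# Example: 3 basis functions over a 2-D input (normalized storage, season)
centers = np.array([[0.2, 0.1], [0.5, 0.5], [0.8, 0.9]])
radii = np.full((3, 2), 0.3)
weights = np.array([10.0, 40.0, 80.0])   # releases in m^3/s (illustrative)
print(rbf_release_policy(np.array([0.6, 0.4]), centers, radii, weights))
```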
Perceived Parenting Styles on College Students' Optimism
ERIC Educational Resources Information Center
Baldwin, Debora R.; McIntyre, Anne; Hardaway, Elizabeth
2007-01-01
The purpose of this study was to examine the relationship between perceived parenting styles and levels of optimism in undergraduate college students. Sixty-three participants were administered surveys measuring dispositional optimism and perceived parental Authoritative and Authoritarian styles. Multiple regression analysis revealed that both…
Comparative study of gas-analyzing systems designed for continuous monitoring of TPP emissions
NASA Astrophysics Data System (ADS)
Kondrat'eva, O. E.; Roslyakov, P. V.
2017-06-01
Determining the composition of combustion products is important in terms of both control of emissions into the atmosphere from thermal power plants and optimization of fuel combustion processes in electric power plants. For this purpose, the concentration of oxygen, carbon monoxide, nitrogen, and sulfur oxides in flue gases is monitored; in the case of solid fuel combustion, fly ash concentration is monitored as well. According to the new nature conservation law in Russia, all large TPPs shall be equipped with systems for continuous monitoring and measurement of emissions into the atmosphere (CEMMS). In order to ensure the continuous monitoring of pollutant emissions, direct round-the-clock measurements are conducted with the use of either domestically produced or imported gas analyzers and analysis systems, the operation of which is based on various physicochemical methods and which can generally be used when introducing CEMMS. Depending on the type and purposes of measurement, various kinds of instruments having different features may be used. This article presents a comparative study of gas-analysis systems for measuring the content of polluting substances in exhaust gases based on various physical and physicochemical analysis methods. It lists basic characteristics of the methods commonly applied in the area of gas analysis. It is shown that, considering the necessity of the long-term, continuous operation of gas analyzers for monitoring and measurement of pollutant emissions into the atmosphere, as well as the requirements for reliability and independence from aggressive components and the temperature of the gas flow, it is preferable to use optical gas analyzers for these purposes. In order to reduce the costs of equipment comprising a CEMMS at a TPP and optimize the combustion processes, electrochemical and thermomagnetic gas analyzers may also be used.
NASA Astrophysics Data System (ADS)
Aloisio, A.; Cavaliere, S.; Cevenini, F.; Della Volpe, D.; Merola, L.; Anastasio, A.; Fiore, D. J.
KLOE is a general purpose detector optimized to observe CP violation in K0 decays. This detector will be installed at the DAΦNE Φ-factory, in Frascati (Italy) and it is expected to run at the end of 1997. The KLOE DAQ system can be divided mainly into the front-end fast readout section (the Level 1 DAQ), the FDDI Switch and the processor farm. The total bandwidth requirement is estimated to be of the order of 50 Mbyte/s. In this paper, we describe the Level 1 DAQ section, which is based on custom protocols and hardware controllers, developed to achieve high data transfer rates and event building capabilities without software overhead.
Integrating normal and abnormal personality structure: a proposal for DSM-V.
Widiger, Thomas A
2011-06-01
The personality disorders section of the American Psychiatric Association's fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) is currently being developed. The purpose of the current paper is to encourage the authors of DSM-V to integrate normal and abnormal personality structure within a common, integrative model, and to suggest that the optimal choice for such an integration would be the five-factor model (FFM) of general personality structure. A proposal for the classification of personality disorder from the perspective of the FFM is provided. Discussed as well are implications and issues associated with an FFM of personality disorder, including validity, coverage, feasibility, clinical utility, and treatment implications.
Design of a monitor and simulation terminal (master) for space station telerobotics and telescience
NASA Technical Reports Server (NTRS)
Lopez, L.; Konkel, C.; Harmon, P.; King, S.
1989-01-01
Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.
A fast ultrasonic simulation tool based on massively parallel implementations
NASA Astrophysics Data System (ADS)
Lambert, Jason; Rougeron, Gilles; Lacassagne, Lionel; Chatillon, Sylvain
2014-02-01
This paper presents a CIVA optimized ultrasonic inspection simulation tool, which takes advantage of the power of massively parallel architectures: graphical processing units (GPU) and multi-core general purpose processors (GPP). This tool is based on the classical approach used in CIVA: the interaction model is based on Kirchhoff, and the ultrasonic field around the defect is computed by the pencil method. The model has been adapted and parallelized for both architectures. At this stage, the configurations addressed by the tool are: multi- and mono-element probes, planar specimens made of simple isotropic materials, and planar rectangular defects or side drilled holes of small diameter. Validations of the model accuracy and performance measurements are presented.
Use telecommunications for real-time process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zilberman, I.; Bigman, J.; Sela, I.
1996-05-01
Process operators desire accurate real-time information to monitor and control product streams and to optimize unit operations. The challenge is how to cost-effectively install sophisticated analytical equipment in harsh environments such as process areas and maintain system reliability. Incorporating telecommunications technology with near infrared (NIR) spectroscopy may be the bridge to help operations achieve their online control goals. Coupling communications fiber optics with NIR analyzers enables the probe and sampling system to remain in the field and crucial analytical equipment to be remotely located in a general purpose area without specialized protection provisions. The case histories show how two refineries used NIR spectroscopy online to track octane levels for reformate streams.
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 1 General Provisions 1 2011-01-01 2011-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 1 General Provisions 1 2014-01-01 2012-01-01 true Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 1 General Provisions 1 2012-01-01 2012-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 1 General Provisions 1 2013-01-01 2012-01-01 true Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
New perspectives on the dynamics of AC and DC plasma arcs exposed to cross-fields
NASA Astrophysics Data System (ADS)
Abdo, Youssef; Rohani, Vandad; Cauneau, François; Fulcheri, Laurent
2017-02-01
Interactions between an arc and external fields are crucially important for the design and the optimization of modern plasma torches. Multiple studies have been conducted to help better understand the behavior of DC and AC current arcs exposed to external and ‘self-induced’ magnetic fields, but the theoretical foundations remain very poorly explored. An analytical investigation has therefore been carried out in order to study the general behavior of DC and AC arcs under the effect of random cross-fields. A simple differential equation describing the general behavior of a planar DC or AC arc has been obtained. Several dimensionless numbers that depend primarily on arc and field parameters and the main arc characteristics (temperature, electric field strength) have also been determined. Their magnitude indicates the general tendency pattern of the arc evolution. The analytical results for many case studies have been validated using an MHD numerical model. The main purpose of this investigation was to derive a practical analytical model for the electric arc, making possible its stabilization and control and the enhancement of plasma torch power.
Hrabovský, Miroslav
2014-01-01
The purpose of the study is to show a proposal of an extension of a one-dimensional speckle correlation method, which is primarily intended for determination of one-dimensional object translation, to the detection of general in-plane object translation. In that view, a numerical simulation of the displacement of the speckle field as a consequence of general in-plane object translation is presented. The translation components a_x and a_y, representing the projections of a vector a of the object's displacement onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can perform a distinct optimization of the method by reduction of the intensity values representing detected speckle patterns. The theoretical relations between the translation components a_x and a_y of the object and the displacement of the speckle pattern for the selected geometrical arrangement are mentioned and used to verify the correctness of the proposed method. PMID:24592180
Optimism and Physical Health: A Meta-analytic Review
Rasmussen, Heather N.; Greenhouse, Joel B.
2010-01-01
Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142
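As background on the pooling step, the sketch below implements a standard DerSimonian-Laird random-effects mean, the generic form of the random-effects models mentioned above; the example effect sizes and variances are invented for illustration and are not the study's data.

```python
import numpy as np

def dersimonian_laird(es, var):
    """Random-effects pooled effect size (DerSimonian-Laird estimator).

    es:  per-study effect sizes; var: their sampling variances.
    Returns the pooled mean and its standard error.
    """
    w = 1.0 / var
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed) ** 2)             # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)      # between-study variance
    w_re = 1.0 / (var + tau2)                     # random-effects weights
    mean = np.sum(w_re * es) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mean, se

es = np.array([0.12, 0.20, 0.25, 0.10])           # invented effect sizes
var = np.array([0.004, 0.010, 0.006, 0.003])      # invented variances
print(dersimonian_laird(es, var))
```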
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to the capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not the optimal approach, because of the unnecessary computations on some spots that turn out to have very small weights after solving the optimization problem. GPU-memory writing conflicts occurring at a small beam size also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization schemes in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical use.
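A toy sketch of the coupling described in the Methods, under our own assumptions about the details: each outer iteration allocates the Monte Carlo particle budget in proportion to the current spot weights (so low-weight spots stop consuming simulation time), then refits the weights against the prescription. A nonnegative least-squares step and a random "dose matrix" stand in for the clinical objective and the GPU dose engine.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Stand-in for MC spot dose calculation: the true per-particle dose of spot j
# to voxel i is D[i, j]; "simulating" n particles returns a noisy estimate
# whose variance shrinks as 1/n (all numbers are illustrative).
n_vox, n_spots = 200, 30
D = np.abs(rng.normal(1.0, 0.3, size=(n_vox, n_spots)))
prescription = D @ np.clip(rng.normal(1.0, 0.5, n_spots), 0, None)

def mc_estimate(j, n):
    return D[:, j] + rng.normal(0, 0.3 / np.sqrt(max(n, 1)), n_vox)

# Iterate MC dose calculation and plan optimization, with the particle
# allocation proportional to the latest spot intensities.
w = np.ones(n_spots)
budget = 200_000
for it in range(5):
    alloc = np.maximum((budget * w / w.sum()).astype(int), 100)
    D_hat = np.column_stack([mc_estimate(j, alloc[j]) for j in range(n_spots)])
    w, _ = nnls(D_hat, prescription)              # refit nonnegative spot weights
    print(f"iter {it}: residual {np.linalg.norm(D_hat @ w - prescription):.2f}")
```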
Duckworth, Angela L.; Yeager, David Scott
2016-01-01
There has been perennial interest in personal qualities other than cognitive ability that determine success, including self-control, grit, growth mindset, and many others. Attempts to measure such qualities for the purposes of educational policy and practice, however, are more recent. In this article, we identify serious challenges to doing so. We first address confusion over terminology, including the descriptor “non-cognitive.” We conclude that debate over the optimal name for this broad category of personal qualities obscures substantial agreement about the specific attributes worth measuring. Next, we discuss advantages and limitations of different measures. In particular, we compare self-report questionnaires, teacher-report questionnaires, and performance tasks, using self-control as an illustrative case study to make the general point that each approach is imperfect in its own way. Finally, we discuss how each measure’s imperfections can affect its suitability for program evaluation, accountability, individual diagnosis, and practice improvement. For example, we do not believe any available measure is suitable for between-school accountability judgments. In addition to urging caution among policymakers and practitioners, we highlight medium-term innovations that may make measures of these personal qualities more suitable for educational purposes. PMID:27134288
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
Bus, Sicco A.; Haspels, Rob; Busch-Westbroek, Tessa E.
2011-01-01
OBJECTIVE Therapeutic footwear for diabetic foot patients aims to reduce the risk of ulceration by relieving mechanical pressure on the foot. However, footwear efficacy is generally not assessed in clinical practice. The purpose of this study was to assess the value of in-shoe plantar pressure analysis to evaluate and optimize the pressure-reducing effects of diabetic therapeutic footwear. RESEARCH DESIGN AND METHODS Dynamic in-shoe plantar pressure distribution was measured in 23 neuropathic diabetic foot patients wearing fully customized footwear. Regions of interest (with peak pressure >200 kPa) were selected and targeted for pressure optimization by modifying the shoe or insole. After each of a maximum of three rounds of modifications, the effect on in-shoe plantar pressure was measured. Successful optimization was achieved with a peak pressure reduction of >25% (criterion A) or below an absolute level of 200 kPa (criterion B). RESULTS In 35 defined regions, mean peak pressure was significantly reduced from 303 (SD 77) to 208 (46) kPa after an average 1.6 rounds of footwear modifications (P < 0.001). This result constitutes a 30.2% pressure relief (range 18–50% across regions). All regions were successfully optimized: 16 according to criterion A, 7 to criterion B, and 12 to criterion A and B. Footwear optimization lasted on average 53 min. CONCLUSIONS These findings suggest that in-shoe plantar pressure analysis is an effective and efficient tool to evaluate and guide footwear modifications that significantly reduce pressure in the neuropathic diabetic foot. This result provides an objective approach to instantly improve footwear quality, which should reduce the risk for pressure-related plantar foot ulcers. PMID:21610125
Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah
2014-01-01
This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms, being two representatives of the state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.
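For reference, two of the comparison metrics named above, generational distance (GD) and inverted generational distance (IGD), reduce to the same nearest-neighbour computation with the roles of the two fronts swapped. A minimal sketch with invented fronts:

```python
import numpy as np

def generational_distance(front, reference):
    """Mean distance from each obtained solution to its nearest reference
    (true) Pareto point; calling it with the arguments swapped gives IGD."""
    d = np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

front = np.array([[1.0, 4.1], [2.1, 2.9], [3.9, 1.2]])        # obtained (invented)
reference = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.0], [4.0, 1.0]])
print("GD :", generational_distance(front, reference))
print("IGD:", generational_distance(reference, front))
```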
7 CFR 226.1 - General purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 226.1 Section 226.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS CHILD AND ADULT CARE FOOD PROGRAM General § 226.1 General purpose and scope. This part announces the...
7 CFR 225.1 - General purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 225.1 Section 225.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SUMMER FOOD SERVICE PROGRAM General § 225.1 General purpose and scope. This part establishes the regulations...
Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.
Liu, Meiqin
2009-09-01
This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
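As a hedged illustration of the LMI machinery involved, the sketch below solves the simplest Lyapunov-type LMI feasibility problem (find P > 0 with A^T P + P A < 0) using cvxpy; the paper's actual generalized eigenvalue problem additionally carries the delay and nonlinearity terms and the exponential synchronization-rate objective.

```python
import cvxpy as cp
import numpy as np

# Generic LMI feasibility step of the kind the synchronization analysis
# reduces to (simplified: a plain linear system, no delay terms).
A = np.array([[-2.0, 1.0],
              [0.0, -1.5]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov decrease
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility
prob.solve()
print(prob.status, P.value)
```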
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Liu, B; Kalra, M
Purpose: X-rays from CT scans can increase cancer risk to patients. The Lifetime Attributable Risk of Cancer Incidence for adult patients has been investigated and shown to decrease as patients age. However, a new risk model shows an increasing risk trend for several radiosensitive organs for middle-aged patients. This study investigates the feasibility of a general method for optimizing tube current modulation (TCM) functions to minimize risk by reducing radiation dose to radiosensitive organs of patients. Methods: Organ-based TCM has been investigated in the literature for eye lens dose and breast dose. Adopting the concept of organ-based TCM, this study seeks to find an optimized tube current for minimal total risk to breasts and lungs by reducing dose to these organs. The contributions of each CT view to organ dose are determined through simulations of the CT scan view-by-view using a GPU-based fast Monte Carlo code, ARCHER. A Linear Programming problem is established for tube current optimization, with Monte Carlo results as weighting factors at each view. A pre-determined dose is used as the upper dose boundary, and the tube current of each view is optimized to minimize the total risk. Results: An optimized tube current is found to minimize the total risk to lungs and breasts: compared to fixed current, the risk is reduced by 13%, with breast dose reduced by 38% and lung dose reduced by 7%. The average tube current is maintained during optimization to maintain image quality. In addition, dose to other organs in the chest region is slightly affected, with the relative change in dose smaller than 10%. Conclusion: Optimized tube current plans can be generated to minimize cancer risk to lungs and breasts while maintaining image quality. In the future, various risk models and a greater number of projections per rotation will be simulated on phantoms of different gender and age. National Institutes of Health R01EB015478.
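A minimal sketch of the Linear Programming step described above, with invented per-view dose weights standing in for the Monte Carlo results: scipy's linprog minimizes the risk-weighted organ dose while an equality constraint holds the average tube current (the image-quality proxy) fixed. All numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Per-view organ dose weights (stand-ins for the MC results): dose to
# organ k = sum over views v of W[k, v] * mA[v].
n_views = 120
W = rng.uniform(0.5, 1.5, size=(2, n_views))   # rows: breasts, lungs
risk = np.array([0.38, 0.07])                  # relative risk weights (invented)
c = risk @ W                                   # objective: total risk-weighted dose

A_eq = np.ones((1, n_views))                   # keep the mean tube current fixed
b_eq = [n_views * 1.0]                         # mean normalized current = 1
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.3, 2.0)] * n_views)   # per-view modulation limits
print(res.status, "risk-weighted dose:", res.fun)
```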
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
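For concreteness, here is a plain global-best particle swarm optimizer using the particle and iteration counts quoted above; the objective is a toy sphere function rather than the treatment-planning cost, and the inertia and acceleration coefficients are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(f, dim, n_particles=50, iters=25, lo=-5.0, hi=5.0):
    """Plain global-best particle swarm optimization (illustrative sketch)."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (personal best) + social pull (global best)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

sphere = lambda z: float(np.sum(z ** 2))       # toy objective
print(pso(sphere, dim=10))
```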
Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen
2018-02-01
The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period, so that prediction and adaptation can commence soon after treatment begins, and requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a combination of a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery) was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of seven trajectories used for the training of the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for leave-one-out cross-validation purposes. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained from Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm for all the traces (0.76 ± 0.34 mm, Group 1 and 0.63 ± 0.36 mm, Group 2) is comparable to previously published results. Prediction errors are mainly due to the irregular periodicities between cycles. Since the errors from Groups 1 and 2 are within the same range, this demonstrates that the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need for optimizing individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
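A small sketch of the ingredients described above (a 35-sample input window, 20 hidden neurons, and a roughly 0.65 s prediction horizon at 7.5 Hz), trained here on a synthetic breathing-like trace rather than CyberKnife data; scikit-learn's MLPRegressor stands in for the authors' 3-layer perceptron and batch-gradient-descent training.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic breathing-like trace sampled at 7.5 Hz (illustrative stand-in
# for a CyberKnife tumor trajectory).
fs, horizon = 7.5, 5                      # 5 samples ~ 0.67 s lookahead
t = np.arange(0, 60, 1 / fs)
trace = 8 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.3, t.size)

win = 35                                  # input window, as in the abstract
X = np.array([trace[i:i + win] for i in range(t.size - win - horizon)])
y = trace[win + horizon:]                 # position `horizon` samples ahead

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
split = int(0.7 * len(X))                 # train on the early part of the trace
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])
print("MAE (mm):", np.mean(np.abs(pred - y[split:])))
```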
Burke, F J Trevor; Crisp, Russell J; James, A; Mackenzie, L; Pal, A; Sands, P; Thompson, O; Palin, W M
2011-07-01
A novel resin composite system, Filtek Silorane (3M ESPE), with reduced polymerization shrinkage has recently been introduced. The resin contains an oxygen-containing ring molecule ('oxirane') and cures via a cationic ring-opening reaction rather than the linear chain reaction associated with conventional methacrylates, resulting in a volumetric shrinkage of ∼1%. The purpose of this study was to review the literature on a recently introduced resin composite material, Filtek Silorane, and evaluate the clinical outcome of restorations formed in this material. Filtek Silorane restorations were placed where indicated in loadbearing situations in the posterior teeth of patients attending five UK dental practices. These were evaluated, after two years, using modified USPHS criteria. A total of 100 restorations, of mean age 25.7 months, in 64 patients, were examined, comprising 30 Class I and 70 Class II. All restorations were found to be present and intact, and there was no secondary caries. Ninety-seven per cent of the restorations were rated optimal for anatomic form, 84% were rated optimal for marginal integrity, 77% were rated optimal for marginal discoloration, 99% were rated optimal for color match, and 93% of the restorations were rated optimal for surface quality. No restoration was awarded a "fail" grade. No staining of the restoration surfaces was recorded and no patients complained of post-operative sensitivity. It is concluded that, within the limitations of the study, the two year assessment of 100 restorations placed in Filtek Silorane has indicated satisfactory clinical performance. Copyright © 2011. Published by Elsevier Ltd.
Optimal Design for Informative Protocols in Xenograft Tumor Growth Inhibition Experiments in Mice.
Lestini, Giulia; Mentré, France; Magni, Paolo
2016-09-01
Tumor growth inhibition (TGI) models are increasingly used during preclinical drug development in oncology for the in vivo evaluation of antitumor effect. Tumor sizes are measured in xenografted mice, often only during and shortly after treatment, thus preventing correct identification of some TGI model parameters. Our aims were (i) to evaluate the importance of including measurements during tumor regrowth and (ii) to investigate the proportions of mice included in each arm. For these purposes, optimal design theory based on the Fisher information matrix implemented in PFIM4.0 was applied. Published xenograft experiments, involving different drugs, schedules, and cell lines, were used to help optimize experimental settings and parameters using the Simeoni TGI model. For each experiment, a two-arm design, i.e., control versus treatment, was optimized with or without the constraint of not sampling during tumor regrowth, i.e., "short" and "long" studies, respectively. In long studies, measurements could be taken up to 6 g of tumor weight, whereas in short studies the experiment was stopped 3 days after the end of treatment. Predicted relative standard errors were smaller in long studies than in corresponding short studies. Some optimal measurement times were located in the regrowth phase, highlighting the importance of continuing the experiment after the end of treatment. In the four-arm designs, the results showed that the proportions of control and treated mice can differ. To conclude, making measurements during tumor regrowth should become a general rule for informative preclinical studies in oncology, especially when a delayed drug effect is suspected.
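To make the design-evaluation step concrete, the sketch below computes predicted relative standard errors (RSE) from a finite-difference Fisher information matrix for a "short" design (stopping shortly after treatment) versus a "long" design (sampling into regrowth). The decay-plus-regrowth curve is an invented stand-in for the Simeoni TGI model, and all parameter and design values are illustrative; PFIM performs the population-design version of this calculation.

```python
import numpy as np

def model(t, th):
    """Toy treated-tumor curve: decaying treated fraction plus regrowing fraction."""
    w0, k, wr, lam = th
    return w0 * np.exp(-k * t) + wr * np.exp(lam * t)

def fim_rse(times, th, sigma=0.05):
    """Predicted RSE (%) from J'J/sigma^2 with central-difference sensitivities."""
    J = np.empty((len(times), len(th)))
    for j in range(len(th)):
        h = 1e-6 * max(abs(th[j]), 1.0)
        tp, tm = list(th), list(th)
        tp[j] += h; tm[j] -= h
        J[:, j] = (model(times, tp) - model(times, tm)) / (2 * h)
    fim = J.T @ J / sigma ** 2
    se = np.sqrt(np.diag(np.linalg.inv(fim)))
    return 100 * se / np.abs(th)

th = [2.0, 0.3, 0.01, 0.25]                 # initial g, kill /day, regrowth seed g, /day
short = np.array([0, 2, 4, 7, 10, 13])      # stops 3 days after end of treatment
long_ = np.array([0, 2, 4, 7, 13, 20, 26])  # follows regrowth up to ~6 g
print("short design RSE%:", fim_rse(short, th).round(1))
print("long  design RSE%:", fim_rse(long_, th).round(1))
```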
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t, unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
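The backward-induction structure described above can be sketched compactly. The toy below discretizes population and environment states, steps the environment with a Markov chain, and derives a harvest policy by stochastic dynamic programming; the dynamics and numbers are invented placeholders, not the Mallard model.

```python
import numpy as np

pop = np.linspace(2, 12, 11)          # breeding population states (millions)
env = np.array([0.8, 1.0, 1.2])       # environment index (pond abundance)
P_env = np.array([[.60, .30, .10],    # Markov transitions between env states
                  [.25, .50, .25],
                  [.10, .30, .60]])
harvest = np.linspace(0.0, 0.4, 9)    # candidate harvest rates

def next_pop(n, e, h):
    """Additive-mortality toy dynamics: recruitment, natural mortality, harvest."""
    n1 = n * (1.0 + 0.35 * e) * (1 - 0.25) * (1 - h)
    return np.clip(n1, pop[0], pop[-1])

T = 25
V = np.zeros((len(pop), len(env)))                   # terminal value
policy = np.zeros((T, len(pop), len(env)))
for t in reversed(range(T)):                          # backward induction
    Vn = np.empty_like(V)
    for i, n in enumerate(pop):
        for j, e in enumerate(env):
            best, best_h = -np.inf, 0.0
            for h in harvest:
                k = np.abs(pop - next_pop(n, e, h)).argmin()  # nearest grid state
                val = h * n + 0.95 * (P_env[j] @ V[k])        # harvest + discounted EV
                if val > best:
                    best, best_h = val, h
            Vn[i, j], policy[t, i, j] = best, best_h
    V = Vn
print(policy[0])   # optimal first-year harvest rate by (population, environment)
```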
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
Code of Federal Regulations, 2010 CFR
2010-07-01
... buildings, including land incidental thereto, suitable for the general use of Government agencies, including...) Special-purpose space is space in buildings, including land incidental thereto, wholly or predominantly utilized for the special purposes of an agency, and not generally suitable for general-purpose use...
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system
NASA Astrophysics Data System (ADS)
Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei
2017-08-01
The derivation of a joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of a multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize the maximum water deficit and maximize water supply reliability. The water-supply system of the Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and the aggregated standard operating policy for both water supply and hydropower generation, due to the use of the hedging mechanism and effective coordination among multiple objectives.
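For reference, a two-point hedging rule of the kind the aggregated model builds on can be written in a few lines; the trigger and slope below are the sort of parameters a simulation-based optimizer would tune, and the numbers are placeholders.

```python
def hedged_release(available, demand, trigger, slope):
    """available: storage + inflow; trigger: hedging threshold; 0 < slope <= 1."""
    if available >= trigger:
        return min(demand, available)   # normal operation: meet demand in full
    return slope * available            # hedging: ration now to reduce later deficits

# With demand 100, trigger 150 and slope 0.6, an available volume of 120
# yields a rationed release of 72 instead of the full 100:
print(hedged_release(120, 100, 150, 0.6))
```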
An approach to modeling and optimization of integrated renewable energy system (ires)
NASA Astrophysics Data System (ADS)
Maheshwari, Zeel
The purpose of this study was to cost-optimize the electrical part of IRES (Integrated Renewable Energy Systems) using HOMER and to maximize the utilization of resources using MATLAB programming. IRES is an effective and viable strategy that can be employed to harness renewable energy resources to energize remote rural areas of developing countries. The resource-need matching, which is the basis for IRES, makes it possible to provide energy in an efficient and cost-effective manner. Modeling and optimization of IRES for a selected study area makes IRES more advantageous when compared to hybrid concepts. A remote rural area with a population of 700 in 120 households and 450 cattle is considered as an example for cost analysis and optimization. Mathematical models for key components of IRES such as the biogas generator, hydropower generator, wind turbine, PV system and battery banks are developed. A discussion of the size of the water reservoir required is also presented. Modeling of IRES on the basis of need-to-resource and resource-to-need matching is pursued to help make optimum use of resources for the needs. Fixed resources such as biogas and water are used in prioritized order, whereas movable resources such as wind and solar can be used simultaneously for different priorities. IRES is cost-optimized for electricity demand using the HOMER software developed by NREL (National Renewable Energy Laboratory). HOMER optimizes the configuration for electrical demand only and does not consider other demands such as biogas for cooking and water for domestic and irrigation purposes. Hence an optimization program based on the need-resource modeling of IRES is performed in MATLAB, optimizing the utilization of resources for several needs. Results obtained from MATLAB clearly show that the available resources can fulfill the demand of the rural areas. Introduction of IRES in rural communities has many socio-economic implications. It brings about improvement in the living environment and community welfare by supplying basic needs such as biogas for cooking, water for domestic and irrigation purposes, and electrical energy for lighting, communication, cold storage, educational and small-scale industrial purposes.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point-design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions that are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high-speed civil transport. Fundamental goals of the methodology, then, are to introduce higher-fidelity disciplinary analyses to conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
Hematopoietic stem cell transplantation for acquired aplastic anemia
Georges, George E.; Storb, Rainer
2016-01-01
Purpose of review There has been steady improvement in outcomes with allogeneic bone marrow transplantation (BMT) for severe aplastic anemia (SAA), due to progress in optimization of the conditioning regimens, donor hematopoietic cell source and supportive care. Here we review recently published data that highlight the improvements and current issues in the treatment of SAA. Recent findings Approximately one-third of AA patients treated with immune suppression therapy (IST) have acquired mutations in myeloid cancer candidate genes. Because of the greater probability for eventual failure of IST, human leukocyte antigen (HLA)-matched sibling donor BMT is the first-line of treatment for SAA. HLA-matched unrelated donor (URD) BMT is generally recommended for patients who have failed IST. However, in younger patients for whom a 10/10-HLA-allele matched URD can be rapidly identified, there is a strong rationale to proceed with URD BMT as first-line therapy. HLA-haploidentical BMT using post-transplant cyclophosphamide (PT-CY) conditioning regimens, is now a reasonable second-line treatment for patients who failed IST. Summary Improved outcomes have led to an increased first-line role of BMT for treatment of SAA. The optimal cell source from an HLA-matched donor is bone marrow. Additional studies are needed to determine the optimal conditioning regimen for HLA-haploidentical donors. PMID:27607445
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume and EUD-based objective functions, Newton's method far outperforms the other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but with different objective function parameters, the solution frequently gets trapped in local minima. We found that an initial intensity distribution obtained from IMRT optimization utilizing objective function parameters which favor a specific anatomic structure would lead to a local minimum corresponding to that structure. Our results indicate that, among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity-modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
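The speed advantage of diagonally scaled Newton steps over steepest descent can be seen on a toy problem. The sketch below minimizes an ill-conditioned quadratic standing in for a dose-based objective; it illustrates the scaling effect only and is not the paper's IMRT objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([1.0, 10.0, 100.0, 1000.0])   # eigenvalue spread -> ill-conditioning
A, b = np.diag(d), rng.normal(size=4)

def obj(x):  return 0.5 * x @ A @ x - b @ x
def grad(x): return A @ x - b

def run(step_fn, iters=200):
    x = np.zeros(4)
    for _ in range(iters):
        x = x - step_fn(x)
    return obj(x)

sd_step     = lambda x: 1.8e-3 * grad(x)       # safe fixed step, < 2/max eigenvalue
newton_step = lambda x: grad(x) / np.diag(A)   # Newton with diagonal Hessian

print("steepest descent :", run(sd_step))
print("diagonal Newton  :", run(newton_step))
print("true optimum     :", obj(np.linalg.solve(A, b)))
```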
Evolutionary and biological metaphors for engineering design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakiela, M.
1994-12-31
Since computing became generally available, there has been strong interest in using computers to assist and automate engineering design processes. Specifically, for design optimization and automation, nonlinear programming and artificial intelligence techniques have been extensively studied. New computational techniques, based upon the natural processes of evolution, adaptation, and learning, are showing promise because of their generality and robustness. This presentation will describe the use of two such techniques, genetic algorithms and classifier systems, for a variety of engineering design problems. Structural topology optimization, meshing, and general engineering optimization are shown as example applications.
SU-E-I-23: A General KV Constrained Optimization of CNR for CT Abdominal Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, V; Zhang, J
Purpose: While tube current modulation has been well accepted for CT dose reduction, kV adjustment in clinical settings is still at an early stage, mainly due to the limited kV options of most current CT scanners. kV adjustment can potentially reduce radiation dose and optimize image quality. This study aims to optimize CT abdominal image acquisition under the assumption of a continuously adjustable kV, with the goal of providing the best contrast-to-noise ratio (CNR). Methods: For a given dose (CTDIvol) level, the CNRs at different kVs and pitches were measured with an ACR GAMMEX phantom. The phantom was scanned in a Siemens Sensation 64 scanner and a GE VCT 64 scanner. A constrained mathematical optimization was used to find the kV which led to the highest CNR for the anatomy and pitch setting. Parametric equations were obtained from polynomial fitting of plots of kV vs CNR. A suitable constraint region for the optimization was chosen. Subsequent optimization yielded a peak CNR at a particular kV for different collimations and pitch settings. Results: The constrained mathematical optimization approach yields kVs of 114.83 and 113.46, with CNRs of 1.27 and 1.11 at pitches of 1.2 and 1.4, respectively, for the Siemens Sensation 64 scanner with a collimation of 32 × 0.625 mm. An optimized kV of 134.25 with a CNR of 1.51 is obtained for a GE VCT 64-slice scanner with a collimation of 32 × 0.625 mm and a pitch of 0.969. At a pitch of 0.516 and 32 × 0.625 mm collimation, an optimized kV of 133.75 and a CNR of 1.14 were found for the GE VCT 64-slice scanner. Conclusion: CNR in CT image acquisition can be further optimized with a continuous kV option instead of the current discrete or fixed kV settings. A continuous kV option is key for individualized CT protocols.
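The fit-then-optimize step generalizes directly. The sketch below fits a polynomial to CNR-versus-kV points and maximizes it over a bounded kV interval; the sample CNR values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

kv  = np.array([80, 100, 120, 140])          # available discrete kV settings
cnr = np.array([0.85, 1.10, 1.26, 1.22])     # measured CNR at fixed CTDIvol (made up)

coeff = np.polyfit(kv, cnr, 2)               # parametric CNR(kV) curve
neg_cnr = lambda v: -np.polyval(coeff, v)    # minimize the negative to maximize CNR

res = minimize_scalar(neg_cnr, bounds=(80, 140), method="bounded")
print(f"optimal kV ~ {res.x:.1f}, predicted CNR ~ {-res.fun:.2f}")
```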
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dan; Ruan, Dan; O’Connor, Daniel
Purpose: To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on a discrete rectangular representation. Methods: A total of seven patients—two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung—were included. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators, with N leaves in each collimator bank, where N = 1, 2, or 4; the SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle–Pock algorithm, a first-order primal–dual algorithm. Performance of RAO was compared to conventional two-step IMRT optimization including fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS) using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Results: Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures. On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient than the N = 1 SOC design to deliver. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. Conclusions: The DAO and dose domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. The SOC IMRT delivery efficiency can be significantly improved by increasing the leaf number, which is still significantly lower than the number of leaves in a typical MLC.
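The objective structure here (an L2 data-fidelity term plus an L1 sparsity term on rectangular-basis coefficients) can be illustrated with a simpler solver. The sketch below uses proximal gradient (ISTA) in place of the Chambolle–Pock primal-dual algorithm the authors use, with a random stand-in for the dose-deposition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((60, 40))                   # voxels x rectangular apertures (placeholder)
d = A @ (rng.random(40) * (rng.random(40) < 0.2))   # dose from a few true apertures

lam = 0.5                                  # L1 sparsity weight
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(40)
for _ in range(500):
    g = A.T @ (A @ x - d)                              # gradient of 0.5*||Ax - d||^2
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold (L1 prox)
    x = np.maximum(x, 0)                               # aperture weights are nonnegative
print("nonzero apertures:", np.count_nonzero(x > 1e-6), "of", x.size)
```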
Development of a fast and feasible spectrum modeling technique for flattening filter free beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Woong; Bush, Karl; Mok, Ed
Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined over a range of field sizes to account for the variation of the scattered-photon contribution with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of the photon fluence became the optimizing free parameters. A line search method was used for the optimization, and first-order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first-order derivatives from the functional form were found to speed up the computation by up to a factor of 20 compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
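For orientation, a Jacobi-preconditioned conjugate gradient iteration of the kind the UPCG solver offers looks as follows in NumPy on the CPU; on a GPGPU it is exactly these matrix-vector and vector operations that are offloaded. The 1-D Laplacian test matrix is a stand-in, not a MODFLOW system.

```python
import numpy as np

n = 1000
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))        # symmetric positive definite test matrix
b = np.ones(n)
Minv = 1.0 / np.diag(A)                     # Jacobi preconditioner: inverse diagonal

x = np.zeros(n)
r = b - A @ x
z = Minv * r
p = z.copy()
rz = r @ z
for it in range(5000):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):   # relative residual criterion
        break
    z = Minv * r
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
print(f"converged in {it + 1} iterations, residual {np.linalg.norm(r):.2e}")
```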
Optimal control of anthracnose using mixed strategies.
Fotsa Mbogne, David Jaures; Thron, Christopher
2015-11-01
In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.
Decision Support Model for Optimal Management of Coastal Gate
NASA Astrophysics Data System (ADS)
Ditthakit, Pakorn; Chittaladakorn, Suwatana
2010-05-01
Coastal areas are intensely settled by human beings owing to the fertility of their natural resources. At present, however, those areas face water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely adopted in several countries, but it requires a plan for operating the gates suitably. Coastal gate operation is a complicated task, usually involving the management of multiple purposes that generally conflict with one another. This paper delineates the methodology and theories used to develop a decision support model for coastal gate operation scheduling. The developed model couples a simulation model with an optimization model. A weighting optimization technique based on Differential Evolution (DE) was selected for solving the multiple-objective problem. The hydrodynamic and water quality models were repeatedly invoked while searching for the optimal gate operations. In addition, two forecasting models, an autoregressive (AR) model and a harmonic analysis (HA) model, were applied to forecast water levels and tide levels, respectively. To demonstrate the applicability of the developed model, it was applied to plan operations for a hypothetical version of the Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in southern Thailand. It was found that the proposed model could satisfactorily assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
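The weighting technique can be sketched generically: scalarize the two objectives with a weight w, solve each weighted problem with differential evolution, and sweep w to trace an approximate Pareto set. The toy "simulation" below maps two gate-rule parameters to two objectives and is purely illustrative, not the paper's hydrodynamic model.

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate(rule):
    """Placeholder: returns (max water deficit, unreliability) for a gate rule."""
    a, b = rule
    deficit = (a - 0.7) ** 2 + 0.1 * b
    unreliability = (b - 0.4) ** 2 + 0.1 * a
    return deficit, unreliability

pareto = []
for w in np.linspace(0.05, 0.95, 7):        # sweep the objective weight
    f = lambda r: w * simulate(r)[0] + (1 - w) * simulate(r)[1]
    res = differential_evolution(f, bounds=[(0, 1), (0, 1)], seed=0, tol=1e-8)
    pareto.append(simulate(res.x))
for d, u in pareto:
    print(f"deficit {d:.3f}  unreliability {u:.3f}")
```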
Artifacts in digital coincidence timing
Moses, W. W.; Peng, Q.
2014-10-16
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time-to-digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and to select the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the 'optimal' method. The purpose of this note is to demonstrate the origin of this artifact and to suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization.
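The narrowing artifact is easy to reproduce numerically. In the sketch below, two detector times with zero true difference pass through a TDC transfer function with sinusoidal integral nonlinearity (local slope below one near the operating phase), and the apparent coincidence width comes out well below the true jitter; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100_000, 0.100                  # 100 ps per-detector jitter, times in ns
t1 = rng.normal(0.0, sigma, n)             # both channels near the same clock phase
t2 = rng.normal(0.0, sigma, n)             # true time difference is zero

def tdc(t, a=0.10):
    """Digitized time with sinusoidal integral nonlinearity (period = 1 ns clock)."""
    return t - a * np.sin(2 * np.pi * t)   # local slope < 1 around phase 0

true_width = (t1 - t2).std()
meas_width = (tdc(t1) - tdc(t2)).std()
print(f"true coincidence width    : {true_width * 1e3:.0f} ps")
print(f"apparent (digitized) width: {meas_width * 1e3:.0f} ps  <- artificially narrow")
```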
Clustering methods for the optimization of atomic cluster structure
NASA Astrophysics Data System (ADS)
Bagattini, Francesco; Schoen, Fabio; Tigli, Luca
2018-04-01
In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge number of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
An ACOR-Based Multi-Objective WSN Deployment Example for Lunar Surveying.
López-Matencio, Pablo
2016-02-06
Wireless sensor networks (WSNs) can gather in situ real data measurements and work unattended for long periods, even in remote, rough places. A critical aspect of WSN design is node placement, as this determines sensing capacities, network connectivity, network lifetime and, in short, the whole operational capabilities of the WSN. This paper proposes and studies a new node placement algorithm that focuses on these aspects. As a motivating example, we consider a network designed to describe the distribution of helium-3 (³He), a potential enabling element for fusion reactors, on the Moon. ³He is abundant on the Moon's surface, and knowledge of its distribution is essential for future harvesting purposes. Previous data are inconclusive, and there is general agreement that on-site measurements, obtained over a long time period, are necessary to better understand the mechanisms involved in the distribution of this element on the Moon. Although a mission of this type is extremely complex, it allows us to illustrate the main challenges involved in a multi-objective WSN placement problem, i.e., selection of optimal observation sites and maximization of the lifetime of the network. To tackle the optimization, we use a recent adaptation of the ant colony optimization (ACOR) metaheuristic, extended to continuous domains. Solutions are provided in the form of a Pareto frontier that shows the optimal equilibria. Moreover, we compared our scheme with the four-directional placement (FDP) heuristic, which was outperformed in all cases.
7 CFR 227.1 - General purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS NUTRITION EDUCATION AND TRAINING PROGRAM General § 227.1 General purpose and scope. The purpose of these regulations is to implement section 19 of the Child Nutrition Act...
21 CFR 864.4010 - General purpose reagent.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false General purpose reagent. 864.4010 Section 864.4010 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES HEMATOLOGY AND PATHOLOGY DEVICES Specimen Preparation Reagents § 864.4010 General purpose...
Elementary Students' Accounts of Optimal Challenge in Physical Education
ERIC Educational Resources Information Center
Mandigo, James L.; Holt, Nicholas L.
2006-01-01
The purpose of this study was to examine elementary school students' accounts of optimal challenge. Twenty-seven children (aged 7-9 years) participated in semi-structured interviews during which they were shown a video-recording of their participation in a physical education class and invited to describe their experiences of optimally challenging…
Numerical approach of collision avoidance and optimal control on robotic manipulators
NASA Technical Reports Server (NTRS)
Wang, Jyhshing Jack
1990-01-01
Collision-free optimal motion and trajectory planning for robotic manipulators is solved by a sequential gradient restoration algorithm. Numerical examples for a two degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and obstacle avoidance scheme. The obstacle is deliberately placed midway along, or even further inward of, the optimal trajectory previously obtained without an obstacle. For the minimum-time objective, the trajectory grazes the obstacle, and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields, implemented as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance indices, and to mobile robots as well. Since this method generates the optimum solution based on the Pontryagin Extremum Principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization
NASA Technical Reports Server (NTRS)
Witzberger, Kevin E.; Zeiler, Tom
2012-01-01
This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF), and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable to general trajectory optimization problems. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
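The transcription idea can be shown on a toy problem. The sketch below applies trapezoidal collocation to a minimum-effort double-integrator transfer (not the TLI dynamics): states and controls at the nodes become NLP variables, "defect" constraints enforce the dynamics, and SLSQP stands in for SNOPT.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 21, 2.0                       # collocation nodes, fixed final time
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]          # position, velocity, control

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])   # x' = v (trapezoidal rule)
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])   # v' = u (trapezoidal rule)
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]              # rest-to-rest transfer, 0 -> 1
    return np.concatenate([dx, dv, bc])

cost = lambda z: h * np.sum(unpack(z)[2] ** 2)         # integral of u^2

res = minimize(cost, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 300})
x, v, u = unpack(res.x)
print("final state:", round(x[-1], 4), round(v[-1], 4), " cost:", round(res.fun, 4))
```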
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
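A minimal sketch of the combined procedure, on synthetic flows: Box-Cox-transform both records, fit a variance-maintaining MOVE relationship (matching means and standard deviations rather than a least-squares slope, in the spirit of MOVE.1) over the concurrent period, then back-transform the extension. All flow data below are synthetic placeholders.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
long_gage = np.exp(rng.normal(3.0, 0.8, 600))                  # long-record flows
short_gage = 2.0 * long_gage[:300] ** 0.9 * np.exp(rng.normal(0, 0.1, 300))

x, lam_x = stats.boxcox(long_gage[:300])     # transform the concurrent period
y, lam_y = stats.boxcox(short_gage)          # (lambdas fitted by max likelihood)

# MOVE-style extension: match mean and standard deviation, not the OLS slope.
slope = np.sign(np.corrcoef(x, y)[0, 1]) * y.std(ddof=1) / x.std(ddof=1)
x_new = stats.boxcox(long_gage[300:], lmbda=lam_x)
y_new = y.mean() + slope * (x_new - x.mean())

extended = inv_boxcox(y_new, lam_y)          # back to flow units
print("log-variance, extension vs observed:",
      round(np.log(extended).var(), 3), round(np.log(short_gage).var(), 3))
```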
Automated vehicle guidance using discrete reference markers. [road surface steering techniques
NASA Technical Reports Server (NTRS)
Johnston, A. R.; Assefi, T.; Lai, J. Y.
1979-01-01
Techniques for providing steering control for an automated vehicle using discrete reference markers fixed to the road surface are investigated analytically. Either optical or magnetic approaches can be used for the sensor, which generates a measurement of the lateral offset of the vehicle path at each marker to form the basic data for steering control. Possible mechanizations of sensor and controller are outlined. Techniques for handling certain anomalous conditions, such as a missing marker or loss of acquisition, and special maneuvers, such as u-turns and switching, are briefly discussed. A general analysis of the vehicle dynamics and the discrete control system is presented using the state variable formulation. Noise in both the sensor measurement and the steering servo is accounted for. An optimal controller is simulated on a general purpose computer, and the resulting plots of vehicle path are presented. Parameters representing a small multipassenger tram were selected, and the simulation runs show response to an erroneous sensor measurement and acquisition following large initial path errors.
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm presented and verified as a basis for the solution of trajectory optimization problems improves the performance index while reducing violations of the path and terminal equality constraints. The algorithm is conveniently divided into two phases: the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
NASA Astrophysics Data System (ADS)
Miyauchi, T.; Machimura, T.
2013-12-01
In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon at each of the eleven years. In an alternative run, errors at the last year (at the field survey) were weighted for priority. We compared several gradient-based global optimization methods in Dakota, starting from the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, as well as the SLA, had high contributions to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities at the last year were 0.22, 1.81 and 0.86 kgC m⁻², respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m⁻², respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m⁻², respectively. The coliny global optimization method gave better fitness than the efficient global and ncsu direct methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and a lower SLA than the defaults, consistent with the general water physiological response in a dry climate. The simulation using the weighted objective function produced results closer to the measurements at the last year, at the cost of lower fitness during the previous years.
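The calibration loop itself is generic. In the sketch below, a toy two-parameter biomass model stands in for Biome-BGC and scipy stands in for Dakota; the objective is the sum of relative errors over the leaf and wood pools, with an optional weight on the final (survey) year. All equations and numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

years = np.arange(1, 12)
measured = {"leaf": 0.02 * years,            # kgC m-2, invented "survey" data
            "wood": 0.022 * years ** 1.5}

def simulate(params):
    alloc_wood, sla = params                 # stand-ins for allocation and SLA
    leaf = (1 - alloc_wood) * 0.5 * years / sla
    wood = alloc_wood * 0.033 * years ** 1.5 / 0.3
    return {"leaf": leaf, "wood": wood}

def objective(params, weight_last=1.0):
    sim = simulate(params)
    total = 0.0
    for pool, obs in measured.items():
        rel = np.abs(sim[pool] - obs) / obs  # relative error at each year
        rel[-1] *= weight_last               # optionally prioritize the survey year
        total += rel.sum()
    return total

res = minimize(objective, x0=[0.5, 12.0], args=(5.0,),
               bounds=[(0.1, 0.9), (5.0, 40.0)], method="L-BFGS-B")
print("fitted parameters:", res.x.round(3), " objective:", round(res.fun, 4))
```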
Saat, Mohd Rapik; Barkan, Christopher P L
2011-05-15
North American railways offer safe and generally the most economical means of long-distance transport of hazardous materials. Nevertheless, in the event of a train accident, releases of these materials can pose substantial risk to human health, property or the environment. The majority of railway shipments of hazardous materials are in tank cars. Improving the safety design of these cars to make them more robust in accidents generally increases their weight, thereby reducing their capacity and consequent transportation efficiency. This paper presents a generalized tank car safety design optimization model that addresses this tradeoff. The optimization model enables evaluation of each element of tank car safety design, independently and in combination with the others. We demonstrate the optimization model by identifying a set of Pareto-optimal solutions for a baseline tank car design in a bicriteria decision problem. This model provides a quantitative framework for a rational decision-making process involving tank car safety design enhancements to reduce the risk of transporting hazardous materials. Copyright © 2011 Elsevier B.V. All rights reserved.
1980-06-01
Computerized General Purpose Information Management System (SELGEM) Applied to Medically Important Arthropods (Diptera: Culicidae). Annual Report, 1 September 1979-30 May 1980. Erwin, Terry L.
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.
Application of a distributed network in computational fluid dynamic simulations
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish
1994-01-01
A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using the parallel virtual machine (PVM) software and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in the message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques in the automotive sector, which fuse information from different sensors to build a robust perception of the environment, are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive computing performance they require, which embedded automotive systems struggle to supply. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter: one for General Purpose Graphics Processing Units (GPGPU) and the other for Field-Programmable Gate Arrays (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
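The full Bayesian Occupancy Filter also estimates per-cell velocity distributions; as a much-simplified illustration of the Bayesian grid update at its core, here is a per-cell log-odds occupancy update with invented values:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def update(log_odds, p_meas):
    """Fuse one scan into the grid; p_meas comes from an inverse sensor model."""
    return log_odds + logit(p_meas)

grid = np.zeros((4, 4))                  # log-odds 0 means p = 0.5 (unknown)
scan = np.full((4, 4), 0.3)              # hypothetical inverse sensor model output
scan[1, 2] = 0.9                         # one cell looks occupied
grid = update(grid, scan)
prob = 1.0 / (1.0 + np.exp(-grid))       # back to occupancy probabilities
```

The independence of the per-cell updates is what makes this family of filters a good fit for massively parallel GPGPU and FPGA implementations.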
Therapeutic Substance Abuse Treatment for Incarcerated Women
Finfgeld-Connett, Deborah; Johnson, E. Diane
2011-01-01
The purpose of this qualitative systematic review was to explicate attributes of optimal therapeutic strategies for treating incarcerated women who have a history of substance abuse. An expansive search of electronic databases for qualitative research reports relating to substance abuse treatment for incarcerated women was conducted. Nine qualitative research reports comprised the sample for this review. Findings from these reports were extracted, placed into a data analysis matrix, coded, and categorized. Memos were written, and strategies for treating incarcerated women with alcohol problems were identified. Therapeutic effects of treatment programs for incarcerated women with substance-abuse problems appear to be enhanced when trust-based relationships are established, individualized and just care is provided, and treatment facilities are separate from the general prison environment. PMID:21771929
Finite Element Analysis of a NASA National Transonic Facility Wind Tunnel Balance
NASA Technical Reports Server (NTRS)
Lindell, Michael C.
1996-01-01
This paper presents the results of finite element analyses and correlation studies performed on a NASA National Transonic Facility (NTF) Wind Tunnel balance. In the past NASA has relied primarily on classical hand analyses, coupled with relatively large safety factors, for predicting maximum stresses in wind tunnel balances. Now, with the significant advancements in computer technology and sophistication of general purpose analysis codes, it is more reasonable to pursue finite element analyses of these balances. The correlation studies of the present analyses show very good agreement between the analyses and data measured with strain gages and therefore the studies give higher confidence for using finite element analyses to analyze and optimize balance designs in the future.
Finite Element Analysis of a NASA National Transonic Facility Wind Tunnel Balance
NASA Technical Reports Server (NTRS)
Lindell, Michael C. (Editor)
1999-01-01
This paper presents the results of finite element analyses and correlation studies performed on a NASA National Transonic Facility (NTF) Wind Tunnel balance. In the past NASA has relied primarily on classical hand analyses, coupled with relatively large safety factors, for predicting maximum stresses in wind tunnel balances. Now, with the significant advancements in computer technology and sophistication of general purpose analysis codes, it is more reasonable to pursue finite element analyses of these balances. The correlation studies of the present analyses show very good agreement between the analyses and data measured with strain gages and therefore the studies give higher confidence for using finite element analyses to analyze and optimize balance designs in the future.
Studies of HZE particle interactions and transport for space radiation protection purposes
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn
1987-01-01
The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.
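As context for task (1), a common schematic form of the one-dimensional (straight-ahead, continuous-slowing-down) HZE transport equation is shown below; this is a textbook form, not necessarily the exact formulation used in the cited work:

```latex
\left[\frac{\partial}{\partial x}
      - \frac{\partial}{\partial E}\,S_j(E)
      + \sigma_j(E)\right]\phi_j(x,E)
  \;=\; \sum_{k}\int_{E}^{\infty}\sigma_{jk}(E,E')\,\phi_k(x,E')\,dE'
```

Here \phi_j is the flux of ion species j at depth x and energy E, S_j is its stopping power, \sigma_j its absorption cross section, and \sigma_{jk} the fragmentation cross sections coupling species k into species j, i.e., the interaction inputs of task (2).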
Deist, T M; Gorissen, B L
2016-02-07
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
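A compact sketch of the kind of annealing loop described, with the dose computed as a matrix product over a precomputed dose-rate matrix; the matrix, coverage objective and cooling schedule are illustrative, and the dose-volume constraints on the organs at risk are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((200, 30))        # hypothetical dose-rate matrix: voxels x dwell positions
cur = np.ones(30)                # initial dwell times

def score(t):
    dose = D @ t                               # fast matrix multiplication
    return float(np.mean(dose >= 15.0))        # hypothetical target coverage

T = 1.0
cur_s = score(cur)
best, best_s = cur.copy(), cur_s
for step in range(5000):
    cand = cur.copy()
    i = rng.integers(len(cand))
    cand[i] = max(0.0, cand[i] + rng.normal(0.0, 0.1))        # perturb one dwell time
    s = score(cand)
    if s >= cur_s or rng.random() < np.exp((s - cur_s) / T):  # Metropolis acceptance
        cur, cur_s = cand, s
        if s > best_s:
            best, best_s = cand.copy(), s
    T *= 0.999                                 # geometric cooling
```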
Modeling Payload Stowage Impacts on Fire Risks On-Board the International Space Station
NASA Technical Reports Server (NTRS)
Anton, Kellie E.; Brown, Patrick F.
2010-01-01
The purpose of this presentation is to determine the risks of fire on board the ISS due to non-standard stowage. ISS stowage is constantly being reexamined for optimality. Non-standard stowage involves stowing items outside of rack drawers; the associated fire risk is a key concern and is heavily mitigated. A methodology is needed to account for, and capture, the fire risk due to non-standard stowage. The contents include: 1) Fire Risk Background; 2) General Assumptions; 3) Modeling Techniques; 4) Event Sequence Diagram (ESD); 5) Qualitative Fire Analysis; 6) Sample Qualitative Results for Fire Risk; 7) Qualitative Stowage Analysis; 8) Sample Qualitative Results for Non-Standard Stowage; and 9) Quantitative Analysis Basic Event Data.
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
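For reference, the symmetric cutoff rate of the discrete memoryless channel induced by the demodulator, with M equiprobable inputs and J outputs, is commonly written as:

```latex
R_0 \;=\; -\log_2 \sum_{j=1}^{J}\left(\sum_{i=1}^{M}\frac{1}{M}\sqrt{P(j\mid i)}\right)^{2}
```

where P(j|i) are the channel transition probabilities determined by the demodulator's decision regions; it is this quantity that the decision regions are chosen to maximize.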
Optimization of snow removal in Vermont.
DOT National Transportation Integrated Search
2013-08-01
The purpose of this report is to document the research activities performed under project SPR-RAC-727 for the State of Vermont, Agency of Transportation, Materials & Research Section, entitled Optimization of Snow Removal in Vermont.
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
An algorithm for building an optimal pattern for automatic speech recognition, one that increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed from the decomposition of an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.
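A minimal sketch of the dimension-reduction step under hypothetical pattern data: after projecting onto k principal components, the optimizer searches over k coefficients rather than the raw pattern samples:

```python
import numpy as np

X = np.random.default_rng(1).normal(size=(50, 128))   # hypothetical training patterns
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 8
basis = Vt[:k]                      # leading principal components

def synthesize(coeffs):
    """Rebuild a candidate pattern from its k decomposition coefficients."""
    return mean + coeffs @ basis

# A numeric optimizer then adjusts the k coefficients to maximize the
# probability of correct recognition on the training samples.
```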
Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner
2017-11-01
Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
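As an illustration of the ABC idea substituted for the optimization step, here is the basic rejection-sampling loop; the stand-in simulator, summary statistic and tolerance are placeholders rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta):
    """Stand-in mapping candidate closure coefficients -> a summary statistic."""
    return theta[0] + theta[1] ** 2 + rng.normal(0.0, 0.05)

observed = 1.3        # hypothetical reference statistic from the resolved field
eps = 0.1             # acceptance tolerance
accepted = []
while len(accepted) < 100:
    theta = rng.uniform(-2.0, 2.0, size=2)        # sample candidate coefficients
    if abs(simulate(theta) - observed) < eps:     # keep only near matches
        accepted.append(theta)
estimate = np.mean(accepted, axis=0)              # ABC posterior mean
```

Each candidate costs a cheap, parallelizable forward evaluation instead of contributing a row to a large matrix that must be inverted, which is the memory-for-compute trade described above.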
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
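The supervised formulation can be summarized, in notation close to that commonly used for this line of work, as a bilevel problem: sparse codes are computed with the dictionary held fixed, and the dictionary D and task parameters W are trained through them:

```latex
\min_{D,\,W}\;
\mathbb{E}_{(x,y)}\!\left[\ell\!\left(y,\;W^{\top}\alpha^{\star}(x,D)\right)\right]
+ \frac{\nu}{2}\,\lVert W\rVert_F^{2},
\qquad
\alpha^{\star}(x,D) \;=\; \arg\min_{\alpha}\;
\tfrac{1}{2}\lVert x - D\alpha\rVert_2^{2} + \lambda\lVert\alpha\rVert_1
```

where \ell is the task loss (e.g., logistic for classification, squared for regression); the exact regularization varies by task.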
Towards Quantum Cybernetics:. Optimal Feedback Control in Quantum Bio Informatics
NASA Astrophysics Data System (ADS)
Belavkin, V. P.
2009-02-01
A brief account of the quantum information dynamics and dynamical programming methods for the purpose of optimal control in quantum cybernetics with convex constraints and concave cost and bequest functions of the quantum state is given. Consideration is given to both open-loop and feedback control schemes, corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme with continuous observations, we exploit the separation theorem of filtering and control aspects for quantum stochastic micro-dynamics of the total system. This allows us to start with the Belavkin quantum filtering equation and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. A controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimal observation strategies to obtain a pure quantum qubit state from a mixed one.
Identifying functionally informative evolutionary sequence profiles.
Gil, Nelson; Fiser, Andras
2018-04-15
Multiple sequence alignments (MSAs) can provide essential input to many bioinformatics applications, including protein structure prediction and functional annotation. However, the optimal selection of sequences to obtain biologically informative MSAs for such purposes is poorly explored and has traditionally been performed manually. We present Selection of Alignment by Maximal Mutual Information (SAMMI), an automated, sequence-based approach to objectively select an optimal MSA from a large set of alternatives sampled from a general sequence database search. The hypothesis of this approach is that the mutual information among MSA columns will be maximal for those MSAs that contain the most diverse set possible of the most structurally and functionally homogeneous protein sequences. SAMMI was tested on the selection of MSAs for functional-site residue prediction by analysis of conservation patterns on a set of 435 proteins obtained from protein-ligand (peptides, nucleic acids and small substrates) and protein-protein interaction databases. Availability and implementation: A freely accessible program, including source code, implementing SAMMI is available at https://github.com/nelsongil92/SAMMI.git. Contact: andras.fiser@einstein.yu.edu. Supplementary data are available at Bioinformatics online.
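The column-pair mutual information at the heart of the SAMMI hypothesis can be computed directly from empirical residue frequencies; a small self-contained sketch with toy columns (no gap handling):

```python
from collections import Counter
from math import log2

def mutual_information(col_a, col_b):
    """MI between two MSA columns given as strings of residues."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# Two hypothetical columns from a six-sequence alignment:
print(mutual_information("AAGGCC", "TTGGAA"))   # = log2(3): perfectly covarying
```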
Khan, M Nisa
2016-02-10
We extensively investigate the thermal behaviors of various general-purpose light-emitting diode (LED) lamps and apply our measured results, validated by simulation, to establish lamp design rules for optimizing their optical and thermal properties. These design rules provide the means to minimize lumen depreciation over time by minimizing the period a lamp needs to reach thermal steady state while maintaining high luminous efficacy and omnidirectional light distribution capability. While it is well known that minimizing the junction temperature of an LED leads to a longer lifetime and increased lumen output, our study demonstrates, for the first time to the best of our knowledge, that it is also important to minimize the time taken to reach thermal equilibrium, because doing so minimizes lumen depreciation and enhances light output and color stability during operation. Specifically, we have found that, in addition to inadequate heat-sink fin areas for a lamp configuration, LEDs mounted on multiple boards, as opposed to a single board, lead to longer periods for reaching thermal equilibrium, contributing to larger lumen depreciation.
Some aspects of SR beamline alignment
NASA Astrophysics Data System (ADS)
Gaponov, Yu. A.; Cerenius, Y.; Nygaard, J.; Ursby, T.; Larsson, K.
2011-09-01
Based on element-by-element alignment of the optical components of a Synchrotron Radiation (SR) beamline, together with analysis of the alignment results, an optimized beamline alignment algorithm has been designed and developed. The alignment procedures were designed and developed for the MAX-lab I911-4 fixed-energy beamline. It has been shown that the intermediate information obtained during the monochromator alignment stage can be used to correct both the monochromator and the mirror without the subsequent alignment stages for the mirror, slits, sample holder, etc. Such an optimization of the beamline alignment procedures decreases the time necessary for alignment and is useful in the case of any instability of the beamline optical elements, the storage ring electron orbit or the wiggler insertion device, any of which could result in instability of the angular and positional parameters of the SR beam. A general-purpose software package for manual, semi-automatic and automatic SR beamline alignment has been designed and developed using this algorithm. The TANGO control system is used as the middleware between the stand-alone beamline control applications BLTools and BPMonitor and the beamline equipment.
User-oriented design strategies for a Lunar base
NASA Astrophysics Data System (ADS)
Jukola, Paivi
'Form follows function' can be translated, among other things, to communicate a desire to prioritize functional objectives for a particular design task. Thus it is less likely that a design program for a multi-functional habitat, for an all-purpose vehicle, or for a general community will lead to the most optimal, cost-effective and sustainable solutions. A power plant, a factory, a farm and a research center have over centuries had different logistical and functional requirements, regardless of local culture in various parts of planet Earth. A 'one size fits all' concept is likely to lead to less user-friendly solutions. The paper proposes to rethink and investigate alternative strategies for formulating the objectives of a Lunar base. Diverse scientific experiments and potential future research programs for the Moon have functional requirements that differ from one another. A crew of 4-6 may not be optimal for the most innovative research. The discussion is based on Human Factors and Design research conducted for visiting-professor lectures for a Lunar base project with Howard University and NASA Marshall Space Flight Center, 2009-2010.
Single-Scale Retinex Using Digital Signal Processors
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2005-01-01
The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial/spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a 21 (full) frame per second (fps) processing rate for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711.
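The core of the single-scale Retinex is a log-domain difference between the image and a Gaussian-blurred surround estimate of the illumination; a minimal floating-point sketch for a grayscale image follows (the DSP implementations discussed here are heavily optimized versions of this computation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(image, sigma=80.0):
    """Single-scale Retinex: log(image) - log(Gaussian surround)."""
    img = image.astype(np.float64) + 1.0         # offset avoids log(0)
    surround = gaussian_filter(img, sigma)       # illumination estimate
    r = np.log(img) - np.log(surround)
    return (r - r.min()) / (r.max() - r.min())   # rescale to [0, 1] for display
```

The large-kernel convolution implied by gaussian_filter is exactly the data movement and arithmetic load that makes frame-rate operation hard on general purpose processors.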
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response-surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
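For reference, one generation of the classic DE/rand/1/bin scheme (single-objective minimization) is sketched below; the Pareto-based multiobjective extension replaces the greedy selection with dominance-based selection:

```python
import numpy as np

rng = np.random.default_rng(3)

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One DE/rand/1/bin generation for minimization."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + f * (b - c)                 # differential mutation
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True            # guarantee one crossed gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):    # greedy selection
            new_pop[i] = trial
    return new_pop

pop = rng.uniform(-5, 5, size=(20, 4))
sphere = lambda x: float(np.sum(x ** 2))         # stand-in objective
for _ in range(100):
    pop = de_step(pop, sphere)
```

In the metamodel-assisted variant, a trained neural-network response surface would stand in for the expensive aerodynamic evaluation inside fitness.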
Crush Can Behaviour as an Energy Absorber in a Frontal Impact
NASA Astrophysics Data System (ADS)
Bhuyan, Atanu; Ganilova, Olga
2012-08-01
The work presented is devoted to the investigation of a state-of-the-art technological solution for the design of a crush can characterized by optimal energy-absorbing properties. The work focuses on the theoretical background of square tube, circular tube and inverbucktube performance under impact, with the purpose of designing a novel optimized structure. The main system under consideration is based on the patent US 2008/0185851 A1 and includes a base flange with elongated crush boxes and back straps that stabilize the crush boxes in order to improve the energy-absorbing functionality. The modelling of this system is carried out using both a theoretical approach and finite element analysis, concentrating on the energy-absorbing abilities of the crumple zones. The optimization process is validated under dynamic and quasi-static loading conditions while considering various modes of deformation and stress distribution along the tubular components. The energy-absorbing behaviour of the crush cans is studied with attention to their geometrical properties and their diamond or concertina modes of deformation. Moreover, structures made of different materials (steel, aluminium and polymer composites) are considered for the material-effect analysis and for optimization through their combination. Optimization of the crush-can behaviour is carried out within the limits of the frontal impact scenario, with the purpose of improving structural performance in the Euro NCAP tests.
Razavi, Sonia M; Gonzalez, Marcial; Cuitiño, Alberto M
2015-04-30
We propose a general framework for determining optimal relationships for tensile strength of doubly convex tablets under diametrical compression. This approach is based on the observation that tensile strength is directly proportional to the breaking force and inversely proportional to a non-linear function of geometric parameters and materials properties. This generalization reduces to the analytical expression commonly used for flat faced tablets, i.e., Hertz solution, and to the empirical relationship currently used in the pharmaceutical industry for convex-faced tablets, i.e., Pitt's equation. Under proper parametrization, optimal tensile strength relationship can be determined from experimental results by minimizing a figure of merit of choice. This optimization is performed under the first-order approximation that a flat faced tablet and a doubly curved tablet have the same tensile strength if they have the same relative density and are made of the same powder, under equivalent manufacturing conditions. Furthermore, we provide a set of recommendations and best practices for assessing the performance of optimal tensile strength relationships in general. Based on these guidelines, we identify two new models, namely the general and mechanistic models, which are effective and predictive alternatives to the tensile strength relationship currently used in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.
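For orientation, the flat-faced limit referred to above is the standard diametral-compression relation, while the generalization replaces the denominator with a nonlinear function of geometry and material; the symbols below (F breaking force, D diameter, t total thickness, W wall thickness) follow common usage rather than the paper's exact notation:

```latex
\sigma_t \;=\; \frac{2F}{\pi D t} \quad\text{(flat-faced)},
\qquad
\sigma_t \;=\; \frac{F}{f\!\left(D,\,t,\,W;\ \text{material}\right)} \quad\text{(doubly convex, general form)}
```

Fitting the parameters of f against breaking-force data, under the equal-relative-density approximation described above, is what the proposed optimization performs.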
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools for economically evaluating those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or that support solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, which avoids the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable that affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
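The knapsack-LP step for the portfolio problem can be illustrated with SciPy's linear programming interface; the marginal values, costs and budget below are hypothetical stand-ins for the project data:

```python
import numpy as np
from scipy.optimize import linprog

values = np.array([4.0, 7.0, 2.5, 5.0])   # hypothetical marginal project values
costs = np.array([2.0, 4.0, 1.0, 3.0])    # project costs
budget = 6.0

# LP relaxation of the 0-1 knapsack: maximize values @ x
# subject to costs @ x <= budget and 0 <= x <= 1.
res = linprog(c=-values, A_ub=costs[None, :], b_ub=[budget],
              bounds=[(0.0, 1.0)] * len(values))
print(res.x)   # a knapsack LP has at most one fractional project in its solution
```

Because the marginal values depend on the selected portfolio, the heuristic re-solves this LP as the marginal values are updated.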
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data; and (ii) an efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
41 CFR 109-38.5103 - Motor vehicle utilization standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... are established for DOE as objectives for those motor vehicles operated generally for those purposes for which acquired: (1) Sedans and station wagons, general purpose use—12,000 miles per year. (2) Light trucks (4×2's) and general purpose vehicles, one ton and under (less than 12,500 GVWR)—10,000...
41 CFR 109-38.5103 - Motor vehicle utilization standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... are established for DOE as objectives for those motor vehicles operated generally for those purposes for which acquired: (1) Sedans and station wagons, general purpose use—12,000 miles per year. (2) Light trucks (4×2's) and general purpose vehicles, one ton and under (less than 12,500 GVWR)—10,000...
76 FR 76954 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-09
... Bombs, 1000 BLU-117 2000lb General Purpose Bombs, 600 BLU-109 2000lb Hard Target Penetrator Bombs, and four BDU-50C inert bombs, fuzes, weapons integration, munitions trainers, personnel training and... kits, 3300 BLU-111 500lb General Purpose Bombs, 1000 BLU-117 2000lb General Purpose Bombs, 600 BLU-109...
41 CFR 60-741.40 - General purpose and applicability of the affirmative action program requirement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false General purpose and... Property Management Other Provisions Relating to Public Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE... requirement. (a) General purpose. An affirmative action program is a management tool designed to ensure equal...
1982-07-01
GENERAL PURPOSE INFORMATION MANAGEMENT SYSTEM (SELGEM) TO MEDICALLY IMPORTANT ARTHROPODS (DIPTERA: CULICIDAE) Annual Report Terry L. Erwin July...APPLICATION OF A COMPUTERIZED GENERAL PURPOSE Annual Report INFORMATION MANAGEMENT SYSTEM (SELGEM) TO July 1981 to June 1982 MEDICALLY IMPORTANT ARTHROPODS
Tang, Liyang
2013-04-04
The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized within China's health care system to efficiently collect, from various experts, the data needed to determine that optimal solution. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively translate the experts' views into optimal solutions to this problem, with the support of the pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), run for the first time in China through the pilot health care provider research system, and the analytic network process (ANP) implementation methodology was adopted to analyze the resulting dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was optimal from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was optimal from the points of view of nurses, officials in medical insurance agencies, and health care researchers. The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey can thus help the decision maker effectively translate various experts' views into optimal solutions to China's institutional problem of health care provider selection.
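The ANP derives option priorities from expert pairwise-comparison judgments (assembled, in full ANP, into a supermatrix of such priority vectors). The basic eigenvector step for a single comparison matrix can be sketched as follows, with a made-up matrix over the three provider approaches:

```python
import numpy as np

# Hypothetical pairwise comparisons of (market-oriented, regulation-oriented, PPP)
# from one expert group; entry [i, j] is how strongly option i is preferred to j.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
v = np.abs(np.real(eigvecs[:, np.argmax(eigvals.real)]))   # principal eigenvector
priorities = v / v.sum()    # normalized priority weights for the three options
print(priorities)
```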
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate terrain data and precipitation cell by cell, and they are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model (a physically based distributed hydrological model proposed for catchment flood forecasting) as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine-function strategy for adjusting the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
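A compact sketch of PSO with the linearly decreasing inertia weight described above (the arccosine adjustment of the acceleration coefficients is omitted, and a simple quadratic stands in for the flood-simulation error objective); the particle number 20 and 30 iterations follow the values reported in the abstract:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, iters = 20, 5, 30                  # particles, dimensions, max evolutions
w_max, w_min, c = 0.9, 0.4, 2.0          # hypothetical PSO constants
f = lambda x: float(np.sum(x ** 2))      # stand-in for the model-error objective

x = rng.uniform(-10, 10, (n, d))
v = np.zeros((n, d))
pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for k in range(iters):
    w = w_max - (w_max - w_min) * k / (iters - 1)   # linearly decreasing inertia
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c * r1 * (pbest - x) + c * r2 * (gbest - x)
    x = x + v
    fx = np.array([f(p) for p in x])
    better = fx < pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```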
A New Version of Optimism for Education
ERIC Educational Resources Information Center
Bojesen, Emile
2018-01-01
The primary purpose of this paper is to outline the conceptual means by which it is possible to be optimistic about education. To provide this outline I turn to Ian Hunter and David Blacker, after a brief introduction to Nietzsche's conceptions of optimism and pessimism, to show why certain forms of optimism in education are either intellectually…
Chest Tube Drainage of the Pleural Space: A Concise Review for Pulmonologists.
Porcel, José M
2018-04-01
Chest tube insertion is a common procedure usually done for the purpose of draining accumulated air or fluid in the pleural cavity. Small-bore chest tubes (≤14F) are generally recommended as the first-line therapy for spontaneous pneumothorax in non-ventilated patients and pleural effusions in general, with the possible exception of hemothoraces and malignant effusions (for which an immediate pleurodesis is planned). Large-bore chest drains may be useful for very large air leaks, as well as post-ineffective trial with small-bore drains. Chest tube insertion should be guided by imaging, either bedside ultrasonography or, less commonly, computed tomography. The so-called trocar technique must be avoided. Instead, blunt dissection (for tubes >24F) or the Seldinger technique should be used. All chest tubes are connected to a drainage system device: flutter valve, underwater seal, electronic systems or, for indwelling pleural catheters (IPC), vacuum bottles. The classic, three-bottle drainage system requires either (external) wall suction or gravity ("water seal") drainage (the former not being routinely recommended unless the latter is not effective). The optimal timing for tube removal is still a matter of controversy; however, the use of digital drainage systems facilitates informed and prudent decision-making in that area. A drain-clamping test before tube withdrawal is generally not advocated. Pain, drain blockage and accidental dislodgment are common complications of small-bore drains; the most dreaded complications include organ injury, hemothorax, infections, and re-expansion pulmonary edema. IPC represent a first-line palliative therapy of malignant pleural effusions in many centers. The optimal frequency of drainage, for IPC, has not been formally agreed upon or otherwise officially established. Copyright©2018. The Korean Academy of Tuberculosis and Respiratory Diseases.
Chest Tube Drainage of the Pleural Space: A Concise Review for Pulmonologists
2018-01-01
Chest tube insertion is a common procedure usually done for the purpose of draining accumulated air or fluid in the pleural cavity. Small-bore chest tubes (≤14F) are generally recommended as the first-line therapy for spontaneous pneumothorax in non-ventilated patients and pleural effusions in general, with the possible exception of hemothoraces and malignant effusions (for which an immediate pleurodesis is planned). Large-bore chest drains may be useful for very large air leaks, as well as post-ineffective trial with small-bore drains. Chest tube insertion should be guided by imaging, either bedside ultrasonography or, less commonly, computed tomography. The so-called trocar technique must be avoided. Instead, blunt dissection (for tubes >24F) or the Seldinger technique should be used. All chest tubes are connected to a drainage system device: flutter valve, underwater seal, electronic systems or, for indwelling pleural catheters (IPC), vacuum bottles. The classic, three-bottle drainage system requires either (external) wall suction or gravity (“water seal”) drainage (the former not being routinely recommended unless the latter is not effective). The optimal timing for tube removal is still a matter of controversy; however, the use of digital drainage systems facilitates informed and prudent decision-making in that area. A drain-clamping test before tube withdrawal is generally not advocated. Pain, drain blockage and accidental dislodgment are common complications of small-bore drains; the most dreaded complications include organ injury, hemothorax, infections, and re-expansion pulmonary edema. IPC represent a first-line palliative therapy of malignant pleural effusions in many centers. The optimal frequency of drainage, for IPC, has not been formally agreed upon or otherwise officially established. PMID:29372629
Remediation Optimization: Definition, Scope and Approach
This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns and search the trade space for the optimal solution or the optimal trade among competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents, and the algorithm relies on this combination of traits to produce solutions better than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits emerge as the optimal solutions. Owing to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific details are of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
Nonlinear Model Predictive Control for Cooperative Control and Estimation
NASA Astrophysics Data System (ADS)
Ru, Pengkai
Recent advances in computational power have made it possible to perform expensive online computations for control systems. It is becoming more realistic to run computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a great deal of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched, and its stability can be guaranteed in the majority of its application scenarios. However, one issue that remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, nonlinear MPC, if achievable, would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder whether there is a middle ground between the two. This dissertation strikes such a balance by employing a state representation technique, namely, the state-dependent coefficient (SDC) representation. This technique renders improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable; in fact, the computational power required is bounded by a constant factor of that of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
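The SDC idea is to factor the nonlinear dynamics into a pseudo-linear form whose matrices are re-evaluated at the current state, so each MPC iteration retains the structure of a linear-MPC problem; schematically (note that the factorization A(x) is not unique):

```latex
\dot{x} = f(x) + g(x)\,u
\;\;\Longrightarrow\;\;
\dot{x} = A(x)\,x + B(x)\,u,
\qquad f(x) = A(x)\,x,\quad g(x) = B(x)
```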
NASA Astrophysics Data System (ADS)
Libraro, Paola
The general electric propulsion orbit-raising maneuver of a spacecraft must contend with four main limiting factors: the longer time of flight, multiple eclipses prohibiting continuous thrusting, long exposure to radiation from the Van Allen belts, and the high power requirement of the electric engines. In order to optimize a low-thrust transfer with respect to these challenges, the choice of coordinates and corresponding equations of motion used to describe the kinematic and dynamic behavior of the satellite is of critical importance. This choice can affect the numerical optimization process as well as limit the set of mission scenarios that can be investigated. To broaden the set of feasible mission scenarios able to address the challenges of all-electric orbit-raising, a set of equations free of singularities is required so that a completely arbitrary injection orbit can be considered. For this purpose, a new quaternion-based formulation of spacecraft translational dynamics that is globally nonsingular has been developed. The minimum-time low-thrust problem has been solved using the new set of equations of motion inside a direct optimization scheme in order to investigate optimal low-thrust trajectories over the full range of injection orbit inclinations between 0 and 90 degrees, with particular focus on high inclinations. The numerical results consider a specific mission scenario in order to analyze three key aspects of the problem: the effect of the initial guess on the shape and duration of the transfer, the effect of Earth oblateness on transfer time, and the role played by radiation damage and power degradation in all-electric minimum-time transfers. Finally, trade-offs between mass and cost savings are introduced through a test case.
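For orientation only: the familiar singularity-free use of quaternions is in attitude kinematics, shown below in the scalar-first convention; the dissertation's novelty is carrying the same nonsingular parameterization over to the translational equations of motion, which is not reproduced here:

```latex
\dot{q} = \tfrac{1}{2}\,\Omega(\boldsymbol{\omega})\,q,
\qquad
\Omega(\boldsymbol{\omega}) =
\begin{pmatrix}
0 & -\omega_1 & -\omega_2 & -\omega_3\\
\omega_1 & 0 & \omega_3 & -\omega_2\\
\omega_2 & -\omega_3 & 0 & \omega_1\\
\omega_3 & \omega_2 & -\omega_1 & 0
\end{pmatrix}
```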
Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels
NASA Astrophysics Data System (ADS)
Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.
2015-12-01
Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
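A toy sketch of the metamodel substitution: a small neural-network regressor stands in for the expensive simulation when estimating robustness over many sampled scenarios; the inputs, response and adequacy threshold are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Hypothetical training data: scenario inputs -> simulated supply margin.
X = rng.uniform(size=(500, 3))          # rainfall, per-capita demand, population factors
y = 0.5 * X[:, 0] - X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.01, 500)

meta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Robustness of one candidate plan: fraction of plausible futures with adequate
# supply, evaluated on the cheap metamodel rather than the full simulation model.
scenarios = rng.uniform(size=(10000, 3))
robustness = float(np.mean(meta.predict(scenarios) > -0.5))
```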
Run-time scheduling and execution of loops on message passing machines
NASA Technical Reports Server (NTRS)
Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry
1989-01-01
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
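The scheduling mechanism described here is essentially the inspector/executor pattern: an inspector pass analyzes the loop's data references once and precomputes a communication schedule that the executor reuses every iteration. A language-neutral sketch, in which the index-to-owner mapping is an assumed cyclic layout rather than anything specified above:

```python
def inspect(loop_indices, owner, my_rank):
    """Inspector: group the off-processor indices this rank must fetch by owner."""
    schedule = {}
    for idx in loop_indices:
        r = owner(idx)
        if r != my_rank:
            schedule.setdefault(r, []).append(idx)
    return schedule

# Executor (every iteration): exchange data per the schedule, then run the loop body.
sched = inspect([3, 7, 42, 99], owner=lambda i: i % 4, my_rank=0)
print(sched)   # {3: [3, 7, 99], 2: [42]} -- the analysis cost is paid only once
```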
Run-time scheduling and execution of loops on message passing machines
NASA Technical Reports Server (NTRS)
Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry
1990-01-01
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
Cognitive success: instrumental justifications of normative systems of reasoning.
Schurz, Gerhard
2014-01-01
In the first part of the paper (sec. 1-4), I argue that Elqayam and Evans's (2011) distinction between normative and instrumental conceptions of cognitive rationality corresponds to deontological vs. teleological accounts in meta-ethics. I suggest that Elqayam and Evans's distinction be replaced by the distinction between a-priori intuition-based vs. a-posteriori success-based accounts of cognitive rationality. The value of cognitive success lies in its instrumental rationality for almost all practical purposes. In the second part (sec. 5-7), I point out that Elqayam and Evans's distinction between normative and instrumental rationality is coupled with a second distinction: between logically general vs. locally adaptive accounts of rationality. I argue that these are two independent distinctions that should be treated as independent dimensions. I also demonstrate that logically general systems of reasoning can be instrumentally justified. However, such systems can only be cognitively successful if they are paired with successful inductive reasoning, which is the area where the program of adaptive (ecological) rationality emerged, because there are no generally optimal inductive reasoning methods. I argue that the practical necessity of reasoning under changing environments constitutes a dilemma for ecological rationality, which I attempt to solve within a dual account of rationality.
Taxation of United States general aviation
NASA Astrophysics Data System (ADS)
Sobieralski, Joseph Bernard
General aviation in the United States has been an important part of the economy and American life. General aviation is defined as all flying excluding military and scheduled airline operations, and is utilized in many areas of our society. The majority of aircraft operations and airports in the United States are categorized as general aviation, and general aviation contributes more than one percent to the United States gross domestic product each year. Despite the many benefits of general aviation, the lead emissions from aviation gasoline consumption are of great concern. General aviation emits over half of the lead emissions in the United States, over 630 tons in 2005. The other significant negative externality attributed to general aviation usage is aircraft accidents. General aviation accidents caused over 8,000 fatalities over the period 1994-2006. A recent Federal Aviation Administration proposal to increase the aviation gasoline tax from 19.4 to 70.1 cents per gallon has renewed interest in better understanding the implications of such a tax increase as well as the possible optimal rate of taxation. Few studies have examined aviation fuel elasticities, and none has studied general aviation fuel elasticities. Chapter one fills that gap and examines the elasticity of aviation gasoline consumption in United States general aviation. Utilizing aggregate time series and dynamic panel data, the price and income elasticities of demand are estimated. The price elasticity of demand for aviation gasoline is estimated to range from -0.093 to -0.185 in the short run and from -0.132 to -0.303 in the long run. These results prove to be similar in magnitude to automobile gasoline elasticities, and therefore tax policies could more closely mirror those of automobile tax policies. The second chapter examines the costs associated with general aviation accidents. Given the large number of general aviation operations as well as the large number of fatalities and injuries attributed to general aviation accidents in the United States, understanding the costs to society is of great importance. This chapter estimates the direct and indirect costs associated with general aviation accidents in the United States. The indirect costs are estimated via the human capital approach in addition to the willingness-to-pay approach. The average annual accident costs attributed to general aviation are found to be $2.32 billion and $3.81 billion (2006 US$) under the human capital approach and the willingness-to-pay approach, respectively. These values appear to be fairly robust when subjected to a sensitivity analysis. These costs highlight the large societal benefits of accident and fatality reduction. The final chapter derives a second-best optimal aviation gasoline tax from previous general equilibrium frameworks. This optimal tax reflects both the lead pollution and accident externalities, as well as the balance between excise taxes and labor taxes needed to finance government spending. The calculated optimal tax rate is $4.07 per gallon, which is over 20 times greater than the current tax rate and 5 times greater than the Federal Aviation Administration's proposed tax rate. The calculated optimal tax rate is also over 3 times greater than automobile gasoline optimal tax rates calculated by previous studies. The Pigovian component is $1.36, and we observe that the accident externality is taxed more severely than the pollution externality. The largest component of the optimal tax rate is the Ramsey component.
At $2.70, the Ramsey component reflects the ability of the government to raise revenue from aviation gasoline, which is price inelastic. The calculated optimal tax is estimated to reduce lead emissions by over 10 percent and reduce accidents by 20 percent. Although the optimal tax is unlikely to be adopted by policy makers, its benefits are apparent, and it sheds light on the need to reduce these negative externalities via policy changes.
Optimized aggregates gradations for portland cement concrete mix designs evaluation.
DOT National Transportation Integrated Search
2008-01-01
The main purpose of this research was to optimize aggregate blends utilizing more locally available materials. With industry collaboration and partnership, the Department embraced a change that impacts a specification implemented in 1947 for Class o...
77 FR 9899 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... Medium Range Air-to-Air Missiles, 42 GBU-49 Enhanced PAVEWAY II 500 lb Bombs, 200 GBU-54 (2000 lb) Laser Joint Direct Attack Munitions (JDAM) Bombs, 642 BLU-111 (500 lb) General Purpose Bombs, 127 MK-82 (500 lb) General Purpose Bombs, 80 BLU-117 (2000 lb) General Purpose Bombs, 4 MK-84 (2000 lb) Inert...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... Treatment Under the Generalized System of Preferences and for Other Purposes Presidential... Modify Duty-Free Treatment Under the Generalized System of Preferences and for Other Purposes By the... competitive need limitations on the preferential treatment afforded under the GSP to eligible articles. 4...
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems that can be handled efficiently.
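The constraint mix described here, simple bounds plus linear and smooth nonlinear constraints handled by a sequential quadratic programming (SQP) solver, can be sketched with scipy's SLSQP method standing in for the NPSOL library; the objective, constraint functions, and bounds below are illustrative placeholders, not the shield model.

```python
# Sketch: multi-constraint minimization with an SQP-type method (scipy's SLSQP),
# analogous in spirit to the SWAN/NPSOL coupling described above. The "cost" and
# "dose" functions are illustrative stand-ins, not the actual shield physics.
import numpy as np
from scipy.optimize import minimize

def cost(x):                      # surrogate shield cost to minimize
    return x[0]**2 + 2.0 * x[1]**2

def dose_limit(x):                # smooth nonlinear constraint: dose(x) <= 1.0
    return 1.0 - (np.exp(-x[0]) + np.exp(-x[1]))

res = minimize(
    cost,
    x0=np.array([0.5, 0.5]),
    method="SLSQP",
    bounds=[(0.0, 5.0), (0.0, 5.0)],                     # simple bounds on variables
    constraints=[{"type": "ineq", "fun": dose_limit},    # nonlinear constraint
                 {"type": "eq", "fun": lambda x: x[0] + x[1] - 2.0}],  # linear constraint
)
print(res.x, res.fun)             # expected optimum near (4/3, 2/3)
```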
Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.
ERIC Educational Resources Information Center
Raymond, Margaret; And Others
1983-01-01
Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods, such as Markowitz's mean-variance and semi-variance models, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because extreme maximum and minimum values in the data can strongly influence the expected return and volatility risk values. This paper considers the distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, FTSE Bursa Malaysia sectorial indices data are employed. The results show that stochastic optimization provides a more stable information ratio.
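For reference, the static mean-variance baseline that the paper contrasts with its distribution-aware approach reduces to a quadratic program over portfolio weights; the return matrix below is synthetic, standing in for the FTSE Bursa Malaysia index data.

```python
# Sketch: a static Markowitz-type mean-variance optimization on historical returns.
# Rows of R are periods, columns are assets; here R is synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(250, 4))    # 250 periods x 4 assets (synthetic)
mu, Sigma = R.mean(axis=0), np.cov(R.T)
target = mu.mean()                            # required portfolio return

def variance(w):                              # objective: portfolio variance
    return w @ Sigma @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},      # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - target}]  # meet return target
res = minimize(variance, np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4, constraints=cons)
print("weights:", res.x.round(3))
```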
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 23 Highways 1 2013-04-01 2013-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 23 Highways 1 2014-04-01 2014-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Purpose. 1002.1 Section 1002.1 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) OFFICIAL SEAL AND DISTINGUISHING FLAG General § 1002.1 Purpose. The purpose of this part is to describe the official seal and distinguishing flag of the Department of Energy, and to...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 23 Highways 1 2012-04-01 2012-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Optimality of general lattice transformations with applications to the Bain strain in steel
NASA Astrophysics Data System (ADS)
Koumatos, K.; Muehlemann, A.
2016-04-01
This article provides a rigorous proof of a conjecture by E. C. Bain in 1924 on the optimality of the so-called Bain strain based on a criterion of least atomic movement. A general framework that explores several such optimality criteria is introduced and employed to show the existence of optimal transformations between any two Bravais lattices. A precise algorithm and a graphical user interface to determine this optimal transformation is provided. Apart from the Bain conjecture concerning the transformation from face-centred cubic to body-centred cubic, applications include the face-centred cubic to body-centred tetragonal transition as well as the transformation between two triclinic phases of terephthalic acid.
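For orientation, the classical Bain correspondence from face-centred cubic to body-centred cubic can be written as a pure stretch (a standard textbook form, stated here for context rather than reproduced from the article):

$$ B = \operatorname{diag}(\eta_1,\ \eta_1,\ \eta_3), \qquad \eta_1 = \frac{\sqrt{2}\,a_{\mathrm{bcc}}}{a_{\mathrm{fcc}}}, \qquad \eta_3 = \frac{a_{\mathrm{bcc}}}{a_{\mathrm{fcc}}}, $$

so that for typical steel lattice parameters (a_fcc ≈ 3.57 Å, a_bcc ≈ 2.87 Å) one obtains η1 ≈ 1.14 and η3 ≈ 0.80; the conjecture concerns the optimality of this stretch, in the sense of least atomic movement, among all lattice transformations.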
Optimality of general lattice transformations with applications to the Bain strain in steel
Koumatos, K.
2016-01-01
This article provides a rigorous proof of a conjecture by E. C. Bain in 1924 on the optimality of the so-called Bain strain based on a criterion of least atomic movement. A general framework that explores several such optimality criteria is introduced and employed to show the existence of optimal transformations between any two Bravais lattices. A precise algorithm and a graphical user interface to determine this optimal transformation is provided. Apart from the Bain conjecture concerning the transformation from face-centred cubic to body-centred cubic, applications include the face-centred cubic to body-centred tetragonal transition as well as the transformation between two triclinic phases of terephthalic acid. PMID:27274692
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.
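As a toy illustration of the loop-restructuring idea (not the paper's multibody solver), a Bellman value update can be rewritten from explicit Python loops into a single vectorized operation of the kind that maps well onto vector hardware.

```python
# Sketch: vectorizing a Bellman (dynamic programming) value update with numpy.
# The looped and vectorized forms compute the same quantity; the second is the
# kind of loop restructuring applied on vector machines. Toy MDP data.
import numpy as np

nS, nA = 200, 10
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition probs
R = rng.normal(size=(nS, nA))                   # rewards
V = rng.normal(size=nS)                         # current value estimate
gamma = 0.95

# Loop form: explicit Python iteration over states and actions.
V_loop = np.array([max(R[s, a] + gamma * P[s, a] @ V for a in range(nA))
                   for s in range(nS)])

# Vectorized form: one einsum over all states and actions at once.
Q = R + gamma * np.einsum("sap,p->sa", P, V)
V_vec = Q.max(axis=1)

assert np.allclose(V_loop, V_vec)
```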
On Born's Conjecture about Optimal Distribution of Charges for an Infinite Ionic Crystal
NASA Astrophysics Data System (ADS)
Bétermin, Laurent; Knüpfer, Hans
2018-04-01
We study the problem of the optimal charge distribution on the sites of a fixed Bravais lattice. In particular, we prove Born's conjecture about the optimality of the rock-salt alternating distribution of charges on a cubic lattice (and more generally on a d-dimensional orthorhombic lattice). Furthermore, we study this problem on the two-dimensional triangular lattice and we prove the optimality of a two-component honeycomb distribution of charges. The results hold for a class of completely monotone interaction potentials which includes Coulomb-type interactions for d ≥ 3. In a more general setting, we derive a connection between the optimal charge problem and a minimization problem for the translated lattice theta function.
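The translated lattice theta function mentioned in the last sentence is commonly written as follows (a standard definition, stated here for orientation):

$$ \theta_{L+x}(\alpha) \;=\; \sum_{p \in L} e^{-\pi \alpha\, |p + x|^{2}}, \qquad \alpha > 0, $$

so that minimizing over the translation x at each α relates the charge problem to a purely geometric lattice quantity.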
General purpose force doctrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weltman, J.J.
In contemporary American strategic parlance, the general purpose forces have come to mean those forces intended for conflict situations other than nuclear war with the Soviet Union. As with all military forces, the general purpose forces are powerfully determined by prevailing conceptions of the problems they must meet and by institutional biases as to the proper way to deal with those problems. This paper deals with the strategic problems these forces are intended to meet, the various and often conflicting doctrines and organizational structures which have been generated in order to meet those problems, and the factors which will influence general purpose doctrine and structure in the future. This paper does not attempt to prescribe technological solutions to the needs of the general purpose forces. Rather, it attempts to display the doctrinal and institutional context within which new technologies must operate, and which will largely determine whether these technologies are accepted into the force structure or not.
Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces
NASA Astrophysics Data System (ADS)
Lang, Hans-Dieter; Sarris, Costas D.
2017-10-01
This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation formulation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general: it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical limitations on performance and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.
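The general shape of such a semidefinite relaxation, lifting a nonconvex quadratic problem to a positive-semidefinite matrix variable and then checking the rank of the solution for tightness, can be sketched with cvxpy; the matrix A below is a random stand-in, not a WPT system model.

```python
# Sketch: a semidefinite relaxation in cvxpy. The nonconvex problem
# max x^T A x over ||x|| <= 1 is lifted to X = x x^T and relaxed to X PSD;
# a (numerically) rank-1 solution indicates the relaxation is tight,
# mirroring the tightness test described in the abstract.
import cvxpy as cp
import numpy as np

n = 6
rng = np.random.default_rng(2)
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                       # symmetric "gain" matrix (illustrative)

X = cp.Variable((n, n), PSD=True)       # lifted variable, X = x x^T relaxed
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), [cp.trace(X) <= 1])
prob.solve()

eigvals = np.linalg.eigvalsh(X.value)   # ascending order
print("optimal value:", prob.value)
print("rank-1 (tight)?", eigvals[-2] < 1e-6 * eigvals[-1])
```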
Neighboring Optimal Aircraft Guidance in a General Wind Environment
NASA Technical Reports Server (NTRS)
Jardin, Matthew R. (Inventor)
2003-01-01
Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.
[Surgery for thoracic tuberculosis].
Kilani, T; Boudaya, M S; Zribi, H; Ouerghi, S; Marghli, A; Mestiri, T; Mezni, F
2015-01-01
Tuberculosis is mainly a medical disease. Surgery was the sole therapeutic tool for a long time before the advent of specific antituberculous drugs; its role was then confined to the treatment of the sequelae of tuberculosis and their complications. The resurgence of tuberculosis and the emergence of multidrug-resistant TB, combined with the growing number of immunosuppressed patients, represent a new challenge for tuberculosis surgery. Surgery may be indicated for a diagnostic purpose in patients with pulmonary, pleural, mediastinal or thoracic wall involvement, or for a therapeutic purpose (drainage, resection, residual cavity obliteration). Modern imaging techniques and the advent of video-assisted thoracic surgery have allowed a new approach to this pathology; the majority of diagnostic interventions and selected cases requiring lung resection can be performed through a mini-invasive approach. Patients proposed for aggressive surgery achieve the best results when a thorough evaluation of the thoracic lesions and of the patient's nutritional, infectious and general status is combined with good coordination within the specialized medical team for optimal preparation for surgery. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Stanford Hardware Development Program
NASA Technical Reports Server (NTRS)
Peterson, A.; Linscott, I.; Burr, J.
1986-01-01
Architectures for high-performance digital signal processing, particularly for high-resolution, wide-band spectrum analysis, were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The work on real-time signal processing is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high-resolution, multichannel spectrometer that incorporates special-purpose microcoded signal processors is being tested. A general-purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple-node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes and medium bandwidth for nodes at larger distances. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets was commenced.
Optimizing dosing of oncology drugs.
Minasian, L; Rosen, O; Auclair, D; Rahman, A; Pazdur, R; Schilsky, R L
2014-11-01
The purpose of this article is to acknowledge the challenges in optimizing the dosing of oncology drugs and to propose potential approaches to address these challenges in order to optimize effectiveness, minimize toxicity, and promote adherence in patients. These approaches could provide better opportunities to understand the sources of variability in drug exposure and clinical outcomes during the development and premarketing evaluation of investigational new drugs.
Power system modeling and optimization methods vis-a-vis integrated resource planning (IRP)
NASA Astrophysics Data System (ADS)
Arsali, Mohammad H.
1998-12-01
The ongoing restructuring of the power industry is changing the fundamental nature of the retail electricity business. As a result, the Integrated Resource Planning (IRP) strategies implemented by electric utilities are also undergoing modifications. Such modifications arise from the need to minimize revenue requirements and maximize electrical system reliability with respect to capacity additions (viewed as potential investments). IRP modifications also provide service-design bases to meet customer needs profitably. The purpose of this research, as presented in this dissertation, is to propose procedures for optimal IRP intended to expand the generation facilities of a power system over an extended period of time. Relevant topics addressed in this research towards IRP optimization are as follows: (1) Historical perspective and evolutionary aspects of power system production-costing models and optimization techniques; (2) A survey of major U.S. electric utilities adopting IRP under a changing socioeconomic environment; (3) A new technique, designated the Segmentation Method, for production costing via IRP optimization; (4) Construction of a fuzzy relational database of a typical electric power utility system for IRP purposes; (5) A genetic-algorithm-based approach for IRP optimization using the fuzzy relational database.
TMS combined with EEG in genetic generalized epilepsy: A phase II diagnostic accuracy study.
Kimiskidis, Vasilios K; Tsimpiris, Alkiviadis; Ryvlin, Philippe; Kalviainen, Reetta; Koutroumanidis, Michalis; Valentin, Antonio; Laskaris, Nikolaos; Kugiumtzis, Dimitris
2017-02-01
(A) To develop a TMS-EEG stimulation and data analysis protocol in genetic generalized epilepsy (GGE). (B) To investigate the diagnostic accuracy of TMS-EEG in GGE. Pilot experiments resulted in the development and optimization of a paired-pulse TMS-EEG protocol at rest, during hyperventilation (HV), and post-HV, combined with multi-level data analysis. This protocol was applied in 11 controls (C) and 25 GGE patients (P), further dichotomized into responders to antiepileptic drugs (R, n=13) and non-responders (n-R, n=12). Features (n=57) extracted from TMS-EEG responses after multi-level analysis were given to a feature selection scheme and a Bayesian classifier, and the accuracy of assigning participants to the classes P-C and R-nR was computed. On the basis of the optimal feature subset, the cross-validated accuracy of TMS-EEG for the classification P-C was 0.86 at rest, 0.81 during HV and 0.92 post-HV, whereas for R-nR the corresponding figures were 0.80, 0.78 and 0.65, respectively. Applying a fusion approach on all conditions resulted in an accuracy of 0.84 for the classification P-C and 0.76 for the classification R-nR. TMS-EEG can be used for diagnostic purposes and for assessing the response to antiepileptic drugs. TMS-EEG holds significant diagnostic potential in GGE. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
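A minimal analog of the feature-selection-plus-Bayesian-classifier pipeline, with selection kept inside each cross-validation fold to avoid leakage, might look as follows in scikit-learn; the data are synthetic stand-ins for the 57 TMS-EEG features.

```python
# Sketch: feature selection plus a Gaussian naive Bayes classifier with
# cross-validated accuracy, mirroring the patients-vs-controls (P-C) setup.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(36, 57))           # 36 participants x 57 features (synthetic)
y = np.r_[np.ones(25), np.zeros(11)]    # 25 patients, 11 controls

# Putting selection in the pipeline keeps it within each CV fold (no leakage).
clf = make_pipeline(SelectKBest(f_classif, k=10), GaussianNB())
acc = cross_val_score(clf, X, y, cv=6, scoring="accuracy")
print("cross-validated accuracy: %.2f" % acc.mean())
```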
Leung, Janni; Atherton, Iain; Kyle, Richard G; Hubbard, Gill; McLaughlin, Deirdre
2016-04-01
The aim of this study is to examine the association between optimism and psychological distress in women with breast cancer after taking into account their self-rated general health. Data were aggregated from the Scottish Health Survey (2008 to 2011) to derive a nationally representative sample of 12,255 women (11,960 cancer-free controls, and 295 breast cancer cases identified from linked cancer registry data). The explanatory variables were optimism and general health, and the outcome variable was symptoms of psychological distress. Logistic regression analyses were conducted, with optimism entered in step 1 and general health entered in step 2. In an unadjusted model, higher levels of optimism were associated with lower odds of psychological distress in both the control group (OR = 0.57, 95 % CI = 0.51-0.60) and breast cancer group (OR = 0.64, 95 % CI = 0.47-0.88). However, in a model adjusting for general health, optimism was associated with lower odds of psychological distress only in the control group (OR = 0.50, 95 % CI = 0.44-0.57), but not significantly in the breast cancer group (OR = 1.15, 95 % CI = 0.32-4.11). In the breast cancer group, poor general health was a stronger associate of psychological distress (OR = 4.98, 95 % CI = 1.32-18.75). Results were consistent after adjusting for age, years since breast cancer diagnosis, survey year, socioeconomic status, education, marital status, body mass index, smoking status, and alcohol consumption. This research confirms the value of multicomponent supportive care interventions for women with breast cancer. Specifically, it suggests that following breast cancer diagnosis, health care professionals need to provide advice and signpost to services that assist women to maintain or improve both their psychological and general health.
Huang, Kuo -Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Adaptation, Growth, and Resilience in Biological Distribution Networks
NASA Astrophysics Data System (ADS)
Ronellenfitsch, Henrik; Katifori, Eleni
Highly optimized complex transport networks serve crucial functions in many man-made and natural systems such as power grids and plant or animal vasculature. Often, the relevant optimization functional is nonconvex and characterized by many local extrema. In general, finding the global, or nearly global optimum is difficult. In biological systems, it is believed that such an optimal state is slowly achieved through natural selection. However, general coarse grained models for flow networks with local positive feedback rules for the vessel conductivity typically get trapped in low efficiency, local minima. We show how the growth of the underlying tissue, coupled to the dynamical equations for network development, can drive the system to a dramatically improved optimal state. This general model provides a surprisingly simple explanation for the appearance of highly optimized transport networks in biology such as plant and animal vasculature. In addition, we show how the incorporation of spatially collective fluctuating sources yields a minimal model of realistic reticulation in distribution networks and thus resilience against damage.
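The trapping phenomenon can be seen in a two-vessel caricature: under a common minimal local adaptation rule (not the paper's full growth-coupled model), the marginally stronger vessel captures all the flow and the redundant loop disappears, a local optimum of the kind the abstract describes.

```python
# Sketch: why purely local positive feedback gets trapped. Two parallel vessels
# carry a unit current; each conductivity K_i grows with the square of the flow
# it carries and decays otherwise. The slightly stronger vessel takes all flow.
import numpy as np

K = np.array([1.0, 1.01])      # nearly equal initial conductivities
for _ in range(2000):
    Q = K / K.sum()            # unit current splits in proportion to conductivity
    K += 0.01 * (Q**2 - K)     # Euler step of dK/dt = Q^2 - K
print(K.round(4))              # -> one vessel keeps ~all flow, the other vanishes
```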
Synthesis of multi-loop automatic control systems by the nonlinear programming method
NASA Astrophysics Data System (ADS)
Voronin, A. V.; Emelyanova, T. A.
2017-01-01
The article deals with the problem of calculating optimal tuning parameters for multi-loop control systems by numerical methods and nonlinear programming. For this purpose, the Matlab Optimization Toolbox is used.
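An analogous tuning loop can be sketched in Python with scipy in place of the Matlab Optimization Toolbox; the first-order plant, PI controller structure, and integral-squared-error cost below are illustrative assumptions, not the article's system.

```python
# Sketch: tuning a PI controller for a first-order plant by nonlinear programming.
import numpy as np
from scipy.optimize import minimize

def ise(params, dt=0.01, T=10.0):
    """Integral of squared error for a PI loop around the plant dy/dt = -y + u."""
    kp, ki = params
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                    # unit step reference
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += dt * (-y + u)             # Euler step of the plant
        if abs(y) > 1e6:               # guard: penalize unstable gain settings
            return 1e6
        cost += e * e * dt
    return cost

res = minimize(ise, x0=[1.0, 1.0], method="Nelder-Mead")
print("Kp, Ki =", res.x.round(3), " ISE =", round(res.fun, 4))
```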
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false General policy considerations, purpose and scope of rules relating to open Commission meetings. 147.1 Section 147.1 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION OPEN COMMISSION MEETINGS § 147.1 General policy considerations, purpose and scope of rules...
Generalized Differential Calculus and Applications to Optimization
NASA Astrophysics Data System (ADS)
Rector, Robert Blake Hayden
This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.
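For orientation, the basic object being generalized is the convex subdifferential, defined for a convex function f by (a standard definition, not specific to this thesis):

$$ \partial f(\bar{x}) \;=\; \bigl\{\, v \;:\; f(x) \,\ge\, f(\bar{x}) + \langle v,\, x - \bar{x} \rangle \ \ \text{for all } x \,\bigr\}, $$

so that, for example, the subdifferential of the absolute value at 0 is the interval [-1, 1]; the thesis extends such constructions to nonsmooth settings and to coderivatives of set-valued maps in infinite dimensions.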
NASA Astrophysics Data System (ADS)
Choi, Kyungah; Suk, Hyeon-Jeong
2015-01-01
The purpose of this study is to investigate the differences in the psychophysical judgment of mobile display color appearance between Europeans and Asians. A total of 50 participants, comprising 20 Europeans (9 French, 6 Swedish, 3 Norwegians, and 2 Germans) and 30 Asians (30 Koreans), participated in this experiment. A total of 18 display stimuli with different correlated color temperatures were presented, varying from 2,470 to 18,330 K. Each stimulus was viewed under 11 illuminants ranging from 2,530 to 19,760 K, while illuminance was held constant at around 500 lux. The subjects were asked to assess the optimal level of the display stimuli under the different illuminants. In general, confirming previous studies on color reproduction, we found a positive correlation between the correlated color temperatures of the illuminants and those of the optimal displays. However, Europeans preferred a lower color temperature than Asians along the entire range of the illuminants. Two regression equations were derived to predict the optimal display color temperature (y) under varying illuminants (x) as follows: y = α + β*log(x), where α = -8770.37 and β = 4279.29 for Europeans (R2 = 0.95, p < .05), and α = -16076.35 and β = 6388.41 for Asians (R2 = 0.85, p < .05). The findings provide a theoretical basis from which manufacturers can take a culturally sensitive approach to enhancing their products' appeal in global markets.
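The two reported fits are straightforward to evaluate; the snippet below assumes a base-10 logarithm, a choice that yields predictions within the studied illuminant range, though the paper's log base is an assumption here.

```python
# Sketch: evaluating the paper's two regression fits for the optimal display
# correlated color temperature y (K) under an illuminant of x (K).
# Base-10 log is assumed; the paper does not state the base in this excerpt.
import math

def optimal_cct(x_K, group):
    coef = {"european": (-8770.37, 4279.29),
            "asian":   (-16076.35, 6388.41)}
    a, b = coef[group]
    return a + b * math.log10(x_K)

for x in (2530, 6500, 19760):
    print(x, round(optimal_cct(x, "european")), round(optimal_cct(x, "asian")))
```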
A multiobjective optimization framework for multicontaminant industrial water network design.
Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge
2011-07-01
The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonistic objectives are considered: F(1), the freshwater flow-rate at the network entrance; F(2), the water flow-rate at the inlet of regeneration units; and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by the Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, and then an innovative strategy based on the global equivalent cost (GEC) in freshwater, which turns out to be more efficient for choosing a good network from a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
Efficient retrieval of landscape Hessian: Forced optimal covariance adaptive learning
NASA Astrophysics Data System (ADS)
Shir, Ofer M.; Roslund, Jonathan; Whitley, Darrell; Rabitz, Herschel
2014-06-01
Knowledge of the Hessian matrix at the landscape optimum of a controlled physical observable offers valuable information about the system robustness to control noise. The Hessian can also assist in physical landscape characterization, which is of particular interest in quantum system control experiments. The recently developed landscape theoretical analysis motivated the compilation of an automated method to learn the Hessian matrix about the global optimum without derivative measurements from noisy data. The current study introduces the forced optimal covariance adaptive learning (FOCAL) technique for this purpose. FOCAL relies on the covariance matrix adaptation evolution strategy (CMA-ES) that exploits covariance information amongst the control variables by means of principal component analysis. The FOCAL technique is designed to operate with experimental optimization, generally involving continuous high-dimensional search landscapes (≳30 dimensions) with large Hessian condition numbers (≳10^4). This paper introduces the theoretical foundations of the inverse relationship between the covariance learned by the evolution strategy and the actual Hessian matrix of the landscape. FOCAL is presented and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and quantum control experiments, which are observed to possess nonseparable, nonquadratic search landscapes. The recovered Hessian forms were corroborated by physical knowledge of the systems. The implications of FOCAL extend beyond the investigated studies to potentially cover other physically motivated multivariate landscapes.
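The inverse relationship referred to here is, schematically, that near a locally quadratic optimum the stationary search covariance of the evolution strategy aligns with the inverse Hessian, so the Hessian is recovered up to an overall scale by inverting the learned covariance (stated here as a sketch, not the paper's full derivation):

$$ C \;\propto\; H^{-1} \qquad\Longrightarrow\qquad H \;\approx\; c\, C^{-1} \ \ \text{for some } c > 0. $$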
Clemen, Christof B; Benderoth, Günther E K; Schmidt, Andreas; Hübner, Frank; Vogl, Thomas J; Silber, Gerhard
2017-01-01
In this study, useful methods for active human skeletal muscle material parameter determination are provided. First, a straightforward approach to the implementation of a transversely isotropic hyperelastic continuum mechanical material model in an invariant formulation is presented. This procedure is found to be feasible even if the strain energy is formulated in terms of invariants other than those predetermined by the software's requirements. Next, an appropriate experimental setup for the observation of activation-dependent material behavior, corresponding data acquisition, and evaluation is given. Geometry reconstruction based on magnetic resonance imaging of different deformation states is used to generate realistic, subject-specific finite element models of the upper arm. Using the deterministic SIMPLEX optimization strategy, a convenient quasi-static passive-elastic material characterization is pursued; the results of this approach used to characterize the behavior of human biceps in vivo indicate the feasibility of the illustrated methods to identify active material parameters comprising multiple loading modes. A comparison of a contact simulation incorporating the optimized parameters to a reconstructed deformed geometry of an indented upper arm shows the validity of the obtained results regarding deformation scenarios perpendicular to the effective direction of the nonactivated biceps. However, for a valid, activatable, general-purpose material characterization, the material model needs some modifications as well as a multicriteria optimization of the force-displacement data for different loading modes. Copyright © 2016 Elsevier Ltd. All rights reserved.
CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications.
Lei, Guoqing; Dou, Yong; Wan, Wen; Xia, Fei; Li, Rongchun; Ma, Meng; Zou, Dan
2012-01-01
Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free-energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). To the best of our knowledge, no implementation combines both CPUs and extra accelerators, such as GPUs, to accelerate Zuker algorithm applications. In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperative execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for the CPU and GPU architectures. A speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a performance advantage of 16% over an optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications.
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
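In one dimension, the core SROM construction can be sketched as follows: fix a few sample locations and optimize their probabilities on the simplex so the discrete model matches the target's moments and CDF; the lognormal target, sizes, and mismatch weights are illustrative, not taken from the paper.

```python
# Sketch: a one-dimensional SROM. Fix m sample locations, then choose their
# probabilities so the discrete model matches the target's first two moments
# and a coarse CDF; downstream statistics become deterministic weighted sums.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target = lognorm(s=0.4)
x = target.ppf(np.linspace(0.05, 0.95, 7))       # m = 7 fixed sample locations

def mismatch(p):
    m1 = p @ x - target.mean()                   # first-moment error
    m2 = p @ x**2 - (target.var() + target.mean()**2)   # second-moment error
    cdf = np.cumsum(p) - target.cdf(x)           # coarse CDF error at samples
    return m1**2 + m2**2 + (cdf**2).sum()

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]   # probabilities sum to 1
res = minimize(mismatch, np.full(7, 1 / 7), method="SLSQP",
               bounds=[(0.0, 1.0)] * 7, constraints=cons)
p = res.x
print("SROM mean %.3f vs target %.3f" % (p @ x, target.mean()))
```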
NASA Astrophysics Data System (ADS)
Fang, Bao-Long; Yang, Zhen; Ye, Liu
2009-05-01
We propose a scheme for implementing a partial general quantum cloning machine with superconducting quantum-interference devices coupled to a nonresonant cavity. By regulating the time parameters, our system can perform optimal symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, and optimal symmetric economical phase-covariant cloning. In the scheme the cavity is only virtually excited, thus, the cavity decay is suppressed during the cloning operations.
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
34 CFR 303.1 - Purpose of the early intervention program for infants and toddlers with disabilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Purpose of the early intervention program for infants... EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS WITH DISABILITIES General Purpose, Eligibility, and Other General Provisions § 303.1 Purpose of the early intervention program for infants and...
34 CFR 303.1 - Purpose of the early intervention program for infants and toddlers with disabilities.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 2 2011-07-01 2010-07-01 true Purpose of the early intervention program for infants... EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS WITH DISABILITIES General Purpose, Eligibility, and Other General Provisions § 303.1 Purpose of the early intervention program for infants and...
Lacome, Mathieu; Piscione, Julien; Hager, Jean-Philippe; Carling, Christopher
2016-09-01
To investigate the patterns and performance of substitutions in 18 international 15-a-side men's rugby union matches. A semiautomatic computerized time-motion system compiled 750 performance observations for 375 players (422 forwards, 328 backs). Running and technical-performance measures included total distance run, high-intensity running (>18.0 km/h), number of individual ball possessions and passes, percentage of passes completed, and number of attempted and percentage of successful tackles. A total of 184 substitutions (85.2%) were made for tactical purposes and 32 (14.8%) for injury. The peak period for non-injury substitutions in backs (17.7%) occurred between 70 and 75 min, while forward substitutions peaked equally between 50-55 and 60-65 min (16.4%). Substitutes generally demonstrated improved running performance compared with both starter players who completed games and the players whom they replaced (small differences, ES -0.2 to 0.5) in both forwards and backs over their entire time played. There was also a trend for better running performance in forward and back substitutes over their first 10 min of play compared with the final 10 min for replaced players (small to moderate differences, ES 0.3-0.6). Finally, running performance in both forward and back substitutes was generally lower (ES -0.1 to 0.3, unclear or small differences) over their entire second-half time played compared with their first 10 min of play. The impact of substitutes on technical performance was generally considered unclear. This information provides practitioners with practical data relating to the physical and technical contributions of substitutions that could subsequently enable optimization of their impact on match play.
Utilization of group theory in studies of molecular clusters
NASA Astrophysics Data System (ADS)
Ocak, Mahir E.
The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure, already known for direct product groups, was extended to the case of semidirect product groups. By using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, was developed. In the MBR method, calculations start with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. An optimized basis for each identical monomer is then generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since an optimized basis is used for each monomer, with a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential saving in the size of the basis required for the calculations. Application of the MBR method is illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying many-body terms and for deriving accurate potential surfaces.
Ibrahim, Shewkar E; Sayed, Tarek; Ismail, Karim
2012-11-01
Several earlier studies have noted the shortcomings of existing geometric design guides, which provide deterministic standards. In these standards the safety margin of the design output is generally unknown, and there is little knowledge of the safety implications of deviating from the standards. To mitigate these shortcomings, probabilistic geometric design has been advocated, whereby reliability analysis can be used to account for the uncertainty in the design parameters and to provide a mechanism for risk measurement to evaluate the safety impact of deviations from design standards. This paper applies reliability analysis to optimizing the safety of highway cross-sections. The paper presents an original methodology to select a suitable combination of cross-section elements with restricted sight distance so as to reduce collisions and achieve consistent risk levels. The purpose of this optimization method is to provide designers with a proactive approach to the design of cross-section elements in order to (i) minimize the risk associated with restricted sight distance, (ii) balance the risk across the two carriageways of the highway, and (iii) reduce the expected collision frequency. A case study involving nine cross-sections that are parts of two major highway developments in British Columbia, Canada, is presented. The results showed that an additional reduction in collisions can be realized by incorporating the reliability component, P(nc) (denoting the probability of non-compliance), in the optimization process. The proposed approach results in reduced and consistent risk levels for both travel directions in addition to further collision reductions. Copyright © 2012 Elsevier Ltd. All rights reserved.
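The reliability component P(nc) is, in essence, the probability that the sight distance a driver requires exceeds what the cross-section supplies, which is natural to estimate by Monte Carlo; the distributions and parameters below are illustrative assumptions, not the paper's calibration.

```python
# Sketch: Monte Carlo estimate of the probability of non-compliance P(nc),
# i.e., the chance that required sight distance exceeds supplied sight distance.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
demand = rng.lognormal(mean=np.log(120), sigma=0.25, size=n)  # required SD (m)
supply = rng.normal(loc=150, scale=15, size=n)                # available SD (m)
p_nc = np.mean(demand > supply)
print("P(nc) ~= %.4f" % p_nc)
```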
Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An
2005-11-15
Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based and alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database search for similar sequences, and (5) beta is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The SK-LD algorithm, the estimator of beta, and the simulation software are implemented in MATLAB code and are available at http://www.stat.ncku.edu.tw/tjwu
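A minimal version of a symmetric Kullback-Leibler word-based dissimilarity, smoothing k-mer frequencies with a pseudocount (the paper's exact estimator may differ), can be written as follows.

```python
# Sketch: symmetric Kullback-Leibler discrepancy between word (k-mer) frequency
# profiles of two sequences, in the spirit of SK-LD. Pseudocount smoothing keeps
# all frequencies positive; the word size k and pseudocount are assumptions.
from collections import Counter
from itertools import product
from math import log

def kmer_freqs(seq, k, pseudo=0.5):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    words = ["".join(w) for w in product("ACGT", repeat=k)]
    total = sum(counts[w] + pseudo for w in words)
    return {w: (counts[w] + pseudo) / total for w in words}

def sk_ld(seq1, seq2, k=2):
    p, q = kmer_freqs(seq1, k), kmer_freqs(seq2, k)
    # sum (p - q) log(p/q) equals KL(p||q) + KL(q||p), the symmetric form
    return sum((p[w] - q[w]) * log(p[w] / q[w]) for w in p)

print(sk_ld("ACGTACGTAGCTAGGA", "ACGTTCGTAGCTAGCA"))
```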
Structure-based design of combinatorial mutagenesis libraries
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-01-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called “Structure-based Optimization of Combinatorial Mutagenesis” (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states. PMID:25611189
Structure-based design of combinatorial mutagenesis libraries.
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-05-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called "Structure-based Optimization of Combinatorial Mutagenesis" (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states. © 2015 The Protein Society.
Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin
2007-02-01
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
Englesbe, Michael J; Grenda, Dane R; Sullivan, June A; Derstine, Brian A; Kenney, Brooke N; Sheetz, Kyle H; Palazzolo, William C; Wang, Nicholas C; Goulson, Rebecca L; Lee, Jay S; Wang, Stewart C
2017-06-01
The Michigan Surgical Home and Optimization Program is a structured, home-based, preoperative training program targeting physical, nutritional, and psychological guidance. The purpose of this study was to determine if participation in this program was associated with reduced hospital duration of stay and health care costs. We conducted a retrospective, single center, cohort study evaluating patients who participated in the Michigan Surgical Home and Optimization Program and subsequently underwent major elective general and thoracic operative care between June 2014 and December 2015. Propensity score matching was used to match program participants to a control group who underwent operative care prior to program implementation. Primary outcome measures were hospital duration of stay and payer costs. Multivariate regression was used to determine the covariate-adjusted effect of program participation. A total of 641 patients participated in the program; 82% were actively engaged in the program, recording physical activity at least 3 times per week for the majority of the program; 182 patients were propensity matched to patients who underwent operative care prior to program implementation. Multivariate analysis demonstrated that participation in the Michigan Surgical Home and Optimization Program was associated with a 31% reduction in hospital duration of stay (P < .001) and 28% lower total costs (P < .001) after adjusting for covariates. A home-based, preoperative training program decreased hospital duration of stay, lowered costs of care, and was well accepted by patients. Further efforts will focus on broader implementation and linking participation to postoperative complications and rigorous patient-reported outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
Xu, NeiLi; Zhao, Shuai; Xue, HongXia; Fu, WenYi; Liu, Li; Zhang, TianQi; Huang, Rui; Zhang, Ning
2017-01-01
Objective This study aimed to assess the association between perceived social support (PSS) and fatigue and the roles of hope, optimism, general self-efficacy and resilience as mediators or moderators of the PSS-fatigue association among Rheumatoid Arthritis (RA) patients in China. Methods A multi-center, cross-sectional study was conducted with inpatients diagnosed with RA in northeast China, in which 305 eligible inpatients were enrolled. The Multidimensional Fatigue Inventory, Multidimensional Scale of Perceived Social Support, Herth Hope Index, Life Orientation Test Revised, General Self-Efficacy Scale and Ego-Resiliency Scale were completed. The associations of PSS, hope, optimism, general self-efficacy and resilience with fatigue and the moderating roles of these positive psychological constructs were tested by hierarchical linear regression. Asymptotic and resampling strategies were utilized to assess the mediating roles of hope, optimism, general self-efficacy and resilience. Results The mean score of the MFI was 57.88 (SD = 9.50). PSS, hope, optimism and resilience were negatively associated with RA-related fatigue, whereas DAS28-CRP was positively associated. Only resilience positively moderated the PSS-fatigue association (B = 0.03, β = 0.13, P<0.01). Hope, optimism and resilience may act as partial mediators in the association between PSS and fatigue symptoms (hope: a*b = -0.16, BCa 95%CI: -0.27, -0.03; optimism: a*b = -0.20, BCa 95%CI: -0.30, -0.10; resilience: a*b = -0.12, BCa 95%CI: -0.21, -0.04). Conclusions Fatigue is a severe symptom among RA patients. Resilience may positively moderate the PSS-fatigue association. Hope, optimism and resilience may act as partial mediators in the association. PSS, hope, optimism and resilience may contribute as effective resources to alleviate fatigue, among which PSS probably has the greatest effect. PMID:28291837
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidtlein, CR; Beattie, B; Humm, J
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce the artificial piece-wise constant regions ("staircase" artifacts, typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1 hour post-injection, for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with a 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA, and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
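The penalty in question can be written as a weighted ℓ1 sum of finite-difference gradients of orders 1 through 4. A minimal sketch of evaluating such a penalty on a 2-D image follows (anisotropic differences; the weights are illustrative hyper-parameters, not the values used in the paper):

    import numpy as np

    def higher_order_tv(img, weights=(1.0, 0.5, 0.25, 0.125)):
        """l1-norm sum of 1st- through 4th-order finite differences of a 2-D image."""
        penalty = 0.0
        for order, w in enumerate(weights, start=1):
            dx = np.diff(img, n=order, axis=0)    # order-th difference along rows
            dy = np.diff(img, n=order, axis=1)    # order-th difference along columns
            penalty += w * (np.abs(dx).sum() + np.abs(dy).sum())
        return penalty

The higher-order terms let smooth intensity ramps carry a low penalty, which is what suppresses the piece-wise constant "staircase" look of pure first-order TV.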
DOT National Transportation Integrated Search
2012-11-01
The purpose of this project is to develop for the Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...
Optimizing Conferencing Freeware
ERIC Educational Resources Information Center
Baggaley, Jon; Klaas, Jim; Wark, Norine; Depow, Jim
2005-01-01
The increasing range of options provided by two popular conferencing freeware products, "Yahoo Messenger" and "MSN Messenger," are discussed. Each tool contains features designed primarily for entertainment purposes, which can be customized for use in online education. This report provides suggestions for optimizing the educational potential of…
DOT National Transportation Integrated Search
2012-06-01
The purpose of this project is to develop for the Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...
Satisfaction with the local service point for care: results of an evaluation study
Esslinger, Adelheid Susanne; Macco, Katrin; Schmidt, Katharina
2009-01-01
Purpose The market for care services is growing and increasingly complex. Service points such as the 'Zentrale Anlaufstelle Pflege (ZAPf)' in Nuremberg therefore help clients find orientation. The purpose of this presentation is to report the results of an evaluation study of clients' satisfaction with the offers of the ZAPf. Study Satisfaction with a service may be measured with the SERVQUAL concept introduced by Parasuraman et al. (1988), which identifies five dimensions of quality (tangibles, reliability, responsiveness, assurance and empathy); we adopted these dimensions in our study. The study focuses on the quality of the service and the benefits recognized by clients. In spring 2007, we conducted 67 telephone interviews based on a semi-standardized questionnaire. Statistical analysis was conducted using SPSS. Results Clients want information about care in general, financial and legal aspects, alternative care arrangements (e.g. ambulatory or long-term care), and typical age-related diseases. They show high satisfaction with the service provided. The benefits they report are obtaining information and advice, strengthened decision-making, support in coping with changing life situations, and the development of solutions. Conclusions The results show that the quality of the service is at a high level. Critical success factors are interdisciplinary cooperation at the service point, based on a regular and open exchange of information, with every member focusing on an optimal individual solution for the client. Local professional service points act as networkers and brokers: they serve not only clients' needs but also support the effective and efficient provision of optimized care.
Ketenoğlu, Onur; Erdoğdu, Ferruh; Tekin, Aziz
2018-01-01
Oleic acid is a commercially valuable compound with many positive health effects. Determining optimum conditions for a physical separation process is industrially significant owing to environmental and health-related concerns: molecular distillation avoids the use of chemicals and the adverse effects of high temperatures. The objective of this study was to determine the molecular distillation conditions that increase the purity and distillation yield of oleic acid in a model fatty acid mixture. For this purpose, a short-path evaporator column was used. The evaporation temperature ranged from 110 to 190℃, and the absolute pressure from 0.05 to 5 mmHg. Results showed that raising the temperature generally increased distillation yield up to a maximum evaporation temperature. Vacuum level also affected the yield at a given temperature, and the amount of distillate increased at higher vacuum, except at 190℃. A multi-objective optimization procedure was then used to simultaneously maximize both the yield and the oleic acid content of the distillate, giving an optimum point of 177.36℃ and 0.051 mmHg. The results also demonstrated that evaporation of oleic acid was suppressed by a secondary dominant fatty acid of olive oil, palmitic acid, which evaporates more easily than oleic acid at lower temperatures; increasing the temperature transferred more oleic acid to the distillate. At 110℃ and 0.05 mmHg, the oleic and palmitic acid concentrations in the distillate were 63.67% and 24.32%, respectively. The outcomes of this study are expected to be useful for setting industrial process conditions.
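For readers wanting to reproduce the multi-objective step, the essential operation is extracting the non-dominated (Pareto) set from a grid of candidate conditions and then choosing a compromise point on it. A minimal sketch follows; the response surfaces here are placeholders, not the fitted yield/purity models from the study.

    import numpy as np

    def pareto_indices(objs):
        """Indices of non-dominated rows of objs, maximizing every column."""
        keep = []
        for i, p in enumerate(objs):
            dominated = any(np.all(q >= p) and np.any(q > p) for q in objs)
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical smooth surfaces standing in for the fitted models (T in degC, P in mmHg).
    T, P = np.meshgrid(np.linspace(110, 190, 20), np.linspace(0.05, 5, 20))
    yield_ = (1 - np.exp(-(T - 100) / 60.0)) / (1 + 0.2 * P)   # placeholder yield model
    purity = 0.5 + 0.3 * np.exp(-((T - 177) / 40.0) ** 2)      # placeholder purity model
    objs = np.column_stack([yield_.ravel(), purity.ravel()])
    front = pareto_indices(objs)   # candidate (T, P) settings worth considering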
NASA Astrophysics Data System (ADS)
Micheli, Davide; Pastore, Roberto; Delfini, Andrea; Giusti, Alfonso; Vricella, Antonio; Santoni, Fabio; Marchetti, Mario; Tolochko, Oleg; Vasilyeva, Ekaterina
2017-05-01
In this work the electromagnetic characterization of composite materials reinforced with carbon and metallic nanoparticles is presented. In particular, the electric permittivity and the magnetic permeability as a function of the frequency are used to evaluate the electromagnetic absorption capability of the nanocomposites. The aim is the study of possible applications in advanced coating able to tune the electromagnetic reflectivity of satellite surfaces in specific frequency ranges, in a special way for those surfaces that for some reason could be exposed to the antenna radiation pattern. In fact, the interference caused by the spurious electromagnetic multipath due to good electric conductive satellite surface components could in turn affect the main radiation lobe of TLC and Telemetry antennas, thus modifying its main propagation directions and finally increasing the microwave channel pathloss. The work reports the analysis of different nanostructured materials in the 2-10 GHz frequency range. The employed nanopowders are of carbon nanotubes, cobalt, argent, titanium, nickel, zinc, copper, iron, boron, bismuth, hafnium, in different weight percentages versus the hosting polymeric matrix. The materials are classified as a function of their electromagnetic losses capability by taking into account of both electric and magnetic properties. The possibility to design multi-layered structures optimized to provide specific microwave response is finally analyzed by the aid of swam intelligence algorithm. This novel technique is in general interesting for metrological purpose and remote sensing purposes, and can be effectively used in aerospace field for frequency selective materials design, in order to reduce the aircraft/spacecraft radar observability at certain frequencies.
Novikov Engine with Fluctuating Heat Bath Temperature
NASA Astrophysics Data System (ADS)
Schwalbe, Karsten; Hoffmann, Karl Heinz
2018-04-01
The Novikov engine is a model for heat engines that takes the irreversible character of heat fluxes into account. Using this model, the maximum power output as well as the corresponding efficiency of the heat engine can be deduced, leading to the well-known Curzon-Ahlborn efficiency. The classical model assumes constant heat bath temperatures, which is not a reasonable assumption in the case of fluctuating heat sources. Therefore, in this article the influence of stochastic fluctuations of the hot heat bath's temperature on the optimal performance measures is investigated, by considering a Novikov engine with fluctuating heat bath temperature. In doing so, a generalization of the Curzon-Ahlborn efficiency is found. The results can help to quantify how the distribution of a fluctuating quantity affects the performance measures of power plants.
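For reference, the classical result being generalized here is the efficiency at maximum power for fixed hot- and cold-bath temperatures T_h and T_c:

    \[ \eta_{CA} \;=\; 1 - \sqrt{\frac{T_c}{T_h}} \]

The fluctuating-bath analysis reduces to this expression when the temperature fluctuations vanish; how the distribution of T_h modifies it is the subject of the article.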
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio
2017-01-01
Grid-based perception techniques in the automotive sector, based on fusing information from different sensors to obtain a robust perception of the environment, are proliferating in industry. One of their main drawbacks, however, is a computational cost that has traditionally been prohibitive for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter: one for a General-Purpose Graphics Processing Unit (GPGPU) and the other for a Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137
[Role of complementary medicine in type 1 diabetes mellitus in two Swiss centres].
Scheidegger, U A; Flück, C E; Scheidegger, K; Diem, P; Mullis, P E
2009-09-09
Insulin replacement is the only effective treatment for type 1 diabetes mellitus (T1DM). Nevertheless, many complementary treatments are in use for T1DM. In this questionnaire-based study we found that, of 342 patients with T1DM, 48 (14%; 13.4% adult, 18.5% paediatric; 20 male, 28 female) used complementary medicine (CM) in addition to their insulin therapy. The purposes of CM use were to improve general well-being, ameliorate glucose homeostasis, reduce blood glucose levels as well as insulin doses, improve physical fitness, reduce the frequency of hypoglycaemia, and control appetite. The modalities most frequently used were cinnamon, homeopathy, magnesium and special beverages (mainly teas). Good collaboration between health care professionals will thus allow optimal patient care.
NASA Technical Reports Server (NTRS)
Peoples, J. A.
1975-01-01
Results are reported which were obtained from a mathematical model of a generalized piston steam engine configuration employing the uniflow principle. The model accounted for the effects of clearance volume, compression work, and release volume. A simple solution is presented which characterizes optimum performance of the steam engine in terms of miles per gallon. Development of the mathematical model is presented, along with the relationship between efficiency and miles per gallon. An approach to steam car analysis and design is presented which proceeds by purpose rather than by hopeful trial. A practical engine design is proposed which corresponds to the engine type defined; this engine integrates several system components into the engine structure. All conclusions relate to the classical Rankine cycle.
Collective intelligence for control of distributed dynamical systems
NASA Astrophysics Data System (ADS)
Wolpert, D. H.; Wheeler, K. R.; Tumer, K.
2000-03-01
We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, The American Economic Review, 84 (1994) 406; D. Challet and Y. C. Zhang, Physica A, 256 (1998) 514). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not "work at cross purposes", in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
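A minimal simulation of the minority game makes the "cross purposes" point concrete: each agent keeps a small set of lookup-table strategies over the recent outcome history and plays whichever is currently best-scoring, and the global "energy" is how far attendance deviates from an even split. A sketch, with illustrative parameter values:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, S, T = 101, 3, 2, 2000          # agents, memory bits, strategies per agent, steps
    P = 2 ** M                            # number of distinct histories
    strategies = rng.choice([-1, 1], size=(N, S, P))   # each strategy: history -> action
    scores = np.zeros((N, S))             # virtual points per strategy
    history = int(rng.integers(0, P))     # encoded string of recent winning sides
    attendance = []
    for t in range(T):
        best = scores.argmax(axis=1)                      # each agent plays its best strategy
        actions = strategies[np.arange(N), best, history]
        A = int(actions.sum())                            # net attendance (N odd, so A != 0)
        attendance.append(A)
        winner = -np.sign(A)                              # minority side wins
        scores += strategies[:, :, history] == winner     # reward strategies that chose it
        history = ((history << 1) | (winner > 0)) % P     # shift outcome into the history
    print("volatility sigma^2 / N:", np.var(attendance) / N)

Configurations whose volatility falls below the random-choice baseline are exactly the "not working at cross purposes" regime the theory addresses.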
Tempest: Accelerated MS/MS database search software for heterogeneous computing platforms
Adamo, Mark E.; Gerber, Scott A.
2017-01-01
MS/MS database search algorithms derive a set of candidate peptide sequences from in-silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU generates peptide candidates that are asynchronously sent to a discrete GPU to be scored against experimental spectra in parallel (Milloy et al., 2012). The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. PMID:27603022
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
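The core of such a dictionary-based tagger is the token model: synonyms are normalized into token tuples and matched longest-first against the tokenized text. A minimal sketch follows; ProMiner's actual normalization and curation rules are much richer, and the example entry is illustrative only.

    import re

    def tokenize(text):
        return tuple(re.findall(r"[a-z0-9]+", text.lower()))

    def build_index(synonyms):
        """synonyms: {entry_id: [name strings]} -> index keyed by first token."""
        index = {}
        for pid, names in synonyms.items():
            for name in names:
                toks = tokenize(name)
                index.setdefault(toks[0], []).append((toks, pid))
        for v in index.values():
            v.sort(key=lambda t: -len(t[0]))   # try longer synonyms first
        return index

    def find_names(text, index):
        toks = tokenize(text)
        hits, i = [], 0
        while i < len(toks):
            for cand, pid in index.get(toks[i], []):
                if toks[i:i + len(cand)] == cand:
                    hits.append((pid, " ".join(cand)))
                    i += len(cand) - 1
                    break
            i += 1
        return hits

Because the tokenizer splits on punctuation, a query such as find_names("levels of beta-catenin rose", build_index({"ID1": ["beta catenin"]})) still matches the hyphenated surface form.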
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: (1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; (2) geophysical inversion routines which can be used to characterize physical systems; and (3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems that maximize the performance of scientific and engineering codes. Using three case studies (a lattice-Boltzmann method, a non-negative least-squares inversion, and a finite-difference fluid flow method), it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Code of Federal Regulations, 2010 CFR
2010-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2012 CFR
2012-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2013 CFR
2013-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2014 CFR
2014-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
ERIC Educational Resources Information Center
Kapur, Nitin A.; Windish, Donna M.
2011-01-01
Contradictory data exist regarding optimal methods and instruments for intimate partner violence (IPV) screening in primary care settings. The purpose of this study was to determine the optimal method and screening instrument for IPV among men and women in a primary-care resident clinic. We conducted a cross-sectional study at an urban, academic,…
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, use static expected return and volatility risk estimates from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality, because extreme maximum and minimum values in the data can strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
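The contrast between the two approaches can be sketched directly: the static model plugs point estimates of mean and covariance into the closed-form budget-constrained solution, while a resampling (stochastic) variant averages the optimal weights over bootstrap draws of the return history. A minimal sketch under those assumptions, not the authors' exact procedure:

    import numpy as np

    def static_mv_weights(returns, risk_aversion=3.0):
        """Closed-form Markowitz weights: max w'mu - (lambda/2) w'Sigma w s.t. sum(w) = 1."""
        mu = returns.mean(axis=0)
        inv = np.linalg.inv(np.cov(returns, rowvar=False))
        ones = np.ones_like(mu)
        gamma = (ones @ inv @ mu - risk_aversion) / (ones @ inv @ ones)  # budget multiplier
        return inv @ (mu - gamma * ones) / risk_aversion

    def resampled_mv_weights(returns, n_boot=500, seed=0, risk_aversion=3.0):
        """Average the static weights over bootstrap resamples of the return history."""
        rng = np.random.default_rng(seed)
        T = len(returns)
        draws = [static_mv_weights(returns[rng.integers(0, T, T)], risk_aversion)
                 for _ in range(n_boot)]
        return np.mean(draws, axis=0)

Averaging over resamples dilutes the influence of extreme historical observations, which is the instability the abstract attributes to the static estimates.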
Vázquez, J. L.
2010-01-01
The goal of this paper is to state the optimal decay rate for solutions of the nonlinear fast diffusion equation and, in self-similar variables, the optimal convergence rates to Barenblatt self-similar profiles and their generalizations. It relies on the identification of the optimal constants in some related Hardy–Poincaré inequalities and concludes a long series of papers devoted to generalized entropies, functional inequalities, and rates for nonlinear diffusion equations. PMID:20823259
On simple aerodynamic sensitivity derivatives for use in interdisciplinary optimization
NASA Technical Reports Server (NTRS)
Doggett, Robert V., Jr.
1991-01-01
Low-aspect-ratio and piston aerodynamic theories are reviewed as to their use in developing aerodynamic sensitivity derivatives for use in multidisciplinary optimization applications. The basic equations relating surface pressure (or lift and moment) to normal wash are given and discussed briefly for each theory. The general means for determining selected sensitivity derivatives are pointed out. In addition, some suggestions in very general terms are included as to sample problems for use in studying the process of using aerodynamic sensitivity derivatives in optimization studies.
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
USDA-ARS?s Scientific Manuscript database
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
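In outline, ant colony optimization for a schedule treats each day's irrigation depth as a discrete decision whose pheromone level biases future sampling; the framework's contribution is shrinking that decision space. A toy sketch with a placeholder objective (not the manuscript's crop model) follows:

    import numpy as np

    rng = np.random.default_rng(1)
    n_days = 10
    options = np.array([0.0, 10.0, 20.0])     # candidate irrigation depths (mm/day)
    tau = np.ones((n_days, len(options)))     # pheromone per (day, option)

    def score(plan):
        # Placeholder objective: track a 12 mm/day demand while penalizing total water use.
        return -np.abs(plan - 12.0).sum() - 0.1 * plan.sum()

    for it in range(200):
        idx_list, scores = [], []
        for ant in range(20):
            prob = tau / tau.sum(axis=1, keepdims=True)
            idx = np.array([rng.choice(len(options), p=prob[d]) for d in range(n_days)])
            idx_list.append(idx)
            scores.append(score(options[idx]))
        best = int(np.argmax(scores))
        tau *= 0.9                                      # pheromone evaporation
        tau[np.arange(n_days), idx_list[best]] += 1.0   # reinforce the best schedule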
Structural optimization by generalized, multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B. B.; Riley, M. F.
1985-01-01
The developments toward a general multilevel optimization capability and results for a three-level structural optimization are described. The method partitions a structure into a number of substructuring levels where each substructure corresponds to a subsystem in the general case of an engineering system. The method is illustrated by a portal framework that decomposes into individual beams. Each beam is a box that can be further decomposed into stiffened plates. Substructuring for this example spans three different levels: (1) the bottom level of finite elements representing the plates; (2) an intermediate level of beams treated as substructures; and (3) the top level for the assembled structure. The three-level case is now considered to be qualitatively complete.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models of F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Transformational leadership in the local police in Spain: a leader-follower distance approach.
Álvarez, Octavio; Lila, Marisol; Tomás, Inés; Castillo, Isabel
2014-01-01
Based on the transformational leadership theory (Bass, 1985), the aim of the present study was to analyze the differences in leadership styles according to the various leading ranks and the organizational follower-leader distance reported by a representative sample of 975 local police members (828 male and 147 female) from Valencian Community (Spain). Results showed differences by rank (p < .01), and by rank distance (p < .05). The general intendents showed the most optimal profile of leadership in all the variables examined (transformational-leadership behaviors, transactional-leadership behaviors, laissez-faire behaviors, satisfaction with the leader, extra effort by follower, and perceived leadership effectiveness). By contrast, the least optimal profiles were presented by intendents. Finally, the maximum distance (five ranks) generally yielded the most optimal profiles, whereas the 3-rank distance generally produced the least optimal profiles for all variables examined. Outcomes and practical implications for the workforce dimensioning are also discussed.
Educator Perceptions of the Optimal Professional Development Experience
ERIC Educational Resources Information Center
Pettet, Kent Lloyd
2013-01-01
The purpose of this quantitative study was to examine the educator's perception of the optimal professional development experience. Research studies have concluded that the biggest indicator to predict student achievement is teacher effectiveness (Aaronson, Barrow, & Sander, 2007; Marzano, 2003; Sanders & Horn, 1998; Wong 2001). Guskey…
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
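The robustness criterion underlying such a procedure can be stated compactly: optimize a statistic of the response over the noise distribution, for example mean plus k standard deviations, rather than the nominal response. A bare-bones sketch with a hypothetical process model follows; the paper additionally uses metamodels with sequential infill, which is omitted here.

    import numpy as np
    from scipy.optimize import minimize

    def process_response(x, z):
        """Hypothetical forming response for design variable x and noise variable z."""
        return (x[0] - 2.0) ** 2 + 0.5 * x[0] * z + z ** 2

    def robust_objective(x, noise_samples, k=2.0):
        vals = np.array([process_response(x, z) for z in noise_samples])
        return vals.mean() + k * vals.std()    # penalize sensitivity to the noise

    noise = np.random.default_rng(0).normal(0.0, 0.3, size=64)
    result = minimize(robust_objective, x0=[0.0], args=(noise,), method="Nelder-Mead")

The factor k controls how far the optimum is pushed away from noise-sensitive regions; in an FE setting each call to the process model would be replaced by a metamodel prediction.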
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Purpose. 10.2 Section 10.2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM General Information § 10.2 Purpose. The rules in this part establish the requirements for participation in the voluntary Commercial...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY MARITIME SECURITY MARITIME SECURITY: GENERAL General § 101.100 Purpose. (a) The purpose of this subchapter is: (1) To implement portions of the maritime security regime required by the Maritime Transportation Security Act of 2002, as...
77 FR 77043 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... MK-84 2000 lb General Purpose Bombs; 1,725 MK-82 500 lb General Purpose Bombs; 1,725 BLU-109 Bombs; 3,450 GBU-39 Small Diameter Bombs; 11,500 FMU-139 Fuses; 11,500 FMU-143 Fuses; and 11,500 FMU-152 Fuses... and 1,725 KMU-572 (GBU-38) for MK-82 warheads); 3,450 MK-84 2000 lb General Purpose Bombs; 1,725 MK-82...
Optimization of the RF cavity heat load and trip rates for CEBAF at 12 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, He; Roblin, Yves R.; Freyberger, Arne P.
2017-05-01
The Continuous Electron Beam Accelerator Facility at JLab has 200 RF cavities in each of the north and south linacs after the 12 GeV upgrade. The purpose of this work is to simultaneously optimize the heat load and the trip rate for the cavities, and to reconstruct the Pareto-optimal front in a timely manner when some of the cavities are turned down. By choosing an efficient optimizer and strategically creating the initial gradients, the Pareto-optimal front for up to 15 cavities down can be re-established within 20 seconds.
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
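The primal-dual iteration for min_x F(Kx) + G(x) alternates proximal steps on the dual and primal variables with an over-relaxation step. A minimal instance for 1-D TV denoising (G the quadratic data term, F the l1 norm of the forward difference) is sketched below; step sizes satisfy sigma*tau*||K||^2 <= 1. This is one simple instance, not the CT problems derived in the article.

    import numpy as np

    def K(x):                     # forward difference operator
        return x[1:] - x[:-1]

    def KT(y):                    # its adjoint (a negative divergence)
        out = np.zeros(len(y) + 1)
        out[:-1] -= y
        out[1:]  += y
        return out

    def cp_tv_denoise(b, lam=1.0, n_iter=300, sigma=0.45, tau=0.45, theta=1.0):
        """Chambolle-Pock for min_x 0.5*||x - b||^2 + lam*||Kx||_1."""
        x, xbar = b.copy(), b.copy()
        y = np.zeros(len(b) - 1)
        for _ in range(n_iter):
            y = np.clip(y + sigma * K(xbar), -lam, lam)       # prox of F*: l-inf ball
            x_new = (x - tau * KT(y) + tau * b) / (1 + tau)   # prox of the data term G
            xbar = x_new + theta * (x_new - x)                # over-relaxation
            x = x_new
        return x

Prototyping a different reconstruction problem amounts to swapping K and the two proximal maps, which is exactly the flexibility the article exploits.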
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800 GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of both a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In a simulation with 1600 time steps, the speedup of the parallel computation relative to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computation in biological modelling and simulation studies.
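The load-prediction idea in setup (c) can be sketched device-agnostically: measure each device's recent throughput and hand out the next batch in proportion, smoothing the estimate over time. A simplified serial sketch follows (the paper's implementation dispatches CPU and GPU work concurrently; the worker callables here are placeholders):

    import time

    def predictive_schedule(workers, total_items, steps=10):
        """workers: {name: callable(n)} running n items; split work by predicted throughput."""
        speed = {name: 1.0 for name in workers}       # initial guess: equal throughput
        per_step = total_items // steps
        for _ in range(steps):
            total_speed = sum(speed.values())
            for name, run in workers.items():
                n = max(1, round(per_step * speed[name] / total_speed))
                t0 = time.perf_counter()
                run(n)                                # a real scheduler launches devices concurrently
                dt = max(time.perf_counter() - t0, 1e-9)
                speed[name] = 0.5 * speed[name] + 0.5 * (n / dt)   # smoothed throughput estimate
        return speed

    # Toy demo: a "gpu" five times faster than a "cpu".
    workers = {"cpu": lambda n: time.sleep(0.005 * n), "gpu": lambda n: time.sleep(0.001 * n)}
    print(predictive_schedule(workers, total_items=2000))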
Graph SLAM correction for single scanner MLS forest data under boreal forest canopy
NASA Astrophysics Data System (ADS)
Kukko, Antero; Kaijaluoto, Risto; Kaartinen, Harri; Lehtola, Ville V.; Jaakkola, Anttoni; Hyyppä, Juha
2017-10-01
Mobile laser scanning (MLS) provides a kinematic means of collecting three-dimensional data of the surroundings for various mapping and environmental analysis purposes. Vehicle-based MLS has been used for road and urban asset surveys for about a decade. The equipment used to derive the trajectory information for point cloud generation from the laser data is almost without exception based on the GNSS-IMU (Global Navigation Satellite System - Inertial Measurement Unit) technique: GNSS maintains global accuracy, while the IMU produces the attitude information needed to orient the laser scanning and imaging sensor data. However, there are known challenges in maintaining accurate positioning when the GNSS signal is weak or absent over long periods of time. The duration of the signal loss determines the severity of the degradation of the positioning solution, depending on the quality of the IMU in use. The situation could be improved to a certain extent with higher-performance IMUs, but the increased system expense makes such an approach unsustainable in general. Another way to tackle the problem is to attach additional sensors that observe features in the environment and resolve short-term system movements accurately enough to keep the IMU solution from drifting; this, however, results in more complex system integration, with more calibration and synchronization of multiple sensors required for an operational approach. In this paper we study the operation of an ATV (all-terrain vehicle) mounted, GNSS-IMU based single-scanner MLS system in boreal forest conditions. The data generated by the RoamerR2 system are targeted at generating 3D terrain and tree maps for optimizing harvester operations and for forest inventory at the individual tree level. We investigate a process flow and propose a graph-optimization-based method that uses data from a single-scanner MLS to correct the post-processed GNSS-IMU trajectory for positional drift under mature boreal forest canopy. The results show that the internal conformity of the data improves significantly, from 0.7 m to 1 cm, based on tree stem feature locations. When the optimization result is compared to a plot-level reference, the mean error in absolute tree stem locations is reduced to 6 cm. The approach generalizes to any MLS point cloud data and is thus a remarkable contribution toward harnessing MLS for practical forestry and high-precision terrain and structural modeling in GNSS-obstructed environments.
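The correction step is a pose-graph least-squares problem: odometry (GNSS-IMU) constraints chain consecutive poses, and re-observed stem features add loop-closure constraints that pin down the drift. A 1-D toy version with scipy follows; real systems optimize full 6-DOF poses with robust kernels, and all numbers here are illustrative.

    import numpy as np
    from scipy.optimize import least_squares

    # Constraints (i, j, z, w): pose j minus pose i should equal z, with weight w.
    odometry = [(i, i + 1, 1.1, 1.0) for i in range(5)]   # drifting steps (truth = 1.0)
    loop_closure = [(0, 5, 5.0, 10.0)]                    # same stem observed at both ends
    constraints = odometry + loop_closure

    def residuals(x_free, constraints):
        x = np.concatenate([[0.0], x_free])               # anchor the first pose (gauge fix)
        return [w * ((x[j] - x[i]) - z) for i, j, z, w in constraints]

    x0 = np.cumsum([1.1] * 5)                             # initialize from raw odometry
    sol = least_squares(residuals, x0, args=(constraints,))
    # sol.x distributes the loop-closure correction along the drifted trajectory.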
Code of Federal Regulations, 2012 CFR
2012-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...