Innovating Method of Existing Mechanical Product Based on TRIZ Theory
NASA Astrophysics Data System (ADS)
Zhao, Cunyou; Shi, Dongyan; Wu, Han
The main routes of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for innovating products are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding for engineering problems, and preliminary design; this establishes the basis for the innovative design of existing products.
NASA Astrophysics Data System (ADS)
Barlow, Steven J.
1986-09-01
The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
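The systematic scan at the heart of the method can be sketched as a grid search over the foil parameters. The figure-of-merit function, parameter names, and grids below are hypothetical placeholders; the actual method evaluates each configuration with a full Monte Carlo transport simulation.

```python
import itertools

def beam_uniformity(t1, t2):
    # Placeholder figure of merit for foil thicknesses t1, t2 (mm).
    # A real implementation would run a Monte Carlo transport code here.
    return (t1 - 0.3) ** 2 + (t2 - 1.1) ** 2  # best at t1=0.3, t2=1.1

def scan(t1_values, t2_values):
    # Systematic scan of system performance as a function of the foil
    # parameters, returning the best configuration found.
    return min(itertools.product(t1_values, t2_values),
               key=lambda p: beam_uniformity(*p))

best = scan([0.1, 0.2, 0.3, 0.4], [0.9, 1.0, 1.1, 1.2])
print(best)  # -> (0.3, 1.1)
```

Because each evaluation is independent, such a scan parallelizes trivially, which is what makes the computationally intensive Monte Carlo approach practical.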
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program together with the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that the calculations can be carried out outside existing finite element codes, using postprocessing data only; that is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the numerical-accuracy uncertainty associated with selecting a finite difference perturbation.
A basic guide to overlay design using nondestructive testing equipment data
NASA Astrophysics Data System (ADS)
Turner, Vernon R.
1990-08-01
The purpose of this paper is to provide a basic and concise guide to designing asphalt concrete (AC) overlays over existing AC pavements. The basis for these designs is deflection data obtained from nondestructive testing (NDT) equipment. These data are used in design procedures that produce the required overlay thickness or an estimate of remaining pavement life. This guide enables one to design overlays or to better monitor designs performed by others. The paper discusses three types of NDT equipment; the Asphalt Institute overlay design procedures based on deflection analysis and on the effective thickness method; a method of estimating remaining pavement life; and correlations between NDT equipment, including recent correlations in Washington State. Asphalt overlays provide one of the most cost-effective methods of improving existing pavements: they can be used to strengthen existing pavements, reduce maintenance costs, increase pavement life, provide a smoother ride, and improve skid resistance.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Methodological Issues in Questionnaire Design.
Song, Youngshin; Son, Youn Jung; Oh, Doonam
2015-06-01
The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed. Moreover, the triangulation method in questionnaire development was introduced. Steps were recommended for designing questions, such as appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire and performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which makes it possible to achieve optimum seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for upgrading existing seismically deficient bridges and for designing new isolated bridges.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures suffer from low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
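The α-percentile feasibility check described above can be illustrated with a brute-force Monte Carlo estimate: a probabilistic constraint holds if the α-percentile of the performance function is still positive. The limit-state function and distribution below are hypothetical; the paper's inverse reliability strategy locates the same percentile by a search rather than by sampling.

```python
import random

def percentile_performance(g, sample_x, alpha, n=20000, seed=1):
    # Estimate the alpha-percentile of the performance g(X) by Monte Carlo
    # sampling (a costly stand-in for the paper's inverse-reliability search).
    rng = random.Random(seed)
    values = sorted(g(sample_x(rng)) for _ in range(n))
    return values[int(alpha * (n - 1))]

# Hypothetical limit state: g > 0 means the probabilistic constraint holds.
g = lambda x: 3.0 - x                       # capacity 3.0 minus demand x
sample_x = lambda rng: rng.gauss(0.0, 1.0)  # demand ~ N(0, 1)

g_alpha = percentile_performance(g, sample_x, alpha=0.05)
# For this toy model the exact 5th percentile is 3 - 1.645 = 1.355,
# which is positive, so the constraint is judged feasible.
```

Performing this check once per constraint per design point is what makes sampling-based RBDO expensive, and why the search-based strategy in the paper pays off.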
Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design
2017-09-01
designs, it would be worth researching the feasibility of varying the launch method on some of the larger light aircraft carriers, such as the Liaoning… This thesis examines the trade space in major design areas such as tonnage, aircraft launch method, propulsion, and performance in order to illustrate… future conflict.
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.
The potential of genetic algorithms for conceptual design of rotor systems
NASA Technical Reports Server (NTRS)
Crossley, William A.; Wells, Valana L.; Laananen, David H.
1993-01-01
The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
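Fully stressed design, one of the techniques shown to fit the equilibrium-programming framework, is a fixed-point iteration: each member's area is rescaled so its stress approaches the allowable. The two-bar load-sharing model below is a hypothetical stand-in for a finite element analysis callback.

```python
def fsd_step(areas, member_forces, sigma_allow):
    # One fully stressed design resizing pass: A_new = A * sigma / sigma_allow,
    # i.e. each member is scaled toward carrying exactly the allowable stress.
    forces = member_forces(areas)
    return [a * (f / a) / sigma_allow for a, f in zip(areas, forces)]

# Hypothetical statically indeterminate example: two parallel bars of equal
# length share a load P in proportion to their stiffness (here, their areas),
# so the member forces change as the sizing changes.
P = 1000.0
member_forces = lambda A: [P * A[0] / (A[0] + A[1]),
                           P * A[1] / (A[0] + A[1])]

areas = [2.0, 1.0]
for _ in range(20):
    areas = fsd_step(areas, member_forces, sigma_allow=100.0)
# At the fixed point every member carries the allowable stress of 100.0
```

Each discipline (member) selfishly optimizing its own variable while the others are held fixed is exactly the noncooperative-game reading the paper gives to such schemes.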
Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2011-01-01
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069
What Informs Practice and What Is Valued in Corporate Instructional Design? A Mixed Methods Study
ERIC Educational Resources Information Center
Thompson-Sellers, Ingrid N.
2012-01-01
This study used a two-phased explanatory mixed-methods design to explore in-depth what factors are perceived by Instructional Design and Technology (IDT) professionals as impacting instructional design practice, how these factors are valued in the field, and what differences in perspectives exist between IDT managers and non-managers. For phase 1…
Minimum stiffness criteria for ring frame stiffeners of space launch vehicles
NASA Astrophysics Data System (ADS)
Friedrich, Linus; Schröder, Kai-Uwe
2016-12-01
Frame stringer-stiffened shell structures offer high load-carrying capacity at low structural mass and are therefore frequently used as primary structures in aerospace applications. Owing to the great number of design variables, deriving suitable stiffening configurations is a demanding task and needs to be realized using efficient analysis methods. The structural design of ring frame stringer-stiffened shells can be subdivided into two steps: first, the design of a shell section between two ring frames; second, the structural design of the ring frames such that a general instability mode is avoided. For sizing stringer-stiffened shell sections, several methods were recently developed, but existing ring frame sizing methods are mainly based on empirical relations or on smeared models. These methods do not necessarily lead to reliable designs, and in some cases the lightweight design potential of stiffened shell structures thus cannot be exploited. In this paper, the explicit physical behaviour of ring frame stiffeners of space launch vehicles at the onset of panel instability is described using mechanical substitute models. Ring frame stiffeners of a stiffened shell structure are sized applying both existing methods and the method suggested in this paper. To verify the suggested method and to demonstrate its potential, geometrically non-linear finite element analyses are performed using detailed finite element models.
Analytical Method to Evaluate Failure Potential During High-Risk Component Development
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)
2001-01-01
Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects Analysis (FMEA)-type methods, as well as prior knowledge and experience, to determine the potential failure modes a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information are expanded upon to help designers extrapolate based on similarity with existing products and the potential design tradeoffs. This paper makes use of the similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help design new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
Design optimization of piezoresistive cantilevers for force sensing in air and water
Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.
2009-01-01
Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material which can be modeled in a similar fashion or extended to other microelectromechanical systems. PMID:19865512
A Goal Oriented Approach for Modeling and Analyzing Security Trade-Offs
NASA Astrophysics Data System (ADS)
Elahi, Golnaz; Yu, Eric
In designing software systems, security is typically only one design objective among many. It may compete with other objectives such as functionality, usability, and performance. Too often, security mechanisms such as firewalls, access control, or encryption are adopted without explicit recognition of competing design objectives and their origins in stakeholder interests. Recently, there is increasing acknowledgement that security is ultimately about trade-offs. One can only aim for "good enough" security, given the competing demands from many parties. In this paper, we examine how conceptual modeling can provide explicit and systematic support for analyzing security trade-offs. After considering the desirable criteria for conceptual modeling methods, we examine several existing approaches for dealing with security trade-offs. From analyzing the limitations of existing methods, we propose an extension to the i* framework for security trade-off analysis, taking advantage of its multi-agent and goal orientation. The method was applied to several case studies used to exemplify existing approaches.
A Unified Approach to Modeling Multidisciplinary Interactions
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Bhatia, Kumar G.
2000-01-01
There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n^2 - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n^2 - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
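The interface-count reduction claimed above is simple arithmetic: pairwise coupling needs an interface for each ordered pair of disciplines, while routing everything through a shared CAD model needs only one transfer to and one from the hub per discipline. A quick sketch of the counts:

```python
def pairwise_interfaces(n):
    # Direct discipline-to-discipline transfers: each of n disciplines
    # exchanges data with the other n - 1, giving n^2 - n interfaces.
    return n * n - n

def hub_interfaces(n):
    # Hub-and-spoke via a shared CAD model: each discipline needs only a
    # transfer to and from the central geometry, giving 2n interfaces.
    return 2 * n

for n in (3, 5, 10):
    print(n, pairwise_interfaces(n), hub_interfaces(n))
# For n = 10 the count drops from 90 to 20
```

The hub approach wins for any n > 3, and the gap widens quadratically as disciplines are added.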
Inventing and improving ribozyme function: rational design versus iterative selection methods
NASA Technical Reports Server (NTRS)
Breaker, R. R.; Joyce, G. F.
1994-01-01
Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.
Design of Education Methods in a Virtual Environment
ERIC Educational Resources Information Center
Yavich, Roman; Starichenko, Boris
2017-01-01
The purpose of the presented article is to review existing approaches to modern training methods design and to create a variant of its technology in virtual educational environments in order to develop general cultural and professional students' competence in pedagogical education. The conceptual modeling of a set of methods for students' training…
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
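The core of the horsetail-matching objective can be sketched directly from the description above: smooth the empirical CDF of the quantity of interest with a Gaussian kernel so the objective is differentiable, then integrate its squared difference from a target CDF. The step target, bandwidth, and evaluation grid below are hypothetical choices, not the paper's.

```python
import math

def smoothed_cdf(samples, x, h=0.1):
    # Kernel-smoothed empirical CDF (Gaussian kernel); the smoothing is
    # what makes the matching objective differentiable.
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(Phi((x - s) / h) for s in samples) / len(samples)

def horsetail_metric(samples, target_cdf, grid):
    # Discretized integral of the squared difference between the design's
    # smoothed CDF and the target CDF.
    return sum((smoothed_cdf(samples, x) - target_cdf(x)) ** 2
               for x in grid) / len(grid)

step_at_zero = lambda x: 1.0 if x >= 0.0 else 0.0  # hypothetical target
grid = [i / 10.0 - 2.0 for i in range(41)]         # points in [-2, 2]

m_near = horsetail_metric([-0.05, 0.0, 0.05], step_at_zero, grid)
m_far = horsetail_metric([0.95, 1.0, 1.05], step_at_zero, grid)
# Outputs clustered at the target step score lower (better) than
# outputs shifted away from it.
```

Minimizing this metric over design variables, with the samples regenerated at each candidate design, is the optimization loop the article formulates.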
Global Fleet Station: Station Ship Concept
2008-02-01
The basic ISO TEU containers can be designed for any number of configurations and provide many different capabilities. For example there are… Design Process: The ship was designed using an iterative weight and volume balancing method. This method assigns a weight and volume to each…from existing merchant ships. Different ship types are modeled in the algorithm through the selection of appropriate non-dimensional factors.
Digital redesign of anti-wind-up controller for cascaded analog system.
Chen, Y S; Tsai, J S H; Shieh, L S; Moussighi, M M
2003-01-01
The cascaded conventional anti-wind-up (CAW) design method for integral controllers is discussed. The prediction-based digital redesign methodology is then utilized to find a new pulse amplitude modulated (PAM) digital controller for effective digital control of an analog plant with an input saturation constraint. The desired digital controller is determined from an existing or pre-designed CAW analog controller. The proposed method provides a novel methodology for the indirect digital design of a continuous-time unity output-feedback system with a cascaded analog controller, as in the case of PID controllers for industrial control processes in the presence of actuator saturation. It enables an existing or pre-designed cascaded CAW analog controller to be implemented effectively via a digital controller.
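The wind-up problem the abstract addresses can be illustrated with a generic textbook back-calculation scheme, which is not the paper's cascaded CAW structure or its prediction-based redesign: the gap between the saturated and unsaturated outputs is fed back to bleed off the integrator while the actuator is saturated. All gains below are hypothetical.

```python
def pi_antiwindup(setpoint, measurement, integ, kp=2.0, ki=1.0, dt=0.1,
                  u_min=-1.0, u_max=1.0, kaw=1.0):
    # One step of a PI controller with back-calculation anti-windup: the
    # difference between saturated and unsaturated outputs drains the
    # integrator so it cannot wind up while the actuator is saturated.
    e = setpoint - measurement
    u_unsat = kp * e + ki * integ
    u = min(max(u_unsat, u_min), u_max)
    integ += dt * (e + kaw * (u - u_unsat))  # anti-windup correction term
    return u, integ

u, integ = 0.0, 0.0
for _ in range(1000):  # large sustained error keeps the actuator saturated
    u, integ = pi_antiwindup(10.0, 0.0, integ)
# The output stays clamped at u_max while the integrator remains bounded,
# so the controller recovers quickly once the error reverses.
```

Without the correction term, the integrator state would grow by e*dt every step; with it, the state converges to a bounded fixed point.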
Optimal chroma-like channel design for passive color image splicing detection
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin
2012-12-01
Image splicing is one of the most common image forgeries in daily life, and owing to powerful image manipulation tools it is becoming easier and easier. Several methods have been proposed for image splicing detection, all of which work on certain existing color channels. However, the splicing artifacts vary across color channels, and the selection of the color model is important for image splicing detection. In this article, instead of choosing among existing color models, we propose a color channel design method to find the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.
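The idea of designing a channel rather than picking one can be sketched as a search over linear combinations of R, G, and B that maximize class separation. The Fisher-ratio criterion, coarse grid search, and synthetic pixel data below are hypothetical simplifications; the article optimizes the channel with respect to a given feature extraction method.

```python
import itertools

def fisher_ratio(w, class_a, class_b):
    # Between-class over within-class scatter of the projected channel
    # c = w.(R, G, B) -- a simple stand-in for a detector-driven criterion.
    proj = lambda px: sum(wi * ci for wi, ci in zip(w, px))
    a = [proj(p) for p in class_a]
    b = [proj(p) for p in class_b]
    mean = lambda v: sum(v) / len(v)
    var = lambda v, m: sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(a), mean(b)
    return (ma - mb) ** 2 / (var(a, ma) + var(b, mb) + 1e-12)

def best_channel(class_a, class_b, step=0.25):
    # Coarse grid search over channel weights for the most discriminative
    # linear combination (excluding the all-zero weight vector).
    grid = [i * step for i in range(-4, 5)]
    candidates = (w for w in itertools.product(grid, repeat=3) if any(w))
    return max(candidates, key=lambda w: fisher_ratio(w, class_a, class_b))

# Synthetic example: authentic pixels are gray, spliced pixels carry a
# blue-channel offset, so a B-minus-R style channel separates them best.
authentic = [(100, 100, 100), (120, 120, 120), (90, 90, 90)]
spliced = [(100, 100, 140), (120, 120, 160), (90, 90, 130)]
w = best_channel(authentic, spliced)
```

On this data the red channel alone has zero discriminative power, while the designed combination separates the classes perfectly, mirroring the article's point that the right channel depends on where the splicing artifacts live.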
A new collage steganographic algorithm using cartoon design
NASA Astrophysics Data System (ADS)
Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip
2014-02-01
Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
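The LSB embedding primitive the algorithm builds on can be sketched in a few lines; the per-object permutations and the collage assembly that provide the scheme's security are omitted here, and the cover pixels are hypothetical.

```python
def embed_lsb(pixels, bits):
    # Write message bits into the least significant bits of pixel values.
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    # Recover the first n embedded bits.
    return [p & 1 for p in pixels[:n]]

cover = [137, 200, 41, 78, 255, 16]  # hypothetical cartoon-object pixels
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # imperceptible
```

Since each pixel changes by at most one intensity level, the embedding is visually imperceptible; the payload gain claimed in the abstract comes from embedding into every color object of the collage rather than from the primitive itself.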
Luan, Xiaoli; Chen, Qiang; Liu, Fei
2014-09-01
This article presents a new scheme to design a full matrix controller for high dimensional multivariable processes based on an equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed utilizing existing PI tuning rules. The newly proposed ETF model can more accurately represent the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Fasli, Mukaddes; Hassanpour, Badiossadat
2017-01-01
In this century, all educational efforts strive to achieve quality assurance standards; therefore, it would be naive to deny the existence of problems in architectural education. The current design studio critique method has been developed over generations of students and educators. Architectural education is changing towards educating critical…
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation, so speckle reduction is an essential task for improving the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The method is used to design two-dimensional (2-D), two-channel, linear-phase FIR perfect-reconstruction filter banks, including fan-shaped, diamond-shaped, and checkerboard-shaped filters. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. The synthesis filter is then designed using the eigenfilter method with those linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) to reduce speckle noise in ultrasound images. Compared with existing design methods, the proposed filters exhibit better symmetry, regularity, and frequency selectivity. The method is validated on synthetic and real ultrasound data, improving image quality and suppressing speckle noise more effectively than existing methods.
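The eigenfilter idea is easiest to see in its simpler 1-D, unconstrained form: express the pass- and stopband error energy as a quadratic form b^T Q b in the cosine coefficients of the zero-phase amplitude, then take the eigenvector of Q with the smallest eigenvalue. The 2-D constrained designs in the paper follow the same principle; the band edges and filter length below are arbitrary:

```python
import numpy as np

def eigenfilter_lowpass(num_taps, wp, ws, n_grid=512):
    """Linear-phase (symmetric, odd-length) lowpass design by the
    eigenfilter method: minimize b^T Q b over unit-norm b, where
    A(w) = sum_n b[n] cos(n w) is the zero-phase amplitude response."""
    m = (num_taps - 1) // 2 + 1           # number of cosine coefficients
    Q = np.zeros((m, m))
    c0 = np.ones(m)                        # cos(n * 0) = 1 for all n
    for w in np.linspace(0.0, wp, n_grid):     # passband: penalize A(w) - A(0)
        d = np.cos(np.arange(m) * w) - c0
        Q += np.outer(d, d)
    for w in np.linspace(ws, np.pi, n_grid):   # stopband: penalize A(w)
        c = np.cos(np.arange(m) * w)
        Q += np.outer(c, c)
    evals, evecs = np.linalg.eigh(Q)
    b = evecs[:, 0]                        # eigenvector of smallest eigenvalue
    return b if b @ c0 > 0 else -b         # fix the sign so that A(0) > 0

b = eigenfilter_lowpass(num_taps=21, wp=0.3 * np.pi, ws=0.5 * np.pi)
A = lambda w: b @ np.cos(np.arange(len(b)) * w)
print(A(0.0), abs(A(0.9 * np.pi)))  # large passband gain, small stopband gain
```

The attraction of the method is that the minimizer is obtained from a single symmetric eigenvalue problem, and linear constraints (such as the PR condition on the synthesis filter) can be folded in without changing that structure.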
Design for fish passage at roadway-stream crossings : synthesis report.
DOT National Transportation Integrated Search
2007-06-01
Cataloging and synthesizing existing methods for the design of roadway-stream crossings for fish passage began in : January 2005 with an extensive literature review covering the topics of culvert design and assessment to facilitate : fish passage. A ...
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods of all original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional (34%), prospective (21%), and experimental (25%). Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) contained inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology; better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is required to improve the quality of the literature.
A two-dimensional biased coin design for dual-agent dose-finding trials.
Sun, Zhichao; Braun, Thomas M
2015-12-01
Given the limited efficacy observed with single agents, there is growing interest in Phase I clinical trial designs that allow for identification of the maximum tolerated combination of two agents. Existing parametric designs may suffer from over- or under-parameterization. Thus, we have designed a nonparametric approach that can be easily understood and implemented for combination trials. We propose a two-stage adaptive biased coin design that extends existing methods for single-agent trials to dual-agent dose-finding trials. The basic idea of our design is to divide the entire trial into two stages and apply the biased coin design, with modification, in each stage. We compare the operating characteristics of our design to four competing parametric approaches via simulation in several numerical examples. Under all simulation scenarios we have examined, our method performs well in terms of identification of the maximum tolerated combination and allocation of patients relative to the performance of its competitors. In our design, stopping rule criteria and the distribution of the total sample size among the two stages are context-dependent, and both need careful consideration before adopting our design in practice. Efficacy is not a part of the dose-assignment algorithm, nor used to define the maximum tolerated combination. Our design inherits the favorable statistical properties of the biased coin design, is competitive with existing designs, and promotes patient safety by limiting patient exposure to toxic combinations whenever possible. © The Author(s) 2015.
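The biased coin rule for a single agent, which the authors extend to two dimensions, can be sketched directly: after a non-toxic outcome at the current dose, escalate with probability Γ/(1 − Γ) for a target toxicity rate Γ < 0.5; after a toxicity, de-escalate. This is the classic Durham-Flournoy rule, not the paper's two-stage dual-agent algorithm, and the toxicity curve and trial size are invented:

```python
import random

def biased_coin_trial(true_tox, target=0.2, start=0, n_patients=40, seed=7):
    """Durham-Flournoy biased coin escalation for a single-agent trial.
    true_tox[d] is the (in practice unknown) toxicity probability at dose d.
    Returns the number of patients treated at each dose level."""
    rng = random.Random(seed)
    b = target / (1.0 - target)       # escalation probability after no toxicity
    dose, visits = start, [0] * len(true_tox)
    for _ in range(n_patients):
        visits[dose] += 1
        toxic = rng.random() < true_tox[dose]
        if toxic:
            dose = max(dose - 1, 0)   # de-escalate after a toxicity
        elif rng.random() < b:
            dose = min(dose + 1, len(true_tox) - 1)  # escalate on a coin flip
    return visits

# Five dose levels; level 2 has toxicity closest to the 20% target.
visits = biased_coin_trial([0.05, 0.10, 0.20, 0.35, 0.55])
print(visits)  # allocation tends to concentrate around the target dose
```

The random walk induced by the coin has its stationary mode near the dose whose toxicity equals the target rate, which is what makes the rule a nonparametric dose-finder.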
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding the simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and of selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods, one to find the m simplest good designs and one to find the best m such designs, and show their asymptotic optimality. Third, we compare the performance of the two methods with equal allocation over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404
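The classical OCBA rule that this line of work builds on allocates a simulation budget so that, for non-best designs i, the replication count satisfies N_i ∝ (σ_i/δ_i)², where δ_i is the mean gap to the best design, and the best design b receives N_b = σ_b √(Σ_{i≠b} N_i²/σ_i²). A minimal sketch with invented statistics (this is the standard best-selection rule, not the paper's simplest-good-design extension):

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Optimal Computing Budget Allocation counts for selecting the best
    (smallest-mean) design. Assumes the means are distinct."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = np.argmin(means)
    delta = means - means[b]                     # mean gaps to the best
    ratio = np.ones_like(means)
    mask = np.arange(len(means)) != b
    ratio[mask] = (stds[mask] / delta[mask]) ** 2
    # The best design's share follows from the other ratios (homogeneous rule).
    ratio[b] = stds[b] * np.sqrt(np.sum(ratio[mask] ** 2 / stds[mask] ** 2))
    return budget * ratio / ratio.sum()

counts = ocba_allocation(means=[1.0, 1.2, 1.5, 2.0],
                         stds=[0.5, 0.5, 0.5, 0.5], budget=1000)
print(np.round(counts))  # most budget goes to the best design and its close rival
```

The intuition is that designs hard to distinguish from the best (small gap, large noise) need the most replications, while clearly inferior designs can be sampled sparsely.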
A visualization framework for design and evaluation
NASA Astrophysics Data System (ADS)
Blundell, Benjamin J.; Ng, Gary; Pettifer, Steve
2006-01-01
The creation of compelling visualisation paradigms is a craft often dominated by intuition and issues of aesthetics, with relatively few models to support good design. The majority of problem cases are approached by simply applying a previously evaluated visualisation technique. A large body of work exists covering individual aspects of visualisation design, such as human cognition, visualisation methods for specific problem areas, psychology studies, and so forth, yet most frameworks regarding visualisation are applied after the fact as an evaluation measure. We present an extensible framework for visualisation aimed at structuring the design process, increasing decision traceability, and delineating the notions of function, aesthetics and usability. The framework can be used to derive a set of requirements for good visualisation design and to evaluate existing visualisations, suggesting possible improvements. It achieves this by being both broad and general, built on top of existing works, with hooks for extensions and customisations. This paper shows how existing theories of information visualisation fit into the scheme, presents our experience in applying the framework to several designs, and offers our evaluation of the framework and the designs studied.
Case-based reasoning in design: An apologia
NASA Technical Reports Server (NTRS)
Pulaski, Kirt
1990-01-01
Three positions are presented and defended: the process of generating solutions in problem solving is viewable as a design task; case-based reasoning is a strong method of problem solving; and a synergism exists between case-based reasoning and design problem solving.
Boston-Fleischhauer, Carol
2008-01-01
The design and implementation of efficient, effective, and safe processes are never-ending challenges in healthcare. Less-than-optimal performance levels and rising concerns about patient safety suggest that traditional process design methods are insufficient to meet design requirements. In this 2-part series, the author presents human factors engineering and reliability science as important knowledge for enhancing existing operational and clinical process design methods in healthcare. An examination of these theories, application approaches, and examples is presented.
The Exploration of Green Architecture Design Integration Teaching Mode
ERIC Educational Resources Information Center
Shuang, Liang; Yibin, Han
2016-01-01
With the deepening of the concept of green building design, university courses have gradually exposed many problems in the teaching of architectural design theory; based on the existing teaching mode and combined with the needs of architectural design practice, this paper proposes the "integrated" method of green building design. It…
Design and Implementation of a Studio-Based General Chemistry Course
ERIC Educational Resources Information Center
Gottfried, Amy C.; Sweeder, Ryan D.; Bartolin, Jeffrey M.; Hessler, Jessica A.; Reynolds, Benjamin P.; Stewart, Ian C.; Coppola, Brian P.; Holl, Mark Banaszak M.
2007-01-01
The design and implementation of a new value-added general chemistry course, which uses the studio instructional method to incorporate existing educational research, is reviewed. These teaching methods and activities were woven into the course to provide the students with ways of learning chemical concepts and practicing scientific…
Geophysical methods for determining the geotechnical engineering properties of earth materials.
DOT National Transportation Integrated Search
2010-03-01
Surface and borehole geophysical methods exist to measure in-situ properties and structural : characteristics of earth materials. Application of such methods has demonstrated cost savings through : reduced design uncertainty and lower investigation c...
NASA Astrophysics Data System (ADS)
Maris, E.; Froelich, D.
The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability, and several scientific studies propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory: only a mass indicator is used, the calculation scope is not clearly defined, and no common data on the recycling industry exist to allow comparable calculations for different products. For these reasons, it is difficult for manufacturers to gain access to a method and common data for calculation purposes.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This leads to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: fewer training requirements and fewer calculation errors can yield substantial savings in design time and related cost. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, no comparable integrated knowledge-based conceptual and preliminary airplane design system currently exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs, and using them demonstrates significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners, and UAVs to fighters. Data from the various sizing methods will be compared with AAA results to validate these methods.
One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML, will lead to quicker resolving of problems as opposed to conventional methods.
Processes for manufacturing multifocal diffractive-refractive intraocular lenses
NASA Astrophysics Data System (ADS)
Iskakov, I. A.
2017-09-01
Manufacturing methods and design features of modern diffractive-refractive intraocular lenses are discussed. The implantation of multifocal intraocular lenses is the optimal method of restoring the accommodative ability of the eye after removal of the natural lens, and diffractive-refractive intraocular lenses are the most widely used implantable multifocal lenses worldwide. Existing methods for manufacturing such lenses implement various design solutions to provide the best visual function after surgery. The wide variety of available diffractive-refractive intraocular lens designs reflects the demand for this method of vision correction in clinical practice and the importance of further applied research and the development of new technologies for designing improved lens models.
METHODS FOR INTEGRATING ENVIRONMENTAL CONSIDERATIONS INTO CHEMICAL PROCESS DESIGN DECISIONS
The objective of this cooperative agreement was to postulate a means by which an engineer could routinely include environmental considerations in day-to-day conceptual design problems; a means that could easily integrate with existing design processes, and thus avoid massive retr...
An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.
2003-01-01
Two wing/fuselage/nacelle/fin concepts were designed to check the validity and applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so that the engine-nacelle volume and nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable or tolerable sonic-boom overpressures during cruise.
Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements
Kim, Geun-Yeon
2014-01-01
The automobile's knuckle is connected to parts of the steering and suspension systems and, through its attachment to the wheel, adjusts the direction of rotation. This study replaces the existing GCD450 material with Al6082M and presents a lightweight design of the knuckle, obtained with an optimal design technique, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were considered as the design requirements during the optimization process. A metamodel-based optimization method using kriging interpolation was applied. The result shows that all constraints on stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450. PMID:24995359
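Metamodel-based optimization replaces expensive simulations with an interpolant fitted to a few sample runs and then optimizes the cheap surrogate. A minimal Gaussian-kernel (kriging-like) sketch on a toy one-variable objective; the objective, sample plan, and kernel length are illustrative assumptions, not the study's finite element model:

```python
import numpy as np

def fit_surrogate(X, y, length=0.5):
    """Gaussian-kernel interpolator: solve K w = y once, then predict
    new points via k(x)^T w (a simplified kriging predictor)."""
    K = np.exp(-((X[:, None] - X[None, :]) / length) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    return lambda x: np.exp(-((x - X) / length) ** 2) @ w

# Toy "simulation": a mass-like objective with its minimum at x = 0.6.
expensive = lambda x: (x - 0.6) ** 2 + 1.0
X = np.linspace(0.0, 2.0, 9)             # a few expensive sample points
model = fit_surrogate(X, expensive(X))

grid = np.linspace(0.0, 2.0, 2001)       # cheap dense search on the surrogate
x_best = grid[np.argmin([model(x) for x in grid])]
print(round(x_best, 2))  # close to the true optimum at 0.6
```

In the real study the "expensive" function is a stiffness/durability simulation over six shape variables, but the workflow is the same: sample, fit, optimize the surrogate, and verify the optimum with a final simulation.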
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
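Monte Carlo propagation, the second of the three methods, is the easiest to sketch: sample the uncertain inputs, push each sample through the analysis, and read statistics off the outputs. The toy weight model, uncertainty levels, and constraint limit below are invented stand-ins for the aircraft analysis program:

```python
import numpy as np

def wing_weight(span, chord):
    """Toy analysis: a stand-in for the aircraft weight build-up."""
    return 120.0 + 2.5 * span * chord + 0.8 * span ** 2

rng = np.random.default_rng(42)
n = 100_000
span = rng.normal(30.0, 0.5, n)      # uncertain configuration variable 1
chord = rng.normal(4.0, 0.1, n)      # uncertain configuration variable 2

w = wing_weight(span, chord)          # propagate all samples at once
w_limit = 1180.0
print(f"mean weight  : {w.mean():8.1f}")
print(f"std dev      : {w.std():8.1f}")
print(f"P(W <= limit): {np.mean(w <= w_limit):8.3f}")
```

The last line is the probabilistic analogue of constraint satisfaction: instead of a pass/fail check at the nominal design, one obtains the probability that the weight constraint holds under the stated input uncertainty.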
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis for built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It is shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second uses an existing computer program designed to study multilevel optimization techniques and is characterized by a complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition, and that parallel computing can be accomplished with minimal cost in terms of hardware and software.
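The strategy of farming independent analysis runs out to a network of machines can be mimicked on one machine with a worker pool. This sketch runs independent analysis cases concurrently and checks the results against a sequential pass; a thread pool and a sleep-based stub stand in for the paper's networked computers and real structural analyses:

```python
import math
import time
from concurrent.futures import ThreadPoolExecutor

def analyze(case):
    """Stand-in for one independent structural analysis run."""
    time.sleep(0.01)                     # pretend this is expensive work
    return math.sqrt(case) + case ** 2

cases = list(range(32))

t0 = time.perf_counter()
sequential = [analyze(c) for c in cases]
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(analyze, cases))   # 8 cases in flight at once
t_par = time.perf_counter() - t0

print(parallel == sequential)            # same results, different wall time
print(t_par < t_seq)                     # typically True when cases are independent
```

The speedup only materializes for problems that decompose into independent pieces, which is exactly the conclusion the paper draws about distributed computing for design optimization.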
The Development of a Robot-Based Learning Companion: A User-Centered Design Approach
ERIC Educational Resources Information Center
Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong
2015-01-01
A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Creative design inspired by biological knowledge: Technologies and methods
NASA Astrophysics Data System (ADS)
Tan, Runhua; Liu, Wei; Cao, Guozhong; Shi, Yuan
2018-05-01
Biological knowledge is becoming an important source of inspiration for developing creative solutions to engineering design problems and even has a huge potential in formulating ideas that can help firms compete successfully in a dynamic market. To identify the technologies and methods that can facilitate the development of biologically inspired creative designs, this research briefly reviews the existing biological-knowledge-based theories and methods and examines the application of biological-knowledge-inspired designs in various fields. Afterward, this research thoroughly examines the four dimensions of key technologies that underlie the biologically inspired design (BID) process. This research then discusses the future development trends of the BID process before presenting the conclusions.
Exchange inlet optimization by genetic algorithm for improved RBCC performance
NASA Astrophysics Data System (ADS)
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real-parameter representation, with a variable selection pressure and a variable probability of mutation, is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method, which provides estimates of air-breathing, mixing, and isentropic flow performance, serves as the engine of the optimization routine. Comparison with detailed numerical simulations shows that the design method yields the desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
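A real-parameter GA with a decaying (variable) mutation probability can be sketched on a toy fitness function; the selection-pressure scheduling of the actual routine and the inlet design model itself are beyond this illustration, and all parameter values here are arbitrary:

```python
import numpy as np

def real_ga(fitness, bounds, pop_size=30, n_gen=60, seed=3):
    """Minimize fitness over a box using a simple real-coded GA with
    tournament selection, blend crossover, elitism, and a mutation
    probability that decays over the generations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for gen in range(n_gen):
        p_mut = 0.3 * (1.0 - gen / n_gen) + 0.02   # variable mutation probability
        f = np.array([fitness(x) for x in pop])
        new = [pop[np.argmin(f)]]                   # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)   # tournament of two (parent a)
            a = pop[i] if f[i] < f[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)   # tournament of two (parent b)
            b = pop[i] if f[i] < f[j] else pop[j]
            alpha = rng.uniform(-0.25, 1.25, dim)   # per-gene blend (BLX) crossover
            child = np.clip(a + alpha * (b - a), lo, hi)
            mask = rng.random(dim) < p_mut          # per-gene Gaussian mutation
            child[mask] += rng.normal(0.0, 0.1, mask.sum())
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    f = np.array([fitness(x) for x in pop])
    return pop[np.argmin(f)], f.min()

best_x, best_f = real_ga(lambda x: np.sum(x ** 2),
                         bounds=(np.full(3, -2.0), np.full(3, 2.0)))
print(best_f)  # near zero for the sphere function
```

Decaying the mutation probability shifts the search from exploration early on to local refinement late, mirroring the variable-mutation strategy described in the abstract.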
A Method for the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.
1996-01-01
A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers, and airfoil thicknesses. The strengths of the method are its ability to calculate a target N-factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-factor relationship that is used to reduce the N-factors in order to prolong transition; and its ability to design airfoils to meet lift, pitching moment, thickness, and leading-edge radius constraints while also meeting the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days on a Silicon Graphics IRIS workstation.
Human Information Behaviour and Design, Development and Evaluation of Information Retrieval Systems
ERIC Educational Resources Information Center
Keshavarz, Hamid
2008-01-01
Purpose: The purpose of this paper is to introduce the concept of human information behaviour and to explore the relationship between information behaviour of users and the existing approaches dominating design and evaluation of information retrieval (IR) systems and also to describe briefly new design and evaluation methods in which extensive…
Reflections on Graduate Student PBL Experiences
ERIC Educational Resources Information Center
McDonald, Betty
2008-01-01
The study, designed to contribute to existing research on Problem-Based Learning (PBL), chose a focus group comprising 16 MSc Petroleum Engineering students (six of them female). Using PBL as the method of instruction, students examined a real-life petroleum engineering problem that highlighted numerous areas of their existing curriculum. They worked in…
Reporting Qualitative Research: Standards, Challenges, and Implications for Health Design.
Peditto, Kathryn
2018-04-01
This Methods column describes the existing reporting standards for qualitative research, their application to health design research, and the challenges to implementation. Intended for both researchers and practitioners, this article provides multiple perspectives on both reporting and evaluating high-quality qualitative research. Two popular reporting standards exist for qualitative research: the Consolidated Criteria for Reporting Qualitative Research (COREQ) and the Standards for Reporting Qualitative Research (SRQR). Though compiled using similar procedures, they differ in their criteria and the methods to which they apply. Creating and applying reporting criteria is inherently difficult due to the undefined and fluctuating nature of qualitative research when compared to quantitative studies. Qualitative research is expansive and occasionally controversial, spanning many different methods of inquiry and epistemological approaches. A "one-size-fits-all" standard for reporting qualitative research can be restrictive, but COREQ and SRQR both serve as valuable tools for developing responsible qualitative research proposals, effectively communicating research decisions, and evaluating submissions. Ultimately, tailoring a set of standards specific to health design research and its frequently used methods would ensure quality research and aid reviewers in their evaluations.
Estimating flood hydrographs and volumes for Alabama streams
Olin, D.A.; Atkins, J.B.
1988-01-01
The hydraulic design of highway drainage structures involves an evaluation of the effect of the proposed highway structures on lives, property, and stream stability. Flood hydrographs and associated flood volumes are useful tools in evaluating these effects. For design purposes, the Alabama Highway Department needs information on flood hydrographs and volumes associated with flood peaks of specific recurrence intervals (design floods) at proposed or existing bridge crossings. This report provides the engineer with a method to estimate flood hydrographs, volumes, and lag times for rural and urban streams in Alabama with drainage areas less than 500 sq mi. Existing computer programs and methods to estimate flood hydrographs and volumes for ungaged streams have been developed in Georgia; these programs and methods were applied to streams in Alabama. The report gives detailed instructions on how to estimate flood hydrographs for ungaged rural or urban streams in Alabama with drainage areas less than 500 sq mi and without significant in-channel storage or regulation. (USGS)
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field, but determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose, yet these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimension. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model and subsequently applied to an existing, complex regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
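The greedy minimax idea can be sketched independently of the groundwater model: given a matrix of data-worth scores (rows: candidate monitoring locations; columns: posterior parameter samples), repeatedly add the location that most improves the worst-case (minimum over samples) accumulated score. The scores below are invented, and additivity of data worth across locations is a simplifying assumption:

```python
import numpy as np

def greedy_minimax_design(scores, k):
    """Choose k candidate rows (observation locations) maximizing the
    minimum over columns (parameter samples) of the accumulated score."""
    chosen = []
    total = np.zeros(scores.shape[1])
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(scores.shape[0]):
            if i in chosen:
                continue
            worst_case = np.min(total + scores[i])  # robust value if i is added
            if worst_case > best_val:
                best_i, best_val = i, worst_case
        chosen.append(best_i)
        total += scores[best_i]
    return chosen

# 4 candidate locations x 3 posterior parameter samples (invented data worth).
scores = np.array([[3.0, 0.1, 0.1],
                   [0.1, 3.0, 0.1],
                   [0.1, 0.1, 3.0],
                   [1.0, 1.0, 1.0]])
print(greedy_minimax_design(scores, k=2))  # → [3, 0]
```

Note how the minimax criterion first picks the balanced location 3 even though locations 0-2 each score higher for one particular sample: a design tuned to a single parameter sample is exactly what robust sampling strategies try to avoid.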
ERIC Educational Resources Information Center
Johnson, Andrew; Kuglitsch, Rebecca; Bresnahan, Megan
2015-01-01
This study used participatory and service design methods to identify emerging research needs and existing perceptions of library services among science and engineering faculty, post-graduate, and graduate student researchers based at a satellite campus at the University of Colorado Boulder. These methods, and the results of the study, allowed us…
ERIC Educational Resources Information Center
Dania, Aspasia; Tyrovola, Vasiliki; Koutsouba, Maria
2017-01-01
The aim of this paper is to present the design and evaluate the impact of a Laban Notation-based method for Teaching Dance (LANTD) on novice dancers' performance, in the case of Greek traditional dance. In this research, traditional dance is conceived in its "second existence" as a kind of presentational activity performed outside its…
Aircraft family design using enhanced collaborative optimization
NASA Astrophysics Data System (ADS)
Roth, Brian Douglas
Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. 
In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component commonality, but of appropriate fidelity for aircraft conceptual design. The thesis also introduces a new aircraft family concept. Unlike most families, the intent is not necessarily to produce all family members. Rather, the family includes members for immediate production and members that address potential future market conditions and/or environmental regulations. The result is a set of designs that yield a small performance penalty today in return for significant future flexibility to produce family members that respond to new market conditions and environmental regulations.
NASA software specification and evaluation system design, part 2
NASA Technical Reports Server (NTRS)
1976-01-01
A survey and analysis of the existing methods, tools, and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for a software specification language and a database verifier are presented.
Utilizing ego-centric video to conduct naturalistic bicycling studies.
DOT National Transportation Integrated Search
2016-10-01
Existing data collection methods are mostly designed for videos captured by stationary cameras and are not designed to follow cyclists along a route or to integrate other sensor data. The goals of this research are: a) to develop a platform to coll...
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
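The core idea of force decomposition can be illustrated with a toy serial simulation of the scheme (this is a sketch of the general technique, not the distributed-diagonal variant or the CHARMM implementation): the upper-triangular pairwise interaction list is partitioned among processors, each accumulates partial forces, and a reduction sums them. The pair potential below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.random((12, 3))  # 12 particles in a unit box (illustrative)

def pair_force(ri, rj):
    # Simple repulsive 1/r^2 central force (a stand-in for a real potential).
    d = ri - rj
    r = np.linalg.norm(d)
    return d / r**3

def forces_direct(pos):
    # Reference: direct double loop over all pairs.
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            fij = pair_force(pos[i], pos[j])
            f[i] += fij
            f[j] -= fij  # Newton's third law
    return f

def forces_decomposed(pos, n_proc):
    # Distribute the pair list round-robin across "processors"; each one
    # accumulates partial forces, then partials are reduced (summed).
    n = len(pos)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    partial = np.zeros((n_proc,) + pos.shape)
    for k, (i, j) in enumerate(pairs):
        p = k % n_proc
        fij = pair_force(pos[i], pos[j])
        partial[p, i] += fij
        partial[p, j] -= fij
    return partial.sum(axis=0)  # the communication/reduction step

assert np.allclose(forces_direct(pos), forces_decomposed(pos, n_proc=4))
```

The reduction step is where communication cost arises in a real parallel run; the distributed-diagonal method's contribution is reducing exactly that cost relative to replicated-data schemes.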
Investigation of aerodynamic design issues with regions of separated flow
NASA Technical Reports Server (NTRS)
Gally, Tom
1993-01-01
Existing aerodynamic design methods have generally concentrated on the optimization of airfoil or wing shapes to produce a minimum drag while satisfying some basic constraints such as lift, pitching moment, or thickness. Since the minimization of drag almost always precludes the existence of separated flow, the evaluation and validation of these design methods for their robustness and accuracy when separated flow is present has not been aggressively pursued. However, two new applications for these design tools may be expected to include separated flow, and the issues of aerodynamic design with this feature must be addressed. The first application of the aerodynamic design tools is the design of airfoils or wings to provide an optimal performance over a wide range of flight conditions (multipoint design). While the definition of 'optimal performance' in the multipoint setting is currently being hashed out, it is recognized that given a wide range of flight conditions, it will not be possible to ensure a minimum drag constraint at all conditions, and in fact some amount of separated flow (presumably small) may have to be allowed at the more demanding flight conditions. Thus a multipoint design method must be tolerant of the existence of separated flow and may include some controls upon its extent. The second application is in the design of wings with extended high-speed buffet boundaries of their flight envelopes. Buffet occurs on a wing when regions of flow separation have grown to the extent that their time-varying pressures induce possibly destructive effects upon the wing structure or adversely affect either the aircraft controllability or passenger comfort. A conservative approach to the expansion of the buffet flight boundary is to simply expand the flight envelope of nonseparated flow under the assumption that buffet will also thus be alleviated. 
However, having the ability to design a wing with separated flow, and thus to control the location, extent, and severity of the separated flow regions, may allow aircraft manufacturers to gain an advantage in the early design stages of an aircraft, when configuration changes are relatively inexpensive to make. The goal of the summer research at NASA Langley Research Center (LaRC) was twofold: first, to investigate a particular airfoil design problem observed under conditions of strong shock-induced flow separation on the upper surface of an airfoil at transonic conditions; and second, to suggest and investigate design methodologies for the prediction (or detection) and control of flow separation. The context of both investigations was to use an existing two-dimensional Navier-Stokes flow solver and the constrained direct/iterative surface curvature (CDISC) design algorithm developed at LaRC. As a lead-in to the primary task, it was necessary to gain familiarity with both the design method and the computational analysis, and to perform the FORTRAN coding needed to couple them together.
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. 
Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
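A minimal sketch of the polynomial response-surface step described above: fit a low-order surrogate to noisy samples of an objective, then take the surrogate's optimum as the design candidate. The one-variable objective and noise level below are invented stand-ins for expensive CFD or experimental data; real applications fit multivariate polynomials chosen by design-of-experiments techniques.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 40)                           # sampled design points
y = (x - 0.5) ** 2 + 0.05 * rng.standard_normal(40)  # noisy "experiments"

# Least-squares fit of the quadratic surrogate y ~ a*x^2 + b*x + c.
A = np.column_stack([x**2, x, np.ones_like(x)])
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

# The surrogate's minimizer (vertex of the fitted parabola) is the candidate.
x_opt = -b / (2 * a)
```

Because the fit is a global least-squares regression, it naturally filters the noise in the samples, which is one of the advantages of response surfaces cited above over local gradient-based search.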
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
Development of an integrated staircase lift for home access.
Mattie, Johanne L; Borisoff, Jaimie F; Leland, Danny; Miller, William C
2015-12-01
Stairways into buildings present a significant environmental barrier for those with mobility impairments, including older adults. A number of home access solutions that allow users to safely enter and exit the home exist; however, these all have some limitations. The purpose of this work was to develop a novel, inclusive home access solution that integrates a staircase and a lift into one device. The development of an integrated staircase lift followed a structured protocol with stakeholders providing feedback at various stages in the design process, consistent with rehabilitation engineering design methods. A novel home access device was developed. The integrated staircase-lift has the following features: inclusivity, by a universal design that provides an option for either use of stairs or a lift; constant availability, with a lift platform always ready for use on either level; and potential aesthetic advantages when integrating the device into an existing home. The potential also exists for emergency descent during a power outage, and for self-powered versions. By engaging stakeholders in a user-centred design process, insight on the limitations of existing home access solutions and specific feedback on our design guided development of a novel home access device.
COMPARISONS OF BOATING AND WADING METHODS USED TO ASSESS THE STATUS OF FLOWING WATERS
This document has been designed to provide an overview of the biological, physical and chemical methods of selected stream biomonitoring and assessment programs. It was written to satisfy the need to identify current methods that exist for sampling large rivers. The primary focu...
Cost and benefits design optimization model for fault tolerant flight control systems
NASA Technical Reports Server (NTRS)
Rose, J.
1982-01-01
Requirements and specifications for a method of optimizing the design of fault-tolerant flight control systems are provided. Algorithms that could be used for developing new and modifying existing computer programs are also provided, with recommendations for follow-on work.
Efficient design of CMOS TSC checkers
NASA Technical Reports Server (NTRS)
Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling
1990-01-01
This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
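The code family that the checker targets is easy to state behaviorally: a word of 2k bits belongs to a k-out-of-2k code exactly when k of its bits are 1. The sketch below models only that membership test (the paper's contribution is the CMOS gate-level realization and its self-testing properties under the stuck-open fault model, which a behavioral model cannot capture).

```python
# Behavioral model of a k-out-of-2k code check (not the gate-level design):
# a word of 2k bits is a valid codeword iff exactly k bits are 1. A hardware
# TSC checker would additionally signal validity on a two-rail output so that
# its own internal faults are detectable.
def is_k_out_of_2k(word, k):
    assert len(word) == 2 * k, "word must have exactly 2k bits"
    return sum(word) == k

# 2-out-of-4 examples: two ones is valid, three is not.
assert is_k_out_of_2k([1, 0, 1, 0], k=2)
assert not is_k_out_of_2k([1, 1, 1, 0], k=2)
```

Any single bit-flip changes the weight away from k, which is why constant-weight codes of this kind detect all unidirectional errors.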
The presence of field geologists in Mars-like terrain
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1992-01-01
Methods of ethnographic observation and analysis have been coupled with object-oriented analysis and design concepts to begin the development of a clear path from observations in the field to the design of virtual presence systems. The existence of redundancies in field geology and presence allowed for the application of methods for understanding complex systems. As a result of this study, some of these redundancies have been characterized. Those described are all classes of continuity relations, including the continuities of continuous existence, context-constituent continuities, and state-process continuities. The discussion of each includes statements of general relationships, logical consequences of these, and hypothetical situations in which the relationships would apply. These are meant to aid in the development of a theory of presence. The discussion also includes design considerations, providing guidance for the design of virtual planetary exploration systems and other virtual presence systems. Converging evidence regarding continuity in presence is found in the nature of psychological dissociation. Specific methodological refinements should enhance ecological validity in subsequent field studies, which are in progress.
Impact of diet on the design of waste processors in CELSS
NASA Technical Reports Server (NTRS)
Waleh, Ahmad; Kanevsky, Valery; Nguyen, Thoi K.; Upadhye, Ravi; Wydeven, Theodore
1991-01-01
The preliminary results of a design analysis for a waste processor which employs existing technologies and takes into account the constraints of human diet are presented. The impact of diet is determined by using a model and an algorithm developed for the control and management of diet in a Controlled Ecological Life Support System (CELSS). A material and energy balance model for thermal oxidation of waste is developed which is consistent with both physical/chemical methods of incineration and supercritical water oxidation. The two models yield, respectively, quantitative analysis of the diet and waste streams and the specific design parameters for waste processors. The results demonstrate that existing technologies can meet the demands of waste processing, but the choice and design of the processors or processing methods will be sensitive to the constraints of diet. The numerical examples are chosen to display the nature and extent of the gap in the available experimental information about CELSS requirements.
Dualism-Based Design of the Introductory Chinese MOOC "Kit de contact en langue chinoise"
ERIC Educational Resources Information Center
Wang-Szilas, Jue; Bellassen, Joël
2017-01-01
This article reviews the existing Chinese language Massive Open Online Courses (MOOCs) and points out three problems in their design: the monism-based teaching method, the non-integration of cultural elements, and the lack of learner-learner interactions. It then presents the design principles of the Introductory Chinese MOOC in an attempt to…
User-Centered Design Guidelines for Collaborative Software for Intelligence Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Endert, Alexander N.
In this position paper we discuss the necessity of using User-Centered Design (UCD) methods in order to design collaborative software for the intelligence community. We present some standing issues in collaborative software based on existing work within the intelligence community. Based on this information we present opportunities to address some of these challenges.
Section Preequating under the Equivalent Groups Design without IRT
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam
2014-01-01
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
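For context, the classical linear equating step that such a design builds on can be sketched in a few lines: under randomly equivalent groups, a score on new form X is placed on old form Y's scale by matching means and standard deviations. This is the standard mean-sigma transformation, not the SPE method itself, and the score vectors below are invented for illustration.

```python
import statistics

def linear_equate(x_scores, y_scores, x):
    """Linear equating under the randomly equivalent groups design:
    l(x) = mean(Y) + (sd(Y) / sd(X)) * (x - mean(X))."""
    mx, my = statistics.mean(x_scores), statistics.mean(y_scores)
    sx, sy = statistics.pstdev(x_scores), statistics.pstdev(y_scores)
    return my + (sy / sx) * (x - mx)

# Illustrative score samples from the two randomly equivalent groups.
x_scores = [10, 12, 14, 16, 18]
y_scores = [20, 23, 26, 29, 32]

# The mean of form X maps onto the mean of form Y by construction.
assert linear_equate(x_scores, y_scores, 14) == 26.0
```

Section preequating applies transformations of this kind at the section level, before the full new form is administered operationally.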
An XML-based method for astronomy software designing
NASA Astrophysics Data System (ADS)
Liao, Mingxue; Aili, Yusupu; Zhang, Jin
An XML-based method for standardization of software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. A basic strategy for eliciting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, sophisticated formulas for computing how many centuries have passed since a given day are no longer needed, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month, and day are all deduced from the Julian day held in FT206 rather than from the computer clock. With an XML-based method and standard for software design, various existing design methods are unified and communication and collaboration between developers are facilitated, making an Internet-based mode of software development possible. The trend of development of the XML-based design method is predicted.
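The calendar arithmetic that the Julian day sidesteps is the classic integer conversion; as a point of reference, the widely used Fliegel and Van Flandern algorithm goes the other way, from a Gregorian date to a Julian day number, using only integer arithmetic (this is the standard published formula, not code from the station's software).

```python
def julian_day_number(y, m, d):
    """Gregorian calendar date -> Julian day number (Fliegel & Van Flandern).
    Integer arithmetic only; valid for Gregorian dates (after 1582)."""
    a = (14 - m) // 12          # 1 for Jan/Feb, else 0
    yy = y + 4800 - a           # years since the epoch -4800, Mar-based
    mm = m + 12 * a - 3         # month number with March = 0
    return (d + (153 * mm + 2) // 5 + 365 * yy
            + yy // 4 - yy // 100 + yy // 400 - 32045)

# 2000 January 1 corresponds to Julian day number 2451545.
assert julian_day_number(2000, 1, 1) == 2451545
```

A clock that stores the Julian day directly, as FT206 does, lets the control program invert this mapping instead of trusting the host computer's calendar.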
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction, and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial bio-chemical experiments is proposed, in which balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that covers all the influencing factors of the experiments can be obtained for further processing, for example as training sets for machine learning models. Finally, a software tool based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
ERIC Educational Resources Information Center
Schneiderman, Deborah; Freihoefer, Kara
2012-01-01
Purpose: The purpose of this paper is to examine the integration of Okala curriculum into Interior Design coursework. Okala, as a teaching package, is utilized extensively in industrial design education. However, this study examines the expansion and insertion of Okala modules in an existing interior design curriculum. The Okala modules included…
NASA Astrophysics Data System (ADS)
Huang, Di; Duan, Zhisheng
2018-03-01
This paper addresses the multi-objective fault detection observer design problem for a hypersonic vehicle. Because parameter variations, modelling errors, and disturbances are inevitable in practical situations, system uncertainty is considered in this study. By fully utilising the orthogonal space information of the output matrix, some new understandings are proposed for the construction of the Lyapunov matrix. Sufficient conditions for the existence of observers that guarantee fault sensitivity and disturbance robustness in the infinite frequency domain are presented. In order to further relax the conservativeness, slack matrices are introduced to fully decouple the observer gain from the Lyapunov matrices in the finite frequency range. Iterative linear matrix inequality algorithms are proposed to obtain the solutions. The simulation examples, which include a Monte Carlo campaign, illustrate that the new methods can effectively reduce the design conservativeness compared with the existing methods.
2009-11-18
J.M. Schumacher, Finite-dimensional regulators for a class of infinite-dimensional systems. Systems and Control Letters, 3 (1983), 7-12. [39] J.M... for the control of certain examples or system classes using particular feedback design methods ([20, 21, 16, 17, 19, 18]). Still, the control of... long time existence and asymptotic behavior for certain examples or system classes using particular feedback design methods (see, e.g., [20, 21, 16, 17
Tele-existence and/or cybernetic interface studies in Japan
NASA Technical Reports Server (NTRS)
Tachi, Susumu
1991-01-01
Tele-existence aims at natural and efficient remote control of robots by providing the operator with a real-time sensation of presence. It is an advanced type of teleoperation system which enables a human operator at the controls to perform remote manipulation tasks dexterously with the feeling that he or she exists in one of the remote anthropomorphic robots in the remote environment, e.g., in a hostile environment such as one of nuclear radiation, high temperature, or deep space. In order to study the use of the tele-existence system in an artificially constructed environment, a visual tele-existence simulator has been designed, a pseudo-real-time binocular solid model robot simulator has been made, and its feasibility has been experimentally evaluated. An anthropomorphic robot mechanism with an arm having seven degrees of freedom has been designed and developed as a slave robot for feasibility experiments of teleoperation using the tele-existence method. An impedance-controlled active display mechanism and a head-mounted display have also been designed and developed as the display subsystem for the master. The robot's structural dimensions are set very close to those of humans.
An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information
NASA Astrophysics Data System (ADS)
Tsuruta, Masanobu; Masuyama, Shigeru
We propose an informative DOM node extraction method from a Web page for preprocessing in Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, and a learning set consisting of hundreds of Web pages with annotations of their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate combinations of an informative DOM node extraction method (either the proposed method or an existing one) with the existing noise elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also appear in other Web pages of the same Web site as the target page. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
Automatic Design of Digital Synthetic Gene Circuits
Marchisio, Mario A.; Stelling, Jörg
2011-01-01
De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700
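The first, purely logical step of such a rational design pipeline — going from a truth table to the set of input conditions a circuit must realize — can be sketched directly. This shows only minterm enumeration; mapping minterms onto promoters and ribosome binding sites, which is the tool's actual contribution, is omitted.

```python
from itertools import product

# Sketch of the rational (non-optimization) step: from a truth table,
# enumerate the minterms (true input rows) that Boolean gates must realize.
def minterms(truth_table, n_inputs):
    """truth_table maps input bit-tuples to 0/1; return the rows mapped to 1."""
    return [bits for bits in product((0, 1), repeat=n_inputs)
            if truth_table[bits]]

# Example: a two-input XOR specification.
xor = {bits: bits[0] ^ bits[1] for bits in product((0, 1), repeat=2)}
assert minterms(xor, 2) == [(0, 1), (1, 0)]
```

Each minterm corresponds to one AND-like condition on regulatory factors; ranking alternative gene-circuit schemes for the same minterms is where the method's scoring comes in.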
Application of multi-agent coordination methods to the design of space debris mitigation tours
NASA Astrophysics Data System (ADS)
Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby
2016-04-01
The growing number of defunct and fragmented objects near the Earth poses an increasing hazard to launch operations as well as existing on-orbit assets. Numerous studies have demonstrated the positive impact of active debris mitigation campaigns on the growth of debris populations, but comparatively few investigations incorporate specific mission scenarios. Furthermore, while many active mitigation methods have been proposed, certain classes of debris objects are amenable to mitigation campaigns employing chaser spacecraft with existing chemical and low-thrust propulsive technologies. This investigation incorporates an ant colony optimization routing algorithm and multi-agent coordination via auctions into a debris mitigation tour scheme suitable for preliminary mission design and analysis as well as spacecraft flight operations.
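A compact ant-colony routing sketch conveys the flavor of the tour-construction step. The cost matrix stands in for transfer delta-v between debris objects, and the pheromone rule and parameters are generic textbook choices, not the paper's tuning.

```python
import random

def aco_tour(cost, n_ants=20, n_iters=50, evap=0.5, seed=0):
    """Ant colony optimization for a visiting order over len(cost) targets.
    cost[i][j] is an illustrative transfer cost between objects i and j."""
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # Selection probability ~ pheromone / cost (stochastic greed).
                weights = [tau[i][j] / cost[i][j] for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate all trails, then reinforce the best tour found so far.
        tau = [[t * evap for t in row] for row in tau]
        for a, b in zip(best_tour, best_tour[1:]):
            tau[a][b] += 1.0 / best_len
    return best_tour, best_len

# Five hypothetical debris objects with symmetric illustrative costs.
cost = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
tour, length = aco_tour(cost)
```

In the paper's scheme, a routine of this kind would be layered under the auction-based multi-agent coordination that assigns targets among chaser spacecraft.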
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
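The prediction-correction idea can be sketched on a scalar, unconstrained toy problem (the paper's contribution concerns the constrained, inverse-free setting, which this sketch does not reproduce): the optimum of f_t(x) = 0.5*(x - a(t))^2 drifts over time, and each sampling instant the iterate is first moved along the predicted drift of the gradient, then corrected by a gradient step on the new cost. All step sizes and the drift rate are illustrative.

```python
dt, alpha = 0.1, 0.5   # sampling interval and correction step size
drift = 0.1            # a'(t): drift rate of the moving optimum (assumed known)

def a(t):
    # Time-varying location of the optimum of f_t(x) = 0.5 * (x - a(t))**2.
    return drift * t

x, t = 0.0, 0.0
for _ in range(200):
    # Prediction: for this cost, H = 1 and d/dt grad = -a'(t), so the
    # Hessian-based prediction step reduces to x += a'(t) * dt.
    x += drift * dt
    t += dt
    # Correction: one gradient step on the newly sampled cost f_t.
    x -= alpha * (x - a(t))

assert abs(x - a(t)) < 1e-3  # tracking error stays small
```

Without the prediction step, a pure gradient tracker would lag the optimum by an offset proportional to the drift rate; prediction removes that steady-state lag.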
Freedland, Kenneth E.; Mohr, David C.; Davidson, Karina W.; Schwartz, Joseph E.
2011-01-01
Objective To examine the use of existing practice control groups in randomized controlled trials of behavioral interventions, and the role of extrinsic healthcare services in the design and conduct of behavioral trials. Method Selective qualitative review. Results Extrinsic healthcare services, also known as nonstudy care, have important but under-recognized effects on the design and conduct of behavioral trials. Usual care, treatment as usual, standard of care, and other existing practice control groups pose a variety of methodological and ethical challenges, but they play a vital role in behavioral intervention research. Conclusion This review highlights the need for a scientific consensus statement on control groups in behavioral trials. PMID:21536837
Pollack, Ari H; Miller, Andrew; Mishra, Sonali R.; Pratt, Wanda
2016-01-01
Participatory design, a method by which system users and stakeholders meaningfully contribute to the development of a new process or technology, has great potential to revolutionize healthcare technology, yet has seen limited adoption. We conducted a design session with eleven physicians working to create a novel clinical information tool utilizing participatory design methods. During the two-hour session, the physicians quickly engaged in the process and generated a large quantity of information, informing the design of a future tool. By utilizing facilitators experienced in design methodology, with detailed domain expertise, and well integrated into the healthcare organization, the participatory design session engaged a group of users who are often disenfranchised with existing processes as well as health information technology in general. We provide insight into why participatory design works with clinicians and provide guiding principles for how to implement these methods in healthcare organizations interested in advancing health information technology. PMID:28269900
Reducing the complexity of the software design process with object-oriented design
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Designing software is a complex process. How object-oriented design (OOD), coupled with formalized documentation and tailored object diagramming techniques, can reduce the complexity of the software design process is described and illustrated. The described OOD methodology uses a hierarchical decomposition approach in which parent objects are decomposed into layers of lower-level child objects. A method of tracking the assignment of requirements to design components is also included. Increases in the reusability, portability, and maintainability of the resulting products are also discussed. This method was built on a combination of existing technology, teaching experience, consulting experience, and feedback from design method users. The discussed concepts are applicable to hierarchical OOD processes in general. Emphasis is placed on improving the design process by documenting the details of the procedures involved and incorporating improvements into those procedures as they are developed.
A minimum cost tolerance allocation method for rocket engines and robust rocket engine design
NASA Technical Reports Server (NTRS)
Gerth, Richard J.
1993-01-01
Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes methods such as Quality Function Deployment and Taguchi methods. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb. It was desirable to develop an optimization approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed, and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as were an initial set of component tolerances. However, the question was whether there existed an optimal combination of tolerances that would result in the minimum cost without any degradation in system performance.
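For a flavor of minimum-cost tolerance allocation, consider the classical textbook case of reciprocal cost models C_i = a_i / t_i under a root-sum-square assembly limit, where the Lagrange condition gives t_i proportional to a_i**(1/3). This is a generic illustration of the optimization idea, not the STME study's cost model:

```python
def allocate_tolerances(costs, T):
    """Minimum-cost allocation for reciprocal cost models C_i = a_i / t_i
    subject to sqrt(sum t_i**2) = T. Setting the Lagrangian gradient to zero
    gives t_i proportional to a_i**(1/3); scale to meet the RSS limit T."""
    raw = [a ** (1.0 / 3.0) for a in costs]
    scale = T / sum(r * r for r in raw) ** 0.5
    return [scale * r for r in raw]
```

Components that are expensive to hold tight (large a_i) receive looser tolerances, which is the qualitative behavior a minimum-cost allocation should show.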
The design of a joined wing flight demonstrator aircraft
NASA Technical Reports Server (NTRS)
Smith, S. C.; Cliff, S. E.; Kroo, I. M.
1987-01-01
A joined-wing flight demonstrator aircraft has been developed at the NASA Ames Research Center in collaboration with ACA Industries. The aircraft is designed to utilize the fuselage, engines, and undercarriage of the existing NASA AD-1 flight demonstrator aircraft. The design objectives, methods, constraints, and the resulting aircraft design, called the JW-1, are presented. A wind-tunnel model of the JW-1 was tested in the NASA Ames 12-foot wind tunnel. The test results indicate that the JW-1 has satisfactory flying qualities for a flight demonstrator aircraft. Good agreement of test results with design predictions confirmed the validity of the design methods used for application to joined-wing configurations.
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time-consuming due to the many parameters involved. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied to re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
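The dominant-parameter idea rests on Taguchi signal-to-noise (S/N) ratios and main-effect ranges, which can be sketched generically. The two-factor array, responses, and larger-is-better criterion below are illustrative assumptions, not the MR-brake study's data:

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-is-better signal-to-noise ratio in dB."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / n)

def main_effects(levels, sn):
    """Range of mean S/N per factor level; the factor with the widest
    range dominates the response and is kept in the reduced search."""
    effects = {}
    for f in range(len(levels[0])):
        by_level = {}
        for run, s in zip(levels, sn):
            by_level.setdefault(run[f], []).append(s)
        means = [sum(v) / len(v) for v in by_level.values()]
        effects[f] = max(means) - min(means)
    return effects
```

In the paper's setting the factors would be brake geometry and material parameters and the response a brake-performance metric; here a synthetic response that depends only on factor 0 shows how the range statistic singles out the dominant parameter.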
Conceptual design of industrial process displays.
Pedersen, C R; Lind, M
1999-11-01
Today, process displays used in industry are often designed on the basis of piping and instrumentation diagrams without any method of ensuring that the needs of the operators are fulfilled. Therefore, a method for a systematic approach to the design of process displays is needed. This paper discusses aspects of process display design taking into account both the designer's and the operator's points of view. Three aspects are emphasized: the operator tasks, the display content and the display form. The distinction between these three aspects is the basis for proposing an outline for a display design method that matches the industrial practice of modular plant design and satisfies the needs of reusability of display design solutions. The main considerations in display design in the industry are to specify the operator's activities in detail, to extract the information the operators need from the plant design specification and documentation, and finally to present this information. The form of the display is selected from existing standardized display elements such as trend curves, mimic diagrams, ecological interfaces, etc. Further knowledge is required to invent new display elements. That is, knowledge about basic visual means of presenting information and how humans perceive and interpret these means and combinations. This knowledge is required in the systematic selection of graphical items for a given display content. The industrial part of the method is first illustrated in the paper by a simple example from a plant with batch processes. Later the method is applied to develop a supervisory display for a condenser system in a nuclear power plant. The differences between the continuous plant domain of power production and the batch processes from the example are analysed and broad categories of display types are proposed. The problems involved in specification and invention of a supervisory display are analysed and conclusions from these problems are made. 
It is concluded that the design method proposed provides a framework for the progress of the display design and is useful in pinpointing the actual problems. The method was useful in reducing the number of existing displays that could fulfil the requirements of the supervision task. The method provided at the same time a framework for dealing with the problems involved in inventing new displays based on structured analysis. However, the problems in a systematic approach to display invention still need consideration.
Evaluation of design methods to determine scour depths for bridge structures.
DOT National Transportation Integrated Search
2013-03-01
Scour of bridge foundations is the most common cause of bridge failures. The overall goal of this project was to evaluate the applicability of the existing Hydraulic Engineering Circular (HEC-18) documents method to Louisiana bridges that are mostly ...
Participatory Design in Gerontechnology: A Systematic Literature Review.
Merkel, Sebastian; Kucharski, Alexander
2018-05-19
Participatory design (PD) is widely used within gerontechnology, but there is no common understanding about which methods are used for what purposes. This review aims to examine what different forms of PD exist in the field of gerontechnology and how these can be categorized. We conducted a systematic literature review covering several databases. The search strategy was based on 3 elements: (1) participatory methods and approaches with (2) older persons aiming at developing (3) technology for older people. Our final review included 26 studies representing a variety of technologies designed/developed and methods/instruments applied. According to the technologies, the publications reviewed can be categorized into 3 groups: studies that (1) use already existing technology with the aim of finding new ways of use; (2) aim at creating new devices; (3) test and/or modify prototypes. The implementation of PD depends on four questions: why a participatory approach is applied, who is involved as future user(s), when those future users are involved, and how they are incorporated into the innovation process. There are multiple ways, methods, and instruments to integrate users into the innovation process. Which methods should be applied depends on the context. However, most studies do not evaluate whether participatory approaches lead to better acceptance and/or use of the co-developed products. Therefore, participatory design should follow a comprehensive strategy, starting with the users' needs and ending with an evaluation of whether the applied methods have led to better results.
Clinical Trial Design for HIV Prevention Research: Determining Standards of Prevention.
Dawson, Liza; Zwerski, Sheryl
2015-06-01
This article seeks to advance ethical dialogue on choosing standards of prevention in clinical trials testing improved biomedical prevention methods for HIV. The stakes in this area of research are high, given the continued high rates of infection in many countries and the budget limitations that have constrained efforts to expand treatment for all who are currently HIV-infected. New prevention methods are still needed; at the same time, some existing prevention and treatment interventions have been proven effective but are not yet widely available in the countries where they are most urgently needed. The ethical tensions in this field of clinical research are well known and have been the subject of extensive debate. There is no single clinical trial design that can optimize all the ethically important goals and commitments involved in research. Several recent articles have described the current ethical difficulties in designing HIV prevention trials, especially in resource-limited settings; however, there is no consensus on how to handle clinical trial design decisions, and existing international ethical guidelines offer conflicting advice. This article acknowledges these deep ethical dilemmas and moves beyond a simple descriptive approach to advance an organized method for considering what clinical trial designs will be ethically acceptable for HIV prevention trials, balancing the relevant criteria and providing justification for specific design decisions. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Image-based corrosion recognition for ship steel structures
NASA Astrophysics Data System (ADS)
Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng
2018-03-01
Ship structures are inevitably subjected to corrosion in service. Existing image-based methods are influenced by noise in images because they recognize corrosion by extracting features. In this paper, a novel method of image-based corrosion recognition for ship steel structures is proposed. The method utilizes convolutional neural networks (CNN) and is not affected by noise in images. A CNN used to recognize corrosion was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. Combining the trained CNN classifier with a sliding-window technique, the corrosion zone in an image can be recognized.
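The sliding-window stage can be sketched independently of the trained network; below, a simple mean-intensity threshold stands in for the CNN classifier (a placeholder assumption, purely to show the window mechanics of localizing the corrosion zone):

```python
import numpy as np

def sliding_window_zones(image, classify, win=32, stride=16):
    """Scan an image with a sliding window and return the top-left corners
    of windows the patch classifier flags as corroded."""
    h, w = image.shape[:2]
    return [(y, x)
            for y in range(0, h - win + 1, stride)
            for x in range(0, w - win + 1, stride)
            if classify(image[y:y + win, x:x + win])]

# stand-in classifier: mean-intensity threshold in place of the trained CNN
img = np.zeros((64, 64))
img[32:, 32:] = 1.0   # synthetic "corroded" patch in the lower-right quadrant
hits = sliding_window_zones(img, lambda p: p.mean() > 0.5)
```

In the paper's pipeline, `classify` would be the fine-tuned CNN's corroded/not-corroded decision on each patch.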
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
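The Monte Carlo load-resistance step can be sketched as follows; the normal stress and strength distributions and their parameters are illustrative assumptions, not values from the paper:

```python
import random

def failure_probability(load, resistance, n=100_000, seed=1):
    """Monte Carlo estimate of P(load > resistance), the failure
    probability in a simple load-resistance interference model."""
    rng = random.Random(seed)
    fails = sum(load(rng) > resistance(rng) for _ in range(n))
    return fails / n

# assumed stress/strength distributions, purely for illustration
p_f = failure_probability(
    load=lambda rng: rng.gauss(100.0, 10.0),        # applied stress
    resistance=lambda rng: rng.gauss(140.0, 10.0))  # component strength
```

For these assumed normals the exact answer is Phi(-40 / sqrt(200)), roughly 2e-3, so the estimate lands in that neighborhood; a real application would substitute fitted load and strength distributions per failure mode.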
Shuttle mission simulator hardware conceptual design report
NASA Technical Reports Server (NTRS)
Burke, J. F.
1973-01-01
The detailed shuttle mission simulator hardware requirements are discussed. The conceptual design methods, or existing technology, whereby those requirements will be fulfilled, are described. Information of a general nature on the total design problem, plus specific details on how these requirements are to be satisfied, is reported. The configuration of the simulator is described and the capabilities for various types of training are identified.
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
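As one building block, a (non-parameter-dependent) continuous-time algebraic Riccati equation can be solved with SciPy to obtain a stabilizing state-feedback gain. The double-integrator plant below is an assumed textbook example, not the patented adaptive scheme:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and return the
    state-feedback gain K for the control law u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# double-integrator plant (assumed example)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.array([[1.0]]))
```

For this plant with Q = I and R = 1 the gain works out to [1, sqrt(3)], and the closed-loop matrix A - BK is Hurwitz; the patent's parameter-dependent variant would re-solve a Riccati equation as the uncertainty estimate evolves.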
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters to be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, are achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
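The population-search idea can be sketched with a toy real-coded genetic algorithm (tournament selection, uniform crossover, Gaussian mutation) minimizing an assumed two-variable objective; this is a generic stand-in, not the thesis's motor-performance model:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, sigma=0.2, seed=2):
    """Toy real-coded GA: 3-way tournament selection, uniform crossover,
    Gaussian mutation clipped to the variable bounds."""
    rng = random.Random(seed)
    def clip(g, lo, hi):
        return min(hi, max(lo, g))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=f)   # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [clip(g + (rng.gauss(0.0, sigma)
                               if rng.random() < 0.3 else 0.0), lo, hi)
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand
    return best
```

In the thesis's setting, the genome would encode motor configuration and sizing parameters, and `f` would penalize constraint violations from the motor performance estimate.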
Design enhancement tools in MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Wallerstein, D. V.
1984-01-01
Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in user-written code, or as part of the MSC/EOS (Engineering Operating System) currently under development. Since several excellent optimization codes exist and since they require very specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).
A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Chatterjee, Prasenjit
2017-12-01
Selection of cotton fabrics for providing optimal clothing comfort is often considered as a multi-criteria decision making problem consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated so as to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real-time examples. These developed models can also identify the statistically significant fabric properties and their interactions affecting the measured TOPSIS scores and final selection decisions. There exists a good degree of congruence between the ranking patterns derived using these meta-models and the existing methods for cotton fabric ranking and subsequent selection.
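The TOPSIS scoring itself is compact enough to sketch directly: vector-normalize the decision matrix, weight it, form ideal and anti-ideal points, and rank by relative closeness. The small decision matrix below is an invented example, not fabric data from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores for alternatives (rows) over criteria
    (columns); benefit[j] is True for larger-is-better criteria."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, anti)    # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The paper's contribution is fitting regression meta-models to such scores over a designed experiment; the scoring routine above is only the TOPSIS ingredient.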
Computer Aided Design in Engineering Education.
ERIC Educational Resources Information Center
Gobin, R.
1986-01-01
Discusses the use of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) systems in an undergraduate engineering education program. Provides a rationale for CAD/CAM use in the already existing engineering program. Describes the methods used in choosing the systems, some initial results, and warnings for first-time users. (TW)
Innovation and design approaches within prospective ergonomics.
Liem, André; Brangier, Eric
2012-01-01
In this conceptual article the topic of "Prospective Ergonomics" will be discussed within the context of innovation, design thinking and design processes & methods. Design thinking is essentially a human-centred innovation process that emphasises observation, collaboration, interpretation, visualisation of ideas, rapid concept prototyping and concurrent business analysis, which ultimately influences innovation and business strategy. The objective of this project is to develop a roadmap for innovation, involving consumers, designers and business people in an integrative process, which can be applied to product, service and business design. A theoretical structure comprising innovation perspectives (1), worldviews supported by rationalist-historicist and empirical-idealistic dimensions (2), and models of "design" reasoning (3) precedes the development and classification of existing methods as well as the introduction of new ones.
Heuristic decomposition for non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; Hajela, P.
1991-01-01
Design and optimization are substantially more complex in multidisciplinary and large-scale engineering applications due to inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable to nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.
A DUST-SETTLING CHAMBER FOR SAMPLING-INSTRUMENT COMPARISON STUDIES
Introduction: Few methods exist that can evenly and reproducibly deposit dusts onto surfaces for surface-sampling methodological studies. A dust-deposition chamber was designed for that purpose.
Methods: A 1-m3 Rochester-type chamber was modified to produce high airborne d...
Design issues for grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Ropp, Michael Eugene
1998-08-01
Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the needed computer design tools for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject under Subtask 2.3. It is shown that it does not compare favorably with other existing techniques.
However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this improved method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.
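For context, the kind of standard passive relay that Subtask 2.1 finds insufficient on its own can be sketched as an over/under-frequency window check; the nominal frequency, band, and persistence count below are illustrative assumptions, not settings from the thesis:

```python
def islanding_trip(freqs, f_nom=60.0, band=0.5, n_consecutive=3):
    """Passive over/under-frequency relay: trip when the measured frequency
    stays outside [f_nom - band, f_nom + band] for n_consecutive samples."""
    run = 0
    for f in freqs:
        run = run + 1 if abs(f - f_nom) > band else 0
        if run >= n_consecutive:
            return True
    return False
```

The weakness such a relay exhibits is the non-detection zone: when local generation closely matches local load, the islanded frequency may stay inside the band, which is why active islanding prevention methods are studied.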
Negotiating a Systems Development Method
NASA Astrophysics Data System (ADS)
Karlsson, Fredrik; Hedström, Karin
Systems development methods (or methods) are often applied in tailored versions to fit the actual situation. Method tailoring is, in most of the existing literature, viewed as either (a) a highly rational process with the method engineer as the driver, where the project members are passive information providers, or (b) an unstructured process where the systems developer makes individual choices, a selection process without any driver. The purpose of this chapter is to illustrate that important design decisions during method tailoring are made by project members through negotiation. The study has been carried out using the perspective of actor-network theory. Our narratives depict method tailoring as more complex than (a) and (b): the driver role rotates between the project members, and design decisions are based on influences from several project members. However, these design decisions are not consensus decisions.
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
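The modified-Newton ingredient can be sketched generically: when the Hessian fails to factor, a multiple of the identity is added until the Cholesky factorization succeeds, and the step is obtained by solving with the factor and its transpose rather than forming an explicit inverse. The nonconvex test function below is an assumed illustration, not the FOCUS coil objective:

```python
import numpy as np

def modified_newton(grad, hess, x0, iters=100, tol=1e-10, beta=1e-3):
    """Newton iteration with a modified Cholesky factorization: shift the
    Hessian by tau*I until it is positive definite, then solve H p = -g
    via the factor (no explicit inverse)."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau = 0.0
        while True:
            try:
                L = np.linalg.cholesky(H + tau * np.eye(n))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, beta)   # grow the shift until PD
        # solve L y = -g, then L^T p = y, instead of inverting H
        y = np.linalg.solve(L, -g)
        p = np.linalg.solve(L.T, y)
        x = x + p
    return x
```

Away from the indefinite region the shift vanishes and the iteration recovers the quadratic convergence of a classical Newton method, which is the behavior FOCUS exploits.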
A health literacy and usability heuristic evaluation of a mobile consumer health application.
Monkman, Helen; Kushniruk, Andre
2013-01-01
Usability and health literacy are two critical factors in the design and evaluation of consumer health information systems. However, methods for evaluating these two factors in conjunction remain limited. This study adapted a set of existing guidelines for the design of consumer health Web sites into evidence-based evaluation heuristics tailored specifically for mobile consumer health applications. In order to test the approach, a mobile consumer health application (app) was then evaluated using these heuristics. In addition to revealing ways to improve the usability of the system, this analysis identified opportunities to augment the content to make it more understandable by users with limited health literacy. This study successfully demonstrated the utility of converting existing design guidelines into heuristics for the evaluation of usability and health literacy. The heuristics generated could be applied for assessing and revising other existing consumer health information systems.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
ERIC Educational Resources Information Center
Nielsen, Richard A.
2016-01-01
This article shows how statistical matching methods can be used to select "most similar" cases for qualitative analysis. I first offer a methodological justification for research designs based on selecting most similar cases. I then discuss the applicability of existing matching methods to the task of selecting most similar cases and…
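Selecting a "most similar" case, as described in this record, amounts to a nearest-neighbor search over case-level covariates. The sketch below uses z-scored Euclidean distance, a common simplification of the Mahalanobis matching the article discusses; the covariate values are hypothetical.

```python
import numpy as np

# Hypothetical standardized covariates for six candidate cases (rows)
X = np.array([
    [0.2, 1.1, -0.3],
    [0.1, 1.0, -0.2],
    [2.0, -1.0, 0.9],
    [-1.5, 0.4, 2.2],
    [1.0, 0.0, 0.0],
    [-0.5, -1.2, 1.0],
])

def most_similar(X, i):
    """Index of the case nearest to case i after z-scoring each covariate."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf          # exclude the case itself
    return int(np.argmin(d))

print(most_similar(X, 0))  # -> 1, the case closest to case 0
```

Replacing the z-score metric with the inverse-covariance (Mahalanobis) form recovers the full matching setup when covariates are correlated.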
Next-Generation NATO Reference Mobility Model (NG-NRMM)
2016-05-11
facilitate comparisons between vehicle design candidates and to assess the mobility of existing vehicles under specific scenarios. Although NRMM has... of different deployed platforms in different areas of operation and routes. Improved flexibility as a design and procurement support tool through...
Tuberculosis vaccines in clinical trials
Rowland, Rosalind; McShane, Helen
2011-01-01
Effective prophylactic and/or therapeutic vaccination is a key strategy for controlling the global TB epidemic. The partial effectiveness of the existing TB vaccine, bacille Calmette–Guérin (BCG), suggests effective vaccination is possible and highlights the need for an improved vaccination strategy. Clinical trials are evaluating both modifications to the existing BCG immunization methods and also novel TB vaccines, designed to replace or boost BCG. Candidate vaccines in clinical development include live mycobacterial vaccines designed to replace BCG, subunit vaccines designed to boost BCG and therapeutic vaccines designed as an adjunct to chemotherapy. There is a great need for validated animal models, identification of immunological biomarkers of protection and field sites with the capacity for large-scale efficacy testing in order to develop and license a novel TB vaccine or regimen. PMID:21604985
Structural integrity of wind tunnel wooden fan blades
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Wingate, Robert T.; Rooker, James R.; Mort, Kenneth W.; Zager, Harold E.
1991-01-01
Information is presented which was compiled by the NASA Inter-Center Committee on Structural Integrity of Wooden Fan Blades and is intended for use as a guide in the design, fabrication, evaluation, and assurance of fan systems using wooden blades. A risk assessment approach for existing NASA wind tunnels with wooden fan blades is provided. Also, state-of-the-art information is provided for wooden fan blade design, drive system considerations, inspection and monitoring methods, and fan blade repair. Proposed research and development activities are discussed, and recommendations are provided which are aimed at future wooden fan blade design activities and at safely maintaining existing NASA wind tunnel fan blades. Information is presented that will be of value to wooden fan blade designers, fabricators, inspectors, and wind tunnel operations personnel.
Analysis of a Knowledge-Management-Based Process of Transferring Project Management Skills
ERIC Educational Resources Information Center
Ioi, Toshihiro; Ono, Masakazu; Ishii, Kota; Kato, Kazuhiko
2012-01-01
Purpose: The purpose of this paper is to propose a method for the transfer of knowledge and skills in project management (PM) based on techniques in knowledge management (KM). Design/methodology/approach: The literature contains studies on methods to extract experiential knowledge in PM, but few studies exist that focus on methods to convert…
Zhang, Junming; Wu, Yan
2018-03-28
Many systems have been developed for automatic sleep stage classification, but nearly all of them rely on handcrafted features. Because the feature space is large, feature selection must be used, and designing handcrafted features is a difficult, time-consuming task that requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages, and there may be important features for sleep stage classification that we are not yet aware of. Therefore, a new sleep stage classification system based on the complex-valued convolutional neural network (CCNN) is proposed in this study. Unlike existing sleep stage methods, our method automatically extracts features from raw electroencephalography data and then classifies sleep stages based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, while the CCNN obtains better classification performance and considerably faster convergence than a real-valued convolutional neural network. The results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
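The arithmetic of a complex-valued convolutional neuron, the building block this record refers to, can be checked in a few lines. This sketch is not the authors' CCNN; it only demonstrates that one complex filter computes two coupled real filters, with all inputs and weights randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_conv1d(x, w):
    """Valid-mode 1-D correlation with complex-valued weights.

    With x = xr + i*xi and w = wr + i*wi, a complex neuron computes two
    coupled real filters: (xr*wr - xi*wi) + i*(xr*wi + xi*wr).
    """
    n = len(x) - len(w) + 1
    return np.array([np.sum(x[k:k + len(w)] * w) for k in range(n)])

# Raw EEG is real-valued, so the imaginary input channel starts at zero
x = rng.standard_normal(16).astype(complex)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = complex_conv1d(x, w)

# Verify the coupled-real-filter identity for the real part of the output
yr = np.array([np.sum(x.real[k:k + 4] * w.real) - np.sum(x.imag[k:k + 4] * w.imag)
               for k in range(len(y))])
print(np.allclose(y.real, yr))  # -> True
```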
Analyzing the security of an existing computer system
NASA Technical Reports Server (NTRS)
Bishop, M.
1986-01-01
Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng
2012-07-01
The reliability apportionment of a spacecraft solar array is of significant importance to spacecraft designers in the early stage of design. However, it is difficult to resolve the reliability apportionment problem with existing methods because of data insufficiency and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method that combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may obtain the highest reliability value while the solar panels and hinges may get the lowest reliability before design and manufacturing. Our method is of practical significance for the reliability apportionment of solar arrays where the design information has not yet been clearly identified, particularly in the early stage of design.
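The fuzzy comprehensive evaluation step mentioned in this record reduces to a weighted matrix product. The sketch below uses entirely hypothetical membership degrees and criteria weights for one subsystem; the FRPN reasoning layer of the paper is not modeled.

```python
import numpy as np

# Hypothetical fuzzy comprehensive evaluation for one solar-array subsystem.
# Rows of R: criteria (complexity, load, environment); columns: rating grades
# (high, medium, low reliability). Entries are assumed membership degrees.
R = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
W = np.array([0.5, 0.3, 0.2])   # criteria weights, summing to 1

B = W @ R                        # fuzzy evaluation vector over the grades
print(B.round(2).tolist())       # -> [0.59, 0.31, 0.1]
```

The grade with the largest membership in B would be taken as the subsystem's evaluated reliability level before apportionment.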
A. Kumar; Bruce Marcot; G. Talukdar
2010-01-01
We studied vegetation and land cover characteristics within the existing array of protected areas (PAs) in South Garo Hills of Meghalaya, northeast India and introduce the concept of protected area network (PAN) and methods to determine linkages of forests among existing PAs. We describe and analyze potential elements of a PAN, including PAs, reserved forests,...
ERIC Educational Resources Information Center
Roberts, Kelly D.; Park, Hye Jin; Brown, Steven; Cook, Bryan
2011-01-01
Universal Design for Instruction (UDI) in postsecondary education is a relatively new concept/framework that has generated significant support. The purpose of this literature review was to examine existing empirical research, including qualitative, quantitative, and mixed methods, on the use of UDI (and related terms) in postsecondary education.…
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature of 760 °C, the design evaluation must document that these conditions exist. (ii) For a combustion... autoignition temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B...
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature of 760 °C, the design evaluation must document that these conditions exist. (ii) For a combustion... autoignition temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B...
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... temperature of 760 °C, the design evaluation must document that these conditions exist. (ii) For a combustion... autoignition temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B...
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature of 760 °C, the design evaluation must document that these conditions exist. (ii) For a combustion... autoignition temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B...
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Design for Usability; practice-oriented research for user-centered product design.
van Eijk, Daan; van Kuijk, Jasper; Hoolhorst, Frederik; Kim, Chajoong; Harkema, Christelle; Dorrestijn, Steven
2012-01-01
The Design for Usability project aims at improving the usability of electronic professional and consumer products by creating new methodology and methods for user-centred product development, which are feasible to apply in practice. The project was focused on 5 key areas: (i) design methodology, expanding the existing approach of scenario-based design to incorporate the interaction between product design, user characteristics, and user behaviour; (ii) company processes, barriers and enablers for usability in practice; (iii) user characteristics in relation to types of products and use-situations; (iv) usability decision-making; and (v) product impact on user behaviour. The project team developed methods and techniques in each of these areas to support the design of products with a high level of usability. This paper brings together and summarizes the findings.
Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance
NASA Astrophysics Data System (ADS)
Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.
2008-04-01
This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or of improvements to existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can perform changes to a recorded aircraft sound with jury tests makes it possible to quantify the human perception of aircraft noise.
Upgrading in an Industrial Setting. Final Report.
ERIC Educational Resources Information Center
Russell, Wendell
The project objectives were: (1) to assess existing industrial upgrading practices in an Atomic Energy Commission contractor organization, (2) to design new alternative upgrading methods, (3) to experiment with new upgrading methods, (4) to plan for utilization of proven upgrading programs, and (5) to document and disseminate activities. A twelve…
Computer method for design of acoustic liners for turbofan engines
NASA Technical Reports Server (NTRS)
Minner, G. L.; Rice, E. J.
1976-01-01
A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
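The target-attenuation step in this record is a band-by-band subtraction of the goal spectrum from the estimated generation spectrum. The sketch below uses assumed band levels, not values from the report, purely to show the calculation.

```python
import numpy as np

# Hypothetical 1/3-octave band levels in dB (assumed, not from the report)
bands = np.array([500, 1000, 2000, 4000])            # band center frequency, Hz
generated = np.array([108.0, 112.0, 115.0, 110.0])   # estimated fan noise
goal = np.array([100.0, 100.0, 100.0, 100.0])        # annoyance-weighted goal

# Target attenuation: what the liner must remove in each band (never negative)
target = np.clip(generated - goal, 0.0, None)
print(target.tolist())  # -> [8.0, 12.0, 15.0, 10.0]
```

The resulting target spectrum is what the liner-impedance model must then be tuned to achieve.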
Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-08-27
The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
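The readability half of the WQA assessment uses the standard Flesch formulas named in this record. The sketch below implements them with a deliberately crude vowel-group syllable counter (real implementations use dictionaries or better heuristics); the sample sentence is an assumption for illustration.

```python
import re

def count_syllables(word):
    """Crude vowel-group syllable estimate (adequate for illustration only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1                     # drop a likely silent final 'e'
    return max(n, 1)

def readability(text):
    """Flesch Reading Ease and Flesch-Kincaid Grade Level for a text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences           # average sentence length
    asw = syllables / len(words)           # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw
    fkgl = 0.39 * asl + 11.8 * asw - 15.59
    return round(fre, 1), round(fkgl, 1)

scores = readability("The cat sat on the mat. It was happy there.")
print(scores)  # very high reading ease, grade level below 1
```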
How to improve the comfort of Kesawan Heritage Corridor, Medan City
NASA Astrophysics Data System (ADS)
Tegar; Ginting, Nurlisa; Suwantoro, H.
2018-03-01
Comfort is indispensable to making a neighborhood or city friendly, especially the comfort of the infrastructure in a corridor. People must be able to feel comfortable in order to act rationally in their physical environment, and the existing infrastructure must be able to support Kesawan as a historic district. Kesawan is an area filled with many unique buildings; without comfort, however good the existing buildings' architecture may be, it cannot be enjoyed, and this also affects the identity of a region or city. The aim of this research is to re-design the public facilities of the Kesawan corridor with respect to comfort: orientation, traffic calming, vegetation, signage, public facilities (toilets, seating, bus stops, bins), an information center, parking, and pedestrian paths. The design concept is translated into design criteria. This research uses qualitative methods. Some facilities in this corridor are unsuitable and some are not available at all, so improvements and additions to the existing facilities are needed. It is expected that by upgrading the existing facilities, visitors who come to Kesawan will be able to enjoy it more, making Medan a friendlier city.
A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach
Kuan, Pei Fen; Huang, Bo
2013-01-01
This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
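The weighted Z-test pooling described in this record can be sketched directly. The square-root-of-sample-size weights below are a common choice, assumed here for illustration; the p-values and sub-sample sizes are hypothetical.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def weighted_z_pooled_p(p1, n1, p2, n2):
    """Pool one-sided p-values from the paired and unpaired sub-designs.

    p1/n1 come from the matched-pairs analysis, p2/n2 from the analysis of
    the unmatched leftovers; weights follow the sqrt(sample size) convention.
    """
    w1, w2 = n1 ** 0.5, n2 ** 0.5
    z1, z2 = N.inv_cdf(1 - p1), N.inv_cdf(1 - p2)
    z = (w1 * z1 + w2 * z2) / (w1 ** 2 + w2 ** 2) ** 0.5
    return 1 - N.cdf(z)

p = weighted_z_pooled_p(0.04, 30, 0.20, 10)
print(round(p, 3))  # ≈ 0.026, stronger than either design alone at this weight
```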
Reusable design: A proposed approach to Public Health Informatics system design
2011-01-01
Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against a simple random sampling design, and we derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.
Statistical inference for the additive hazards model under outcome-dependent sampling
Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo
2015-01-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against a simple random sampling design, and we derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure. PMID:26379363
Zhong, Yi; Gross, Herbert
2017-05-01
Freeform surfaces play important roles in improving the imaging performance of off-axis optical systems. However, for systems with demanding specifications, the structure of the freeform surfaces can become very complicated and the number of freeform surfaces large, which brings challenges in fabrication and increases cost. Therefore, achieving a good initial system with minimum aberrations and a reasonable structure before implementing freeform surfaces is essential for optical designers. Existing initial-system design methods are limited to certain types of systems, so a universal method for efficiently obtaining a good initial system is very important. In this paper, based on nodal aberration theory and the system design method using Gaussian brackets, the initial system design method is extended from rotationally symmetric systems to general non-rotationally symmetric systems. The design steps are introduced and, on this basis, two off-axis three-mirror systems are pre-designed using spherical surfaces. The primary aberrations are minimized using a nonlinear least-squares solver. This work provides insight and guidance for the initial system design of off-axis mirror systems.
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
The spatial weights for electrodes called the common spatial pattern (CSP) are known to be effective in EEG signal classification for motor imagery based brain-computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described by different frequency bands and different spatial patterns, such as activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights that extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function that is a natural extension of CSP. A classification experiment shows that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function extracts good discriminative features, and the results suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
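The classical CSP that this record extends can be sketched via whitening and a second eigendecomposition. The synthetic trials below are assumptions for illustration: class 1 has extra variance on channel 0, class 2 on channel 2, so the extreme filters should isolate those channels.

```python
import numpy as np

rng = np.random.default_rng(0)

def csp_filters(X1, X2):
    """Classical CSP: spatial filters from two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples). Returns a
    (channels, channels) matrix whose first/last columns maximize the
    variance ratio between the classes.
    """
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whiten the composite covariance, then diagonalize class 1 in that space
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    ev, V = np.linalg.eigh(P @ C1 @ P.T)
    return P @ V               # columns ordered by eigenvalue (variance ratio)

# Synthetic trials: class 1 dominant on channel 0, class 2 on channel 2
X1 = rng.standard_normal((20, 3, 200)); X1[:, 0] *= 3.0
X2 = rng.standard_normal((20, 3, 200)); X2[:, 2] *= 3.0
W = csp_filters(X1, X2)
# The filter with the largest eigenvalue should load mostly on channel 0
print(int(np.argmax(np.abs(W[:, -1]))))  # -> 0
```

The paper's extension jointly optimizes FIR frequency filters alongside these spatial weights; this sketch covers only the spatial part.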
ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES
LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.
2008-01-01
Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
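A common baseline for combining per-source reliabilities of the kind this record evaluates is the noisy-OR rule, which treats sources as independent. This is an illustrative combination, not necessarily the authors' new method, and the reliability values are hypothetical.

```python
def combine_noisy_or(reliabilities):
    """Probability an interaction is real given independent evidence sources,
    each with reliability r_i (the standard noisy-OR combination)."""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

# An interaction reported by two-hybrid (r=0.5) and co-purification (r=0.8)
print(round(combine_noisy_or([0.5, 0.8]), 3))  # -> 0.9
```

Each additional independent source can only raise the combined confidence, which is why the rule is sensitive to noisy reliability assignments, the issue the record's comparison examines.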
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.
Hund, Lauren; Bedrick, Edward J; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
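The standard binomial decision rule that the cluster LQAS designs in this record generalize can be computed directly. The thresholds and risk levels below (50%/80% coverage, 10% risks, n = 19) are the classic textbook configuration, assumed here for illustration.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, p_low, p_high, alpha=0.10, beta=0.10):
    """Smallest decision threshold d for a standard (non-cluster) LQAS rule.

    Classify coverage as adequate when >= d of the n sampled individuals
    are covered. Returns d meeting both risk constraints, or None.
    """
    for d in range(n + 1):
        risk_low = 1 - binom_cdf(d - 1, n, p_low)   # accept a bad lot (p = p_low)
        risk_high = binom_cdf(d - 1, n, p_high)     # reject a good lot (p = p_high)
        if risk_low <= alpha and risk_high <= beta:
            return d
    return None

print(lqas_rule(19, 0.50, 0.80))  # -> 13, the classic n=19 design
```

The cluster variants in the record replace the binomial model with one that absorbs between-cluster variation, which is why the clustering parameterization drives the sample size.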
Research on Design Information Management System for Leather Goods
NASA Astrophysics Data System (ADS)
Lu, Lei; Peng, Wen-li
The idea of setting up a design information management system for leather goods was put forward to solve the problems existing in the current information management of leather goods. The working principles of the design information management system for leather goods are analyzed in detail. First, the approach to acquiring design information for leather goods is introduced. Second, the methods for processing design information are introduced. Third, the management of design information in the database is studied. Finally, the application of the system is discussed, taking shoe products as an example.
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for the seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, due to the low knowledge level, linear elastic analysis is often the only analysis method allowed. The codes themselves (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for evaluating seismic demand. This type of analysis is based on a linear-elastic structural model subjected to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectrum ordinates, or the forces obtained from a linear analysis, in order to take the nonlinear structural capacity into account. Behavior factors should be defined based on the several parameters that influence the nonlinear seismic capacity, such as the mechanical characteristics of the materials, the structural system, irregularity, and the design procedures. In practical applications, there is still an evident lack of detailed rules and of behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to correctly evaluate the seismic force demand, behavior factor values coherent with a force-based seismic safety assessment procedure are proposed and compared with the values reported in the Italian seismic code, NTC08.
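The spectrum reduction this record describes is a simple division of the elastic ordinate by q. The sketch below uses an EC8-type spectral shape with illustrative parameter values (including a hypothetical q and period); real code spectra also impose a lower bound on the design ordinate, omitted here.

```python
def elastic_spectrum(T, ag=0.25, S=1.2, TB=0.15, TC=0.5, TD=2.0, eta=1.0):
    """EC8-type elastic acceleration spectrum Se(T), in g (illustrative shape)."""
    if T <= TB:
        return ag * S * (1 + T / TB * (eta * 2.5 - 1))
    if T <= TC:
        return ag * S * eta * 2.5           # constant-acceleration plateau
    if T <= TD:
        return ag * S * eta * 2.5 * TC / T  # constant-velocity branch
    return ag * S * eta * 2.5 * TC * TD / T**2

def design_spectrum(T, q, **kw):
    """Reduce the elastic ordinate by the behavior factor q."""
    return elastic_spectrum(T, **kw) / q

T = 0.4  # hypothetical fundamental period of a low-rise RC-MRF, seconds
print(round(design_spectrum(T, q=3.9), 3))  # plateau 0.75 g reduced by q -> 0.192
```

Using a q calibrated to the actual nonlinear capacity of the existing building type, rather than a code value for new design, is precisely the adjustment the record argues for.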
Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-01-01
Background The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. Objective This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). Methods A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. Results The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Conclusions The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion. PMID:23981848
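The readability component of the assessment can be reproduced with the standard published Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. The sketch below uses a deliberately naive vowel-group syllable counter, which is an assumption of this illustration, not the counting rule used in the study.

```python
import re

def syllable_count(word):
    # Naive heuristic: count groups of consecutive vowels; at least 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return fre, fkgl

fre, fkgl = flesch_scores("The cat sat.")
```

Higher Reading Ease (and lower Grade Level) indicates more accessible text, which is the property the WQA correlates with search ranking.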
Workflow Design Using Fragment Composition
NASA Astrophysics Data System (ADS)
Mosser, Sébastien; Blay-Fornarino, Mireille; France, Robert
The Service-Oriented Architecture (Soa) paradigm supports the assembly of atomic services to create applications that implement complex business processes. Assembly can be accomplished by service orchestrations defined by Soa architects. The Adore method allows Soa architects to model complex orchestrations of services by composing models of smaller orchestrations called orchestration fragments. The Adore method can also be used to weave fragments that address new concerns into existing application models. In this paper we illustrate how the Adore method can be used to separate and compose process aspects in a Soa design of the Car Crash Crisis Management System. The paper also includes a discussion of the benefits and limitations of the Adore method.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2014-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flighttesting. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would rely on the aerodynamic requirements.
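The surrogate-based uncertainty propagation described above can be sketched with a toy model: sample an input parameter, fit a cheap polynomial surrogate to the "expensive" model, then Monte Carlo the surrogate. The cone-angle function and parameter values below are stand-ins, not the actual HIAD structural model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive" structural model: cone angle (deg) as a
# function of one pressure-like parameter (illustrative only).
def cone_angle(p):
    return 3.0 * p**2 + 2.0 * p + 70.0

# Fit a cheap quadratic surrogate from a handful of model evaluations.
p_train = rng.uniform(0.0, 1.0, 20)
coeffs = np.polyfit(p_train, cone_angle(p_train), deg=2)
surrogate = np.poly1d(coeffs)

# Propagate input uncertainty through the surrogate, not the full model.
p_mc = rng.normal(0.5, 0.05, 10_000)
angles = surrogate(p_mc)
spread = angles.std()
```

The Monte Carlo step evaluates the surrogate 10,000 times at negligible cost, which is the source of the orders-of-magnitude savings the abstract mentions.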
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By exploiting compressive sensing (CS), channel estimation methods can reduce pilot overhead and improve spectral efficiency. This work investigates channel estimation and pilot design for massive MIMO systems with the help of block-structured CS. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed superimposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
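Greedy sparse recovery of the kind underlying such estimators can be sketched with plain Orthogonal Matching Pursuit. This is a generic CS baseline, not the paper's block-sparsity adaptive variant (BSAMP); the dictionary size, sparsity level, and coefficients below are illustrative.

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal Matching Pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    support, residual = [], y.copy()
    for _ in range(n_iter):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 24))
A /= np.linalg.norm(A, axis=0)      # unit-norm dictionary columns
x_true = np.zeros(24)
x_true[[3, 17]] = [2.0, -2.0]       # a 2-sparse "channel"
y = A @ x_true                      # noiseless measurements
x_hat = omp(A, y, n_iter=6)         # a few extra greedy passes for safety
```

Block-structured variants differ in selecting whole blocks of columns per iteration, which exploits the common sparsity pattern across antennas.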
Measure Guideline: Installing Rigid Foam Insulation on the Interior of Existing Brick Walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Natarajan, Hariharan; Klocke, Steve; Puttagunta, Srikanth
2012-06-01
This measure guideline provides information on an effective method of insulating the interior of existing brick masonry walls with extruded polystyrene (XPS) insulation board. The guide outlines step-by-step design and installation procedures while explaining the benefits and tradeoffs where applicable. The authors intend this document to be useful to a varied audience that includes builders, remodelers, contractors, and homeowners.
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for the explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods clearly show the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: an at-random variation that serves as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, the question finally arises whether a situation similar to that of the Greek Classical temples exists for animal structure. This would mean that the at-random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations that form yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
User-Centered Design for Psychosocial Intervention Development and Implementation
Lyon, Aaron R.; Koerner, Kelly
2018-01-01
The current paper articulates how common difficulties encountered when attempting to implement or scale-up evidence-based treatments are exacerbated by fundamental design problems, which may be addressed by a set of principles and methods drawn from the contemporary field of user-centered design. User-centered design is an approach to product development that grounds the process in information collected about the individuals and settings where products will ultimately be used. To demonstrate the utility of this perspective, we present four design concepts and methods: (a) clear identification of end users and their needs, (b) prototyping/rapid iteration, (c) simplifying existing intervention parameters/procedures, and (d) exploiting natural constraints. We conclude with a brief design-focused research agenda for the developers and implementers of evidence-based treatments. PMID:29456295
Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum
ERIC Educational Resources Information Center
Rubenstein, Lisa DaVia; Ridgley, Lisa M.
2017-01-01
A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.…
Considerations for the Systematic Analysis and Use of Single-Case Research
ERIC Educational Resources Information Center
Horner, Robert H.; Swaminathan, Hariharan; Sugai, George; Smolkowski, Keith
2012-01-01
Single-case research designs provide a rigorous research methodology for documenting experimental control. If single-case methods are to gain wider application, however, a need exists to define more clearly (a) the logic of single-case designs, (b) the process and decision rules for visual analysis, and (c) an accepted process for integrating…
Incorporating Total Quality Management in an Engineering Design Course. Report 5-1993.
ERIC Educational Resources Information Center
Wilczynski, V.; And Others
One definition of creativity is the conviction that each and every existing idea can be improved. It is proposed that creativity in an engineering design process can be encouraged by the adoption of Total Quality Management (TQM) methods based on a commitment to continuous improvement. This paper addresses the introduction and application of TQM…
Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.
1991-01-01
The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.
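A standard preliminary to the decomposition of non-hierarchic, coupled design problems is identifying which analysis modules are mutually dependent, i.e. finding cycles in the data-dependency graph. The sketch below uses Kosaraju's strongly-connected-components algorithm on a toy dependency graph; the module names are hypothetical, and this is a generic graph technique, not the thesis's specific decomposition method.

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's algorithm: groups of mutually dependent modules."""
    graph, rev, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in edges:
        graph[a].append(b)
        rev[b].append(a)
        nodes.update((a, b))

    order, seen = [], set()
    def dfs1(u):                      # first pass: record finish order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comps, assigned = [], set()
    def dfs2(u, comp):                # second pass: sweep the transpose
        assigned.add(u)
        comp.add(u)
        for v in rev[u]:
            if v not in assigned:
                dfs2(v, comp)
    for u in reversed(order):
        if u not in assigned:
            comp = set()
            dfs2(u, comp)
            comps.append(comp)
    return comps

# Aerodynamics and structures feed each other (aeroelastic coupling);
# both feed a one-way performance module.
deps = [("aero", "struct"), ("struct", "aero"),
        ("aero", "perf"), ("struct", "perf")]
groups = strongly_connected_components(deps)
```

Modules inside one component must be iterated (or optimized) together, while components connected only one way can be solved in sequence.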
NASA Technical Reports Server (NTRS)
Lukash, James A.; Daley, Earl
2011-01-01
This work describes the design and development effort to adapt rapid-development space hardware by creating a ground system using solutions of low complexity, mass, & cost. The Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft is based on the modular common spacecraft bus architecture developed at NASA Ames Research Center. The challenge was building upon the existing modular common bus design and development work and improving the LADEE spacecraft design by adding an Equipotential Voltage Reference (EVeR) system, commonly referred to as a ground system. This would aid LADEE in meeting Electromagnetic Environmental Effects (E3) requirements, thereby making the spacecraft more compatible with itself and its space environment. The methods used to adapt existing hardware are presented, including provisions which may be used on future spacecraft.
Similitude design for the vibration problems of plates and shells: A review
NASA Astrophysics Data System (ADS)
Zhu, Yunpeng; Wang, You; Luo, Zhong; Han, Qingkai; Wang, Deyou
2017-06-01
Similitude design plays a vital role in the analysis of vibration and shock problems encountered in large engineering equipment. Similitude design, including dimensional analysis and governing equation method, is founded on the dynamic similitude theory. This study reviews the application of similitude design methods in engineering practice and summarizes the major achievements of the dynamic similitude theory in structural vibration and shock problems in different fields, including marine structures, civil engineering structures, and large power equipment. This study also reviews the dynamic similitude design methods for thin-walled and composite material plates and shells, including the most recent work published by the authors. Structure sensitivity analysis is used to evaluate the scaling factors to attain accurate distorted scaling laws. Finally, this study discusses the existing problems and the potential of the dynamic similitude theory for the analysis of vibration and shock problems of structures.
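For an undistorted (complete-similitude) case, the scaling law for a thin elastic plate follows from the classical relation f ∝ (t/L²)·√(E/(ρ(1−ν²))). The sketch below computes the model-to-prototype frequency scale factor from the individual ratios; it illustrates textbook similitude only, and the distorted scaling laws with sensitivity-based corrections discussed in the review differ.

```python
import math

def frequency_scale(t_ratio, L_ratio, E_ratio=1.0, rho_ratio=1.0, nu_term_ratio=1.0):
    """Scale factor f_model / f_prototype for a thin elastic plate.

    From f ~ (t / L^2) * sqrt(E / (rho * (1 - nu^2))), where each argument
    is the model-to-prototype ratio of that quantity (nu_term_ratio is the
    ratio of (1 - nu^2) terms).
    """
    return (t_ratio / L_ratio**2) * math.sqrt(E_ratio / (rho_ratio * nu_term_ratio))

# Same material, all dimensions scaled to 1/4: model frequencies are 4x.
scale = frequency_scale(t_ratio=0.25, L_ratio=0.25)
```

A geometrically similar quarter-scale model thus vibrates at four times the prototype frequency, which is why scaled vibration tests must translate their measured frequencies back through the similitude law.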
From Usability Engineering to Evidence-based Usability in Health IT.
Marcilly, Romaric; Peute, Linda; Beuscart-Zephir, Marie-Catherine
2016-01-01
Usability is a critical factor in the acceptance, safe use, and success of health IT. The User-Centred Design process is widely promoted to improve usability. However, this traditional case by case approach that is rooted in the sound understanding of users' needs is not sufficient to improve technologies' usability and prevent usability-induced use-errors that may harm patients. It should be enriched with empirical evidence. This evidence is on design elements (what are the most valuable design principles, and the worst usability mistakes), and on the usability evaluation methods (which combination of methods is most suitable in which context). To achieve this evidence, several steps must be fulfilled and challenges must be overcome. Some attempts to search evidence for designing elements of health IT and for usability evaluation methods exist and are summarized. A concrete instance of evidence-based usability design principles for medication-related alerting systems is briefly described.
Integrating computer programs for engineering analysis and design
NASA Technical Reports Server (NTRS)
Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.
1983-01-01
The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
A systematic composite service design modeling method using graph-based theory.
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
Composite service design modeling is an essential process of the service-oriented software development life cycle, in which the candidate services, composite services, operations, and their dependencies must be identified and specified before their design. However, systematic service-oriented design modeling methods for composite services are still in their infancy, as most existing approaches provide modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling service-oriented design so as to increase reusability and decrease system complexity while keeping service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for the composite service design modeling of distributed embedded real-time systems, along with enterprise software development, it is implemented in a case study of a smart home. The results of the case study not only check the applicability of ComSDM but can also be used to validate its complexity and reusability. This also guides future research toward design quality measurement, such as using the ComSDM method to measure the quality of composite service designs in service-oriented software systems. PMID:25928358
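The idea of representing a composite service as a graph and deriving quality measures from it can be sketched with a toy coupling metric. Both the metric (average fan-out per service) and the smart-home service names below are illustrative assumptions; ComSDM's actual mathematical measures are defined in the paper.

```python
from collections import defaultdict

def coupling(invocations):
    """Average fan-out: distinct outgoing dependencies per service.

    `invocations` is a list of (caller, callee) edges in the composite
    service's dependency graph.
    """
    fan_out, services = defaultdict(set), set()
    for caller, callee in invocations:
        fan_out[caller].add(callee)
        services.update((caller, callee))
    return sum(len(v) for v in fan_out.values()) / len(services)

# Hypothetical smart-home composition: two composite services, each
# orchestrating two atomic services.
smart_home = [("climate_control", "thermostat"),
              ("climate_control", "window_actuator"),
              ("security", "door_lock"),
              ("security", "camera")]
c = coupling(smart_home)
```

Lower coupling values indicate compositions whose services can be reused and modified more independently, the quality goal the method targets.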
ERIC Educational Resources Information Center
Beveridge, Scott; Garcia, Jorge; Siblo, Matt
2015-01-01
Purpose: To examine the nature of ethical dilemmas most frequently reported by rehabilitation counselors in the private and public sectors and determine if significant differences exist in how practitioners experience ethical dilemmas in these two settings. Method: A mixed-methods internet-based survey design was utilized and included descriptive,…
Industrializing Offshore Wind Power with Serial Assembly and Lower-cost Deployment - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kempton, Willett
A team of engineers and contractors has developed a method to move offshore wind installation toward lower cost, faster deployment, and lower environmental impact. A combination of methods, some incremental and some breaks from past practice, interact to yield multiple improvements. Three designs were evaluated based on detailed engineering: 1) a 5 MW turbine on a jacket with pin piles (base case), 2) a 10 MW turbine on a conventional jacket with pin piles, assembled at sea, and 3) a 10 MW turbine on a tripod jacket with suction buckets (caissons) and with complete turbine assembly on-shore. The larger turbine, assembly ashore, and the use of suction buckets together substantially reduce the capital cost of offshore wind projects. Notable capital cost reductions are: changing from a 5 MW to a 10 MW turbine, a 31% capital cost reduction; and assembly on land followed by single-piece installation at sea, an additional 9% capital cost reduction. A fourth design, Design 4), estimates further cost reduction when the equipment and processes of Design 3) are optimized rather than adapted to existing equipment and processes. The cost of energy for each of the four designs is also calculated, yielding approximately the same percentage reductions. The methods of Design 3) analyzed here include accepted structures such as suction buckets used in new ways, innovations conceived but previously without engineering and economic validation, combined with new methods not previously proposed. The analyses of Designs 2) and 3) are based on extensive engineering calculations and detailed cost estimates. All design methods can be carried out with existing equipment, including lift equipment, ports, and ships (except that Design 4 assumes a more optimized ship). The design team consists of experienced offshore structure designers, heavy lift engineers, wind turbine designers, vessel operators, and marine construction contractors.
Comparing the methods based on the criteria of cost and deployment speed, the study selected the third design. That design is, in brief: a conventional turbine and tubular tower is mounted on a tripod jacket, in turn atop three suction buckets. Blades are mounted on the tower, not on the hub. The entire structure is built in port, from the bottom up, then the assembled structures are queued in the port for deployment. During weather windows, the fully-assembled structures are lifted off the quay, lashed to the vessel, and transported to the deployment site. The vessel analyzed is a shear leg crane vessel with dynamic positioning like the existing Gulliver, or it could be a US-built crane barge. On site, the entire structure is lowered to the bottom by the crane vessel, then pumping of the suction buckets is managed by smaller service vessels. Blades are lifted into place by small winches operated by workers in the nacelle, without lift vessel support. Advantages of the selected design include: the cost and time at sea of the expensive lift vessel are significantly reduced; no jack-up vessel is required; the weather window required for each installation is shorter; turbine structure construction is continuous, with a queue feeding the weather-dependent installation process; pre-installation geotechnical work is faster and less expensive; there are no sound impacts on marine mammals, thus minimal spotting and no work stoppage for mammal passage; the entire structure can be removed for decommissioning or major repairs; and the method has been validated for current turbines up to 10 MW, with a calculation using simple scaling showing it usable up to 20 MW turbines.
2016-05-01
species as either threatened or endangered under the Endangered Species Act (ESA) and to designate critical habitat. In a Federal Register notice... designation as "critical habitat." The designation of critical habitat affects activities that involve a federal permit, license, or funding, and are likely... are not likely to jeopardize the continued existence of a listed species, or destroy or adversely modify its designated critical habitat. In some cases
Aids in designing laboratory flumes
Williams, Garnett P.
1971-01-01
The upsurge of interest in our environment has caused research and instruction in the flow of water along open channels to become increasingly popular in universities and institutes. This, in turn, has brought a greater demand for properly designed laboratory flumes. Whatever the reason for your interest, designing and building a flume will take a little preparation. You may choose a pattern exactly like a previous design, or you may follow the more time-consuming method of studying several existing flumes and combining the most desirable features of each.
A hybrid voice/data modulation for the VHF aeronautical channels
NASA Technical Reports Server (NTRS)
Akos, Dennis M.
1993-01-01
A method of improving the spectral efficiency of the existing Very High Frequency (VHF) Amplitude Modulation (AM) voice communication channels is proposed. The technique is to phase modulate the existing amplitude-modulated voice carrier with digital data. This allows the transmission of digital information over an existing AM voice channel with no change to the AM signal format. No modification of the existing AM receiver is needed to demodulate the voice signal, and an additional receiver module can be added to process the digital data. The existing VHF AM transmitter requires only a slight modification for the addition of the digital data signal. Past work in the area is summarized and presented together with an improved system design and the proposed implementation.
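The key property of the hybrid scheme, that phase-modulating the data leaves the AM envelope untouched, can be shown with a complex-baseband sketch. The modulation index, phase deviation, tone frequency, and bit pattern below are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

# Complex-baseband model: voice amplitude-modulates the envelope while
# data bits phase-modulate the carrier.
fs, dur = 8000, 0.1
t = np.arange(int(fs * dur)) / fs
voice = 0.5 * np.sin(2 * np.pi * 300 * t)              # stand-in voice tone
bits = np.repeat(np.array([1, -1, 1, 1, -1]), len(t) // 5)[: len(t)]
m, beta = 0.8, 0.4                                      # AM index, phase deviation (rad)

envelope = 1.0 + m * voice                              # AM envelope (always > 0)
signal = envelope * np.exp(1j * beta * bits)            # hybrid AM/PM signal

# The magnitude of the hybrid signal is the AM envelope alone, so a
# legacy AM (envelope) detector is unaffected by the phase data, while a
# phase detector recovers the bits.
recovered_env = np.abs(signal)
recovered_bits = np.sign(np.angle(signal))
```

This is exactly the compatibility argument in the abstract: existing AM receivers see only the envelope, and a new phase-demodulating module recovers the data.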
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate both for variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
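The load-interference idea can be sketched with the classical stress-strength reliability formula for normally distributed capacity and load. This is the generic interference calculation in the spirit of the NDLI method, not its non-dimensional formulation, and the temperatures below are illustrative, not from a real TPS design.

```python
import math

def thermal_reliability(mu_cap, sd_cap, mu_load, sd_load):
    """Stress-strength (load-interference) reliability for normal variables.

    R = P(capacity > load) = Phi((mu_C - mu_L) / sqrt(sd_C^2 + sd_L^2)),
    where Phi is the standard normal CDF.
    """
    z = (mu_cap - mu_load) / math.hypot(sd_cap, sd_load)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical TPS allowable temperature vs predicted bondline
# temperature (deg C): mean margin of 80 deg with combined scatter.
r = thermal_reliability(mu_cap=400.0, sd_cap=20.0, mu_load=320.0, sd_load=15.0)
```

Reliability grows with the mean margin and shrinks with the combined scatter of capacity and load, which is why both material variability and heating uncertainty enter the TPS sizing.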
Helicopter flight-control design using an H(2) method
NASA Technical Reports Server (NTRS)
Takahashi, Marc D.
1991-01-01
Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.
Field Guide for Designing Human Interaction with Intelligent Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Thronesbery, Carroll G.
1998-01-01
The characteristics of this Field Guide approach address the problems of designing innovative software to support user tasks. The requirements for novel software are difficult to specify a priori, because there is not sufficient understanding of how the users' tasks should be supported, and there are no obvious pre-existing design solutions. When the design team is in unfamiliar territory, care must be taken to avoid rushing into detailed design, requirements specification, or implementation of the wrong product. The challenge is to get the right design and requirements in an efficient, cost-effective manner. This document's purpose is to describe the methods we are using to design human interactions with intelligent systems which support Space Shuttle flight controllers in the Mission Control Center at NASA/Johnson Space Center. Although these software systems usually have some intelligent features, the design challenges arise primarily from the innovation needed in the software design. While these methods are tailored to our specific context, they should be extensible, and helpful to designers of human interaction with other types of automated systems. We review the unique features of this context so that you can determine how to apply these methods to your project. Throughout this Field Guide, goals of the design methods are discussed. This should help designers understand how a specific method might need to be adapted to the project at hand.
Frnakenstein: multiple target inverse RNA folding.
Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun
2012-10-09
RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three-dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem has an elegant and efficient algorithmic solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solving the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods on several data sets totalling 769 real and predicted single-structure targets, and on 292 two-structure targets. It performed as well as or better than all existing methods at finding sequences which folded in silico into the target structure, without the heavy bias towards CG base pairs observed for all other top-performing methods. On the two-structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two-structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.
Frnakenstein: multiple target inverse RNA folding
2012-01-01
PMID:23043260
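The genetic-algorithm idea in the abstract can be sketched in a few dozen lines. The toy below is not Frnakenstein: it replaces the thermodynamic folding model with a simple Nussinov base-pair-maximization folder, fitness is just positionwise agreement with the target dot-bracket string, and all parameter values are illustrative assumptions.

```python
import random

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Toy folder: maximize base pairs (Nussinov DP) and return dot-bracket."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # leave i unpaired
            for k in range(i + min_loop + 1, j + 1):  # or pair i with k
                if (seq[i], seq[k]) in PAIRS:
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    struct = ["."] * n
    def trace(i, j):
        if j - i <= min_loop:
            return
        if dp[i][j] == dp[i + 1][j]:
            trace(i + 1, j)
            return
        for k in range(i + min_loop + 1, j + 1):
            if (seq[i], seq[k]) in PAIRS:
                right = dp[k + 1][j] if k + 1 <= j else 0
                if dp[i][j] == 1 + dp[i + 1][k - 1] + right:
                    struct[i], struct[k] = "(", ")"
                    trace(i + 1, k - 1)
                    trace(k + 1, j)
                    return
    trace(0, n - 1)
    return "".join(struct)

def design(target, pop_size=30, gens=200, seed=0):
    """GA: evolve sequences whose toy fold matches the target dot-bracket."""
    random.seed(seed)
    n = len(target)
    fitness = lambda s: sum(a == b for a, b in zip(nussinov(s), target))
    pop = ["".join(random.choice("ACGU") for _ in range(n)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:                      # perfect design found
            break
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.choice(elite), random.choice(elite)
            cut = random.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = "".join(c if random.random() > 0.1 else random.choice("ACGU")
                            for c in child)           # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print(design("(((...)))"))
```

The real problem is far harder because the fitness landscape of a thermodynamic folder is rugged, which is exactly why the CG-bias shortcut the paper avoids is so tempting.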
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space and the orthogonal complement of whose null space are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, is flexible enough to embrace combinations of existing methods, and offers some new features.
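As a rough illustration of the projection idea (not the paper's general formulation), the sketch below reduces a state-space model by an oblique projection built from controllability and observability matrices. This particular choice matches the first 2q Markov parameters, i.e. high-frequency-response parameters in the sense of item (2) above; the SISO restriction and function name are assumptions for clarity.

```python
import numpy as np

def reduce_markov(A, B, C, q):
    """Reduce (A, B, C) to order q by an oblique projection whose range is
    spanned by the controllability matrix [B, AB, ...] and whose left space
    is spanned by the observability matrix [C; CA; ...]; this choice matches
    the first 2q Markov parameters C A^k B of the full-order model."""
    V = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(q)])
    W = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(q)]).T
    E = np.linalg.inv(W.T @ V)   # assumes W^T V is nonsingular
    return E @ W.T @ A @ V, E @ W.T @ B, C @ V

# quick check on a random SISO system: the first four Markov parameters
# of a q = 2 reduction agree with those of the full sixth-order model
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) / 3
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))
Ar, Br, Cr = reduce_markov(A, B, C, 2)
```

Matching low-frequency response or spectral-density parameters instead would amount to swapping powers of A for powers of its inverse or weighting by noise covariances, which is the flexibility the abstract refers to.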
Ndabarora, Eléazar; Mchunu, Gugu
2014-01-01
Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices that favour HIV transmission. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing in a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation by university students of HIV/AIDS prevention methods. A total of 335 students completed online and manual questionnaires. Study findings showed that the factors which influenced utilisation of HIV/AIDS prevention methods were mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. The most utilised prevention methods were voluntary counselling and testing services and free condoms. The perceived susceptibility and perceived threat of HIV/AIDS score was also found to correlate with the HIV risk index score. Perceived susceptibility and perceived threat of HIV/AIDS also showed correlation with self-efficacy regarding condoms and their utilisation. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Interventions aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes.
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes of oversize and excessive loads are currently planned to ensure that a vehicle can transit the critical points on the road. Critical points include level intersections of roads, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity of existing bridges on highways and roads using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element method (FEM) analysis. The safety index is considered the main criterion of the reliability level of existing structures, and the index is described in current structural design standards, e.g. ISO and Eurocode. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
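The LHS sampling step mentioned above is simple to sketch. The snippet below is a generic illustration on a toy limit-state function, not the bridge model from the article; the distributions, symbols (resistance R, load effect E) and numbers are invented for the example.

```python
import numpy as np

def lhs(n, d, rng):
    """Latin Hypercube Sample on [0,1]^d: one random point in each of n
    equal-probability strata per variable, with strata independently shuffled."""
    sample = np.empty((n, d))
    for j in range(d):
        strata = (np.arange(n) + rng.random(n)) / n
        sample[:, j] = rng.permutation(strata)
    return sample

def failure_probability(n=2000, seed=0):
    """Estimate P(g < 0) for a toy limit state g = R - E, with
    resistance R ~ U(4, 6) and load effect E ~ U(0, 5)."""
    rng = np.random.default_rng(seed)
    u = lhs(n, 2, rng)
    R = 4.0 + 2.0 * u[:, 0]   # inverse-CDF transform of the uniform samples
    E = 5.0 * u[:, 1]
    return np.mean(R - E < 0.0)

p = failure_probability()
print(p)   # close to the analytic value 0.05
```

In a real analysis, each sampled (R, E) realization would instead drive one nonlinear FEM run, which is why the variance reduction of LHS over plain Monte Carlo matters.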
NASA Astrophysics Data System (ADS)
Jung, Sang-Young
Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines, such as stress analysis, structural vibration, aerodynamics, and controls, are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data, including those in the transonic regime, are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single- and multiple-constraint optimizations of fully built-up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.
Decentralized control of large flexible structures by joint decoupling
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Juang, Jer-Nan
1992-01-01
A decentralized control design method is presented for large complex flexible structures by using the idea of joint decoupling. The derivation is based on a coupled substructure state-space model, which is obtained by enforcing conditions of interface compatibility and equilibrium on the substructure state-space models. It is shown that by restricting the control law to localized state feedback and by setting the joint actuator input commands to decouple joint degrees of freedom (dof) from interior dof, the global structure control design problem can be decomposed into several substructure control design problems. The substructure control gains and substructure observers are designed based on modified substructure state-space models. The controllers produced by the proposed method can operate successfully at the individual substructure level as well as at the global structure level. Therefore, not only control design but also control implementation is decentralized. Stability and performance requirements of the closed-loop system can be achieved by using any existing state feedback control design method. A two-component mass-spring-damper system and a three-truss structure are used as examples to demonstrate the proposed method.
ERIC Educational Resources Information Center
Public Technology, Inc., Washington, DC.
This technical guide is part of a packet of tools designed to assist state or local government practitioners in organizing and managing an energy conservation program. It gives information on adapting energy conservation methods to existing public buildings and on designing new public buildings with energy conservation in mind. It also discusses…
A Product Analysis Method and Its Staging to Develop Redesign Competences
ERIC Educational Resources Information Center
Hansen, Claus Thorp; Lenau, Torben Anker
2013-01-01
Most product development work in industrial practice is incremental, i.e., the company has had a product in production and on the market for some time, and now time has come to design an upgraded variant. This type of redesign project requires that the engineering designers have competences to carry through an analysis of the existing product…
Simplified Design Method for Tension Fasteners
NASA Astrophysics Data System (ADS)
Olmstead, Jim; Barker, Paul; Vandersluis, Jonathan
2012-07-01
The design of tension-fastened joints has traditionally been an iterative tradeoff between separation and strength requirements. This paper presents equations for the maximum external load that a fastened joint can support and the optimal preload to achieve this load. The equations, based on linear joint theory, account for separation and strength safety factors and for variations in joint geometry, materials, preload, load-plane factor and thermal loading. The strength-normalized versions of the equations are applicable to any fastener and can be plotted to create a "Fastener Design Space" (FDS). Any combination of preload and tension that falls within the FDS represents a safe joint design. The equation for the FDS apex gives the optimal preload and load capacity of a set of joints. The method can be used for preliminary design or to evaluate multiple pre-existing joints.
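The flavor of the method can be sketched with textbook linear joint theory. The constraint forms, factor values, and symbols below are illustrative assumptions, not the paper's actual equations: separation requires the preload to exceed the clamp loss, strength caps the total bolt load, and the "apex" is taken as the point where both constraints are active simultaneously.

```python
def max_external_load(F_allow, n_phi, sf_sep=1.2, sf_str=1.4):
    """Apex of an assumed Fastener Design Space under linear joint theory.

    Separation constraint:  F_pre >= sf_sep * (1 - n_phi) * F_ext
    Strength  constraint:   sf_str * (F_pre + n_phi * F_ext) <= F_allow

    where n_phi is the load-plane factor times the joint stiffness ratio.
    Solving both at equality gives the maximum external load and the
    optimal preload that achieves it.
    """
    F_ext = F_allow / (sf_str * (sf_sep * (1.0 - n_phi) + n_phi))
    F_pre = sf_sep * (1.0 - n_phi) * F_ext
    return F_ext, F_pre

# e.g. a joint with a 10 kN bolt allowable and load-sharing factor 0.25
F_ext, F_pre = max_external_load(10_000.0, 0.25)
```

Plotting the two constraint lines in the (F_ext, F_pre) plane reproduces the wedge-shaped safe region the abstract calls the FDS, with the returned point at its tip.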
Direct design of aspherical lenses for extended non-Lambertian sources in two-dimensional geometry
Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.
2016-01-01
Illumination design for extended sources is very important for practical applications. The existing direct methods, all developed for extended Lambertian sources, are not applicable to extended non-Lambertian sources, whose luminance is a function of position and direction. What we present in this Letter is, to our knowledge, the first direct method for extended non-Lambertian sources. In this method, the edge rays and the interior rays are both used, and the output intensity in a given direction is calculated as the integral of the luminance function over all the outgoing rays in this direction. No cumbersome iterative illuminance compensation is needed. Two examples are presented to demonstrate the elegance of this method in prescribed intensity design for extended non-Lambertian sources in two-dimensional geometry. PMID:26125361
Investigation of aged hot-mix asphalt pavements : technical summary.
DOT National Transportation Integrated Search
2013-09-01
Over the lifetime of an asphalt concrete (AC) pavement, the roadway requires periodic resurfacing and rehabilitation to provide acceptable performance. The most popular resurfacing method is an asphalt overlay over the existing roadway. In the design...
Research notes : retrofitting culverts for fish.
DOT National Transportation Integrated Search
2005-01-01
Culverts are a well established method to pass a roadway over a waterway. Standard design criteria exist for meeting the hydraulic requirements for moving the water through the culverts. However, the hydraulic conditions resulting from many culvert d...
Rotorcraft Performance Model (RPM) for use in AEDT.
DOT National Transportation Integrated Search
2015-11-01
This report documents a rotorcraft performance model for use in the FAA's Aviation Environmental Design Tool. The new rotorcraft performance model is physics-based. This new model replaces the existing helicopter trajectory modeling methods in the ...
Ergonomics and simulation-based approach in improving facility layout
NASA Astrophysics Data System (ADS)
Abad, Jocelyn D.
2018-02-01
The use of simulation-based techniques in facility layout has been a popular choice in industry due to their convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into a Promodel simulation to improve the facility layout of a garment manufacturing plant. To test the effectiveness of the method, the existing and improved facility designs were measured using comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout generated a decrease in comprehensive risk level and rapid upper limb assessment score, a 78% increase in efficiency, and a 194% increase in productivity compared to the existing design, proving that the approach is effective in attaining overall facility design improvement.
Adaptive Control for Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Yang, Bong-Jun; Calise, Anthony J.; Craig, James I.; Whorton, Mark S.
2005-01-01
Most active vibration isolation systems that try to provide a quiescent acceleration environment for space science experiments have utilized linear design methods. In this paper, we address adaptive control augmentation of an existing classical controller that employs high-gain acceleration feedback together with low-gain position feedback to center the isolated platform. The control design accounts for parametric and dynamic uncertainties, because the hardware of the isolation system is built as a payload-level isolator and the acceleration sensor exhibits a significant bias. A neural network is incorporated to adaptively compensate for the system uncertainties, and a high-pass filter is introduced to mitigate the effect of the measurement bias. Simulations show that the adaptive control improves the performance of the existing acceleration controller and keeps the deviation of the isolated platform to the level achieved by the existing control system.
Methods for roof-top mini-arrays
NASA Astrophysics Data System (ADS)
Hazen, W. E.; Hazen, E. S.
1985-08-01
To test the idea of the Linsley-effect mini-array for the study of giant air showers, it is desirable to have a trigger that exploits the effect itself. In addition to the trigger, it is necessary to have a method for measuring the relative arrival times of the particle swarm selected by the trigger. Since the idea of mini-arrays is likely to appeal to small research groups, it is desirable to design relatively simple and inexpensive methods, and methods that utilize existing detectors. Clusters of small detectors have been designed for operation in the local-particle-density realm where the probability of two or more particles per detector is small. Consequently, this method can discriminate pulses from each detector and thereafter deal mainly with logic pulses.
NASA Astrophysics Data System (ADS)
Vinh, T.
1980-08-01
There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high and extra-high voltage substations against direct strokes.
Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.
1999-01-01
Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.
Methods for the Joint Meta-Analysis of Multiple Tests
ERIC Educational Resources Information Center
Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.
2014-01-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…
ERIC Educational Resources Information Center
Karpudewan, Mageswary; Hj Ismail, Zurida; Mohamed, Norita
2011-01-01
Green chemistry is the design, development and implementation of chemical products and processes to reduce or eliminate the use of sub-stances hazardous to human health and the environment. This article reports on the integration of green chemistry and sustainable development concepts (SDCs) into an existing teaching methods course for chemistry…
Program Retrieval/Dissemination: A Solid State Random Access System.
ERIC Educational Resources Information Center
Weeks, Walter O., Jr.
The trend toward greater flexibility in educational methods has led to a need for better and more rapid access to a variety of aural and audiovisual resource materials. This in turn has demanded the development of a flexible, reliable system of hardware designed to aid existing distribution methods in providing such access. The system must be…
ERIC Educational Resources Information Center
Avsec, Stanislav; Jamšek, Janez
2016-01-01
Technological literacy is identified as a vital achievement of technology- and engineering-intensive education. It guides the design of technology and technical components of educational systems and defines competitive employment in technological society. Existing methods for measuring technological literacy are incomplete or complicated,…
ERIC Educational Resources Information Center
White, Charles E., Jr.
The purpose of this study was to develop and implement a hypertext documentation system in an industrial laboratory and to evaluate its usefulness by participative observation and a questionnaire. Existing word-processing test method documentation was converted directly into a hypertext format or "hyperdocument." The hyperdocument was designed and…
Scholarly Practice and Inquiry: Dynamic Interactions in an Elementary Mathematics Methods Course
ERIC Educational Resources Information Center
Tyminski, Andrew M.; Brittain, McKenzie H.
2017-01-01
This paper represents research that exists at the crossroad of scholarly practice and scholarly inquiry. We share the design, enactment and empirical examination of an elementary methods course activity, Exploring and Supporting Student Thinking (ESST) which engaged 18 prospective teachers in two sessions of one on one problem posing with 3rd…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, J.R.; Minor, J.E.; Mehta, K.C.
1975-11-01
Criteria are prescribed and guidance is provided for professional personnel who are involved with the evaluation of existing buildings and facilities at Site 300 near Livermore, California to resist the possible effects of extreme winds and tornadoes. The development of parameters for the effects of tornadoes and extreme winds and guidelines for evaluation and design of structures are presented. The investigations conducted are summarized, and the techniques used for arriving at the combined tornado and extreme wind risk model are discussed. The guidelines for structural design, methods for calculating pressure distributions on walls and roofs of structures, and methods for accommodating impact loads from missiles are also presented. (auth)
XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.
Ching, Daniel J; Gürsoy, Doğa
2017-03-01
The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.
XDesign: An open-source software package for designing X-ray imaging phantoms and experiments
Ching, Daniel J.; Gürsoy, Doğa
2017-02-21
Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.
Li, Zhifei; Qin, Dongliang
2014-01-01
In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, in the process of an aerospace system-of-systems design space exploration, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data-mining RST (rough set theory) and SOM (self-organizing map) techniques, alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, completing the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, in the process of an aerospace system-of-systems design space exploration, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data-mining RST (rough set theory) and SOM (self-organizing map) techniques, alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, completing the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
McLaughlan, Rebecca; Pert, Alan
2017-11-25
As the dominant research paradigm within the construction of contemporary healthcare facilities, evidence-based design (EBD) will increasingly impact our expectations of what hospital architecture should be. Research methods within EBD focus on prototyping incremental advances and evaluating what has already been built. Yet medical care is a rapidly evolving system; changes to technology, workforce composition, patient demographics and funding models can create rapid and unpredictable changes to medical practice and modes of care. This dynamism has the potential to curtail or negate the usefulness of current best practice approaches. To imagine new directions for the role of the hospital in society, or innovative ways in which the built environment might support well-being, requires a model that can project beyond existing constraints. Speculative design employs a design-based research methodology to imagine alternative futures and uses the artefacts created through this process to enable broader critical reflection on existing practices. This paper examines the contribution of speculative design within the context of the paediatric hospital as a means of facilitating critical reflection regarding the design of new healthcare facilities. While EBD is largely limited by what has already been built, speculative design offers a complementary research method to meet this limitation.
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented add no computational effort compared to their standard counterparts.
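For context, the conventional building blocks that the embedded approach modifies can be sketched as follows: the classic fifth-order WENO-JS reconstruction of Jiang and Shu combines three third-order substencil candidates using nonlinear weights derived from smoothness indicators. The sketch shows only the standard scheme, not the embedded modification of the weights.

```python
import numpy as np

def weno5_weights(v, eps=1e-6):
    """Jiang-Shu smoothness indicators and nonlinear weights for the
    left-biased fifth-order reconstruction at the face x_{i+1/2}, given
    five cell averages v = (v[i-2], ..., v[i+2])."""
    b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 1/4*(v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 1/4*(v[1] - v[3])**2
    b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 1/4*(3*v[2] - 4*v[3] + v[4])**2
    d = np.array([0.1, 0.6, 0.3])               # ideal linear weights
    a = d / (eps + np.array([b0, b1, b2]))**2   # penalize rough substencils
    return a / a.sum()

def weno5_reconstruct(v, eps=1e-6):
    w = weno5_weights(v, eps)
    # third-order candidate reconstructions on the three substencils
    q0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6
    q1 = ( -v[1] + 5*v[2] +  2*v[3]) / 6
    q2 = (2*v[2] + 5*v[3] -   v[4]) / 6
    return w @ np.array([q0, q1, q2])
```

On smooth data the weights approach the ideal values (0.1, 0.6, 0.3) and the scheme is fifth-order; near a discontinuity a rough substencil is suppressed. The embedded idea in the abstract concerns precisely this intermediate regime, where conventional weights under-use the remaining smooth substencils.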
Mixed-methods designs in mental health services research: a review.
Palinkas, Lawrence A; Horwitz, Sarah M; Chamberlain, Patricia; Hurlburt, Michael S; Landsverk, John
2011-03-01
Despite increased calls for use of mixed-methods designs in mental health services research, how and why such methods are being used and whether there are any consistent patterns that might indicate a consensus about how such methods can and should be used are unclear. Use of mixed methods was examined in 50 peer-reviewed journal articles found by searching PubMed Central and 60 National Institutes of Health (NIH)-funded projects found by searching the CRISP database over five years (2005-2009). Studies were coded for aims and the rationale, structure, function, and process for using mixed methods. A notable increase was observed in articles published and grants funded over the study period. However, most did not provide an explicit rationale for using mixed methods, and 74% gave priority to use of quantitative methods. Mixed methods were used to accomplish five distinct types of study aims (assess needs for services, examine existing services, develop new or adapt existing services, evaluate services in randomized controlled trials, and examine service implementation), with three categories of rationale, seven structural arrangements based on timing and weighting of methods, five functions of mixed methods, and three ways of linking quantitative and qualitative data. Each study aim was associated with a specific pattern of use of mixed methods, and four common patterns were identified. These studies offer guidance for continued progress in integrating qualitative and quantitative methods in mental health services research consistent with efforts by NIH and other funding agencies to promote their use.
NASA Astrophysics Data System (ADS)
Zhang, Ke-Jia; Kwek, Leong-Chuan; Ma, Chun-Guang; Zhang, Long; Sun, Hong-Wei
2018-02-01
Quantum sealed-bid auction (QSA) has been widely studied in quantum cryptography. For a successful auction, post-confirmation is regarded as an important mechanism that lets every bidder verify the identity of the winner after the auctioneer has announced the result. However, since the auctioneer may be dishonest and collude with malicious bidders in practice, some potential loopholes could exist. In this paper, we point out two types of collusion attacks on a particular post-confirmation technique with EPR pairs. It is not difficult to see that no unconditionally secure post-confirmation mechanism exists in the existing QSA model if the dishonest participants have the ability to control multiparticle entanglement. In view of this, we note that a secure implementation could exist if the participants are supposed to be semi-quantum, i.e., they can only control single photons. Finally, two potential methods to design post-confirmation mechanisms are presented in this restricted scenario.
An improved design method for EPC middleware
NASA Astrophysics Data System (ADS)
Lou, Guohuan; Xu, Ran; Yang, Chunming
2014-04-01
To address the problems and difficulties that small and medium-sized enterprises face when implementing middleware according to the EPC (Electronic Product Code) ALE (Application Level Events) specification, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. The method exploits the powerful functionality of the MySQL database: the database connects the reader-writer to the upper application system, replacing the development of an ALE application programming interface, to achieve a middleware with general functionality. This structure is simple and easy to implement and maintain. Under this structure, newly added reader-writers of different types can be configured conveniently, and the expandability of the system is improved.
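The database-mediated decoupling can be illustrated with a rough sketch. The example below uses SQLite (from the Python standard library) as a stand-in for MySQL, and the table, column, and identifier names are hypothetical: the reader adapter inserts tag sightings into a shared table, and the upper application queries that table directly, with no ALE interface in between.

```python
import sqlite3

# Shared database takes the place of the ALE middleware layer.
# Table and column names are illustrative, not from the paper.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tag_events (
                    epc TEXT, reader_id TEXT, read_time TEXT)""")

def reader_report(conn, epc, reader_id, read_time):
    """Called by the reader-writer adapter for every tag sighting."""
    conn.execute("INSERT INTO tag_events VALUES (?, ?, ?)",
                 (epc, reader_id, read_time))
    conn.commit()

def app_poll(conn, reader_id):
    """Upper application pulls the distinct EPCs seen by one reader."""
    rows = conn.execute(
        "SELECT DISTINCT epc FROM tag_events WHERE reader_id = ?",
        (reader_id,)).fetchall()
    return [r[0] for r in rows]

reader_report(conn, "urn:epc:id:sgtin:0614141.107346.2017",
              "dock-door-1", "2014-04-01T08:00:00")
reader_report(conn, "urn:epc:id:sgtin:0614141.107346.2017",
              "dock-door-1", "2014-04-01T08:00:01")
print(app_poll(conn, "dock-door-1"))  # duplicate sightings collapse to one EPC
```

Adding a new reader type then amounts to writing one adapter that performs the INSERT, which is the configurability claim made in the abstract.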
Development of uniform and predictable battery materials for nickel-cadmium aerospace cells
NASA Technical Reports Server (NTRS)
1971-01-01
Battery materials and manufacturing methods were analyzed with the aim of developing uniform and predictable battery plates for nickel-cadmium aerospace cells. A study is presented of the high-temperature electrochemical impregnation process for the preparation of nickel-cadmium battery plates. This comparative study is set up as a factorially designed experiment to examine both manufacturing and operational variables and any interaction that might exist between them. The manufacturing variables in the factorial design include plaque preparative method, plaque porosity and thickness, impregnation method, and loading. The operational variables are type of duty cycle, charge and discharge rate, extent of overcharge, and depth of discharge.
Participatory Design Methods for C2 Systems (Proceedings/Presentation)
2006-01-01
Cognitive Task Analysis (CTA) …systems to support cognitive work such as is accomplished in a network-centric environment. Cognitive task analysis (CTA) methods are used to…of cognitive task analysis methodologies exist (Schraagen et al., 2000). However, many of these methods are skeptically viewed by a domain's…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-12-01
An NOx control technology assessment study was conducted to examine the effectiveness of low-excess-air firing, staged combustion, flue gas recirculation, and current burner/boiler designs as applied to coal-fired utility boilers. Significant variations in NOx emissions exist with boiler type, firing method, and coal type, but a relative comparison of emissions control performance, cost, and operational considerations is presented for each method. The study emphasized the numerous operational factors that are of major importance to the user in selecting and implementing a combustion modification technique. Staged combustion and low-excess-air operation were identified as the most cost-effective methods for existing units. Close control of local air/fuel ratios and rigorous combustion equipment maintenance are essential to the success of both methods. Flue gas recirculation is relatively ineffective and has the added concern of tube erosion. More research is needed to resolve potential corrosion concerns with low-NOx operating modes. Low-NOx burners in conjunction with a compartmentalized windbox are capable of meeting a 0.6-lb/million Btu emission level on new units. Advanced burner designs are being developed to meet research emission goals of approximately 0.25 lb/MBtu.
Method for Smoke Spread Testing of Large Premises
NASA Astrophysics Data System (ADS)
Walmerdahl, P.; Werling, P.
2001-11-01
A method for performing non-destructive smoke spread tests has been developed, tested and applied to several existing buildings. The heat source is generated by burning methanol in water-cooled steel trays; several tray sizes are available to cover fire sources up to nearly 1 MW. The smoke is supplied by means of a suitable number of smoke generators that produce a smoke which can be described as a non-toxic aerosol. The advantage of the method is that it provides a means for performing non-destructive tests in already existing buildings and other installations for the purpose of evaluating the functionality and design of active fire protection measures such as smoke extraction systems. In the report, the method is described in detail, and experimental data from the try-out of the method are presented, in addition to a discussion of the applicability and flexibility of the method.
Crisan, Anamaria; McKee, Geoffrey; Munzner, Tamara
2018-01-01
Background Microbial genome sequencing is now being routinely used in many clinical and public health laboratories. Understanding how to report complex genomic test results to stakeholders who may have varying familiarity with genomics—including clinicians, laboratorians, epidemiologists, and researchers—is critical to the successful and sustainable implementation of this new technology; however, there are no evidence-based guidelines for designing such a report in the pathogen genomics domain. Here, we describe an iterative, human-centered approach to creating a report template for communicating tuberculosis (TB) genomic test results. Methods We used Design Study Methodology—a human centered approach drawn from the information visualization domain—to redesign an existing clinical report. We used expert consults and an online questionnaire to discover various stakeholders’ needs around the types of data and tasks related to TB that they encounter in their daily workflow. We also evaluated their perceptions of and familiarity with genomic data, as well as its utility at various clinical decision points. These data shaped the design of multiple prototype reports that were compared against the existing report through a second online survey, with the resulting qualitative and quantitative data informing the final, redesigned, report. Results We recruited 78 participants, 65 of whom were clinicians, nurses, laboratorians, researchers, and epidemiologists involved in TB diagnosis, treatment, and/or surveillance. Our first survey indicated that participants were largely enthusiastic about genomic data, with the majority agreeing on its utility for certain TB diagnosis and treatment tasks and many reporting some confidence in their ability to interpret this type of data (between 58.8% and 94.1%, depending on the specific data type). 
When we compared our four prototype reports against the existing design, we found that for the majority (86.7%) of design comparisons, participants preferred the alternative prototype designs over the existing version, and that both clinicians and non-clinicians expressed similar design preferences. Participants showed clearer design preferences when asked to compare individual design elements versus entire reports. Both the quantitative and qualitative data informed the design of a revised report, available online as a LaTeX template. Conclusions We show how a human-centered design approach integrating quantitative and qualitative feedback can be used to design an alternative report for representing complex microbial genomic data. We suggest experimental and design guidelines to inform future design studies in the bioinformatics and microbial genomics domains, and suggest that this type of mixed-methods study is important to facilitate the successful translation of pathogen genomics in the clinic, not only for clinical reports but also more complex bioinformatics data visualization software. PMID:29340235
An introduction to human systems integration (HSI) in the U.S. railroad industry.
DOT National Transportation Integrated Search
2007-04-01
Human systems integration (HSI) is a systematic, organization-wide approach to implementing new technologies and modernizing existing systems. It is a combination of managerial philosophy, methods, techniques, and tools designed to emphasize, dur...
EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.
Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin
2018-04-24
Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
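The abstract names an expression-weighted cosine (EWCos) but does not give its formula, so the following is only an illustrative weighted cosine that down-weights small, uninformative expression changes; it is an assumption-laden sketch, not the published EWCos definition.

```python
import numpy as np

def weighted_cosine(query, drug, w):
    """Cosine similarity under a per-gene weighting w >= 0.
    A plausible sketch of the weighted-cosine idea, not the EWCos formula."""
    q, d, w = map(np.asarray, (query, drug, w))
    num = np.sum(w * q * d)
    den = np.sqrt(np.sum(w * q * q)) * np.sqrt(np.sum(w * d * d))
    return num / den

# Weight genes by the magnitude of the disease signature, so near-zero
# (uninformative) changes contribute little to the score.
query = np.array([ 2.0, -1.5, 0.05,  0.1])   # disease signature (log fold changes)
w     = np.abs(query)
drug  = np.array([-1.8,  1.2, 0.9,  -0.7])   # drug signature reversing the big changes
print(weighted_cosine(query, drug, w))       # negative => drug reverses the signature
```

A strongly negative score flags a drug whose expression changes oppose the disease signature, which is the usual repositioning criterion in Connectivity Map-style analyses.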
Integrated design of structures, controls, and materials
NASA Technical Reports Server (NTRS)
Blankenship, G. L.
1994-01-01
In this talk we shall discuss algorithms and CAD tools for the design and analysis of structures for high performance applications using advanced composite materials. An extensive mathematical theory for optimal structural (e.g., shape) design was developed over the past thirty years. Aspects of this theory have been used in the design of components for hypersonic vehicles and thermal diffusion systems based on homogeneous materials. Enhancement of the design methods to include optimization of the microstructure of the component is a significant innovation which can lead to major enhancements in component performance. Our work is focused on the adaptation of existing theories of optimal structural design (e.g., optimal shape design) to treat the design of structures using advanced composite materials (e.g., fiber reinforced, resin matrix materials). In this talk we shall discuss models and algorithms for the design of simple structures from composite materials, focussing on a problem in thermal management. We shall also discuss methods for the integration of active structural controls into the design process.
Vimalchand, Pannalal; Liu, Guohai; Peng, Wan Wang
2015-02-24
The improvements proposed in this invention provide a reliable apparatus and method to gasify low rank coals in a class of pressurized circulating fluidized bed reactors termed "transport gasifier." The embodiments overcome a number of operability and reliability problems with existing gasifiers. The systems and methods address issues related to distribution of the gasification agent without the use of internals, management of heat release to avoid agglomeration and clinker formation, specific design of bends to withstand the highly erosive environment due to high solids circulation rates, design of a standpipe cyclone to withstand the high-temperature gasification environment, compact design of a seal-leg that can handle high mass solids flux, design of nozzles that eliminate plugging, uniform aeration of the large-diameter standpipe, oxidant injection at the cyclone exits to effectively modulate the gasifier exit temperature, and reduction in the overall height of the gasifier with a modified non-mechanical valve.
Aerodynamic shape optimization using control theory
NASA Technical Reports Server (NTRS)
Reuther, James
1996-01-01
Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
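The efficiency argument for adjoints can be made concrete with a toy linear-state problem: one extra (adjoint) linear solve yields the gradient with respect to all design variables at once, whereas finite differencing needs one state solve per variable. The setting below is an arbitrary linear system chosen for illustration, not the paper's potential-flow or Euler formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 40                      # state dimension, number of design variables
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy state operator
B = rng.standard_normal((n, m))                    # how the design enters the forcing
Q = np.eye(n)

def J(x):
    u = np.linalg.solve(A, B @ x)  # state equation: A u = B x
    return 0.5 * u @ Q @ u         # quadratic objective in the state

x = rng.standard_normal(m)

# Adjoint gradient: ONE extra linear solve, independent of m.
u = np.linalg.solve(A, B @ x)
lam = np.linalg.solve(A.T, Q @ u)  # adjoint equation: A^T lam = Q u
grad_adjoint = B.T @ lam

# Finite differences: m extra objective evaluations (m state solves).
h = 1e-6
grad_fd = np.array([(J(x + h * np.eye(m)[j]) - J(x)) / h for j in range(m)])

print(np.max(np.abs(grad_adjoint - grad_fd)))  # the two gradients agree to O(h)
```

As m grows (hundreds of shape parameters), the finite-difference column keeps getting more expensive while the adjoint column does not, which is the core advantage the abstract cites.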
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
Design considerations for divers' breathing gas systems
NASA Technical Reports Server (NTRS)
Hansen, O. R.
1972-01-01
Some of the design methods used to establish the gas storage, mixing, and transfer requirements for existing deep dive systems are discussed. Gas mixing systems appear essential to provide low-oxygen-concentration mixtures within the converging tolerance range dictated by applications to increasing depths. Time-related use of gas, together with the performance of the gas transfer system, ensures a reasonable time frame for systems application.
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing, well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Reθ shear stress transport model, which includes modeling of flow transition and surface roughness effects; the roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour as part of an automatic design evaluation process is presented. A design of experiments (DoE) was performed and a surrogate model was built using the Kriging method. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved, and the proposed method can be applied successfully to laminar flow nacelle design.
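The non-intrusive idea can be sketched with a single uncertain input: the solver is treated as a black box and simply evaluated at Gauss-Hermite quadrature nodes to estimate the output mean and variance, with no changes to the solver itself. The "solver" below is a hypothetical stand-in function, not a CFD module.

```python
import numpy as np

def uq_mean_var(solver, mu, sigma, n_pts=8):
    """Non-intrusive uncertainty propagation for an input ~ N(mu, sigma^2):
    evaluate the unmodified black-box solver at probabilists' Gauss-Hermite
    nodes and form quadrature estimates of the output mean and variance."""
    x, w = np.polynomial.hermite_e.hermegauss(n_pts)
    w = w / np.sqrt(2 * np.pi)          # normalize so the weights sum to 1
    samples = np.array([solver(mu + sigma * xi) for xi in x])
    mean = np.sum(w * samples)
    var = np.sum(w * (samples - mean) ** 2)
    return mean, var

# Stand-in "solver": drag penalty quadratic in a roughness parameter
# (a made-up response, not the nacelle model from the paper).
drag = lambda r: 1.0 + 0.5 * r**2
m, v = uq_mean_var(drag, mu=0.0, sigma=1.0)
print(m, v)   # analytically: mean = 1.5, variance = 0.5
```

Because the solver is only sampled, the same wrapper works for any existing CFD code, which is what "non-intrusive" means in the abstract; robust optimization then minimizes a combination of the estimated mean and variance.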
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ye; Karri, Naveen K.; Wang, Qi
Tidal power as a large-scale renewable source of energy has been receiving significant attention recently because of its advantages over wind and other renewable energy sources. The technology used to harvest energy from tidal currents is called a tidal current turbine. Though some of the principles of wind turbine design are applicable to tidal current turbines, the design of the latter requires additional considerations, such as cavitation damage and corrosion, for the long-term reliability of such turbines. Depending upon the orientation of the axis, tidal current turbines can be classified as vertical axis turbines or horizontal axis turbines. Existing studies on vertical axis tidal current turbines focus more on the hydrodynamic aspects of the turbine than on the structural aspects. This paper summarizes our recent efforts to study the integrated hydrodynamic and structural aspects of vertical axis tidal current turbines. After reviewing existing methods of modeling tidal current turbines, we developed a hybrid approach combining the discrete vortex method with the finite element method that can simulate the integrated hydrodynamic and structural response of a vertical axis turbine. This hybrid method was initially employed to analyze a typical three-blade vertical axis turbine. The power coefficient was used to evaluate the hydrodynamic performance, and critical deflection was considered to evaluate the structural reliability. A sensitivity analysis was also conducted with various turbine height-to-radius ratios. The results indicate that both the power output and failure probability increase with the turbine height, suggesting a necessity for optimal design. An attempt to optimize a 3-blade vertical axis turbine design with the hybrid method yielded a ratio of turbine height to radius (H/R) of about 3.0 for reliable maximum power output.
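The power coefficient used as the hydrodynamic performance measure has a standard definition: the ratio of extracted power to the kinetic power of the flow through the rotor's frontal area. A minimal sketch for a vertical-axis machine of radius R and height H follows; the numerical values are illustrative, not results from the study.

```python
def power_coefficient(P, rho, R, H, V):
    """Cp = P / (0.5 * rho * A * V^3), with frontal area A = 2*R*H
    for a vertical-axis (H-type) rotor. P in W, rho in kg/m^3,
    R and H in m, current speed V in m/s."""
    A = 2.0 * R * H               # swept frontal area of a vertical-axis rotor
    return P / (0.5 * rho * A * V**3)

# Illustrative values: 12 kW extracted from a 2 m/s current in seawater.
Cp = power_coefficient(P=12_000.0, rho=1025.0, R=1.0, H=3.0, V=2.0)
print(round(Cp, 3))   # ≈ 0.488
```

Because the available power grows with H while structural loads and deflection also grow with H, Cp alone cannot pick the turbine proportions, which is why the study couples it with a structural reliability criterion.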
Damage-mitigating control of aircraft for high performance and life extension
NASA Astrophysics Data System (ADS)
Caplin, Jeffrey
1998-12-01
A methodology is proposed for the synthesis of a Damage-Mitigating Control System for a high-performance fighter aircraft. The design of such a controller involves consideration of damage to critical points of the structure, as well as the performance requirements of the aircraft. This research is interdisciplinary, and brings existing knowledge in the fields of unsteady aerodynamics, structural dynamics, fracture mechanics, and control theory together to formulate a new approach towards aircraft flight controller design. A flexible wing model is formulated using the Finite Element Method, and the important mode shapes and natural frequencies are identified. The Doublet Lattice Method is employed to develop an unsteady flow model for computation of the unsteady aerodynamic loads acting on the wing due to rigid-body maneuvers and structural deformation. These two models are subsequently incorporated into a pre-existing nonlinear rigid-body aircraft flight-dynamic model. A family of robust Damage-Mitigating Controllers is designed using H-infinity optimization and the mu-synthesis method. In addition to weighting the error between the ideal performance and the actual performance of the aircraft, weights are also placed on the strain amplitude at the root of each wing. The results show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.
Computer-Aided Drug Design Methods.
Yu, Wenbo; MacKerell, Alexander D
2017-01-01
Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, as generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
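The pitfall of treating intervals as fixed points can be seen in a small simulation: when exponential event times are observed only at scheduled assessments and each event is recorded at the right endpoint of its censoring interval, the recorded times are systematically inflated as assessments become less frequent. The rates and visit spacings below are illustrative only.

```python
import random

def observed_times(rate, visit_gap, n, seed=0):
    """Event times observed only at assessments every `visit_gap` units:
    each true exponential event time is recorded as the first visit at or
    after the event, i.e. the right endpoint of its censoring interval."""
    rng = random.Random(seed)
    rec = []
    for _ in range(n):
        t = rng.expovariate(rate)
        k = int(t // visit_gap) + 1          # first assessment after the event
        rec.append(k * visit_gap)
    return rec

# True mean time-to-event is 1/rate = 10. Infrequent assessment inflates
# the naive (fixed-point) estimate well beyond half a visit gap.
for gap in (0.5, 5.0):
    obs = observed_times(rate=0.1, visit_gap=gap, n=20000)
    print(gap, sum(obs) / len(obs))
```

The bias grows with the visit spacing, which is one concrete mechanism behind the power loss and the breakdown of information-fraction reasoning that the abstract describes.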
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicles (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
Shape Optimization of Supersonic Turbines Using Response Surface and Neural Network Methods
NASA Technical Reports Server (NTRS)
Papila, Nilay; Shyy, Wei; Griffin, Lisa W.; Dorney, Daniel J.
2001-01-01
Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. A global optimization framework combining the radial basis neural network (RBNN) and the polynomial-based response surface method (RSM) is constructed for shape optimization of a supersonic turbine. Based on the optimized preliminary design, shape optimization is performed for the first vane and blade of a 2-stage supersonic turbine, involving O(10) design variables. The design of experiment approach is adopted to reduce the data size needed by the optimization task. It is demonstrated that a major merit of the global optimization approach is that it enables one to adaptively revise the design space to perform multiple optimization cycles. This benefit is realized when an optimal design approaches the boundary of a pre-defined design space. Furthermore, by inspecting the influence of each design variable, one can also gain insight into the existence of multiple design choices and select the optimum design based on other factors such as stress and materials considerations.
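A polynomial response surface of the kind RSM uses can be sketched in a few lines: fit a quadratic in two design variables to a small factorial design of "experiments" by least squares, then query the cheap surrogate instead of the expensive solver. The objective below is a made-up quadratic stand-in, not the supersonic-turbine objective from the paper.

```python
import numpy as np

def fit_rsm(X, y):
    """Least-squares fit of J ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x1*x2 + c5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

def eval_rsm(c, x1, x2):
    return c @ np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

# Design of experiments: 3x3 full factorial over the normalized design space.
g = np.array([-1.0, 0.0, 1.0])
X = np.array([(a, b) for a in g for b in g])

# Hypothetical "expensive" objective with its optimum at (0.3, -0.2).
true_J = lambda x1, x2: 2.0 + (x1 - 0.3)**2 + 0.5 * (x2 + 0.2)**2
y = np.array([true_J(a, b) for a, b in X])

c = fit_rsm(X, y)
print(eval_rsm(c, 0.3, -0.2))   # surrogate reproduces the true optimum value 2.0
```

In the paper's framework the surrogate (RSM or RBNN) is what the optimizer actually searches, and the design space is revised adaptively when the optimum lands on its boundary; the DoE keeps the number of expensive solver runs small.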
Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V
2012-10-01
A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward the detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided along the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. 
The case study includes reaction steps typically used by the pharmaceutical industry, featuring different characteristic reaction times, as well as liquid-liquid separation and distillation-based solvent exchange steps, and thus constitutes a good example of how the design framework can be useful for efficiently designing novel or already existing API manufacturing processes that take advantage of continuous processing. Copyright © 2012 Elsevier B.V. All rights reserved.
DataRocket: Interactive Visualisation of Data Structures
NASA Astrophysics Data System (ADS)
Parkes, Steve; Ramsay, Craig
2010-08-01
CodeRocket is a software engineering tool that provides cognitive support to the software engineer for reasoning about a method or procedure and for documenting the resulting code [1]. DataRocket is a software engineering tool designed to support visualisation and reasoning about program data structures. DataRocket is part of the CodeRocket family of software tools developed by Rapid Quality Systems [2], a spin-out company from the Space Technology Centre at the University of Dundee. CodeRocket and DataRocket integrate seamlessly with existing architectural design and coding tools and provide extensive documentation with little or no effort on the part of the software engineer. Comprehensive, abstract, detailed design documentation is available early on in a project so that it can be used for design reviews with project managers and non-expert stakeholders. Code and documentation remain fully synchronised even when changes are implemented in the code without reference to the existing documentation. At the end of a project the press of a button suffices to produce the detailed design document. Existing legacy code can be easily imported into CodeRocket and DataRocket to reverse engineer detailed design documentation, making legacy code more manageable and adding substantially to its value. This paper introduces CodeRocket. It then explains the rationale for DataRocket and describes the key features of this new tool. Finally the major benefits of DataRocket for different stakeholders are considered.
The Effects of Two Sight Word Teaching Methods on Featural Attention of Children Beginning to Read.
ERIC Educational Resources Information Center
Ceprano, Maria A.
Designed to add to the existing knowledge base concerning the saliency of features used by children to identify isolated words, a study examined whether the method of instruction influences the extent to which various features are used for word identification and recall. Subjects, 117 kindergarten students from a suburban Buffalo, New York, school…
ERIC Educational Resources Information Center
Elder, Anastasia D.
2015-01-01
Problem based learning (PBL) is an instructional method aimed at engaging students in collaboratively solving an ill-structured problem. PBL has been presented and researched as an overhaul of existing curriculum design, yet a modified version may be attractive to college instructors who desire active learning on the part of their students, but…
Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John
2013-01-01
Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences—like the ones driving expression of the Drosophila even-skipped gene—have proven difficult to design from existing knowledge, presumably due to the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, including improving existing regulatory models as well as addressing evolutionary questions. The first method involves the use of simulated annealing to explore the sequence space for synthetic enhancers whose expression output fits a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations wherein the predicted expression pattern does not significantly vary at any point along the path. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output. PMID:23732772
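The first method's simulated-annealing loop can be sketched generically. The toy scoring function below (GC content) is only a stand-in for the paper's empirically validated transcriptional model, and all names and parameter values are illustrative:

```python
import math
import random

random.seed(1)
BASES = "ACGT"

def score(seq):
    # Stand-in objective: GC fraction. The real objective maps an enhancer
    # sequence to a predicted expression pattern and scores its fit.
    return sum(b in "GC" for b in seq) / len(seq)

def anneal(seq, steps=2000, t0=0.1):
    """Explore sequence space by single-base mutations under a cooling schedule."""
    cur = best = seq
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9   # linear cooling
        pos = random.randrange(len(cur))     # propose one mutated position
        cand = cur[:pos] + random.choice(BASES) + cur[pos + 1:]
        delta = score(cand) - score(cur)
        # accept improvements always, worse moves with Boltzmann probability
        if delta >= 0 or random.random() < math.exp(delta / temp):
            cur = cand
            if score(cur) > score(best):
                best = cur
    return best

start = "".join(random.choice(BASES) for _ in range(40))
result = anneal(start)
```

Because the best-so-far sequence is tracked explicitly, the returned sequence can never score worse than the starting one.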
Mechanistic-empirical asphalt overlay thickness design and analysis system.
DOT National Transportation Integrated Search
2009-10-01
The placement of an asphalt overlay is the most common method used by the Texas Department of Transportation (TxDOT) to rehabilitate : existing asphalt and concrete pavements. The type of overlay and its required thickness are important decisions tha...
ASSAYING PARTICLE-BOUND POLYCYCLIC AROMATIC HYDROCARBONS (PAH) FROM ARCHIVED PM2.5 FILTERS
Airborne particulate matter contains numerous organic species, including several polycyclic aromatic hydrocarbons (PAHs) that are known or suspected carcinogens. Existing methods for measuring airborne PAHs are complex and costly, primarily because they are designed to collect...
Methods for estimating magnitude and frequency of peak flows for small watersheds in Utah.
DOT National Transportation Integrated Search
2010-06-01
Determining discharge in a stream is important to the design of culverts, bridges, and other structures pertaining to : transportation systems. Currently in Utah regression equations exist to estimate recurrence flood year discharges for : rural wate...
Yanzhen Wu; Hu, A P; Budgett, D; Malpas, S C; Dissanayake, T
2011-06-01
Transcutaneous energy transfer (TET) enables the transfer of power across the skin without direct electrical connection. It is a mechanism for powering implantable devices for the lifetime of a patient. For maximum power transfer, it is essential that TET systems be resonant on both the primary and secondary sides, which requires considerable design effort. Consequently, a strong need exists for an efficient method to aid the design process. This paper presents an analytical technique appropriate for analyzing complex TET systems. The system's steady-state solution in closed form with sufficient accuracy is obtained by employing the proposed equivalent small parameter method. It is shown that power-transfer capability can be correctly predicted without tedious iterative simulations or practical measurements. Furthermore, for TET systems utilizing a current-fed push-pull soft switching resonant converter, it is found that the maximum energy transfer does not occur when the primary and secondary resonant tanks are "tuned" to the nominal resonant frequency. An optimal tuning point exists, corresponding to the system's maximum power-transfer capability when optimal tuning capacitors are applied.
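As a baseline for such tuning analysis, nominal LC resonance can be sketched as follows. The component values are illustrative, and this deliberately omits the coupled-system effects that, per the abstract, shift the true optimum away from nominal resonance:

```python
import math

def resonant_frequency(l_henry, c_farad):
    # f0 = 1 / (2*pi*sqrt(L*C)) for an ideal LC tank
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

def tuning_capacitor(l_henry, f_target):
    # capacitance that tunes an inductance L to resonate at f_target
    return 1.0 / (l_henry * (2 * math.pi * f_target) ** 2)

# illustrative values: a 10 uH coil tuned to 200 kHz
c_nominal = tuning_capacitor(10e-6, 200e3)
```

The paper's point is precisely that for the coupled push-pull converter, the capacitor maximizing power transfer is not this nominal value.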
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies the adverse events studied are rare, and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes under the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
Boomerang: A method for recursive reclassification.
Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat
2016-09-01
While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogenous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.
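The two-stage idea behind Boomerang can be illustrated with a toy sketch. The data, the binary-marker shift moves, and the plain accuracy criterion below are all hypothetical simplifications; the actual algorithm works with censored survival outcomes, resampling, and a merging stage:

```python
def accuracy(groups, outcomes):
    # simplified criterion: predict an event for patients in the
    # highest risk group (>= 2) and score the agreement with outcomes
    return sum((g >= 2) == o for g, o in zip(groups, outcomes)) / len(groups)

def refine(groups, marker, outcomes):
    # stage 1: search candidate modifications of the existing classifier
    # (shift marker-positive patients one risk group up or down);
    # stage 2 (merging to a prespecified number of groups) is omitted
    best, best_acc = list(groups), accuracy(groups, outcomes)
    for shift in (1, -1):
        cand = [min(2, max(0, g + shift)) if m else g
                for g, m in zip(groups, marker)]
        acc = accuracy(cand, outcomes)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc

groups   = [1, 1, 1, 1, 0, 2]   # existing 3-category classifier (0/1/2)
marker   = [1, 1, 0, 0, 0, 0]   # new binary marker (hypothetical)
outcomes = [1, 1, 0, 0, 0, 1]   # observed events (hypothetical)
refined, acc = refine(groups, marker, outcomes)
```

Here the marker-positive patients in the heterogeneous middle group are reclassified upward because doing so improves the predictive criterion.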
A design procedure for the handling qualities optimization of the X-29A aircraft
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Cox, Timothy H.
1989-01-01
The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, giving it good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice has shown that this method models temporal information well.
Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control
NASA Astrophysics Data System (ADS)
Hu, Juju; Ke, Qiang; Ji, Yinghua
2018-02-01
The optimization of control time for quantum systems has attracted decades of focus in control science, since shorter control times improve efficiency and suppress decoherence caused by the environment. Based on an analysis of the advantages and disadvantages of the existing Lyapunov control, and using a bang-bang optimal control technique, we investigate fast state control in a closed two-qubit quantum system and give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or the standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.
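Bang-bang control itself is easiest to see on a classical double integrator rather than on the two-qubit system studied here: apply full drive for the first half of the maneuver, then full braking, switching at half time. A minimal sketch with illustrative parameters:

```python
def bang_bang_rest_to_rest(distance, u_max=1.0, dt=1e-3):
    """Time-optimal rest-to-rest move for x'' = u, |u| <= u_max."""
    # minimum time for a symmetric accelerate/brake profile
    t_total = 2.0 * (distance / u_max) ** 0.5
    steps = int(round(t_total / dt))
    x = v = 0.0
    for k in range(steps):
        u = u_max if k < steps // 2 else -u_max   # the bang-bang switch
        x += v * dt                               # explicit Euler integration
        v += u * dt
    return x, v

final_x, final_v = bang_bang_rest_to_rest(1.0)
```

The maneuver ends at the target position with (numerically) zero velocity, which is what makes the bang-bang profile time-optimal for this system.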
Design and implementation of a general main axis controller for the ESO telescopes
NASA Astrophysics Data System (ADS)
Sandrock, Stefan; Di Lieto, Nicola; Pettazzi, Lorenzo; Erm, Toomas
2012-09-01
Most of the real-time control systems at the existing ESO telescopes were developed with "traditional" methods, using general purpose VMEbus electronics, and running applications that were coded by hand, mostly using the C programming language under VxWorks. As we are moving towards more modern design methods, we have explored a model-based design approach for real-time applications in the telescope area, and used the control algorithm of a standard telescope main axis as a first example. We wanted to have a clear work-flow that follows the "correct-by-construction" paradigm, where the implementation is testable in simulation on the development host, and where the testing time spent by debugging on target is minimized. It should respect the domains of control, electronics, and software engineers in the choice of tools. It should be a target-independent approach so that the result could be deployed on various platforms. We have selected the Mathworks tools Simulink, Stateflow, and Embedded Coder for design and implementation, and LabVIEW with NI hardware for hardware-in-the-loop testing, all of which are widely used in industry. We describe how these tools have been used in order to model, simulate, and test the application. We also evaluate the benefits of this approach compared to the traditional method with respect to testing effort and maintainability. For a specific axis controller application we have successfully integrated the result into the legacy platform of the existing VLT software, as well as demonstrated how to use the same design for a new development with a completely different environment.
A low delay transmission method of multi-channel video based on FPGA
NASA Astrophysics Data System (ADS)
Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei
2018-03-01
In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for video data that reduce the overall video transmission delay. To save time in the conversion process, the parallel capability of the FPGA is exploited for video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed low-delay transmission method based on FPGA increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.
The COLA Collision Avoidance Method
NASA Astrophysics Data System (ADS)
Assmann, K.; Berger, J.; Grothkopp, S.
2009-03-01
In the following we present a collision avoidance method named COLA. The method has been designed to predict collisions between Earth-orbiting spacecraft on any orbits, including orbit changes, and other space-borne objects. The point in time of a collision and the collision probability are determined. To guarantee effective processing, the COLA method uses a modular design and is composed of several components which are either developed within this work or derived from existing algorithms: a filtering module, the close approach determination, the collision detection and the collision probability calculation. A software tool which implements the COLA method has been verified using various test cases built from sample missions. This software has been implemented in the C++ programming language and serves as a universal collision detection tool at LSE Space Engineering & Operations AG.
Zero leakage separable and semipermanent ducting joints
NASA Technical Reports Server (NTRS)
Mischel, H. T.
1973-01-01
A study program has been conducted to explore new methods of achieving zero-leakage, separable and semipermanent ducting joints for space flight vehicles. The study consisted of a literature search of existing zero-leakage methods, the generation of concepts for new methods of achieving the desired zero-leakage criteria, and detailed analysis and design of a selected concept. Other techniques of leak detection were explored with a view toward improving this area.
Estimating Logistics Support of Reusable Launch Vehicles During Conceptual Design
NASA Technical Reports Server (NTRS)
Morris, W. D.; White, N. H.; Davies, W. T.; Ebeling, C. E.
1997-01-01
Methods exist to define the logistics support requirements for new aircraft concepts but are not directly applicable to new launch vehicle concepts. In order to define the support requirements and to discriminate among new technologies and processing choices for these systems, NASA Langley Research Center (LaRC) is developing new analysis methods. This paper describes several methods under development, gives their current status, and discusses the benefits and limitations associated with their use.
Realizing the Potential of Patient Engagement: Designing IT to Support Health in Everyday Life
Novak, Laurie L.; Unertl, Kim M.; Holden, Richard J.
2017-01-01
Maintaining health or managing a chronic condition involves performing and coordinating potentially new and complex tasks in the context of everyday life. Tools such as reminder apps and online health communities are being created to support patients in carrying out these tasks. Research has documented mixed effectiveness and problems with continued use of these tools, and suggests that more widespread adoption may be aided by design approaches that facilitate integration of eHealth technologies into patients’ and family members’ daily routines. Given the need to augment existing methods of design and implementation of eHealth tools, this contribution discusses frameworks and associated methods that engage patients and explore contexts of use in ways that can produce insights for eHealth designers. PMID:27198106
Spillway sizing of large dams in Austria
NASA Astrophysics Data System (ADS)
Reszler, Ch.; Gutknecht, D.; Blöschl, G.
2003-04-01
This paper discusses the basic philosophy of defining and calculating design floods for large dams in Austria, both for the construction of new dams and for a re-assessment of the safety of existing dams. Currently the consensus is to choose flood peak values corresponding to a probability of exceedance of 2×10^-4 for a given year. A two-step procedure is proposed to estimate the design flood discharges: a rapid assessment and a detailed assessment. In the rapid assessment the design discharge is chosen as a constant multiple of flood values read from a map of regionalised floods. The safety factor or multiplier takes care of the uncertainties of the local estimation and the regionalisation procedure. If the current design level of a spillway exceeds the value so estimated, no further calculations are needed. Otherwise (and for new dams) a detailed assessment is required. The idea of the detailed assessment is to draw upon all existing sources of information to constrain the uncertainties. The three main sources are local flood frequency analysis, where flood data are available; regional flood estimation from hydrologically similar catchments; and rainfall-runoff modelling using design storms as inputs. The three values obtained by these methods are then assessed and weighted in terms of their reliability to facilitate selection of the design flood. The uncertainty assessment of the various methods is based on confidence intervals, estimates of regional heterogeneity, data availability and sensitivity analyses of the rainfall-runoff model. As the definition of the design floods discussed above is based on probability concepts it is also important to examine the excess risk, i.e. the possibility of the occurrence of a flood exceeding the design levels. The excess risk is evaluated based on a so-called Safety Check Flood (SCF), similar to the existing practice in other countries in Europe.
The SCF is a vehicle to analyse the damage potential of an event of this magnitude. This is to provide guidance for protective measures for dealing with very extreme floods. The SCF is used to check the vulnerability of the system with regard to structural stability, morphological effects, etc., and to develop alarm plans and disaster mitigation procedures. The basis for estimating the SCF is the uncertainty assessment of the design flood values estimated by the three methods, including unlikely combinations of the controlling factors and attending uncertainties. Finally, we discuss the impact on the downstream valley of floods exceeding the design values and of smaller floods, and illustrate the basic concepts with examples from the recent flood of August 2002.
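The combination of the three estimates can be sketched as a reliability-weighted average. The weights and discharge values below are purely illustrative, not the paper's actual assessment procedure, which also draws on confidence intervals and sensitivity analyses:

```python
def weighted_design_flood(estimates, reliabilities):
    # combine local-frequency, regional, and rainfall-runoff estimates
    # (m^3/s), weighting each by an assessed reliability score
    total = sum(reliabilities)
    return sum(q * w for q, w in zip(estimates, reliabilities)) / total

# hypothetical example: three methods, the local frequency analysis
# judged least reliable because of a short record
q_design = weighted_design_flood([80.0, 100.0, 120.0], [1.0, 2.0, 2.0])
```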
Joint histogram-based cost aggregation for stereo matching.
Min, Dongbo; Lu, Jiangbo; Do, Minh N
2013-10-01
This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of the cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
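For contrast with the joint-histogram formulation, a brute-force 1-D block-matching baseline (absolute-difference cost, box-window aggregation, winner-take-all) makes visible the per-hypothesis repeated filtering the paper seeks to avoid. Signals and the search range are toy values:

```python
def disparity_map(left, right, max_d=2, radius=1):
    """Brute-force local stereo: for each pixel, filter the cost of every
    disparity hypothesis separately, then take the cheapest one."""
    n = len(left)

    def cost(x, d):
        xr = min(max(x - d, 0), n - 1)      # clamp at image borders
        return abs(left[x] - right[xr])     # absolute-difference matching cost

    def agg(x, d):
        # box-window aggregation: this filtering is repeated for every
        # disparity d, which is the redundancy the paper targets
        return sum(cost(min(max(x + o, 0), n - 1), d)
                   for o in range(-radius, radius + 1))

    return [min(range(max_d + 1), key=lambda d: agg(x, d)) for x in range(n)]

right_row = [0, 0, 5, 0, 0, 0]
left_row  = [0, 0, 0, 5, 0, 0]   # same feature shifted by one pixel
dm = disparity_map(left_row, right_row)
```

At the feature pixel the recovered disparity is the true one-pixel shift.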
NASA Technical Reports Server (NTRS)
Horan, Stephen; Wang, Ru-Hai
1999-01-01
There exists a need for designers and developers to have a method to conveniently test a variety of communications parameters for an overall system design. This is no different when testing network protocols than when testing modulation formats. In this report, we discuss a means of providing a networking test device specifically designed to be used for space communications. This test device is a PC-based Virtual Instrument (VI) programmed using the LabVIEW(TM) version 5 software suite developed by National Instruments(TM). This instrument was designed to be portable and usable by others without special, additional equipment. The programming was designed to replicate a VME-based hardware module developed earlier at New Mexico State University (NMSU) and to provide expanded capabilities exceeding the baseline configuration existing in that module. This report describes the design goals for the VI module in the next section and follows that with a description of the design of the VI instrument. This is followed with a description of the validation tests run on the VI. An application of the error-generating VI to networking protocols is then given.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
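The three calibration regimes named in the review can be captured by a generic one-compartment uptake model, of which linear uptake (small t) and equilibrium extraction (large t) are limiting cases. The symbols are illustrative conventions, not tied to any specific device: K is a sampler-medium distribution constant and k an exchange rate:

```python
import math

def sampler_uptake(c_ambient, k, t, big_k):
    # one-compartment kinetic model: m(t) = K * C * (1 - exp(-k*t))
    return big_k * c_ambient * (1.0 - math.exp(-k * t))

def ambient_concentration(mass, k, t, big_k):
    # kinetic calibration: invert the uptake model to recover the
    # time-weighted ambient concentration from the accumulated mass
    return mass / (big_k * (1.0 - math.exp(-k * t)))

# illustrative round trip: expose for 24 h at C = 2.5 units
m = sampler_uptake(2.5, 0.1, 24.0, 50.0)
c_back = ambient_concentration(m, 0.1, 24.0, 50.0)
```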
NASA Astrophysics Data System (ADS)
Omaraa, Ehsan; Saman, Wasim; Bruno, Frank; Liu, Ming
2017-06-01
Latent heat storage using phase change materials (PCMs) can store large amounts of energy over a narrow temperature difference during phase transition. The thermophysical properties of PCMs, such as latent heat, specific heat, and melting and solidification temperature, need to be determined with high precision for the design and cost estimation of latent heat storage systems. The existing laboratory standard methods, such as differential thermal analysis (DTA) and differential scanning calorimetry (DSC), use a small sample size (1-10 mg) to measure thermophysical properties, which makes these methods suitable for homogeneous materials. In addition, such a small sample can have different thermophysical properties compared with a bulk sample, and may be inadequate for evaluating the properties of mixtures. To avoid the drawbacks of existing methods, the temperature-history (T-history) method can be used with bulk quantities of PCM salt mixtures to characterize PCMs. This paper presents a modified T-history setup, which was designed and built at the University of South Australia to measure the melting point, heat of fusion, specific heat, degree of supercooling and phase separation of salt mixtures over a temperature range between 200 °C and 400 °C. Sodium nitrate (NaNO3) was used to verify the accuracy of the new setup.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Unlike existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
The Qualitative Measurement towards Emotional Feeling of Design for Product Development
NASA Astrophysics Data System (ADS)
Syaifoelida, Fevi; Megat Hamdan, M. A. M.; Murrad, M.; Aminuddin, Hazim
2018-04-01
To compete in today’s ever-growing technology market, a product needs to be well presented to customers. It is a challenge to design a product that attracts customers’ attention and builds their loyalty. A product must be designed to give the maximum level of satisfaction to the end user, the customer. That is the focus of this paper: to achieve customer satisfaction by studying the feelings and emotional values related to product design using Kansei Engineering (KE), and to test how important each product element is to satisfaction using the Kano Method (KM). KE is a method of translating human emotions and feelings into product development; it studies human interaction and responses when a customer sees a product, then translates them into a new, improved design. However, KE cannot stand on its own: it does not specify to what extent a feeling or emotion is important in a product. Once the design appearance parameters have been obtained from KE and the evaluation of the existing design, they need to be ranked by importance, which is why the Kano Method (KM) is also used. Since the scope of this study is the emotional feeling of a design (existing parts and appearance) in Kano categories, rather than deeper technical requirements, KM helps classify product parts into categories indicating which parts give full satisfaction in use. It identifies the attributes the customers consider most important for improvement. The objective is to produce a design priority guide that can be used to maximize customer satisfaction. To apply this qualitative measurement approach to a real situation, a headphone (popular among students) was chosen as the product domain for this study, covering the appearance of its parts and the feeling when using it. The results showed that the headband is the most important part of the product. It needs to be durable and comfortably designed to give full satisfaction during use (Kano: functional). The research and development (R&D) and design processes of a product can be greatly improved to increase customer satisfaction by capturing customers’ emotional feelings in the physiological design.
NASA Technical Reports Server (NTRS)
Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb
2005-01-01
A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods more applicable to classical ground conditions and designed for problems associated with power loss through the dielectric than for how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant voltage method, the voltage rate-of-change method and the charge storage method. This is based on joint experimental studies conducted at NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization current and dark current components from long duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations for the appropriate test method for spacecraft materials with different resistivities and applications.
The proposed changes to the existing engineering guidelines are intended to provide design engineers more appropriate methods for consideration and measurements of resistivity for many typical spacecraft charging scenarios.
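The electrometer-constant voltage method reduces to Ohm's law plus sample geometry. A minimal sketch, with illustrative values typical of a highly insulating film (the polarization/dark-current separation discussed above is beyond this sketch):

```python
def volume_resistivity(v_applied, i_measured, area_m2, thickness_m):
    # constant-voltage method: R = V / I, then rho = R * A / d (ohm-meters)
    resistance = v_applied / i_measured
    return resistance * (area_m2 / thickness_m)

# illustrative: 100 V across a 1 mm thick, 1 cm^2 sample drawing 1 pA
rho = volume_resistivity(100.0, 1e-12, 1e-4, 1e-3)
```

Note that a single steady-state reading like this is exactly what the charge storage method argues can be misleading for charge-retention questions.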
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases.
Highlights • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders • Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order • A new method of determining the FIR filter order for processing pressure traces was proposed • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
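The first stage of the workflow above (a DFT-based estimate of the band edges) can be sketched roughly as follows; the cumulative-energy criterion, the synthetic trace, and all parameter values are illustrative assumptions rather than the authors' exact procedure:

```python
import numpy as np

def estimate_band_edge(signal, fs, energy_frac=0.99):
    """Estimate a pass-band edge as the lowest frequency below which
    the given fraction of spectral energy lies (illustrative criterion,
    not the paper's exact one)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(spec) / np.sum(spec)
    idx = int(np.searchsorted(cum, energy_frac))
    return freqs[min(idx, len(freqs) - 1)]

# synthetic stand-in for a pressure trace: one dominant engine-order
# component at 50 Hz plus wide-band sensor noise
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 50 * t) + 0.01 * rng.normal(size=t.size)
edge = estimate_band_edge(trace, fs)
```

In the actual workflow this coarse estimate would then be refined with a short-time Fourier transform before selecting the FIR order.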
Storyline Visualization: A Compelling Way to Understand Patterns over Time and Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-10-16
Storyline visualization is a compelling way to understand patterns over time and space. Much effort has been spent developing efficient and aesthetically pleasing layout optimization algorithms. But what if those algorithms are optimizing the wrong things? To answer this question, we conducted a design study with different storyline layout algorithms. We found that layouts guided by our new design principles for storyline visualization outperform those produced by existing methods.
Grade 1 to 6 Thai students' existing ideas about light: Across-age study
NASA Astrophysics Data System (ADS)
Horasirt, Yupaporn; Yuenyong, Chokchai
2018-01-01
This paper aimed to investigate Grade 1 to 6 Thai students' (6-12 years old) existing ideas about light, sight, vision, and sources of light. The participants included 36 Grade 1 to 6 students (6 students in each grade) who were studying at a primary school in Khon Kaen. The study followed a descriptive qualitative research design. The tools included a two-tiered test about light and open-ended questions. Students' responses were categorized to identify their existing ideas about light. Findings indicated that young students held various existing ideas about light that could be categorized into 6 different groups relating to sight, vision, and sources of light. The paper discusses these students' existing ideas with a view to developing constructivist learning about light in the Thai context.
Training Feedback Handbook. Research Product 83-7.
ERIC Educational Resources Information Center
Burnside, Billy L.; And Others
This handbook is designed to assist training developers and evaluators in structuring their collection of feedback data. Addressed first are various methods for collecting feedback data, including informal feedback, existing unit performance records, questionnaires, structured interviews, systematic observation, and testing. The next chapter, a…
Improved design of electrophoretic equipment for rapid sickle-cell-anemia screening
NASA Technical Reports Server (NTRS)
Reddick, J. M.; Hirsch, I.
1974-01-01
Effective mass screening may be accomplished by modifying existing electrophoretic equipment in conjunction with multisample applicator used with cellulose-acetate-matrix test paper. Using this method, approximately 20 to 25 samples can undergo electrophoresis in 5 to 6 minutes.
Application of the Deming management method to equipment-inspection processes.
Campbell, C A
1996-01-01
The Biomedical Engineering staff at the Washington Hospital Center has designed an inspection process that optimizes timely completion of scheduled equipment inspections. The method used to revise the process was primarily Deming's, though it also incorporates the re-engineering concept of questioning the basic assumptions around which the original process was designed. This effort involved a review of the existing process in its entirety by task groups made up of representatives from all involved departments. Complete success in all areas has remained elusive. However, the lower variability of inspection completion ratios follows Deming's description of a successfully revised process. Further CQI efforts targeted at specific areas with low completion ratios will decrease this variability even further.
Aerodynamic optimization studies on advanced architecture computers
NASA Technical Reports Server (NTRS)
Chawla, Kalpana
1995-01-01
The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.
A fast approach to designing airfoils from given pressure distribution in compressible flows
NASA Technical Reports Server (NTRS)
Daripa, Prabir
1987-01-01
A new inverse method for aerodynamic design of airfoils is presented for subcritical flows. The pressure distribution in this method can be prescribed as a function of the arc length of the as-yet unknown body. This inverse problem is shown to be mathematically equivalent to solving only one nonlinear boundary value problem subject to known Dirichlet data on the boundary. The solution to this problem determines the airfoil, the freestream Mach number, and the upstream flow direction. The existence of a solution for a given pressure distribution is discussed. The method is easy to implement and extremely efficient. A series of results for which comparisons are made with known airfoils is presented.
Finite-time synchronization control of a class of memristor-based recurrent neural networks.
Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun
2015-03-01
This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.
Compact illumination optic with three freeform surfaces for improved beam control.
Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Hernández, Maikel; Blen, José; Grabovičkić, Dejan; Benítez, Pablo; Miñano, Juan Carlos; Thienpont, Hugo; Duerr, Fabian
2017-11-27
Multi-chip and large size LEDs dominate the lighting market in developed countries these days. Nevertheless, a general optical design method to create prescribed intensity patterns for this type of extended sources does not exist. We present a design strategy in which the source and the target pattern are described by means of "edge wavefronts" of the system. The goal is then finding an optic coupling these wavefronts, which in the current work is a monolithic part comprising up to three freeform surfaces calculated with the simultaneous multiple surface (SMS) method. The resulting optic fully controls, for the first time, three freeform wavefronts, one more than previous SMS designs. Simulations with extended LEDs demonstrate improved intensity tailoring capabilities, confirming the effectiveness of our method and suggesting that enhanced performance features can be achieved by controlling additional wavefronts.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Development and Performance of the Alaska Transportable Array Posthole Broadband Seismic Station
NASA Astrophysics Data System (ADS)
Aderhold, K.; Enders, M.; Miner, J.; Bierma, R. M.; Bloomquist, D.; Theis, J.; Busby, R. W.
2017-12-01
The final stations of the Alaska Transportable Array (ATA) will be constructed in 2017, completing the full footprint of 280 new and existing broadband seismic stations stretching across 19 degrees of latitude from western Alaska to western Canada. Through significant effort in planning, site reconnaissance, permitting and the considerable and concerted effort of field crews, the IRIS Alaska TA team is on schedule to successfully complete the construction of 194 new stations and upgrades at 28 existing stations over four field seasons. The station design and installation method was developed over the course of several years, leveraging the experience of the L48 TA deployments and existing network operators in Alaska as well as incorporating newly engineered components and procedures. A purpose-built lightweight drill was designed and fabricated to facilitate the construction of shallow boreholes to incorporate newly available posthole seismometers. This allowed for the development of a streamlined system of procedures to manufacture uniform seismic stations with minimal crew and minimal time required at each station location. A new station can typically be constructed in a single day with a four-person field crew. The ATA utilizes a hammer-drilled, cased posthole emplacement method adapted to the remote and harsh working environment of Alaska. The same emplacement design is implemented in all ground conditions to preserve uniformity across the array and eliminate the need for specialized mechanical equipment. All components for station construction are ideally suited for transport via helicopter, and can be adapted to utilize more traditional methods of transportation when available. This emplacement design delivers high quality data when embedded in bedrock or permafrost, reaching the low noise levels of benchmark permanent global broadband stations especially at long periods over 70 seconds. 
The TA will operate the network of real-time stations through at least 2019, with service trips planned on an "as-needed" basis to continue providing greater than 95% data return.
A new decentralised controller design method for a class of strongly interconnected systems
NASA Astrophysics Data System (ADS)
Duan, Zhisheng; Jiang, Zhong-Ping; Huang, Lin
2017-02-01
In this paper, two interconnection structures are first discussed under which some closed-loop subsystems must be unstable for the whole interconnected system to be stable; systems of this kind can be viewed as strongly interconnected. Comparisons with the small-gain theorem are then discussed and large-gain interconnection characteristics are shown. A new approach for the design of decentralised controllers is presented by fixing the Lyapunov function structure in advance, which allows the existence of unstable subsystems. By fully utilising the orthogonal-space information of the input matrix, some new insights are presented into the construction of the Lyapunov matrix. This new method can handle decentralised state feedback, static output feedback and dynamic output feedback controllers in a unified framework. Furthermore, in order to reduce design conservativeness and address robustness, a new robust decentralised controller design method is given by combining the approach with the parameter-dependent Lyapunov function method. Some basic rules are provided for the choice of initial variables in the Lyapunov matrix or the newly introduced slack matrices. As byproducts, some linear matrix inequality based sufficient conditions are established for centralised static output feedback stabilisation. The effects of unstable subsystems in nonlinear Lur'e systems are further discussed, and the corresponding decentralised controller design method is presented for absolute stability. The examples illustrate that the new method is significantly effective.
Non-destructive inspection of polymer composite products
NASA Astrophysics Data System (ADS)
Anoshkin, A. N.; Sal'nikov, A. F.; Osokin, V. M.; Tretyakov, A. A.; Luzin, G. S.; Potrakhov, N. N.; Bessonov, V. B.
2018-02-01
The paper considers the main types of defects encountered in products made of polymer composite materials for aviation use. An analysis of existing methods of nondestructive testing is carried out, and the features of their application are considered taking into account the design features, geometrical parameters and internal structure of the objects of inspection. The advantages and disadvantages of the considered methods of nondestructive testing used in industrial production are shown.
Study of solution procedures for nonlinear structural equations
NASA Technical Reports Server (NTRS)
Young, C. T., II; Jones, R. F., Jr.
1980-01-01
A method for the reduction of the cost of solution of large nonlinear structural equations was developed. Verification was made using the MARC-STRUC structure finite element program with test cases involving single and multiple degrees of freedom for static geometric nonlinearities. The method developed was designed to exist within the envelope of accuracy and convergence characteristic of the particular finite element methodology used.
ERIC Educational Resources Information Center
Mayeaux, Amanda Shuford
2013-01-01
The purpose of this sequential mixed-methods research was to discover the impact school culture, internal factors, and the state of flow has upon motivating a teacher to develop teaching expertise. This research was designed to find answers concerning why and how individual teachers can nurture their existing internal factors to increase their…
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
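The symmetric rank-one (SR1) Hessian update mentioned above has a standard closed form; the plain-numpy sketch below, including the usual skip rule for an ill-conditioned denominator, is a generic SR1 implementation rather than the authors' RBDO code:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update of a Hessian approximation B from a
    step s and gradient difference y; the update is skipped when the
    denominator is numerically unsafe (standard SR1 safeguard)."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# For a quadratic with Hessian H, gradients differ by y = H @ s, and
# SR1 recovers H exactly once the steps span the space.
H = np.array([[2.0, 0.0], [0.0, 4.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, s, H @ s)
```

This is why the update can stand in for explicit Hessian evaluation of the performance functions in second order reliability analysis: curvature information accumulates from gradient differences already computed during the search.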
A method to evaluate process performance by integrating time and resources
NASA Astrophysics Data System (ADS)
Wang, Yu; Wei, Qingjie; Jin, Shuang
2017-06-01
The purpose of process mining is to improve the existing processes of an enterprise, so measuring process performance is particularly important. However, current research on performance evaluation methods is still insufficient: the main existing approaches rely on time or resource statistics alone, and such basic statistics cannot evaluate process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed; it can be used to measure the utilization and redundancy of resources in the process. This paper introduces the design principle and formula of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, the evaluation method is used to analyse the event log from a telephone maintenance process and to propose an optimization plan.
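As an illustration of combining the time and resource dimensions, one simple utilization metric (each resource's busy time divided by the total process span) can be sketched as follows; the event-log fields, resource names, and the metric itself are illustrative assumptions, not the paper's formulas:

```python
from collections import defaultdict

def resource_utilization(events):
    """events: list of (resource, start, end) tuples with numeric
    timestamps taken from an event log. Returns each resource's busy
    time divided by the overall process span (illustrative metric)."""
    span_start = min(e[1] for e in events)
    span_end = max(e[2] for e in events)
    span = span_end - span_start
    busy = defaultdict(float)
    for resource, start, end in events:
        busy[resource] += end - start
    return {resource: b / span for resource, b in busy.items()}

# hypothetical log fragment: two resources working on one case
log = [("operator_a", 0.0, 3.0), ("operator_a", 5.0, 7.0),
       ("operator_b", 2.0, 4.0)]
util = resource_utilization(log)  # span is 7.0 time units
```

A low utilization for a resource over many cases would flag redundancy, which is the kind of signal the proposed evaluation method is meant to surface.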
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-01-01
Objective. To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. Data Source/Study Setting. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Study Design. Convergent case study mixed methods design. Data Collection/Extraction Methods. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Principal Findings. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Conclusions. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. PMID:24279836
A Procedure for the Design of Air-Heated Ice-Prevention Systems
NASA Technical Reports Server (NTRS)
Neel, C. B.
1954-01-01
A procedure proposed for use in the design of air-heated systems for the continuous prevention of ice formation on airplane components is set forth. Required heat-transfer and air-pressure-loss equations are presented, and methods of selecting appropriate meteorological conditions for flight over specified geographical areas and for the calculation of water-drop-impingement characteristics are suggested. In order to facilitate the design, a simple electrical analogue was devised which solves the complex heat-transfer relationships existing in the thermal-system analysis. The analogue is described and an illustration of its application to design is given.
Carbide fuel pin and capsule design for irradiations at thermionic temperatures
NASA Technical Reports Server (NTRS)
Siegel, B. L.; Slaby, J. G.; Mattson, W. F.; Dilanni, D. C.
1973-01-01
The design of a capsule assembly to evaluate tungsten-emitter/carbide-fuel combinations for thermionic fuel elements is presented. An in-pile fuel pin evaluation program concerned with clad temperature, neutron spectrum, carbide fuel composition, fuel geometry, fuel density, and clad thickness is discussed. The capsule design was a compromise involving considerations of heat transfer, instrumentation, materials compatibility, and test location. Heat-transfer calculations were instrumental in determining the method of support of the fuel pin to minimize axial temperature variations. The capsule design was easily fabricable and utilized existing state-of-the-art experience from previous programs.
Design optimization of a high specific speed Francis turbine runner
NASA Astrophysics Data System (ADS)
Enomoto, Y.; Kurosawa, S.; Kawajiri, H.
2012-11-01
Francis turbines are used in many hydroelectric power stations. This paper presents the development of hydraulic performance in a high specific speed Francis turbine runner. In order to achieve improvements in turbine efficiency throughout a wide operating range, a new runner design method that combines the latest Computational Fluid Dynamics (CFD) and a multi-objective optimization method with an existing design system was applied in this study. The validity of the new design system was evaluated by model performance tests. As a result, it was confirmed that the optimized runner presented higher efficiency compared with the originally designed runner. Besides optimization of the runner, an instability vibration that occurred at the high part load operating condition was investigated by model test and gas-liquid two-phase flow analysis. As a result, it was confirmed that the instability vibration was caused by an oval cross-section whirl arising from recirculation flow near the runner cone wall.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable industrial processes, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating unstable process and transforms the original process into a stable first-order-plus-dead-time dynamic. The outer loop enhances the set-point response, and a disturbance controller is designed to enhance the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers are designed separately in the improved scheme, which makes each controller easy to design and yields good control performance for the respective closed-loop transfer functions. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method gives better system performance than existing design methods. © 2013 ISA Published by ISA All rights reserved.
Zhang, Yi; Monsen, Karen A; Adam, Terrence J; Pieczkiewicz, David S; Daman, Megan; Melton, Genevieve B
2011-01-01
Time and motion (T&M) studies provide an objective method to measure the expenditure of time by clinicians. While some instruments for T&M studies have been designed to evaluate health information technology (HIT), these instruments have not been designed for nursing workflow. We took an existing open-source HIT T&M study application designed to evaluate physicians in the ambulatory setting, rationally adapted it through empiric observations to record nursing activities in the inpatient setting, and linked this instrument to an existing interface terminology, the Omaha System. Nursing activities involved several dimensions and could include multiple activities occurring simultaneously, requiring significant instrument redesign. 94% of the activities from the study instrument mapped adequately to the Omaha System. T&M study instruments require design customization to optimize them for different environments, such as inpatient nursing, to enable optimal data collection. Interface terminologies show promise as a framework for recording and analyzing T&M study data. PMID:22195228
NASA Technical Reports Server (NTRS)
Allen, Cheryl L.
1991-01-01
Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than is currently achievable through the use of a standard data base system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.
[Application of microelectronics CAD tools to synthetic biology].
Madec, Morgan; Haiech, Jacques; Rosati, Élise; Rezgui, Abir; Gendrault, Yves; Lallement, Christophe
2017-02-01
Synthetic biology is an emerging science that aims to create new biological functions that do not exist in nature, based on the knowledge acquired in the life sciences over the last century. Since the beginning of this century, several projects in synthetic biology have emerged. The complexity of the artificial bio-functions developed so far has been relatively low, so empirical design methods could be used for the design process. Nevertheless, with the increasing complexity of biological circuits, this is no longer the case, and a large number of computer-aided design software tools have been developed in the past few years. These tools include languages for the behavioral description and mathematical modelling of biological systems, simulators at different levels of abstraction, libraries of biological devices and circuit design automation algorithms. All of these tools already exist in other fields of engineering science, particularly in microelectronics. This is the approach that is put forward in this paper. © 2017 médecine/sciences – Inserm.
Genetic Optimization and Simulation of a Piezoelectric Pipe-Crawling Inspection Robot
NASA Technical Reports Server (NTRS)
Hollinger, Geoffrey A.; Briscoe, Jeri M.
2004-01-01
Using the DarwinZk development software, a genetic algorithm (GA) was used to design and optimize a pipe-crawling robot for parameters such as mass, power consumption, and joint extension to further the research of the Miniature Inspection Systems Technology (MIST) team. In an attempt to improve on existing designs, a new robot was developed, the piezo robot. The final proposed design uses piezoelectric expansion actuators to move the robot with a 'chimneying' method employed by mountain climbers and greatly improves on previous designs in load bearing ability, pipe traversing specifications, and field usability. This research shows the advantages of GA assisted design in the field of robotics.
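A minimal real-coded genetic algorithm of the general kind described can be sketched as follows; DarwinZk's actual operators and the robot's true mass/power/extension objective are not given in the abstract, so the operators, parameters, and toy objective below are all illustrative assumptions:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, gens=60, seed=0):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and tracking of the best-ever individual.
    Illustrative only; not the DarwinZk algorithm."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(value, i):
        lo, hi = bounds[i]
        return min(max(value, lo), hi)

    popn = [[rng.uniform(*bounds[i]) for i in range(dim)]
            for _ in range(pop_size)]
    best = min(popn, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # two tournament-of-three parents
            a = min(rng.sample(popn, 3), key=fitness)
            b = min(rng.sample(popn, 3), key=fitness)
            # blend crossover plus Gaussian mutation
            child = [clip(0.5 * (a[i] + b[i]) + rng.gauss(0, 0.1), i)
                     for i in range(dim)]
            nxt.append(child)
        popn = nxt
        cand = min(popn, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

# hypothetical stand-in for a mass/power objective, optimum at (1, 2)
def obj(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

sol = genetic_minimize(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

In a design setting like the piezo robot's, the chromosome would encode physical parameters (link lengths, actuator sizing) and the fitness would combine mass, power consumption, and joint extension, typically as a weighted sum or via multi-objective ranking.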
NASA Astrophysics Data System (ADS)
Liang, Qingguo; Li, Jie; Li, Dewu; Ou, Erfeng
2013-01-01
The vibrations of existing service tunnels induced by blast-excavation of adjacent tunnels have attracted much attention from both academics and engineers during recent decades in China. The blasting vibration velocity (BVV) is the most widely used controlling index for in situ monitoring and safety assessment of existing lining structures. Although numerous in situ tests and simulations had been carried out to investigate blast-induced vibrations of existing tunnels due to excavation of new tunnels (mostly by bench excavation method), research on the overall dynamical response of existing service tunnels in terms of not only BVV but also stress/strain seemed limited for new tunnels excavated by the full-section blasting method. In this paper, the impacts of blast-induced vibrations from a new tunnel on an existing railway tunnel in Xinjiang, China were comprehensively investigated by using laboratory tests, in situ monitoring and numerical simulations. The measured data from laboratory tests and in situ monitoring were used to determine the parameters needed for numerical simulations, and were compared with the calculated results. Based on the results from in situ monitoring and numerical simulations, which were consistent with each other, the original blasting design and corresponding parameters were adjusted to reduce the maximum BVV, which proved to be effective and safe. The effect of both the static stress before blasting vibrations and the dynamic stress induced by blasting on the total stresses in the existing tunnel lining is also discussed. The methods and related results presented could be applied in projects with similar ground and distance between old and new tunnels if the new tunnel is to be excavated by the full-section blasting method.
RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)
It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng, S.
2009-01-01
For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented elsewhere (references 3 and 4 for TD2-2; reference 5 for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces for performing preliminary design and off-design analysis of modern aircraft engine turbines. Two validation cases for design and off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1975-01-01
Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. The airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.
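The elementary one-parameter-at-a-time and random procedures named above can be illustrated on a toy objective. The 7-variable quadratic below is a stand-in for the airfoil objective, and none of this is the actual AESOP code:

```python
import numpy as np

def one_at_a_time(f, x, step=0.1, iters=200):
    """Elementary single-parameter perturbation: perturb each design
    variable in turn and keep the move only if the objective improves."""
    x = x.copy()
    for _ in range(iters):
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
    return x

def random_search(f, x, step=0.1, iters=2000, seed=0):
    """Randomized procedure: try random perturbations of the whole
    design vector and keep improvements."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(iters):
        trial = x + step * rng.standard_normal(len(x))
        if f(trial) < f(x):
            x = trial
    return x

# Toy stand-in for the airfoil objective: distance from a target
# 7-variable profile-perturbation vector (purely illustrative).
target = np.linspace(-0.3, 0.3, 7)
f = lambda x: np.sum((x - target) ** 2)
x0 = np.zeros(7)

print(f(one_at_a_time(f, x0)), f(random_search(f, x0)))
```

Both strategies only ever accept improving moves, which is why, as the abstract reports, they can be surprisingly competitive with gradient-style organized searches on low-dimensional smooth problems.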
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed, referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class-conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of TM data. It is shown that, in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of TM data require only a three-dimensional feature space. It is shown experimentally as well that, for the proposed methods, the classes with high weights show the expected improvements in class-conditional probability-of-error estimates.
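For zero-mean data, the discrete Karhunen-Loeve expansion referred to above is the familiar principal-component projection onto the leading eigenvectors of the covariance matrix. A minimal sketch on synthetic six-band data (a stand-in for the six reflective TM bands, not actual Landsat data) is:

```python
import numpy as np

def kl_transform(pixels, k):
    """Discrete Karhunen-Loeve (principal component) transform: project
    d-band pixels onto the k eigenvectors of the covariance matrix with
    the largest eigenvalues; also report the fraction of cumulative
    eigenvalues retained."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order[:k]]                 # d x k
    explained = eigvals[order[:k]].sum() / eigvals.sum()
    return (pixels - mean) @ basis, explained

# Synthetic 6-band pixels whose variance lives in 3 latent components,
# loosely mimicking the finding that six TM bands need only 3 features.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 6))
bands = latent @ mixing + 0.01 * rng.normal(size=(500, 6))

reduced, frac = kl_transform(bands, 3)
print(reduced.shape, round(frac, 4))
```

The `explained` fraction corresponds to the "percentage of the cumulative eigenvalues" criterion mentioned in the abstract; the class-weighted MDA variants would replace the covariance objective with a weighted between/within-class criterion.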
An engineering approach to controlling indoor air quality.
Woods, J E
1991-11-01
Evidence is accumulating that indicates air quality problems in residential and commercial buildings are nearly always associated with inadequacies in building design and methods of operation. Thus, the very systems depended on to control the indoor environment can become indirect sources of contamination if diligence is not exercised at each stage of a building's life: a) planning and design, b) construction and commissioning, c) operation, and d) demolition or renovation. In this paper, an engineering perspective is presented in which the existing building stock is characterized in terms of its environmental performance. Preliminary data indicate that 20 to 30% of the existing buildings have sufficient problems to manifest as sick-building syndrome or building-related illness, while another 10 to 20% may have undetected problems. Thus, only about 50 to 70% of the existing buildings qualify as healthy buildings. Two methods and three mechanisms of control are described to achieve "acceptable" indoor air quality: source control and exposure control. If sources cannot be removed, some level of occupant exposure will result. To control exposures with acceptable values, the primary sensory receptors of the occupants (i.e., thermal, ocular, auditory, and olfactory) cannot be excessively stimulated. The three exposure control mechanisms are conduction, radiation, and convection. To achieve acceptable occupant responses, it is often practical to integrate the mechanisms of radiation and convection in heating, ventilating, and air conditioning systems that are designed to provide acceptable thermal, acoustic, and air quality conditions within occupied spaces.(ABSTRACT TRUNCATED AT 250 WORDS)
PMID:1821369
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. Meeting these requirements calls for a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system, tested only in a compact (i.e., not fully deployed) configuration. The second is to use a simplified environment to correlate analytical models with test results; these models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Fault detection for discrete-time LPV systems using interval observers
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-10-01
This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance-attenuation, fault-sensitivity and nonnegativity conditions lead to less conservative LMI characterisations. Furthermore, without the need to design residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for the FD decision. Finally, simulation results are presented to show the effectiveness and superiority of the proposed method.
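The interval-observer idea behind such threshold-free FD can be sketched in a deliberately simplified scalar setting (the paper's parameter-varying dynamics, Lyapunov matrices and LMIs are all omitted here): propagate upper and lower state bounds under the known disturbance bound, and flag a fault whenever the measured state leaves the interval.

```python
import numpy as np

def simulate(a=0.8, wbar=0.1, steps=60, fault_at=40, seed=0):
    """Toy scalar system x+ = a*x + w with |w| <= wbar and a >= 0.
    Because a >= 0 preserves ordering, lo <= x <= hi holds for every
    admissible disturbance, so leaving [lo, hi] can only mean a fault.
    An additive actuator fault of size 1 is injected at k = fault_at."""
    rng = np.random.default_rng(seed)
    x, lo, hi = 0.0, -1.0, 1.0          # true state and initial bounds
    alarms = []
    for k in range(steps):
        w = rng.uniform(-wbar, wbar)
        f = 1.0 if k >= fault_at else 0.0
        x = a * x + w + f               # plant (with possible fault)
        lo = a * lo - wbar              # interval observer, lower bound
        hi = a * hi + wbar              # interval observer, upper bound
        if not (lo <= x <= hi):
            alarms.append(k)
    return alarms

alarms = simulate()
print(alarms[:3])
```

In the fault-free phase the interval is guaranteed to contain the state, so no residual evaluation function or tuned threshold is needed; the alarm logic is the interval test itself, which is the point the abstract makes.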
Reserves in load capacity assessment of existing bridges
NASA Astrophysics Data System (ADS)
Žitný, Jan; Ryjáček, Pavel
2017-09-01
A high percentage of all railway bridges in the Czech Republic are made of structural steel. The majority of these bridges were designed according to historical codes and, given their deterioration, they have to be assessed to determine whether they satisfy the needs of modern railway traffic. The load capacity assessment of existing bridges according to the Eurocodes is, however, often too conservative; in particular, braking and acceleration forces cause huge problems for structural elements of the bridge superstructure. The aim of this paper is to review the different approaches to the determination of braking and acceleration forces. Both current and historical theoretical models and in-situ measurements are considered. A survey of several European national standards that supplement the Eurocode for the assessment of existing railway bridges shows the great diversity of local approaches and the conservatism of the Eurocode. This paper should also serve as an overview for designers dealing with load capacity assessment, revealing the reserves available for existing bridges. Based on these different approaches, theoretical models and data obtained from measurements, a method for the determination of braking and acceleration forces on the basis of real traffic data is to be proposed.
Object-oriented Approach to High-level Network Monitoring and Management
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2000-01-01
An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. The system is built using the APIs offered by the HP OpenView system. We are investigating methods to build high-level monitoring systems on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former to the latter.
Gao, Fangzheng; Wu, Yuqiang; Zhang, Zhongcai
2015-11-01
This paper investigates the problem of finite-time stabilization by output feedback for a class of nonholonomic systems in chained form with uncertainties. Compared with the existing relevant literature, a distinguishing feature of the systems under investigation is that the x-subsystem is feedforward-like rather than feedback-like. This renders the existing control methods inapplicable to the control problems of these systems. A constructive design procedure for output feedback control is given. The designed controller ensures that the states of the closed-loop system are regulated to zero in finite time. Two simulation examples are provided to illustrate the effectiveness of the proposed approach. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
2011-01-01
Background Existing methods of predicting DNA-binding proteins use valuable features of physicochemical properties to design support vector machine (SVM) based classifiers. Generally, the selection of physicochemical properties and the determination of their corresponding feature vectors rely mainly on known properties of the binding mechanism and the experience of designers. However, a troublesome problem for designers is that some distinct physicochemical properties have similar vectors representing the 20 amino acids, while some closely related properties have dissimilar vectors. Results This study proposes a systematic approach (named Auto-IDPCPs) to automatically identify a set of physicochemical and biochemical properties in the AAindex database to design SVM-based classifiers for predicting and analyzing DNA-binding domains/proteins. Auto-IDPCPs consists of 1) clustering 531 amino acid indices in AAindex into 20 clusters using a fuzzy c-means algorithm, 2) utilizing an efficient genetic-algorithm-based optimization method, IBCGA, to select an informative feature set of size m to represent sequences, and 3) analyzing the selected features to identify related physicochemical properties which may affect the binding mechanism of DNA-binding domains/proteins. The proposed Auto-IDPCPs identified m=22 features of properties belonging to five clusters for predicting DNA-binding domains with a five-fold cross-validation accuracy of 87.12%, which is promising compared with the accuracy of 86.62% of the existing method PSSM-400. For predicting DNA-binding sequences, an accuracy of 75.50% was obtained using m=28 features, where PSSM-400 has an accuracy of 74.22%. Auto-IDPCPs and PSSM-400 have accuracies of 80.73% and 82.81%, respectively, applied to an independent test data set of DNA-binding domains. Some typical physicochemical properties discovered are hydrophobicity, secondary structure, charge, solvent accessibility, polarity, flexibility, normalized Van Der Waals volume, pK (pK-C, pK-N, pK-COOH and pK-a(RCOOH)), etc. Conclusions The proposed approach Auto-IDPCPs would help designers to investigate informative physicochemical and biochemical properties by considering both prediction accuracy and analysis of the binding mechanism simultaneously. The approach is also applicable to predicting and analyzing other protein functions from sequences. PMID:21342579
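Step 1) of Auto-IDPCPs, fuzzy c-means clustering of amino-acid index vectors, can be sketched as below. The two well-separated synthetic clusters stand in for the 531 AAindex vectors, and this is an illustration of the algorithm, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate fuzzy-membership updates and
    weighted centroid updates for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # rows sum to 1
    for _ in range(iters):
        centers = (U ** m).T @ X / (U ** m).sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two synthetic "index vector" clusters standing in for AAindex entries
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])

U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
print(labels)
```

Unlike hard k-means, each index keeps a graded membership in every cluster, which suits the abstract's observation that distinct properties can have similar representation vectors.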
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
Realtime system for GLAS on WHT
NASA Astrophysics Data System (ADS)
Skvarč, Jure; Tulloch, Simon; Myers, Richard M.
2006-06-01
The new ground layer adaptive optics system (GLAS) on the William Herschel Telescope (WHT) on La Palma will be based on the existing natural guide star adaptive optics system, NAOMI. Part of the new development is a new control system for the tip-tilt mirror. Instead of the existing system, built around a custom-built multiprocessor computer made of C40 DSPs, this system uses an ordinary PC running the Linux operating system. It is equipped with a high-sensitivity L3 CCD camera with an effective readout noise of nearly zero. The software for the tip-tilt system is being completely redeveloped in order to make use of object-oriented design, which should facilitate easier integration with the rest of the observing system at the WHT. The modular design of the system allows the incorporation of different centroiding and loop-control methods. To test the system off-sky, we have built a laboratory bench using an artificial light source and a tip-tilt mirror. We present results on tip-tilt correction quality using different centroiding algorithms and different control-loop methods at different light levels. This system will serve as a testing ground for a transition to a completely PC-based real-time control system.
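The simplest centroiding algorithm such a tip-tilt loop can use is the thresholded centre of gravity. A sketch on a synthetic guide-star spot (not the GLAS code, and with an arbitrary spot position and width) is:

```python
import numpy as np

def cog_centroid(img, threshold=0.0):
    """Thresholded centre-of-gravity centroid: subtract a background
    threshold, clip negatives, and take the intensity-weighted mean
    pixel position in x and y."""
    img = np.clip(img - threshold, 0.0, None)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian guide-star spot centred at (12.3, 7.6) on a
# 32x24 frame, sigma = 2 pixels (illustrative values)
ys, xs = np.mgrid[0:24, 0:32]
spot = np.exp(-(((xs - 12.3) ** 2) + ((ys - 7.6) ** 2)) / (2 * 2.0 ** 2))

cx, cy = cog_centroid(spot)
print(round(cx, 2), round(cy, 2))
```

At low light levels the threshold (and alternatives such as weighted or correlation centroiding) matters a great deal, which is presumably why the bench compares algorithms across light levels.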
Protocols for the Design of Kinase-focused Compound Libraries.
Jacoby, Edgar; Wroblowski, Berthold; Buyck, Christophe; Neefs, Jean-Marc; Meyer, Christophe; Cummings, Maxwell D; van Vlijmen, Herman
2018-05-01
Protocols for the design of kinase-focused compound libraries are presented. Kinase-focused compound libraries can be differentiated based on the design goal. Depending on whether the library should be a discovery library specific for one particular kinase, a general discovery library for multiple distinct kinase projects, or even phenotypic screening, there exists today a variety of in silico methods to design candidate compound libraries. We address the following scenarios: 1) Datamining of SAR databases and kinase focused vendor catalogues; 2) Predictions and virtual screening; 3) Structure-based design of combinatorial kinase inhibitors; 4) Design of covalent kinase inhibitors; 5) Design of macrocyclic kinase inhibitors; and 6) Design of allosteric kinase inhibitors and activators. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.
2007-01-01
Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) will demonstrate the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it will be shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.
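A canonical example of the relaxation described above is SIMP-style density interpolation, in which the binary material indicator is replaced by a continuous density and intermediate values are penalized through a power law. The sketch below is the standard textbook form with assumed constants, not the control-point scheme of the TP:

```python
import numpy as np

def simp_stiffness(rho, E0=1.0, p=3.0, eps=1e-3):
    """SIMP-style relaxation: binary material presence is replaced by a
    continuous density rho in [eps, 1], and the element stiffness is
    interpolated as E(rho) = eps + rho**p * (E0 - eps). For p > 1,
    intermediate densities give little stiffness per unit material,
    which drives gradient-based optimizers toward 0/1 designs."""
    rho = np.clip(rho, eps, 1.0)
    return eps + rho ** p * (E0 - eps)

rho = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(np.round(simp_stiffness(rho, p=1.0), 3))  # unpenalized: linear
print(np.round(simp_stiffness(rho, p=3.0), 3))  # penalized power law
```

The small lower bound `eps` keeps the stiffness matrix nonsingular where material vanishes, which is one reason relaxation also helps guarantee the existence of solutions.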
Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Anderson, W. Kyle
2000-01-01
Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.
2017-03-20
computation, Prime Implicates, Boolean Abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model...types for time-dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design.
Future needs for biomedical transducers
NASA Technical Reports Server (NTRS)
Wooten, F. T.
1971-01-01
In summary, there are three major classes of transducer improvements required: improvements in existing transducers, exploitation of currently unexploited physical-science phenomena in transducer design, and utilization of currently unutilized physiological phenomena in transducer design. During the next decade, increasing emphasis will be placed on noninvasive measurement in all of these areas. Patient safety, patient comfort, and the need for efficient utilization of the time of both patient and physician require that noninvasive methods of monitoring be developed.
Methamphetamine Vaccines: Improvement through Hapten Design.
Collins, Karen C; Schlosburg, Joel E; Bremer, Paul T; Janda, Kim D
2016-04-28
Methamphetamine (MA) addiction is a serious public health problem, and current methods to abate addiction and relapse are currently ineffective for mitigating this growing global epidemic. Development of a vaccine targeting MA would provide a complementary strategy to existing behavioral therapies, but this has proven challenging. Herein, we describe optimization of both hapten design and formulation, identifying a vaccine that elicited a robust anti-MA immune response in mice, decreasing methamphetamine-induced locomotor activity.
NASA Technical Reports Server (NTRS)
1982-01-01
Philadelphia Gear Corporation used two COSMIC computer programs, one dealing with shrink-fit analysis and the other with rotor dynamics problems, in computerized design and test work. The programs were used to verify existing in-house programs and to ensure design accuracy by checking the company's own computer methods against procedures developed by other organizations. Its specialty is custom units for unique applications, such as Coast Guard ice-breaking ships, steel mill drives, coal crushers, sewage treatment equipment and electricity.
Investigation into the development of computer aided design software for space based sensors
NASA Technical Reports Server (NTRS)
Pender, C. W.; Clark, W. L.
1987-01-01
The described effort is phase one of the development of Computer Aided Design (CAD) software to be used to perform radiometric sensor design. The software package, referred to as SCAD, is directed toward the preliminary phase of the design of space-based sensor systems. The approach being followed is to develop a modern, graphics-intensive, user-friendly software package using existing software as building blocks. The emphasis is directed toward the development of a shell containing menus, smart defaults, and interfaces that can accommodate a wide variety of existing application software packages. The shell will offer expected utilities such as graphics, tailored menus, and a variety of drivers for I/O devices. Following the development of the shell, the development of SCAD is planned chiefly as the selection and integration of appropriate building blocks. The phase-one development activities have included: the selection of hardware to be used with SCAD; the determination of the scope of SCAD; the preliminary evaluation of a number of software packages for applicability to SCAD; the determination of a method for achieving required capabilities where voids exist; and the establishment of a strategy for binding the software modules into an easy-to-use tool kit.
Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU
NASA Astrophysics Data System (ADS)
Ciarleglio, Constance A.
Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, is complex and computationally demanding, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented in which the theory of the method is verified and its error characterized. The method is then implemented on an existing manipulator as a verification of functionality.
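One ingredient of such screw-theory identification can be sketched simply: while a single revolute joint rotates, every angular-velocity sample from the IMU is parallel to that joint's axis, so the axis can be recovered (up to sign) as the dominant singular direction of the samples. The snippet below uses synthetic gyro data with an assumed axis and noise level, and is an illustration of this idea rather than the thesis implementation:

```python
import numpy as np

def joint_axis_from_gyro(omegas):
    """Rotating one revolute joint makes each angular-velocity sample
    parallel to the joint axis; recover that axis (up to sign) as the
    dominant right-singular vector of the stacked sample matrix."""
    _, _, vt = np.linalg.svd(np.asarray(omegas))
    return vt[0]

# Synthetic gyro samples: fixed unit axis u, varying joint speed,
# small additive sensor noise (all values illustrative)
rng = np.random.default_rng(3)
u = np.array([1.0, 2.0, 2.0]) / 3.0
rates = rng.uniform(0.5, 2.0, size=200)
omegas = rates[:, None] * u + 0.01 * rng.normal(size=(200, 3))

est = joint_axis_from_gyro(omegas)
est *= np.sign(est @ u)          # resolve the sign ambiguity
print(np.round(est, 3))
```

Repeating this joint by joint, together with accelerometer data to locate each axis in space, yields the screw parameters needed to build the kinematic model without a-priori link dimensions.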
IPAC-Inlet Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.
The Impact of Early Design Phase Risk Identification Biases on Space System Project Performance
NASA Technical Reports Server (NTRS)
Reeves, John D., Jr.; Eveleigh, Tim; Holzer, Thomas; Sarkani, Shahryar
2012-01-01
Risk identification during the early design phases of complex systems is commonly implemented but often fails to result in the identification of events and circumstances that truly challenge project performance. Inefficiencies in cost and schedule estimation are usually held accountable for cost and schedule overruns, but the true root cause is often the realization of programmatic risks. A deeper understanding of frequent risk identification trends and biases pervasive during space system design and development is needed, for it would lead to improved execution of existing identification processes and methods.
Design and realization of tourism spatial decision support system based on GIS
NASA Astrophysics Data System (ADS)
Ma, Zhangbao; Qi, Qingwen; Xu, Li
2008-10-01
In this paper, the existing problems of current tourism management information systems are analyzed. GIS, tourism and spatial decision support systems are introduced, and the application of geographic information system technology and spatial decision support systems to tourism management, together with the establishment of a GIS-based tourism spatial decision support system, is proposed. The total system structure, hardware and software environment, database design and module structure of this system are introduced. Finally, the realization methods of the system's core functions are elaborated.
Reconstructing metastatic seeding patterns of human cancers
Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.
2017-01-01
Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25-800 × ) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641
Analysis of Genome-Wide Association Studies with Multiple Outcomes Using Penalization
Liu, Jin; Huang, Jian; Ma, Shuangge
2012-01-01
Genome-wide association studies have been extensively conducted, searching for markers for biologically meaningful outcomes and phenotypes. Penalization methods have been adopted in the analysis of the joint effects of a large number of SNPs (single nucleotide polymorphisms) and marker identification. This study is partly motivated by the analysis of heterogeneous stock mice dataset, in which multiple correlated phenotypes and a large number of SNPs are available. Existing penalization methods designed to analyze a single response variable cannot accommodate the correlation among multiple response variables. With multiple response variables sharing the same set of markers, joint modeling is first employed to accommodate the correlation. The group Lasso approach is adopted to select markers associated with all the outcome variables. An efficient computational algorithm is developed. Simulation study and analysis of the heterogeneous stock mice dataset show that the proposed method can outperform existing penalization methods. PMID:23272092
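The group-wise selection at the heart of the group Lasso approach described above is captured by its proximal operator, which shrinks or zeroes each marker's coefficient block jointly across all outcomes. A sketch with a made-up two-group coefficient vector (three phenotypes per SNP, not the mice data) is:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty: each group of
    coefficients is shrunk toward zero jointly, so a marker (group) is
    kept or dropped for all response variables at once."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * beta[g]
    return out

# One strong "SNP" and one weak one, each with effects on 3 phenotypes
beta = np.array([0.9, 1.2, 0.8, 0.05, -0.02, 0.03])
groups = [[0, 1, 2], [3, 4, 5]]
print(group_soft_threshold(beta, groups, lam=0.3))
```

Iterating this operator inside a proximal-gradient loop over the joint least-squares loss gives a basic group-lasso solver: the weak group is removed for every phenotype simultaneously, which is exactly the shared-marker selection the abstract describes.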
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays to the equivalent of about 1.25 2-D FTs of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency is improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
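The synthesis step can be sketched in a few lines of NumPy: each hologram is multiplied by a tilted plane wave (here, a discrete frequency offset) and the results are summed, so that a single FFT of the composite separates the four spectra into different regions of frequency space. The frame size, tilt frequencies and flat synthetic holograms below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def spatially_multiplex(holograms, tilts):
    """Multiply each recorded hologram by a tilted plane wave and sum,
    so one Fourier transform of the composite carries each hologram's
    spectrum shifted to its own (fx, fy) offset in frequency space."""
    n = holograms[0].shape[0]
    y, x = np.mgrid[0:n, 0:n]
    sm = np.zeros((n, n), dtype=complex)
    for h, (fx, fy) in zip(holograms, tilts):
        sm += h * np.exp(2j * np.pi * (fx * x + fy * y) / n)
    return sm

n = 64
rng = np.random.default_rng(0)
# Flat synthetic "holograms" with a strong DC term, so each contributes
# a single bright peak in the composite spectrum (illustrative only)
holograms = [1 + 0.1 * rng.random((n, n)) for _ in range(4)]
tilts = [(16, 16), (-16, 16), (16, -16), (-16, -16)]

sm = spatially_multiplex(holograms, tilts)
spectrum = np.fft.fftshift(np.fft.fft2(sm))
print(spectrum.shape)
```

One FFT of the composite replaces four separate FFTs; the four DC peaks land in the four quadrants of the shifted spectrum, from which each hologram's sideband could be windowed out and reconstructed.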
NASA Astrophysics Data System (ADS)
Matovnikov, Sergei; Matovnikova, Natalia; Samoylenko, Polina
2018-03-01
The paper considers the issues of designing a modern courtyard space for high-rise buildings in Volgograd, aiming to obtain a multifunctional environment through the arrangement of new recreational territories and the search for innovative planning methods in urban landscape design. In professionals' opinion, the design and construction of recreational zones and greenery planting is a very acute problem for Volgograd; such territories are absent in many districts of the city. In general, a decline in the natural component and a low level of improvement of recreational territories are typical for Volgograd. In addition, the problem of designing a modern urban courtyard space for high-rise buildings as a multifunctional environment exists and requires thorough investigation. The question is whether these difficult tasks can be solved by local design methods alone, or whether a complex approach is needed at the stage of forming master plans for modern residential areas, and which modern design methods can ensure the creation of a courtyard space as a multifunctional environment. These questions, among others, are the topic of our paper.
Optimization of monopiles for offshore wind turbines.
Kallehave, Dan; Byrne, Byron W; LeBlanc Thilsted, Christian; Mikkelsen, Kristian Kousgaard
2015-02-28
The offshore wind industry currently relies on subsidy schemes to be competitive with fossil-fuel-based energy sources. For the wind industry to survive, it is vital that costs are significantly reduced for future projects. This can be partly achieved by introducing new technologies and partly through optimization of existing technologies and design methods. One of the areas where costs can be reduced is in the support structure, where better designs, cheaper fabrication and quicker installation might all be possible. The prevailing support structure design is the monopile structure, where the simple design is well suited to mass-fabrication, and the installation approach, based on conventional impact driving, is relatively low-risk and robust for most soil conditions. The range of application of the monopile for future wind farms can be extended by using more accurate engineering design methods, specifically tailored to offshore wind industry design. This paper describes how state-of-the-art optimization approaches are applied to the design of current wind farms and monopile support structures and identifies the main drivers where more accurate engineering methods could impact on a next generation of highly optimized monopiles. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Human-Automation Allocations for Current Robotic Space Operations
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.
2018-01-01
Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance of ineffective user interfaces, system designs, and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI is related to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocation determines human-system performance: it identifies the functions and performance levels required of the automation/robotic system and, in turn, what work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account the capabilities and limitations of each of the human, automation, and robotic systems. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will change significantly in future exploration missions, particularly as crews become more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocation, we must first benchmark the allocations and allocation methods that have been used.
We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) existing lessons learned and best practices in these role assignments, gathered from the spaceflight operational experience of crew and ground teams, that may be used to guide development of future systems. NASA and other space agencies have operational spaceflight experience with two key Human-Automation-Robotic (HAR) systems: heavy lift robotic arms and planetary robotic explorers. Additionally, NASA has invested in high-fidelity rover systems that can carry crew, building beyond Apollo's lunar rover. The heavy lift robotic arms reviewed are: the Space Station Remote Manipulator System (SSRMS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the European Robotic Arm (ERA, designed but not deployed in space). The robotic rover systems reviewed are: the Mars Exploration Rovers, the Mars Science Laboratory rover, and the high-fidelity K10 rovers. Much of the design and operational feedback for these systems has been communicated to flight controllers and robotic design teams. As part of mitigating the HARI risk for future human spaceflight operations, we must document function allocations between robots and humans that have worked well in practice.
Efficient Design and Analysis of Lightweight Reinforced Core Sandwich and PRSEUS Structures
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Yarrington, Phillip W.; Lucking, Ryan C.; Collier, Craig S.; Ainsworth, James J.; Toubia, Elias A.
2012-01-01
Design, analysis, and sizing methods for two novel structural panel concepts have been developed and incorporated into the HyperSizer Structural Sizing Software. Reinforced Core Sandwich (RCS) panels consist of a foam core with reinforcing composite webs connecting composite facesheets. Boeing's Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) panels use a pultruded unidirectional composite rod to provide axial stiffness along with integrated transverse frames and stitching. Both of these structural concepts are oven-cured and have shown great promise for applications in lightweight structures, but have suffered from the lack of efficient sizing capabilities like those that exist for honeycomb sandwich, foam sandwich, hat-stiffened, and other, more traditional concepts. Now, with accurate design methods for RCS and PRSEUS panels available in HyperSizer, these concepts can be traded and used in designs as is done with the more traditional structural concepts. The methods developed to enable sizing of RCS and PRSEUS are outlined, as are results showing the validity and utility of the methods. Applications include several large NASA heavy-lift launch vehicle structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Brett D.; Wilson, John E.; Hathaway, J.
2008-02-12
Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for distinguishing areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are demonstrated.
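The detection idea, flagging areas whose anomaly density exceeds background, can be illustrated with a simple Poisson exceedance threshold; the rule, rates, and counts below are invented for illustration and are not VSP's actual statistics:

```python
import math

# Toy flagging rule: with background anomaly rate lam per transect
# segment, flag a segment whose count exceeds the largest count still
# plausible (at level alpha) under background alone.
def poisson_threshold(lam, alpha=0.01):
    # Smallest c such that P(X > c) < alpha for X ~ Poisson(lam),
    # found by accumulating the probability mass term by term.
    c, cdf, p = 0, 0.0, math.exp(-lam)
    while cdf + p < 1 - alpha:
        cdf += p
        c += 1
        p *= lam / c
    return c

background = 2.0             # hypothetical mean anomalies per segment
counts = [1, 3, 2, 9, 0, 2]  # hypothetical observed counts per segment
thresh = poisson_threshold(background)
flags = [n > thresh for n in counts]
print(thresh, flags)
```

Only the segment with 9 anomalies exceeds the background threshold, so only it would be flagged as a potential target area under this toy rule.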
Prediction of anthropometric accommodation in aircraft cockpits
NASA Astrophysics Data System (ADS)
Zehner, Gregory Franklin
Designing aircraft cockpits to accommodate the wide range of body sizes existing in the U.S. population has always been a difficult problem for Crewstation Engineers. The approach taken in the design of military aircraft has been to restrict the range of body sizes allowed into flight training, and then to develop standards and specifications to ensure that the majority of the pilots are accommodated. Accommodation in this instance is defined as the ability to: (1) Adequately see, reach, and actuate controls; (2) Have external visual fields so that the pilot can see to land, clear for other aircraft, and perform a wide variety of missions (ground support/attack or air to air combat); and (3) Finally, if problems arise, the pilot has to be able to escape safely. Each of these areas is directly affected by the body size of the pilot. Unfortunately, accommodation problems persist and may get worse. Currently the USAF is considering relaxing body size entrance requirements so that smaller and larger people could become pilots. This will make existing accommodation problems much worse. This dissertation describes a methodology for correcting this problem and demonstrates the method by predicting pilot fit and performance in the USAF T-38A aircraft based on anthropometric data. The methods described can be applied to a variety of design applications where fitting the human operator into a system is a major concern. A systematic approach is described which includes: defining the user population, setting functional requirements that operators must be able to perform, testing the ability of the user population to perform the functional requirements, and developing predictive equations for selecting future users of the system. Also described is a process for the development of new anthropometric design criteria and cockpit design methods that assure body size accommodation is improved in the future.
2014-12-01
…agreements (BPA), and through existing indefinite delivery and indefinite quantity (IDIQ) contracts. These types of procurement methods have less visibility…
Soil nailing of a bridge embankment : report 2 : design and field performance report.
DOT National Transportation Integrated Search
1995-07-01
Soil nailing has recently been introduced in Oregon as an alternative lateral earth support method. The first permanent soil nail wall on the state's highway system was used where an underpass was widened under the existing Oregon Slough Bridge in Po...
OPTIMIZING POTENTIAL GREEN REPLACEMENT CHEMICALS – BALANCING FUNCTION AND RISK
An important focus of green chemistry is the design of new chemicals that are inherently less toxic than the ones they might replace, but still retain required functional properties. A variety of methods exist to measure or model both functional and toxicity surrogates that could...
UV-TUBE DESIGN CONCEPT FOR SUSTAINABLE, POINT-OF-USE WATER DISINFECTION
The 2002 World Health Organization (WHO) report on worldwide mortality indicates that waterborne illnesses associated with unsafe drinking water and poor sanitation are still a major cause of death in the developing world. Although a variety of methods exist for treating dr...
Energy Efficiency for Building Construction Technology.
ERIC Educational Resources Information Center
Scharmann, Larry, Ed.
Intended primarily but not solely for use at the postsecondary level, this curriculum guide contains five units of materials on energy efficiency that were designed to be incorporated into an existing program in building construction. The following topics are examined: conservation measures (residential energy use and methods for reducing…
Energy Efficiency for Electrical Technology.
ERIC Educational Resources Information Center
Scharmann, Larry, Ed.
Intended primarily but not solely for use at the postsecondary level, this curriculum guide contains five units on energy efficiency that were designed to be incorporated into an existing program in electrical technology. The following topics are examined: where to look for energy waste; conservation methods for electrical consumers, for…
ERIC Educational Resources Information Center
Wurman, Richard Saul
Explored in this special issue of "Design Quarterly" are some of the existing data systems which describe, in visual terms, various urban entities: transportation systems, roadways, public buildings, land patterns, historical structures, as well as some new methods for developing physical information that will be widely used in the future. Through…
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2014-11-01
In this paper we consider multi-frequency highly oscillatory second-order differential equations x''(t) + Mx(t) = f(t, x(t), x'(t)), where the high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). It is known that Filon-type methods are an effective approach to numerically solving highly oscillatory problems. Unfortunately, however, existing Filon-type asymptotic methods fail to apply to highly oscillatory second-order differential equations when M is singular. We study and propose an efficient improvement of the existing Filon-type asymptotic methods, so that the improved methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method which can be performed at lower cost. Accompanying numerical results show the remarkable efficiency.
Developing an Engineering Design Process Assessment using Mixed Methods.
Wind, Stefanie A; Alemdar, Meltem; Lingle, Jeremy A; Gale, Jessica D; Moore, Roxanne A
Recent reforms in science education worldwide include an emphasis on engineering design as a key component of student proficiency in the Science, Technology, Engineering, and Mathematics disciplines. However, relatively little attention has been directed to the development of psychometrically sound assessments for engineering. This study demonstrates the use of mixed methods to guide the development and revision of K-12 Engineering Design Process (EDP) assessment items. Using results from a middle-school EDP assessment, this study illustrates the combination of quantitative and qualitative techniques to inform item development and revisions. Overall conclusions suggest that the combination of quantitative and qualitative evidence provides an in-depth picture of item quality that can be used to inform the revision and development of EDP assessment items. Researchers and practitioners can use the methods illustrated here to gather validity evidence to support the interpretation and use of new and existing assessments.
Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.
A comprehensive method for preliminary design optimization of axial gas turbine stages
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1982-01-01
A method is presented that performs a rapid, reasonably accurate preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; (3) predictions of expected turbine performance. The method uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with four existing single stage turbines.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
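The underlying idea, choosing design points where model predictions disagree most across a bounded parameter space rather than around a point estimate, can be caricatured in a few lines; the decay model, parameter bounds, and variance criterion below are invented for illustration and stand in for the paper's sparse-grid and scenario-tree machinery:

```python
import numpy as np

# Sample the rate constant from a bounded uncertain range instead of
# using a single initial estimate.
rng = np.random.default_rng(0)
k = rng.uniform(0.1, 1.0, size=200)       # bounded uncertain parameter
times = np.linspace(0.1, 10.0, 50)        # candidate design points

# Hypothetical model output y(t; k) = exp(-k t) for every parameter
# sample at every candidate measurement time.
y = np.exp(-np.outer(k, times))           # shape (200, 50)

# An informative measurement time is one where the predictive spread
# across parameter samples is largest: measuring there constrains the
# model output uncertainty the most.
best_t = times[np.argmax(y.std(axis=0))]
print(round(float(best_t), 2))
```

A real parallel design would repeat this selection over scenario trees and multiple simultaneous design points; the sketch shows only the variance-driven selection step.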
Defense Small Business Innovation Research Program (SBIR) FY 1984.
1984-01-12
…nuclear submarine non-metallic, lightweight, high-strength piping. Includes the development of adequate fabrication procedures for attaching pipe… waste heat economizer methods require development. Improved conventional and hybrid heat pipes and/or two-phase transport devices are required… DESCRIPTION: A need exists to conceive, design, fabricate, and test a method of adjusting the length of the individual legs of nylon or Kevlar rope slings.
Koehler Leman, Julia; Bonneau, Richard
2018-04-03
Membrane proteins composed of soluble and membrane domains are often studied one domain at a time. However, to understand the biological function of entire protein systems and their interactions with each other and drugs, knowledge of full-length structures or models is required. Although few computational methods exist that could potentially be used to model full-length constructs of membrane proteins, none of these methods are perfectly suited for the problem at hand. Existing methods require an interface or knowledge of the relative orientations of the domains or are not designed for domain assembly, and none of them are developed for membrane proteins. Here we describe the first domain assembly protocol specifically designed for membrane proteins that assembles intra- and extracellular soluble domains and the transmembrane domain into models of the full-length membrane protein. Our protocol does not require an interface between the domains and samples possible domain orientations based on backbone dihedrals in the flexible linker regions, created via fragment insertion, while keeping the transmembrane domain fixed in the membrane. For five examples tested, our method mp_domain_assembly, implemented in RosettaMP, samples domain orientations close to the known structure and is best used in conjunction with experimental data to reduce the conformational search space.
Passive Magnetic Bearing With Ferrofluid Stabilization
NASA Technical Reports Server (NTRS)
Jansen, Ralph; DiRusso, Eliseo
1996-01-01
A new class of magnetic bearings is shown analytically to exist and is demonstrated experimentally. This class of magnetic bearings utilizes a ferrofluid/solid-magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs are evaluated. Two types of radial bearing designs are tested to determine their force and stiffness using two methods. The first method uses frequency measurements to determine stiffness via an analytical model. The second method consists of loading the system and measuring displacement in order to determine stiffness. Two ferrofluid stabilizers are tested and force-displacement curves are measured. Two experimental test fixtures are designed and constructed in order to conduct the stiffness testing. Polynomial models of the data are generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization both experimentally and analytically.
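The first stiffness method rests on a standard relation: for a mass on a linear spring, the natural frequency satisfies f = (1/2π)√(k/m), so a measured resonance yields k = m(2πf)². A sketch with made-up numbers (the actual rig's mass and measured frequency are not given in the abstract):

```python
import math

# Infer axial stiffness from a measured resonance frequency, assuming
# a single-degree-of-freedom mass-spring model of the levitated rotor.
m = 0.25   # suspended mass in kg (hypothetical)
f = 40.0   # measured axial resonance in Hz (hypothetical)

# f = (1/2*pi) * sqrt(k/m)  =>  k = m * (2*pi*f)**2
k = m * (2 * math.pi * f) ** 2
print(round(k, 1), "N/m")
```

The second method (load vs. displacement) would instead fit the slope of the measured force-displacement curve, which is why the two approaches serve as independent checks on each other.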
Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang
2017-01-01
Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method that relieves researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We showed that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel could be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.
Hosseini, Marjan; Kerachian, Reza
2017-09-01
This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
Ergonomic study and static analysis for new design of electric scooter
NASA Astrophysics Data System (ADS)
Fadzly, M. K.; Munirah, Anis; Shayfull, Z.; Saad, Mohd Sazli
2017-09-01
The purposes of this project are to design and diversify the functions of a battery-powered scooter frame that is more practical in terms of human factors, ergonomics, and optimum design. The new design is based on ideas drawn from an existing scooter frame, a United States patent design, and a European international patent design. The final concept design for the scooter frame is chosen from the best characteristics of three main alternative ideas, to which a matrix evaluation method is applied. The analysis applied to the frame design, arm, rim, and drivetrain components is based on the Cosmos Express program. In conclusion, the design produced is able to carry the maximum load and has more practical features from an ergonomic viewpoint.
Direct method of design and stress analysis of rotating disks with temperature gradient
NASA Technical Reports Server (NTRS)
Manson, S S
1950-01-01
A method is presented for the determination of the contour of disks, typified by those of aircraft gas turbines, to incorporate arbitrary elastic-stress distributions resulting from either centrifugal or combined centrifugal and thermal effects. The specified stress may be radial, tangential, or any combination of the two. Use is made of the finite-difference approach in solving the stress equations, the amount of computation necessary in the evolution of a design being greatly reduced by the judicious selection of point stations by the aid of a design chart. Use of the charts and of a preselected schedule of point stations is also applied to the direct problem of finding the elastic and plastic stress distribution in disks of a given design, thereby effecting a great reduction in the amount of calculation. Illustrative examples are presented to show computational procedures in the determination of a new design and in analyzing an existing design for elastic stress and for stresses resulting from plastic flow.
Modeling biology using relational databases.
Peitzsch, Robert M
2003-02-01
There are several different methodologies that can be used for designing a database schema; no single one is best for all occasions. This unit demonstrates two different techniques for designing relational tables and discusses when each should be used. The two techniques presented are (1) traditional Entity-Relationship (E-R) modeling and (2) a hybrid method that combines aspects of data warehousing and E-R modeling. The method of choice depends on (1) how well the information and all its inherent relationships are understood, (2) what types of questions will be asked, (3) how many different types of data will be included, and (4) how much data exists.
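As a minimal illustration of the first technique, traditional E-R modeling, the toy schema below (a hypothetical gene/protein example, using Python's built-in sqlite3) represents each entity as a table and the one-to-many relationship "a gene encodes proteins" as a foreign key:

```python
import sqlite3

# Entity tables for a toy biological model; the foreign key in
# `protein` encodes the gene -> protein relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gene (
        gene_id    INTEGER PRIMARY KEY,
        symbol     TEXT NOT NULL,
        chromosome TEXT
    );
    CREATE TABLE protein (
        protein_id INTEGER PRIMARY KEY,
        gene_id    INTEGER NOT NULL REFERENCES gene(gene_id),
        name       TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO gene VALUES (1, 'TP53', '17')")
conn.execute("INSERT INTO protein VALUES (1, 1, 'p53 isoform a')")

# A join recovers the modeled relationship at query time.
row = conn.execute("""
    SELECT gene.symbol, protein.name
    FROM protein JOIN gene USING (gene_id)
""").fetchone()
print(row)  # ('TP53', 'p53 isoform a')
```

The hybrid warehousing approach mentioned in the abstract would instead denormalize some of these relationships into wider fact-style tables to speed up the anticipated queries, which is exactly the trade-off criteria (2)-(4) are meant to capture.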
Recent Progresses in Nanobiosensing for Food Safety Analysis
Yang, Tao; Huang, Huifen; Zhu, Fang; Lin, Qinlu; Zhang, Lin; Liu, Junwen
2016-01-01
With increasing adulteration, food safety analysis has become an important research field. Nanomaterials-based biosensing holds great potential in designing highly sensitive and selective detection strategies necessary for food safety analysis. This review summarizes various function types of nanomaterials, the methods of functionalization of nanomaterials, and recent (2014–present) progress in the design and development of nanobiosensing for the detection of food contaminants including pathogens, toxins, pesticides, antibiotics, metal contaminants, and other analytes, which are sub-classified according to various recognition methods of each analyte. The existing shortcomings and future perspectives of the rapidly growing field of nanobiosensing addressing food safety issues are also discussed briefly. PMID:27447636
Laser penetration spike welding: a welding tool enabling novel process and design opportunities
NASA Astrophysics Data System (ADS)
Dijken, Durandus K.; Hoving, Willem; De Hosson, J. Th. M.
2002-06-01
A novel method of laser welding for sheet metal is presented. This laser spike welding method is capable of bridging large gaps between sheet-metal plates. Novel constructions can be designed and manufactured; examples are lightweight metal-epoxy multi-layers and constructions having additional strength with respect to rigidity and impact resistance. The capability to bridge large gaps allows higher dimensional tolerances in production. The required laser systems are commercially available and easily implemented in existing production lines. The lasers are highly reliable, the resulting spike welds are quickly realized, and the cost price per weld is very low.
Lake bed classification using acoustic data
Yin, Karen K.; Li, Xing; Bonde, John; Richards, Carl; Cholwek, Gary
1998-01-01
As part of our effort to identify lake bed surficial substrates from remote sensing data, this work designs pattern classifiers using multivariate statistical methods. The probability distribution of the preprocessed acoustic signal is analyzed first. A confidence region approach is then adopted to improve the design of the existing classifier. A technique for further isolation is proposed which minimizes the expected loss from misclassification. The devices constructed are applicable to real-time lake bed categorization. A minimax approach is suggested to treat more general cases where the a priori probability distribution of the substrate types is unknown. Comparison of the suggested methods with traditional likelihood ratio tests is discussed.
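The likelihood-based classification baseline can be reduced to a toy example; the 1-D Gaussian feature model, substrate names, and parameters below are invented for illustration and are far simpler than the multivariate classifiers the paper describes:

```python
# Minimal likelihood-based classifier for two substrate classes,
# assuming (hypothetically) a 1-D Gaussian acoustic feature with a
# known per-class mean and a shared variance estimated from training
# pings.
mu = {"sand": 0.0, "rock": 2.0}
sigma = 1.0

def log_likelihood(x, m):
    # Log-density of the Gaussian model up to a constant shared by
    # both classes, which cancels in the comparison.
    return -0.5 * ((x - m) / sigma) ** 2

def classify(x):
    # Equal-prior likelihood ratio test: pick the class whose model
    # better explains the observed echo feature x.
    return max(mu, key=lambda c: log_likelihood(x, mu[c]))

print(classify(-0.3))  # feature near the 'sand' mean
print(classify(2.4))   # feature near the 'rock' mean
```

The paper's confidence-region and minimax refinements modify exactly this decision rule: the former withholds a decision when the feature falls outside every class's confidence region, and the latter drops the equal-prior assumption.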
MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR
Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo
2015-01-01
Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. This is especially inconvenient for quantitative PCR (qPCR), where many target sequences must satisfy the same stringent, allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we show that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350
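As an illustration of the kind of single-primer filtering constraints such pipelines check, a generic sketch follows. This is not MRPrimer's actual code; the thresholds and the Wallace melting-temperature rule are common defaults chosen purely for illustration.

```python
# Illustrative sketch (not MRPrimer's actual filtering code): typical
# single-primer constraints such as length, GC content, and a simple
# Wallace-rule melting temperature.

def gc_content(primer):
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    # Tm ~ 2*(A+T) + 4*(G+C), a rough rule of thumb for short oligos
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def passes_filters(primer, min_len=18, max_len=24,
                   gc_lo=0.4, gc_hi=0.6, tm_lo=50, tm_hi=65):
    return (min_len <= len(primer) <= max_len
            and gc_lo <= gc_content(primer) <= gc_hi
            and tm_lo <= wallace_tm(primer) <= tm_hi)

print(passes_filters("ATGCGTACCTGAGTCAGTCA"))  # True: 50% GC, Tm = 60
print(passes_filters("ATATATATATATATATATAT"))  # False: 0% GC
```

A full pipeline would apply such filters to every candidate subsequence of every target, then add pair-level constraints and homology tests, which is the part MRPrimer parallelizes with MapReduce.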
Experiments to evolve toward a tangible user interface for computer-aided design parts assembly
NASA Astrophysics Data System (ADS)
Legardeur, Jeremy; Garreau, Ludovic; Couture, Nadine
2004-05-01
In this paper, we present the concepts of the ESKUA (Experimentation of a Kinesics System Usable for Assembly) platform, which allows designers to carry out the assembly of mechanical CAD (Computer Aided Design) parts. This platform, based on a tangible user interface, allows assembly constraints to be taken into account from the beginning of the design phase, and especially during the manipulation of CAD models. Our goal is to propose a working environment in which the designer is confronted with real assembly constraints that are currently masked by existing CAD software functionalities. The platform is thus based on the handling of physical objects, called tangible interactors, which give a physical perception of the assembly constraints. To this end, we have defined a typology of interactors based on concepts proposed in Design for Assembly methods. We present here the results of studies that led to the evolution of this first interactor set: an experiment evaluating the cognitive aspects of interactor use, and an analysis of existing mechanical products and fasteners. We show how these studies guide the evolution of the interactors based on the use of functional surfaces.
Dynamically Evolving Sectors for Convective Weather Impact
NASA Technical Reports Server (NTRS)
Drew, Michael C.
2010-01-01
A new strategy for altering existing sector boundaries in response to blocking convective weather is presented. This method seeks to improve the reduced capacity of sectors directly affected by weather by moving boundaries in a direction that offers the greatest capacity improvement. The boundary deformations are shared by neighboring sectors within the region in a manner that preserves their shapes and sizes as much as possible. This reduces the controller workload involved with learning new sector designs. The algorithm that produces the altered sectors is based on a force-deflection mesh model that needs only nominal traffic patterns and the shape of the blocking weather for input. It does not require weather-affected traffic patterns that would have to be predicted by simulation. When compared to an existing optimal sector design method, the sectors produced by the new algorithm are more similar to the original sector shapes, resulting in sectors that may be more suitable for operational use because the change is not as drastic. Also, preliminary results show that this method produces sectors that can equitably distribute the workload of rerouted weather-affected traffic throughout the region where inclement weather is present. This is demonstrated by sector aircraft count distributions of simulated traffic in weather-affected regions.
Analog self-powered harvester achieving switching pause control to increase harvested energy
NASA Astrophysics Data System (ADS)
Makihara, Kanjuro; Asahina, Kei
2017-05-01
In this paper, we propose a self-powered analog controller circuit to increase the efficiency of electrical energy harvesting from vibrational energy using piezoelectric materials. Although the existing synchronized switch harvesting on inductor (SSHI) method is designed to produce efficient harvesting, its switching operation generates a vibration-suppression effect that reduces the harvested levels of electrical energy. To solve this problem, the authors proposed, in a previous paper, a switching method that takes this vibration-suppression effect into account. This method temporarily pauses the switching operation, allowing the recovery of the mechanical displacement and, therefore, of the piezoelectric voltage. In this paper, we propose a self-powered analog circuit to implement this switching control method. Self-powered vibration harvesting is achieved by attaching a newly designed circuit to an existing analog controller for SSHI. This circuit implements the new switching control strategy, in which switching is paused at some vibration peaks to allow motion recovery and a consequent increase in the harvested energy. Harvesting experiments performed with the proposed circuit reveal that the method can increase the energy stored in the storage capacitor by a factor of 8.5 relative to the conventional SSHI circuit. The proposed technique is useful for increasing the harvested energy, especially for piezoelectric systems with large coupling factors.
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis, as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid when used under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
29 CFR 1926.450 - Scope, application and definitions applicable to this subpart.
Code of Federal Regulations, 2010 CFR
2010-07-01
... one who is capable of identifying existing and predictable hazards in the surroundings or working... locking together the tubes of a tube and coupler scaffold. Crawling board (chicken ladder) means a... alternative designs, materials or methods to protect against a hazard which the employer can demonstrate will...
A Design To Improve Children's Competencies in Solving Mathematical Word Problems.
ERIC Educational Resources Information Center
Zimmerman, Helene
A discrepancy exists between children's ability to compute and their ability to solve mathematical word problems. The literature suggests a variety of methods that have been attempted to improve this skill with varying success. The utilization of manipulatives, visualization, illustration, and emphasis on improving listening skills all were…
A Salamander Tale: Effective Exhibits and Attitude Change
ERIC Educational Resources Information Center
Rollins, Jeffrey; Watson, Sunnie Lee
2017-01-01
Little information exists regarding intention behind the design and development of Extension outreach and educational exhibits. An evaluation of response to the exhibit "A Salamander Tale" indicates that the methods used to develop the exhibit resulted in an effective way to present information to an adult audience. Survey questions were…
Power Pedagogy: Integrating Technology in the Classroom.
ERIC Educational Resources Information Center
Juliano, Benjoe A.
Connectivity on the Internet through the use of World Wide Web browsers is becoming commonplace in the classroom, at home, and in the office. The term, "power pedagogy" refers to any set of instructional methods designed to increase faculty productivity and to accommodate more students with existing facilities. This paper examines the…
Getting started with package sampSurf
Jeffrey H. Gove
2014-01-01
The sampSurf package is designed to facilitate the comparison of new and existing areal sampling methods through simulation. The package is thoroughly documented in several vignettes as mentioned below. This document is meant to point you in the right direction in finding the needed information to get started using sampSurf.
Global Climates--Past, Present, and Future. Activities for Integrated Science Education.
ERIC Educational Resources Information Center
Henderson, Sandra, Ed.; And Others
Designed for integration into existing science curriculum for grades 8-10, this curriculum uses a current environmental issue, climate change, as a vehicle for teaching science education. Instructional goals include: (1) familiarize students with scientific methods; (2) help students understand the role of uncertainty; (3) encourage students to…
The Impact of Missing Background Data on Subpopulation Estimation
ERIC Educational Resources Information Center
Rutkowski, Leslie
2011-01-01
Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…
Transporting Radioactive Waste: An Engineering Activity. Grades 5-12.
ERIC Educational Resources Information Center
HAZWRAP, The Hazardous Waste Remedial Actions Program.
This brochure contains an engineering activity for upper elementary, middle school, and high school students that examines the transportation of radioactive waste. The activity is designed to inform students about the existence of radioactive waste and its transportation to disposal sites. Students experiment with methods to contain the waste and…
Mobile Phone Mood Charting for Adolescents
ERIC Educational Resources Information Center
Matthews, Mark; Doherty, Gavin; Sharry, John; Fitzpatrick, Carol
2008-01-01
Mobile phones may provide a useful and engaging platform for supporting therapeutic services working with adolescents. This paper examines the potential benefits of the mobile phone for self-charting moods in comparison to existing methods in current practice. The paper describes a mobile phone application designed by the authors which allows…
Resource Utilisation and Curriculum Implementation in Community Colleges in Kenya
ERIC Educational Resources Information Center
Kigwilu, Peter Changilwa; Akala, Winston Jumba
2017-01-01
The study investigated how Catholic-sponsored community colleges in Nairobi utilise the existing physical facilities and teaching and learning resources for effective implementation of Artisan and Craft curricula. The study adopted a mixed methods research design. Proportional stratified random sampling was used to sample 172 students and 18…
A Multidisciplinary Osteoporosis Service-Based Action Research Study
ERIC Educational Resources Information Center
Whitehead, Dean; Keast, John; Montgomery, Val; Hayman, Sue
2004-01-01
Objective: To investigate an existing Trust-based osteoporosis service's preventative activity, determine any issues and problems and use this data to reorganise the service, as part of a National Health Service Executive/Regional Office-commissioned and funded study. Setting: A UK Hospital Trust's Osteoporosis Service. Design & Method: A…
ERIC Educational Resources Information Center
Collins, Lorna A.; Smith, Alison J.; Hannon, Paul D.
2006-01-01
Purpose: To describe an exploration in the use of synergistic learning methods in the delivery of an innovative pilot programme designed to teach entrepreneurship capacities. The programme took a tripartite approach involving nascent entrepreneurs, existing entrepreneurs and facilitators using an action research and action learning approach.…
Historical Trends in Counsellor Education Dissertations
ERIC Educational Resources Information Center
Richards, Judith; Dykeman, Cass; Bender, Sara
2016-01-01
There exists a dearth of literature on the content, research method and research design trends of dissertations in education. Within one large subfield of education (i.e. counsellor education), an online and full-text archive of dissertations has become available. This archive contains over 200 dissertations produced in Oregon State University's…
NASA Technical Reports Server (NTRS)
Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.
1993-01-01
Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to ensure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g., detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven-foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross-section frame members having design details characteristic of the baseline ATCAS crown design.
McCann, Russell A; Armstrong, Christina M; Skopp, Nancy A; Edwards-Stewart, Amanda; Smolenski, Derek J; June, Jennifer D; Metzger-Abamukong, Melinda; Reger, Greg M
2014-08-01
Randomized controlled trials (RCTs) support the effectiveness of virtual reality exposure therapy (VRET) for anxiety disorders; however, the overall quality of the VRET RCT literature base has yet to be evaluated. This study reviewed 27 VRET RCTs and the degree of adherence to 8 RCT research design criteria derived from existing standards. Adherence to the study quality criteria was generally low, as the articles met an average of 2.85 criteria (SD=1.56). None of the studies met more than six quality criteria. Study quality did not predict effect size; however, a reduction in effect size magnitude was observed for studies with larger sample sizes when comparing VRET to non-active control groups. VRET may be an effective method of treatment, but caution should be exercised in interpreting the existing body of literature supporting VRET relative to existing standards of care. The need for well-designed VRET research is discussed. Copyright © 2014. Published by Elsevier Ltd.
Phase I/II adaptive design for drug combination oncology trials
Wages, Nolan A.; Conaway, Mark R.
2014-01-01
Existing statistical methodology on dose finding for combination chemotherapies has focused on toxicity considerations alone in finding a maximum tolerated dose combination to recommend for further testing of efficacy in a phase II setting. Recently, there has been increasing interest in integrating phase I and phase II trials in order to facilitate drug development. In this article, we propose a new adaptive phase I/II method for dual-agent combinations that takes into account both toxicity and efficacy after each cohort inclusion. The primary objective, both within and at the conclusion of the trial, becomes finding a single dose combination with an acceptable level of toxicity that maximizes efficacious response. We assume that there exist monotone dose–toxicity and dose–efficacy relationships among doses of one agent when the dose of other agent is fixed. We perform extensive simulation studies that demonstrate the operating characteristics of our proposed approach, and we compare simulated results to existing methodology in phase I/II design for combinations of agents. PMID:24470329
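The selection rule at the heart of such designs, recommending the dose combination that maximizes estimated efficacy among those with acceptable toxicity, can be sketched as follows. The probability estimates and toxicity limit below are illustrative inputs, not part of the authors' adaptive method, which updates such estimates after each cohort.

```python
# Minimal sketch of the dose-combination selection rule described above
# (not the authors' full adaptive design): among combinations whose
# estimated toxicity probability is acceptable, recommend the one that
# maximizes estimated efficacy.  The estimates here are illustrative.

def recommend(tox, eff, tox_limit=0.33):
    acceptable = [d for d in tox if tox[d] <= tox_limit]
    if not acceptable:
        return None  # no safe combination: stop or de-escalate
    return max(acceptable, key=lambda d: eff[d])

# dose combinations indexed as (level of agent A, level of agent B)
tox = {(1, 1): 0.05, (1, 2): 0.15, (2, 1): 0.20, (2, 2): 0.40}
eff = {(1, 1): 0.20, (1, 2): 0.35, (2, 1): 0.45, (2, 2): 0.60}
print(recommend(tox, eff))  # (2, 1): highest efficacy among safe combos
```

The monotonicity assumptions stated in the abstract (toxicity and efficacy non-decreasing in each agent when the other is fixed) are what allow such estimates to be shared across neighboring combinations during the trial.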
Development Of A Numerical Tow Tank With Wave Generation To Supplement Experimental Efforts
2017-12-01
CAD: computer aided design; CFD: computational fluid dynamics; FVM: finite volume method; IO: information operations; ISR: intelligence, surveillance, and... Chapter 1, Introduction, 1.1 Importance of Tow Tank Testing: Modern...wedge installation. In 2016, NPS student Ensign Ryan Tran adapted an existing vertical plunging wedge wave maker design used at the U.S. Naval
Research on Energy-saving Shape Design of High School Library Building in Cold Region
NASA Astrophysics Data System (ADS)
Hui, Zhao; Weishuang, Xie; Zirui, Tong
2017-11-01
Considering the climatic characteristics of cold regions, existing high school libraries in Changchun are surveyed to establish the actual conditions of these buildings. Mathematical analysis and CAD methods are used to summarize the relationship between building shape and energy saving for high school libraries. Strategies are put forward for the sustainable development of high school library buildings in cold regions, providing a reliable design basis for the construction of high school libraries in Changchun.
Schumacher, I; Zechmeister, I
2012-04-01
In Austria, research in Health Technology Assessment (HTA) has been conducted since the 1990s. HTA research aims at supporting an adequate and efficient use of health care resources in order to sustain a publicly financed, solidarity-based health care system. Ultimately, HTA research should result in better health of the population, and its results should provide independent information for decision makers. To legitimize further research resources, prioritize future HTA research, and guarantee the value of future research, HTA research itself needs to undergo evaluation. The aim of this study is to design a conceptual framework for evaluating the impact of HTA research in Austria on the basis of the existing literature. An existing review presenting methods and concepts for evaluating HTA impact was updated by a systematic search covering the literature from 2004 to January 2010. Results were analysed with regard to four categories: definition of the term impact, target groups and system levels, operationalisation of indicators, and evaluation methods. Overall, 19 publications were included. Regarding the four categories, an explanation of impact has to take into account HTA's multidisciplinary setting and needs a context-related definition. Target groups, system levels, indicators, and methods depend on the impact defined. The studies investigated direct and indirect impact and focused on different target groups, such as physicians, nurses, and decision makers at the micro and meso levels, as well as politicians and reimbursement institutions at the macro level. Except for one reference, all studies applied already known and mostly qualitative methods for measuring the impact of HTA research; thus, an appropriate pool of instruments seems to be available. There is a lack of information about the validity of the applied methods and indicators.
By adapting adequate methods and concepts, a conceptual framework for the Austrian HTA impact evaluation has been designed. The paper presents an overview of existing methods for evaluating HTA research, which was used to identify useful approaches for measuring HTA impact in Austria. By providing a context-sensitive framework for impact evaluation, Austrian HTA research contributes to the international trend of impact evaluation. © Georg Thieme Verlag KG Stuttgart · New York.
How to Get the Recommender Out of the Lab?
NASA Astrophysics Data System (ADS)
Picault, Jérome; Ribière, Myriam; Bonnefoy, David; Mercer, Kevin
A personalised system is a complex system made of many interacting parts, from data ingestion to presenting the results to the users. A plethora of methods, tools, algorithms and approaches exist for each piece of such a system: many data and metadata processing methods, many user models, many filtering techniques, many accuracy metrics, many personalisation levels. In addition, a real-world recommender is a piece of an even larger and more complex environment over which there is little control: often the recommender is part of a larger application that introduces constraints on the design of the recommender, e.g. the data may not be in a suitable format, or the environment may impose architectural or privacy constraints. This can make the task of building such a recommender system daunting, and it is easy to make errors. Based on the experience of the authors and the study of other works, this chapter intends to be a guide on the design, implementation and evaluation of personalised systems. It presents the different aspects that must be studied before the design is even started, and how to avoid pitfalls, in a hands-on approach. The chapter presents the main factors to take into account when designing a recommender system, and illustrates them through case studies of existing systems to help navigate the many and complex choices that have to be faced.
Ultramicrowave communications system, phase 2
NASA Technical Reports Server (NTRS)
1980-01-01
Communications system design was completed and reviewed. Minor changes were made in order to make it more cost effective and to increase design flexibility. System design activities identified the techniques and procedures to generate and monitor high data rate test signals. Differential bi-phase demodulation is the proposed method for this system. The mockup and packaging designs were performed, and component layout and interconnection constraints were determined, as well as design drawings for dummy parts of the system. The possibility of adding a low cost option to the transceiver system was studied. The communications program has the advantage that new technology signal processing devices can be readily interfaced with the existing radio frequency subsystem to produce a short range radar.
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. 
In addition, the substructural controller synthesis (SCS) design scheme is a highly adaptable controller synthesis method for structures with varying configuration, or varying mass and stiffness properties.
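The LQ design step for a single reduced substructure can be illustrated with a minimal sketch. The two-state vibration mode and the discrete-time Riccati value iteration below are illustrative choices, not the paper's implementation, which works in continuous time on finite element substructure models.

```python
import numpy as np

# Hedged sketch of one step of the procedure above: designing an LQ
# regulator for a single reduced ("Riccati-solvable") substructure model.
# A discrete-time Riccati value iteration avoids any control library;
# the one-mode model below is illustrative only.

def dlqr(A, B, Q, R, iters=2000):
    P = Q.copy()
    for _ in range(iters):  # iterate the discrete-time Riccati equation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.01
# Euler discretization of one lightly damped mode: x'' = -4x - 0.02x' + u
A = np.array([[1.0, dt], [-4.0 * dt, 1.0 - 0.02 * dt]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))

# closed-loop spectral radius should lie strictly inside the unit circle
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)  # True: the subcontroller stabilizes the substructure
```

In the paper's scheme, one such gain is designed per substructure and the subcontrollers are then assembled into a global controller with the same scheme used for the structure matrices.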
Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis
Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan
2014-01-01
Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502
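A fixed-radius circular Hough transform, a much-simplified relative of the modified variant used above, can be sketched on a synthetic edge image; the image size, radius, and spot position below are arbitrary.

```python
import numpy as np

# Simplified sketch of a fixed-radius circular Hough transform (far
# lighter than the authors' modified variant): each edge pixel votes for
# all candidate centres one radius away, and the accumulator peak gives
# the detected spot centre.  The edge image here is synthetic.

H, W, r = 64, 64, 5
cx, cy = 30, 22  # true centre of the synthetic spot
theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edge_x = np.round(cx + r * np.cos(theta)).astype(int)
edge_y = np.round(cy + r * np.sin(theta)).astype(int)

acc = np.zeros((H, W))
for x, y in zip(edge_x, edge_y):
    # each edge point votes along a circle of radius r around itself
    vx = np.round(x - r * np.cos(theta)).astype(int)
    vy = np.round(y - r * np.sin(theta)).astype(int)
    ok = (vx >= 0) & (vx < W) & (vy >= 0) & (vy < H)
    np.add.at(acc, (vy[ok], vx[ok]), 1)

peak_y, peak_x = np.unravel_index(np.argmax(acc), acc.shape)
print(peak_x, peak_y)  # recovered centre, close to (30, 22)
```

In the full method, a peak like this would seed Lucas-Kanade motion analysis so that subsequent frames can be tracked without re-running the transform on the whole image.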
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
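As a minimal illustration of one surveyed metamodel type, a quadratic response surface can be fit by least squares over a small design of experiments; the "expensive" function below is a stand-in for a real analysis code.

```python
import numpy as np

# Illustrative sketch of a response-surface metamodel: sample an
# "expensive" analysis at a full-factorial design of experiments, fit a
# quadratic surface by least squares, and evaluate the cheap surrogate
# instead of the original code.

def expensive_analysis(x1, x2):
    # stand-in for a costly simulation; happens to be quadratic itself
    return 3.0 + 2.0 * x1 - x2 + 0.5 * x1 * x2 + x2 ** 2

# 5 x 5 full-factorial design on [-1, 1]^2
g = np.linspace(-1.0, 1.0, 5)
X1, X2 = np.meshgrid(g, g)
x1, x2 = X1.ravel(), X2.ravel()
y = expensive_analysis(x1, x2)

# quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2
basis = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# because the truth is quadratic, the metamodel reproduces it here
pred = basis @ coef
print(float(np.max(np.abs(pred - y))) < 1e-8)  # True
```

For a genuinely non-quadratic code the residuals would not vanish, which is exactly where the paper's cautions about approximating deterministic codes with stochastic-process assumptions apply.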
Design and validation of instruments to measure knowledge.
Elliott, T E; Regal, R R; Elliott, B A; Renier, C M
2001-01-01
Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented, and the key considerations and rationale for this process are discussed. Methods for critiquing and adapting existing instruments and creating new ones are offered. This study may help other investigators develop valid, reliable, and practical instruments for measuring the outcomes of educational activities.
Time-varying phononic crystals
NASA Astrophysics Data System (ADS)
Wright, Derek Warren
The primary objective of this thesis was to gain a deeper understanding of acoustic wave propagation in phononic crystals, particularly those that include materials whose properties can be varied periodically in time. This research was accomplished in three ways. First, a 2D phononic crystal was designed, created, and characterized. Its properties closely matched those determined through simulation. The crystal demonstrated band gaps, dispersion, and negative refraction. It served as a means of elucidating the practicalities of phononic crystal design and construction and as a physical verification of their more interesting properties. Next, the transmission matrix method for analyzing 1D phononic crystals was extended to include the effects of time-varying material parameters. The method was then used to provide a closed-form solution for the case of periodically time-varying material parameters. Some intriguing results from the use of the extended method include dramatically altered transmission properties and parametric amplification. New insights can be gained from the governing equations and have helped to identify the conditions that lead to parametric amplification in these structures. Finally, 2D multiple scattering theory was modified to analyze scatterers with time-varying material parameters. It is shown to be highly compatible with existing multiple scattering theories. It allows the total scattered field from a 2D time-varying phononic crystal to be determined. It was shown that time-varying material parameters significantly affect the phononic crystal transmission spectrum, and this was used to switch an incident monochromatic wave. Parametric amplification can occur under certain circumstances, and this effect was investigated using the closed-form solutions provided by the new 1D method. 
The complexity of the extended methods grows logarithmically, as opposed to linearly for existing methods, resulting in superior computational performance for large numbers of scatterers. Also, since both extended methods provide analytic solutions, they may give further insights into the factors that govern the behaviour of time-varying phononic crystals. These extended methods may now be used to design an active phononic crystal that could demonstrate new or enhanced properties.
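The time-invariant core of the 1D transmission matrix method can be sketched with ordinary 2x2 acoustic layer matrices. The material values, the five-period water/steel stack, and the 50 kHz evaluation frequency below are illustrative assumptions, not parameters from the thesis; the time-varying extension would replace the constant density and sound speed with periodic functions of time.

```python
import cmath

def layer_matrix(omega, rho, c, d):
    """Acoustic transfer matrix of one homogeneous layer (pressure/velocity basis)."""
    k = omega / c          # wavenumber in the layer
    Z = rho * c            # acoustic impedance
    return [[cmath.cos(k * d),           1j * Z * cmath.sin(k * d)],
            [1j * cmath.sin(k * d) / Z,  cmath.cos(k * d)]]

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def transmission(omega, layers, Z0):
    """Intensity transmission through a layer stack embedded in a medium of impedance Z0."""
    M = [[1, 0], [0, 1]]
    for rho, c, d in layers:
        M = matmul(M, layer_matrix(omega, rho, c, d))
    A, B = M[0]
    C, D = M[1]
    t = 2.0 / (A + B / Z0 + C * Z0 + D)   # amplitude transmission coefficient
    return abs(t) ** 2

# Two-material unit cell (water-like / steel-like), 1 cm layers, repeated 5 times
cell = [(1000.0, 1500.0, 0.01), (7800.0, 5900.0, 0.01)]
layers = cell * 5
water_Z = 1000.0 * 1500.0
tr = transmission(2 * 3.141592653589793 * 50e3, layers, water_Z)  # at 50 kHz
```

With an empty stack the product reduces to the identity and the transmission is exactly 1, a convenient sanity check; band gaps appear as frequency ranges where the transmission drops sharply.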
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
Brochhausen, Mathias; Spear, Andrew D.; Cocos, Cristian; Weiler, Gabriele; Martín, Luis; Anguita, Alberto; Stenzhorn, Holger; Daskalaki, Evangelia; Schera, Fatima; Schwarz, Ulf; Sfakianakis, Stelios; Kiefer, Stephan; Dörr, Martin; Graf, Norbert; Tsiknakis, Manolis
2017-01-01
Objective This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer – Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, the full exploitation of it is being hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). Methods ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians. 
Results To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application for updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic integration without the need to perform a separate mapping process. PMID:20438862
A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu
2018-03-01
Automatic and accurate identification of rolling bearing fault categories, especially fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as hidden functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed from the various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze experimental bearing vibration signals. The results confirm that the proposed method eliminates the dependence on manual feature extraction and overcomes the limitations of individual deep learning models, and is more effective than existing intelligent diagnosis methods.
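The abstract does not spell out the combination strategy; a minimal stand-in is accuracy-weighted voting over the base learners' fault labels. The fault labels, the three hypothetical auto-encoder classifiers, and the validation-accuracy weights below are invented for illustration.

```python
from collections import Counter

def weighted_vote(predictions, weights):
    """Combine fault labels from several base learners by accuracy-weighted voting."""
    scores = Counter()
    for label, w in zip(predictions, weights):
        scores[label] += w
    # Label with the largest total weight wins
    return scores.most_common(1)[0][0]

# Three hypothetical AE-based classifiers (e.g. sigmoid-, tanh-, ReLU-based)
# disagree on one vibration segment; weights reflect each model's validation accuracy.
preds = ["inner-race fault", "outer-race fault", "inner-race fault"]
weights = [0.92, 0.85, 0.88]
label = weighted_vote(preds, weights)
```

The weighting is what lets the ensemble suppress an individual model's idiosyncratic errors, which is the stated motivation for combining AEs with different characteristics.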
Quantification and propagation of disciplinary uncertainty via Bayesian statistics
NASA Astrophysics Data System (ADS)
Mantis, George Constantine
2002-08-01
Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty: that inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state of the art in design, the effects of this unquantified uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method applies Bayesian statistics to quantify this source of uncertainty and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. 
Application to a single-stage-to-orbit (SSTO) reusable launch vehicle concept, developed by the NASA Langley Research Center under the Space Launch Initiative, provides the validation case for this work, with the focus placed on economics, aerothermodynamics, propulsion, and structures metrics. (Abstract shortened by UMI.)
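The quantification step can be sketched under a simple assumption: model the discrepancies between physical data and a tool's a priori predictions as normal with known spread, and update a conjugate normal prior on the tool's mean bias. The discrepancy values, the assumed standard deviation, and the drag-coefficient framing below are all illustrative, not taken from the dissertation.

```python
import statistics

def posterior_bias(discrepancies, sigma, mu0=0.0, tau0=10.0):
    """Conjugate normal update for the mean bias of an analysis tool.

    discrepancies: measured-minus-predicted values for existing systems
    sigma: assumed known std. dev. of a single discrepancy
    (mu0, tau0): prior mean and std. dev. of the bias (weakly informative)
    """
    n = len(discrepancies)
    xbar = statistics.fmean(discrepancies)
    prec = 1.0 / tau0**2 + n / sigma**2                      # posterior precision
    mu_n = (mu0 / tau0**2 + n * xbar / sigma**2) / prec      # posterior mean
    tau_n = prec ** -0.5                                     # posterior std. dev.
    return mu_n, tau_n

# Hypothetical drag-coefficient discrepancies for three existing vehicles
mu_n, tau_n = posterior_bias([0.012, 0.018, 0.015], sigma=0.01)
```

The posterior (mu_n, tau_n) then serves as the quantified disciplinary uncertainty that is propagated through the design space, e.g. by sampling the bias when evaluating system-level metrics.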
NASA Astrophysics Data System (ADS)
Qin, Shengfeng; Van der Velde, David; Chatzakis, Emmanouil; McStea, Terry; Smith, Neil
2016-10-01
Crowdsourcing is an innovative business practice of obtaining needed services, ideas, content, or even funds by soliciting contributions from a large group of people (the `Crowd'). The potential benefits of utilizing crowdsourcing in product design are well documented, but little research exists on the barriers and opportunities of adopting crowdsourcing in the new product development (NPD) of manufacturing SMEs. To address these questions, a Proof of Market study was carried out on crowdsourcing-based product design under an Innovate UK funded Smart project, which aims at identifying the needs, challenges and future development opportunities associated with adopting crowdsourcing strategies for NPD. The research findings from this study are reported here and can be used to guide future development of crowdsourcing-based collaborative design methods and tools and to provide some practical references for industry to adopt this new and emerging collaborative design method in their business.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for the development of logic designs, together with algorithms, such that a test can be computed for any failure in the design, if such a test exists, along with algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large-scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
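The existence claim, that a test can be computed for any failure if one exists, can be illustrated by brute force on a toy gate-level circuit with a stuck-at fault: a test vector is any input on which the good and faulty circuits disagree. The circuit, net names, and fault encoding below are hypothetical; the report's algorithmic approach avoids this exhaustive search.

```python
from itertools import product

def simulate(inputs, fault=None):
    """Evaluate a tiny 3-input circuit: y = (a AND b) OR (NOT c).

    fault: optional (net, stuck_value) pair to inject, e.g. ('ab', 0)."""
    a, b, c = inputs
    nets = {'ab': a & b, 'nc': 1 - c}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]          # force the faulty net to its stuck value
    y = nets['ab'] | nets['nc']
    if fault and fault[0] == 'y':
        y = fault[1]
    return y

def find_test(fault):
    """Exhaustively search for an input vector exposing the fault, if one exists."""
    for vec in product((0, 1), repeat=3):
        if simulate(vec) != simulate(vec, fault):
            return vec
    return None  # the fault is untestable

t = find_test(('ab', 0))  # a test for internal net 'ab' stuck-at-0
```

Returning `None` for an untestable fault mirrors the completeness property stated above: the search either produces a test or proves none exists.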
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
Ranking Reputation and Quality in Online Rating Systems
Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng
2014-01-01
How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of spammers. In the literature, many well-performing iterative ranking methods have been proposed. These methods can effectively recognize unreliable users and reduce their weight in judging the quality of objects, finally leading to a more accurate evaluation of online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users, and two penalty factors make the algorithm resistant to malicious behaviors. Validation of our method is performed on both artificial and real user-object bipartite networks. PMID:24819119
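A minimal sketch of such an iterative scheme, without the paper's specific reputation redistribution and penalty factors: object quality is a reputation-weighted mean of its ratings, and user reputation is the inverse of that user's mean-squared disagreement with the current quality estimates. The ratings matrix, including the spammer, is invented for illustration.

```python
def iterative_ranking(ratings, n_iter=50, eps=1e-6):
    """ratings: dict user -> dict object -> rating.

    Alternates quality estimation (reputation-weighted mean) and
    reputation update (inverse mean-squared disagreement)."""
    users = list(ratings)
    objects = sorted({o for r in ratings.values() for o in r})
    rep = {u: 1.0 for u in users}
    for _ in range(n_iter):
        quality = {}
        for o in objects:
            raters = [u for u in users if o in ratings[u]]
            w = sum(rep[u] for u in raters)
            quality[o] = sum(rep[u] * ratings[u][o] for u in raters) / w
        for u in users:
            err = sum((ratings[u][o] - quality[o]) ** 2 for o in ratings[u]) / len(ratings[u])
            rep[u] = 1.0 / (err + eps)   # users who disagree a lot lose reputation
    return quality, rep

# Two honest users and one spammer rating three objects
ratings = {
    "u1":   {"a": 5, "b": 3, "c": 1},
    "u2":   {"a": 5, "b": 3, "c": 2},
    "spam": {"a": 1, "b": 5, "c": 5},
}
quality, rep = iterative_ranking(ratings)
```

After a few iterations the spammer's reputation collapses and the quality estimates track the honest consensus, which is the behavior the penalty factors in the paper are designed to strengthen.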
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on). Optimization methods have also been widely applied in sheet metal forming. Therefore, proper design methods that reduce time and costs have to be developed, mostly based on computer-aided procedures. At the same time, variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping processes. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of applying the proposed method to multi-response robust design.
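The metamodeling step can be sketched in a reduced form: zero-mean simple Kriging with a Gaussian correlation function, which interpolates the training responses exactly (up to a small nugget). The sample points stand in for a hypothetical thinning response versus one process parameter; they are not actual LS-DYNA results, and the full method would use a trend term and tuned correlation parameters.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_fit(xs, ys, theta=1.0, nugget=1e-10):
    """Simple (zero-mean) Kriging predictor with a Gaussian correlation function."""
    n = len(xs)
    R = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = gauss_solve(R, ys)   # weights such that the surface interpolates the data
    return lambda x: sum(w[j] * math.exp(-theta * (x - xs[j]) ** 2) for j in range(n))

# Metamodel of a hypothetical thinning response vs. a normalized process parameter
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.10, 0.14, 0.22, 0.19]
predict = kriging_fit(xs, ys)
```

Adaptive sampling then evaluates a criterion over the design space and appends the most informative new point to `xs`/`ys` before refitting.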
A requirements specification for a software design support system
NASA Technical Reports Server (NTRS)
Noonan, Robert E.
1988-01-01
Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages, including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed that instead an extensible SDSS be constructed that directly implements only minimal database and graphical facilities. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.
NASA Technical Reports Server (NTRS)
Day, John H. (Technical Monitor); LaBel, Kenneth A.; Howard, James W.; Carts, Martin A.; Seidleck, Christine
2003-01-01
With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we discuss the implications of validating these methods against the natural space radiation environment issues: total ionizing dose (TID) and single event effects (SEE). Topics of discussion include: the types of tests that are required; design coverage (i.e., design libraries: do they need validating for each application?); and a new task within NASA to compare existing design hardening approaches. This latter task is a new effort in FY03 utilizing an 8051 microcontroller core from multiple design hardening developers as a test vehicle to evaluate each mitigative technique.
Evaluating How to Alter Design Processes to Consider Sustainable Practices
NASA Astrophysics Data System (ADS)
Liew, V.
2017-12-01
The Design Cycle is a well-established design methodology featuring four major criteria (Investigating, Planning, Creating, and Evaluating), adopted by the International Baccalaureate education foundation. However, as sustainability has become an alarmingly relevant issue, the Design Cycle is not a sufficient guide in its current form. With the excessive quantities of waste entering Hong Kong's landfills, as well as the worldwide issue of rapidly depleting resources, it is imperative that products reduce waste via adaptive or mitigative methods and that an environmental sector be integrated into the existing Design Cycle. In this piece of research, sustainable design practices will be evaluated to form a list of specifications that products can be assessed against to reduce waste and repurpose materials.
Iris movement based wheel chair control using raspberry pi
NASA Astrophysics Data System (ADS)
Sharma, Jatin; Anbarasu, M.; Chakraborty, Chandan; Shanmugasundaram, M.
2017-11-01
Paralysis is considered a major curse in this world. The number of persons who are paralyzed, and therefore dependent on others due to loss of self-mobility, is growing with the population. Quadriplegia is a form of paralysis in which only the eyes can be moved. Much work has been done to help disabled persons live independently. Various methods are used for this purpose, and this paper lists some of the existing methods along with some add-ons to improve the existing system. The add-ons include a system designed using a Raspberry Pi and an IR camera module. OpenCV is used for image processing, and Python is used for programming the Raspberry Pi.
Defining the Field of Existence of Shrouded Blades in High-Speed Gas Turbines
NASA Astrophysics Data System (ADS)
Belousov, Anatoliy I.; Nazdrachev, Sergeiy V.
2018-01-01
This work provides a method for determining the region of existence of shrouded blades in gas turbines for aircraft engines, based on analytical evaluation of the tensile stresses in characteristic sections of the blade. This region is determined by the set of values of the parameter that forms the law of distribution of the cross-sectional area along the height of the airfoil. With seven independent parameters (gas-dynamic, structural, and strength) varied, the choice of the best option is supported at the early design stage. As an example, the influence of the turbine's dimension on the region of existence of shrouded blades is shown.
Clarifying values: an updated review
2013-01-01
Background Consensus guidelines have recommended that decision aids include a process for helping patients clarify their values. We sought to examine the theoretical and empirical evidence related to the use of values clarification methods in patient decision aids. Methods Building on the International Patient Decision Aid Standards (IPDAS) Collaboration’s 2005 review of values clarification methods in decision aids, we convened a multi-disciplinary expert group to examine key definitions, decision-making process theories, and empirical evidence about the effects of values clarification methods in decision aids. To summarize the current state of theory and evidence about the role of values clarification methods in decision aids, we undertook a process of evidence review and summary. Results Values clarification methods (VCMs) are best defined as methods to help patients think about the desirability of options or attributes of options within a specific decision context, in order to identify which option they prefer. Several decision-making process theories were identified that can inform the design of values clarification methods, but no single “best” practice for how such methods should be constructed was determined. Our evidence review found that existing VCMs were used for a variety of different decisions, rarely referenced underlying theory for their design, but generally were well described in regard to their development process. Listing the pros and cons of a decision was the most common method used. The 13 trials that compared decision support with or without VCMs yielded mixed results: some found that VCMs improved some decision-making processes, while others found no effect. Conclusions Values clarification methods may improve decision-making processes and potentially more distal outcomes. 
However, the small number of evaluations of VCMs and, where evaluations exist, the heterogeneity in outcome measures makes it difficult to determine their overall effectiveness or the specific characteristics that increase effectiveness. PMID:24625261
ERIC Educational Resources Information Center
Livingston, Samuel A.; Kim, Sooyeon
2010-01-01
A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…
Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems
NASA Astrophysics Data System (ADS)
Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei
2016-07-01
This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, by using the homogeneous domination method, the adding-a-power-integrator technique and the sign function method, constructing a Lyapunov function and verifying the existence and uniqueness of the solution, a continuous state feedback controller is designed to guarantee that the closed-loop system is finite-time stable in probability.
Integrating linear optimization with structural modeling to increase HIV neutralization breadth.
Sevy, Alexander M; Panda, Swetasudha; Crowe, James E; Meiler, Jens; Vorobeychik, Yevgeniy
2018-02-01
Computational protein design has been successful in modeling fixed backbone proteins in a single conformation. However, when modeling large ensembles of flexible proteins, current methods in protein design have been insufficient. Large barriers in the energy landscape are difficult to traverse while redesigning a protein sequence, and as a result current design methods only sample a fraction of available sequence space. We propose a new computational approach that combines traditional structure-based modeling using the Rosetta software suite with machine learning and integer linear programming to overcome limitations in the Rosetta sampling methods. We demonstrate the effectiveness of this method, which we call BROAD, by benchmarking the performance on increasing predicted breadth of anti-HIV antibodies. We use this novel method to increase predicted breadth of naturally-occurring antibody VRC23 against a panel of 180 divergent HIV viral strains and achieve 100% predicted binding against the panel. In addition, we compare the performance of this method to state-of-the-art multistate design in Rosetta and show that we can outperform the existing method significantly. We further demonstrate that sequences recovered by this method recover known binding motifs of broadly neutralizing anti-HIV antibodies. Finally, our approach is general and can be extended easily to other protein systems. Although our modeled antibodies were not tested in vitro, we predict that these variants would have greatly increased breadth compared to the wild-type antibody.
Boundary cooled rocket engines for space storable propellants
NASA Technical Reports Server (NTRS)
Kesselring, R. C.; Mcfarland, B. L.; Knight, R. M.; Gurnitz, R. N.
1972-01-01
An evaluation of an existing analytical heat transfer model was made to develop the technology of boundary film/conduction-cooled rocket thrust chambers for the space-storable propellant combination oxygen difluoride/diborane. Critical design parameters were identified and their importance determined. Test reduction methods were developed to enable data obtained from short-duration hot firings with a thin-walled (calorimeter) chamber to be used to quantitatively evaluate the heat-absorbing capability of the vapor film. The modification of the existing like-doublet injector was based on the results obtained from the calorimeter firings.
NASA Astrophysics Data System (ADS)
Gilliom, R.; Hogue, T. S.; McCray, J. E.
2017-12-01
There is a need for improved parameterization of stormwater best management practice (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining the first-order removal rate constants needed to use the k-C* equation for stormwater BMPs; thus there is minimal application of the method. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorus, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants. 
Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality management in urban systems.
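The k-C* model itself is compact enough to sketch: effluent concentration decays first-order from the influent value toward an irreducible background concentration C*, at a rate set by the rate constant k relative to the hydraulic loading rate q. The concentrations, background value, and loading rate below are illustrative; only consistent units matter, and the fitting step mirrors how rate constants could be backed out of paired inlet/outlet monitoring data.

```python
import math

def outlet_conc(c_in, c_star, k, q):
    """k-C* first-order model: effluent concentration for hydraulic loading rate q."""
    return c_star + (c_in - c_star) * math.exp(-k / q)

def fit_k(c_in, c_out, c_star, q):
    """Back out the first-order rate constant from one paired inlet/outlet observation."""
    return -q * math.log((c_out - c_star) / (c_in - c_star))

# Hypothetical paired observation: 2.0 -> 0.8 mg/L with background 0.1 mg/L, q = 0.5 m/d
k = fit_k(c_in=2.0, c_out=0.8, c_star=0.1, q=0.5)
```

Note that a monitored event with c_out > c_in (negative percent removal) yields a negative k, so the sign of the fitted constant directly flags the exacerbation cases discussed above.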
High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices
NASA Astrophysics Data System (ADS)
Dunham, Benjamin Z.
This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data-type or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., an N x N x N rank-three tensor can be found in O(N^2) time. Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches. 
As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.
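The first-order univariate special case of the stencil idea coincides with dual numbers: a scalar v and its derivative d become the nilpotent 2x2 block [[v, d], [0, v]], and unmodified matrix arithmetic then propagates the derivative exactly, with no truncation error. The function f(x) = x**4 below is an arbitrary example; NMD's contribution is generalizing such blocks to higher and mixed orders and to full solver calls.

```python
def block(value, deriv):
    """Embed a scalar and its derivative as the nilpotent 2x2 block [[v, d], [0, v]]."""
    return [[value, deriv], [0.0, value]]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matpow(A, n):
    """A**n by repeated multiplication, starting from the identity."""
    out = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        out = matmul(out, A)
    return out

# Differentiate f(x) = x**4 at x = 2 using only ordinary matrix multiplication:
X = block(2.0, 1.0)               # seed derivative dx/dx = 1
F = matpow(X, 4)
value, deriv = F[0][0], F[0][1]   # f(2) and f'(2) read off the same block
```

Because the off-diagonal entry squares to zero algebraically, the Taylor expansion terminates after the derivative term, which is the exactness property the abstract attributes to the stencil construction.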
Design Methods and Practices for Fault Prevention and Management in Spacecraft
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.
2005-01-01
Integrated Systems Health Management (ISHM) is intended to become a critical capability for all space, lunar and planetary exploration vehicles and systems at NASA. Monitoring and managing the health state of diverse components, subsystems, and systems is a difficult task that will become more challenging when implemented for long-term, evolving deployments. A key technical challenge will be to ensure that the ISHM technologies are reliable, effective, and low cost, resulting in turn in safe, reliable, and affordable missions. To ensure safety and reliability, ISHM functionality, decisions and knowledge have to be incorporated into the product lifecycle as early as possible, and ISHM must be considered as an essential element of models developed and used in various stages during system design. During early stage design, many decisions and tasks are still open, including sensor and measurement point selection, modeling and model-checking, diagnosis, signature and data fusion schemes, presenting the best opportunity to catch and prevent potential failures and anomalies in a cost-effective way. Using appropriate formal methods during early design, the design teams can systematically explore risks without committing to design decisions too early. However, the nature of ISHM knowledge and data is detailed, relying on high-fidelity, detailed models, whereas the earlier stages of the product lifecycle utilize low-fidelity, high-level models of systems and their functionality. We currently lack the tools and processes necessary for integrating ISHM into the vehicle system/subsystem design. As a result, most existing ISHM-like technologies are retrofits that were done after the system design was completed. It is very expensive, and sometimes futile, to retrofit a system health management capability into existing systems. 
Last-minute retrofits result in unreliable systems, ineffective solutions, and excessive costs (e.g., Space Shuttle TPS monitoring, which was considered only after 110 flights and the Columbia disaster). High false alarm or false negative rates due to substandard implementations hurt the credibility of the ISHM discipline. This paper presents an overview of the current state of ISHM design, and a review of formal design methods to make recommendations about possible approaches to enable ISHM capabilities to be designed in at the system level, from the very beginning of the vehicle design process.
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. A pattern repetition constraint is added, which ensures symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes linked to the mounting flexures, to reduce the computational effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method yields new mounting structures that significantly enhance the optical performance of the mirror system compared to traditional methods, which only adjust the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
NASA Technical Reports Server (NTRS)
Hampton, R. David; Whorton, Mark S.
2000-01-01
Many microgravity space-science experiments require active vibration isolation to attain suitably low levels of background acceleration for useful experimental results. The design of state-space controllers by optimal control methods requires judicious choices of frequency-weighting design filters. Kinematic coupling among states greatly clouds designer intuition in the choice of these filters, and the masking effects of the state observations cloud the process further. Recent research into the practical application of H2 synthesis methods to such problems indicates that certain steps can lead to state frequency-weighting design-filter choices with substantially improved promise of usefulness, even in the face of these difficulties. In choosing these filters on the states, one considers their relationships to corresponding design filters on appropriate pseudo-sensitivity and pseudo-complementary-sensitivity functions. This paper investigates the application of these considerations to a single-degree-of-freedom microgravity vibration-isolation test case. Significant observations noted during the design process are presented, along with explanations based on the existing theory for such problems.
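The state-space optimal-control design the abstract describes can be sketched in its simplest form. The snippet below is a plain LQR design for a hypothetical single-degree-of-freedom isolated mass (a simpler relative of the frequency-weighted H2 synthesis discussed); the weights Q and R stand in for the design filters and are assumed values, not the paper's.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-DOF isolated mass: m*x'' = u. State z = [position, velocity].
# Plain LQR sketch; Q and R stand in for the frequency-weighting design
# filters discussed in the abstract (values assumed for illustration).
m = 1.0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([100.0, 1.0])   # assumed state weights
R = np.array([[0.01]])      # assumed control weight

P = solve_continuous_are(A, B, Q, R)    # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)         # optimal gain, u = -K z
closed_loop = A - B @ K
poles = np.linalg.eigvals(closed_loop)
print(poles.real)  # both real parts negative -> stable isolation loop
```

With positive-definite Q and R and a controllable (A, B) pair, the resulting closed loop is guaranteed stable, which is the starting point before any frequency weighting is layered on.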
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time because the plasma forms when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate because of the time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To evaluate new designs more quickly and to better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code that determines how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
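The core of any single-electron model is integration of the electron's equation of motion under the Lorentz force. The toy sketch below (Python rather than the paper's Mathematica, with assumed uniform fields rather than the thruster's actual geometry) shows the basic energy-tracking step the abstract describes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-electron model: integrate m dv/dt = q(E + v x B) for one
# electron in uniform fields and track its kinetic energy over time.
# Field values are illustrative, not those of the thruster in the abstract.
q = -1.602e-19   # electron charge [C]
m = 9.109e-31    # electron mass [kg]
E = np.array([0.0, 1.0e3, 0.0])   # V/m (assumed)
B = np.array([0.0, 0.0, 1.0e-3])  # T   (assumed)

def lorentz(t, y):
    v = y[3:]
    a = (q / m) * (E + np.cross(v, B))
    return np.concatenate([v, a])

y0 = np.zeros(6)  # electron starts at rest at the origin
sol = solve_ivp(lorentz, (0.0, 1e-8), y0, max_step=1e-11)
ke = 0.5 * m * np.sum(sol.y[3:] ** 2, axis=0)  # kinetic energy [J]
print(ke[-1] > ke[0])  # the field does net work on the resting electron
```

In a breakdown model, the time at which this accumulated energy first exceeds the ionization threshold of the fill gas gives the predicted breakdown time and location.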
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. Design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the classic work transformation matrix (WTM) model are discussed, and a tearing approach together with an inner iteration method is used to complement it. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining the two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonability and effectiveness.
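The iteration behavior behind WTM-style models can be sketched numerically. In the sketch below, entry A[i, j] is the fraction of task j's work that one iteration creates for task i; when the spectral radius of A is below 1, total rework converges to the geometric series (I − A)⁻¹ u₀. The 3-task matrix is hypothetical, not from the paper.

```python
import numpy as np

# Work transformation matrix (WTM) sketch: A[i, j] is the fraction of task
# j's work that one design iteration creates for task i. Total work over all
# iterations is sum_k A^k u0 = (I - A)^(-1) u0 when spectral radius(A) < 1.
# The 3-task matrix here is hypothetical.
A = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
u0 = np.ones(3)  # initial work of each coupled task

assert max(abs(np.linalg.eigvals(A))) < 1  # iteration converges

total = np.linalg.solve(np.eye(3) - A, u0)

# Cross-check against explicit summation of the first 200 iterations.
partial, term = np.zeros(3), u0.copy()
for _ in range(200):
    partial += term
    term = A @ term
print(np.allclose(total, partial))  # True
```

Tearing and decoupling schemes such as those the paper optimizes with ABC amount to reordering or removing entries of A so that this series converges faster.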
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Yaosuo
The battery-energy-stored quasi-Z-source (BES-qZS) based photovoltaic (PV) power generation system combines the advantages of the qZS inverter and the battery energy storage system. However, the second-harmonic (2ω) power ripple degrades the system's performance and affects its design, so an accurate model of the 2ω ripple is very important. Existing models did not consider the battery and assumed L1 = L2 and C1 = C2, which leads to non-optimal designs of the qZS network impedance parameters. This paper proposes a comprehensive model for the single-phase BES-qZS-PV inverter system in which the battery is considered and no restriction is placed on L1, L2, C1, and C2. A BES-qZS impedance design method based on the built model is proposed to mitigate the 2ω ripple. Simulation and experimental results verify the proposed 2ω ripple model and design method.
On the design of airfoils in which the transition of the boundary layer is delayed
NASA Technical Reports Server (NTRS)
Tani, Itiro
1952-01-01
A method is presented for designing suitable thickness distributions and mean camber lines for airfoils permitting extensive chordwise laminar flow. Wind tunnel and flight tests confirming the existence of laminar flow; possible maintenance of laminar flow by area suction; and the effects of wind tunnel turbulence and surface roughness on the promotion of premature boundary layer transition are discussed. In addition, estimates of profile drag and scale effect on maximum lift of the derived airfoils are made.
Space Transportation Avionics Technology Symposium. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1990-01-01
The focus of the symposium was to examine existing and planned avionics technology processes and products and to recommend necessary changes for strengthening priorities and program emphases. Innovative changes in avionics technology development and design processes, identified during the symposium, are needed to support the increasingly complex, multi-vehicle, integrated, autonomous space-based systems. Key technology advances make such a major initiative viable at this time: digital processing capabilities, integrated on-board test/checkout methods, easily reconfigurable laboratories, and software design and production techniques.
Research on Capturing of Customer Requirements Based on Innovation Theory
NASA Astrophysics Data System (ADS)
junwu, Ding; dongtao, Yang; zhenqiang, Bao
To capture customer requirements information exactly and effectively, a new customer requirements capturing modeling method was proposed. Based on the analysis of function requirement models of previous products and the application of the technology-system evolution laws of the Theory of Inventive Problem Solving (TRIZ), customer requirements could be evolved from existing product designs by modifying the functional requirement units and confirming the direction of evolutionary design. Finally, a case study was provided to illustrate the feasibility of the proposed approach.
Engine dynamic analysis with general nonlinear finite element codes
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1991-01-01
A general engine dynamic analysis, intended as a standard design-study computational tool for the prediction and understanding of complex engine dynamic behavior, is described. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error currently done with engine hardware.
Space Transportation Avionics Technology Symposium. Volume 2: Conference Proceedings
NASA Technical Reports Server (NTRS)
1990-01-01
The focus of the symposium was to examine existing and planned avionics technology processes and products and to recommend necessary changes for strengthening priorities and program emphases. Innovative changes in avionics technology development and design processes are needed to support the increasingly complex, multi-vehicle, integrated, autonomous space-based systems. Key technology advances make such a major initiative viable at this time: digital processing capabilities, integrated on-board test/checkout methods, easily reconfigurable laboratories, and software design and production techniques.
Application of subharmonics for active sound design of electric vehicles.
Gwak, Doo Young; Yoon, Kiseop; Seong, Yeolwan; Lee, Soogab
2014-12-01
The powertrain of electric vehicles generates an unfamiliar acoustical environment for customers. This paper seeks an optimal interior sound for electric vehicles based on psychoacoustic knowledge and musical harmonic theory. The concept of inserting a virtual sound, which consists of the subharmonics of an existing high-frequency component, is suggested to improve sound quality. Subjective evaluation results indicate that the impression of the interior sound can be enhanced in this manner. Increased appeal is achieved through two designed stimuli, which demonstrates the effectiveness of the proposed method.
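The subharmonic idea can be sketched in a few lines: take an existing high-frequency component f0 and synthesize a virtual sound from its subharmonics f0/2, f0/3, f0/4. The frequency, amplitudes, and subharmonic orders below are illustrative assumptions, not the stimuli used in the paper.

```python
import numpy as np

# Sketch of the subharmonic concept: blend an existing high-frequency
# powertrain tone f0 with a virtual sound built from its subharmonics.
# Frequency and amplitudes are illustrative only.
fs = 44100            # sample rate [Hz]
f0 = 2400.0           # assumed existing high-frequency component [Hz]
t = np.arange(0, 1.0, 1.0 / fs)

orders = [2, 3, 4]
virtual = sum(np.sin(2 * np.pi * (f0 / n) * t) / n for n in orders)
blended = np.sin(2 * np.pi * f0 * t) + virtual  # existing tone + virtual sound

# Verify the subharmonics are present in the spectrum.
spectrum = np.abs(np.fft.rfft(blended))
freqs = np.fft.rfftfreq(len(blended), 1.0 / fs)
peaks = [freqs[np.argmax(spectrum * (abs(freqs - f0 / n) < 2))] for n in orders]
print(peaks)  # peaks near 1200, 800, and 600 Hz
```

Because the added components are integer subdivisions of f0, they form consonant intervals with the existing tone, which is the musical-harmony rationale behind the approach.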
Enabling fast charging – A battery technology gap assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.
The battery technology literature is reviewed, with an emphasis on key elements that limit extreme fast charging. Key gaps in existing elements of the technology are presented as well as developmental needs. Among these needs are advanced models and methods to detect and prevent lithium plating; new positive-electrode materials which are less prone to stress-induced failure; better electrode designs to accommodate very rapid diffusion in and out of the electrode; and thermal management and pack designs to accommodate the higher operating voltage.
Enabling fast charging – A battery technology gap assessment
Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; ...
2017-10-23
The battery technology literature is reviewed, with an emphasis on key elements that limit extreme fast charging. Key gaps in existing elements of the technology are presented as well as developmental needs. Among these needs are advanced models and methods to detect and prevent lithium plating; new positive-electrode materials which are less prone to stress-induced failure; better electrode designs to accommodate very rapid diffusion in and out of the electrode; and thermal management and pack designs to accommodate the higher operating voltage.
Joshi, Ashish; de Araujo Novaes, Magdala; Machiavelli, Josiane; Iyengar, Sriram; Vogler, Robert; Johnson, Craig; Zhang, Jiajie; Hsu, Chiehwen E
2012-01-01
Public health data is typically organized by geospatial unit. GeoVisualization (GeoVis) allows users to see information visually on a map. Examine telehealth users' perceptions towards existing public health GeoVis applications and obtains users' feedback about features important for the design and development of Human Centered GeoVis application "the SanaViz". We employed a cross sectional study design using mixed methods approach for this pilot study. Twenty users involved with the NUTES telehealth center at Federal University of Pernambuco (UFPE), Recife, Brazil were enrolled. Open and closed ended questionnaires were used to gather data. We performed audio recording for the interviews. Information gathered included socio-demographics, prior spatial skills and perception towards use of GeoVis to evaluate telehealth services. Card sorting and sketching methods were employed. Univariate analysis was performed for the continuous and categorical variables. Qualitative analysis was performed for open ended questions. Existing Public Health GeoVis applications were difficult to use. Results found interaction features zooming, linking and brushing and representation features Google maps, tables and bar chart as most preferred GeoVis features. Early involvement of users is essential to identify features necessary to be part of the human centered GeoVis application "the SanaViz".
Computational Methods for HSCT-Inlet Controls/CFD Interdisciplinary Research
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Melcher, Kevin J.; Chicatelli, Amy K.; Hartley, Tom T.; Chung, Joongkee
1994-01-01
A program aimed at facilitating the use of computational fluid dynamics (CFD) simulations by the controls discipline is presented. The objective is to reduce the development time and cost for propulsion system controls by using CFD simulations to obtain high-fidelity system models for control design and as numerical test beds for control system testing and validation. An interdisciplinary team has been formed to develop analytical and computational tools in three discipline areas: controls, CFD, and computational technology. The controls effort has focused on specifying requirements for an interface between the controls specialist and CFD simulations and a new method for extracting linear, reduced-order control models from CFD simulations. Existing CFD codes are being modified to permit time accurate execution and provide realistic boundary conditions for controls studies. Parallel processing and distributed computing techniques, along with existing system integration software, are being used to reduce CFD execution times and to support the development of an integrated analysis/design system. This paper describes: the initial application for the technology being developed, the high speed civil transport (HSCT) inlet control problem; activities being pursued in each discipline area; and a prototype analysis/design system in place for interactive operation and visualization of a time-accurate HSCT-inlet simulation.
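Extracting a linear, reduced-order control model from simulation input/output data can be sketched with a generic least-squares (ARX-style) fit. This is a stand-in for illustration, not the authors' actual extraction method; the plant coefficients below are invented.

```python
import numpy as np

# Sketch of reduced-order model extraction: fit y[k+1] = a*y[k] + b*u[k]
# to input/output samples by least squares. Generic system identification,
# not the paper's actual method; plant coefficients are invented.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5            # "unknown" plant coefficients
u = rng.standard_normal(500)         # excitation input
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Regression matrix: each row [y[k], u[k]] maps to the target y[k+1].
X = np.column_stack([y[:-1], u])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef)  # ~[0.9, 0.5]
```

With noise-free data the fit recovers the coefficients essentially exactly; CFD-derived data would add noise and call for richer model structures, but the regression step is the same.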
2012-01-01
Background While research on the impact of global climate change (GCC) on ecosystems and species is flourishing, a fundamental component of biodiversity – molecular variation – has not yet received its due attention in such studies. Here we present a methodological framework for projecting the loss of intraspecific genetic diversity due to GCC. Methods The framework consists of multiple steps that combine 1) hierarchical genetic clustering methods to define comparable units of inference, 2) species accumulation curves (SAC) to infer sampling completeness, and 3) species distribution modelling (SDM) to project the genetic diversity loss under GCC. We suggest procedures for existing data sets as well as for specifically designed studies. We illustrate the approach with two worked examples from a land snail (Trochulus villosus) and a caddisfly (Smicridea (S.) mucronata). Results Sampling completeness was diagnosed on the third-coarsest haplotype clade level for T. villosus and the second-coarsest for S. mucronata. For both species, a substantial species range loss was projected under the chosen climate scenario. However, despite substantial differences in data set quality concerning spatial sampling and sampling depth, no loss of haplotype clades due to GCC was predicted for either species. Conclusions The suggested approach presents a feasible method to tap the rich resources of existing phylogeographic data sets and to guide the design and analysis of studies explicitly designed to estimate the impact of GCC on a currently still neglected level of biodiversity. PMID:23176586
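The sampling-completeness step (step 2) can be sketched with a simple rarefaction-style accumulation curve: for each subsample size, count the mean number of distinct haplotype clades seen in random subsets, and look for a plateau. The haplotype assignments below are synthetic, not from either study species.

```python
import numpy as np

# Accumulation-curve sketch for sampling completeness: mean number of
# distinct haplotype clades observed in random subsamples of size n.
# A plateau suggests sampling is near complete. Data are synthetic.
rng = np.random.default_rng(1)
haplotypes = rng.choice(8, size=120, p=[.3, .2, .15, .1, .1, .05, .05, .05])

def accumulation_curve(samples, reps=200):
    sizes = range(1, len(samples) + 1, 5)
    curve = []
    for n in sizes:
        hits = [len(set(rng.choice(samples, n, replace=False)))
                for _ in range(reps)]
        curve.append(np.mean(hits))
    return np.array(curve)

curve = accumulation_curve(haplotypes)
print(curve[0], curve[-1])  # rises from 1 toward the 8 clades present
```

In the framework, a curve that has clearly plateaued at a given clade level licenses projecting losses at that level; a still-rising curve means deeper clade levels are undersampled.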
Quick-Change Ceramic Flame Holder for High-Output Torches
NASA Technical Reports Server (NTRS)
Haskin, Henry
2010-01-01
Researchers at NASA's Langley Research Center have developed a new ceramic design flame holder with a service temperature of 4,000 F (2,204 C). The combination of high strength and high temperature capability, as well as a twist-lock mounting method to the steel burner, sets this flame holder apart from existing technology.
Medicare Part D and the Nursing Home Setting
ERIC Educational Resources Information Center
Stevenson, David G.; Huskamp, Haiden A.; Newhouse, Joseph P.
2008-01-01
Purpose: The purpose of this article is to explore how the introduction of Medicare Part D is changing the operations of long-term-care pharmacies (LTCPs) and nursing homes, as well as implications of those changes for nursing home residents. Design and Methods: We reviewed existing sources of information and interviewed stakeholders across…
The Financial Burden of Attending University in Georgia: Implications for Rural Students
ERIC Educational Resources Information Center
Chankseliani, Maia
2013-01-01
By evaluating the impact of policies to financially support university students in Georgia, this article demonstrates the systematic spatial disparities that exist in a context of formally equal competition. The author uses a mixed-methods design, combining quantitative evidence on the entire population of Georgian university applicants in…
General Open Systems Theory and the Substrata-Factor Theory of Reading.
ERIC Educational Resources Information Center
Kling, Martin
This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a series of…
General Open Systems Theory and the Substrata-Factor Theory of Reading.
ERIC Educational Resources Information Center
Kling, Martin
This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a…
Taux: A System for Evaluating Sound Feedback in Navigational Tasks
ERIC Educational Resources Information Center
Lutz, Robert J.
2008-01-01
This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first develops the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential of changing these…
An Integrated Model for Effective Knowledge Management in Chinese Organizations
ERIC Educational Resources Information Center
An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen
2013-01-01
Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approach: A comparative analysis is conducted between China and the western…
Nickel-Cadmium Battery Operation Management Optimization Using Robust Design
NASA Technical Reports Server (NTRS)
Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador
1996-01-01
In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries is very important for maintaining nominal space-flight battery performance. The optimization of existing flight-battery operational performance represented a new application of Taguchi methods.
The Association of Physical Activity and Academic Behavior: A Systematic Review
ERIC Educational Resources Information Center
Sullivan, Rachel A.; Kuzel, AnnMarie H.; Vaandering, Michael E.; Chen, Weiyun
2017-01-01
Background: In this systematic review, we assessed the existing research describing the effects of physical activity (PA) on academic behavior, with a special focus on the effectiveness of the treatments applied, study designs, outcome measures, and results. Methods: We obtained data from various journal search engines and 218 journal articles…
Remediating Misconception on Climate Change among Secondary School Students in Malaysia
ERIC Educational Resources Information Center
Karpudewan, Mageswary; Roth, Wolff-Michael; Chandrakesan, Kasturi
2015-01-01
Existing studies report on secondary school students' misconceptions related to climate change; they also report on the methods of teaching as reinforcing misconceptions. This quasi-experimental study was designed to test the null hypothesis that a curriculum based on constructivist principles does not lead to greater understanding and fewer…
ERIC Educational Resources Information Center
Muller, Susan M.; Gorrow, Teena R.; Schneider, Sidney R.
2009-01-01
Objective: The authors designed this study to determine if differences exist between male and female collegiate athletes' supplement use and behaviors to modify body appearance. Participants: Collegiate athletes who participated in this study were 241 females and 210 males, aged 17 to 28 years. Method: Participants completed a questionnaire about…
Chaos in the fractional order logistic delay system: Circuit realization and synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baskonus, Haci Mehmet; Hammouch, Zakia; Mekkaoui, Toufik
2016-06-08
In this paper, we present a numerical study and a circuit design to prove the existence of chaos in the fractional-order logistic delay system. In addition, we investigate an active control synchronization scheme for this system. Numerical and circuit simulations show the effectiveness and feasibility of this method.
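A standard numerical diagnostic for chaos of the kind the paper establishes is sensitive dependence on initial conditions. The sketch below applies it to the classical integer-order, non-delayed logistic map as a simpler analogue of the fractional-order delay system studied (simulating the fractional delay dynamics themselves would require, e.g., a Grünwald–Letnikov scheme).

```python
# Chaos diagnostic sketch: sensitive dependence on initial conditions in the
# classical logistic map x -> r*x*(1-x), used here as a simpler integer-order,
# non-delayed analogue of the fractional-order logistic delay system.
def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-10)  # perturb the initial condition slightly
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # order-one separation despite the 1e-10 perturbation
```

At r = 4 the perturbation doubles roughly every step (Lyapunov exponent ln 2), so a 1e-10 difference saturates to order one within the 60 iterations, while both orbits stay bounded in [0, 1].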
Fashion Students Choose How to Learn by Constructing Videos of Pattern Making
ERIC Educational Resources Information Center
Cavanagh, Michaella; Peté, Marí
2017-01-01
This paper analyses new learning experiences of first year pattern technology students at a university of technology, in the context of selected characteristics of authentic learning theories. The paper contributes to existing knowledge by proposing a method that could be followed for design-based subjects in a vocational education setting.…
At many of the sites where we have been asked to assist in site characterization, we have discovered severe discrepancies that new technologies may be able to prevent. This presentation is designed to illustrate these new technologies or unique uses of existing technology and the...
Demulsification; industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lissant, K.J.
1983-01-01
For scientists involved in the problems of selecting or designing demulsification programs. The author shows clearly why no pat formula exists to help out, but does point out the initial information required to start work. Theory. Testing. Demulsification of oil-in-water emulsions. Demulsification of water-in-oil emulsions. Demulsification of petroleum emulsions. Additional methods and areas in demulsification.
ERIC Educational Resources Information Center
Emanouilidis, Emanuel
2005-01-01
Latin squares have existed for hundreds of years but it wasn't until rather recently that Latin squares were used in other areas such as statistics, graph theory, coding theory and the generation of random numbers as well as in the design and analysis of experiments. This note describes Latin and diagonal Latin squares, a method of constructing…
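One classical construction the note alludes to is the modular addition table: for any n, L[i][j] = (i + j) mod n yields a Latin square, since each symbol appears exactly once in every row and column.

```python
# Classical Latin square construction: the addition table of the integers
# mod n. Each symbol 0..n-1 occurs exactly once per row and per column.
def latin_square(n):
    return [[(i + j) % n for j in range(n)] for i in range(n)]

L = latin_square(5)
for row in L:
    print(row)
# Every row and every column is a permutation of 0..4.
```

Constructions like this underlie the use of Latin squares in experimental design, where rows, columns, and symbols block out two nuisance factors and one treatment factor simultaneously.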
Mapped Plot Patch Size Estimates
Paul C. Van Deusen
2005-01-01
This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...
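The basic estimator for mapped plots can be sketched with a generic ratio of means: each plot contributes the area that falls in the condition class of interest and the attribute total on that area. The numbers below are synthetic, and this is a textbook stand-in rather than the paper's exact formulas.

```python
# Ratio-of-means sketch for mapped plots: each plot records the area a_i
# falling in the condition class of interest and the attribute total y_i
# on that area. A per-unit-area estimate is R = sum(y_i) / sum(a_i).
# Synthetic values; a generic estimator, not the paper's exact formulas.
areas = [1.0, 0.6, 0.3, 1.0, 0.8]     # mapped in-class plot areas (assumed units)
totals = [12.0, 7.5, 3.0, 11.0, 9.1]  # attribute totals on those areas

R = sum(totals) / sum(areas)
print(round(R, 3))  # per-unit-area ratio-of-means estimate
```

Because both numerator and denominator are random, the variance of R must be estimated with a ratio-estimator formula rather than the simple-mean formula, which is part of what makes mapped-plot analysis subtler than it first appears.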
Students' Perceptions of a Blended Web-Based Learning Environment
ERIC Educational Resources Information Center
Chandra, Vinesh; Fisher, Darrell L.
2009-01-01
The enhanced accessibility, affordability and capability of the Internet has created enormous possibilities in terms of designing, developing and implementing innovative teaching methods in the classroom. As existing pedagogies are revamped and new ones are added, there is a need to assess the effectiveness of these approaches from the students'…
Lessons Learned from the Whole Child and Coordinated School Health Approaches
ERIC Educational Resources Information Center
Rasberry, Catherine N.; Slade, Sean; Lohrmann, David K.; Valois, Robert F.
2015-01-01
Background: The new Whole School, Whole Community, Whole Child (WSCC) model, designed to depict links between health and learning, is founded on concepts of coordinated school health (CSH) and a whole child approach to education. Methods: The existing literature, including scientific articles and key publications from national agencies and…
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
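The classical parallel tempering scheme that the unified algorithm generalizes can be sketched in a few lines: neighbouring replicas at inverse temperatures β_i and β_j swap configurations with probability min(1, exp((β_i − β_j)(E_i − E_j))). This is the generic classical update, not the off-diagonal expansion update itself.

```python
import math

# Parallel tempering sketch: replicas at inverse temperatures beta_i, beta_j
# with energies E_i, E_j swap configurations with probability
# min(1, exp((beta_i - beta_j) * (E_i - E_j))). Generic classical scheme,
# not the off-diagonal expansion QMC update.
def swap_probability(beta_i, beta_j, E_i, E_j):
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# Moving the lower-energy configuration to the colder replica is always
# accepted; the reverse move is accepted only stochastically.
print(swap_probability(1.0, 0.5, -1.0, -2.0))  # 1.0 (favourable swap)
print(swap_probability(1.0, 0.5, -2.0, -1.0))  # exp(-0.5), about 0.607
```

The swap rule satisfies detailed balance for the joint distribution of the replicas, which is why the temperature ladder can be traversed without biasing either ensemble.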
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent in large-scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches for dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.
Physiologic measures of sexual function in women: a review.
Woodard, Terri L; Diamond, Michael P
2009-07-01
To review and describe physiologic measures of assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades, each with its benefits and limitations. Many physiologic methods exist, but most are not well validated. In addition, most physiologic measures have not been successfully correlated with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in the context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.
Research on single-chip microcomputer controlled rotating magnetic field mineralization model
NASA Astrophysics Data System (ADS)
Li, Yang; Qi, Yulin; Yang, Junxiao; Li, Na
2017-08-01
As one method of ore separation, magnetic separation offers stable operation, a simple process flow, high beneficiation efficiency, and no chemical pollution of the environment. However, existing magnetic separators are largely mechanical, their operation is not flexible, and they cannot change the magnetic field parameters according to the precision required for the ore. Taking a rotating magnetic field under single-chip microcomputer control as the research object, a rotating-magnetic-field processing prototype was designed and built, in which the microcomputer's PWM pulse output controls the strength and rotation speed of the magnetic field. Generating the PWM pulses purely in software to control rotating-magnetic-field beneficiation offers higher flexibility and accuracy at lower cost, and makes full use of the single-chip microcomputer's capabilities.
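The control principle rests on a simple property of PWM: the average of the waveform equals the duty cycle, so varying the duty cycle varies the average coil current and hence the field strength. The sketch below illustrates this in Python with an assumed tick count, rather than as microcontroller firmware.

```python
# Software PWM sketch: build one period of a PWM waveform from a duty cycle
# and verify its average equals the duty cycle - the property the
# microcontroller exploits to set average coil current (field strength).
# The tick count per period is an assumed value.
def pwm_period(duty, ticks=1000):
    on = round(duty * ticks)
    return [1] * on + [0] * (ticks - on)

wave = pwm_period(0.35)
average = sum(wave) / len(wave)
print(average)  # 0.35
```

On the actual hardware the same idea is realized by a timer-driven output pin; rotation speed is set separately by the rate at which the duty cycles of the phase coils are cycled.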
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Development of wheelchair caster testing equipment and preliminary testing of caster models
Mhatre, Anand; Ott, Joseph
2017-01-01
Background Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives To develop and demonstrate the feasibility of a caster system test method. Method Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion The new caster test system has the capability to incorporate the necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that produces the failure modes common during wheelchair use in LREs. PMID:29062762
X-ray optics simulation and beamline design for the APS upgrade
NASA Astrophysics Data System (ADS)
Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean
2017-08-01
The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by between two and three orders of magnitude. The APS upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will also be upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.
Large rotorcraft transmission technology development program
NASA Technical Reports Server (NTRS)
Mack, J. C.
1983-01-01
Testing of a U.S. Army XCH-62 HLH aft rotor transmission under NASA Contract NAS 3-22143 was successfully completed. This test establishes the feasibility of large, high power rotorcraft transmissions as well as demonstrating the resolution of deficiencies identified during the HLH advanced technology programs and reported in USAAMRDL-TR-77-38. Over 100 hours of testing were conducted. At the 100% design power rating of 10,620 horsepower, the power transferred through a single spiral bevel gear mesh is more than twice that of current helicopter bevel gearing. In the original design of these gears, industry-wide design methods were employed, and failures were experienced that identified problem areas unique to gear size. To remedy this technology shortfall, a program was developed to predict gear stresses using finite element analysis for complete and accurate representation of the gear tooth and supporting structure. To validate the finite element methodology, gear strain data from the existing U.S. Army HLH aft transmission were acquired, and existing data from smaller gears were made available.
Fabrication and evaluation of advanced titanium structural panels for supersonic cruise aircraft
NASA Technical Reports Server (NTRS)
Payne, L.
1977-01-01
Flightworthy primary structural panels were designed, fabricated, and tested to investigate two advanced fabrication methods for titanium alloys. Skin-stringer panels fabricated using the weldbraze process, and honeycomb-core sandwich panels fabricated using a diffusion bonding process, were designed to replace an existing integrally stiffened shear panel on the upper wing surface of the NASA YF-12 research aircraft. The investigation included ground testing and Mach 3 flight testing of full-scale panels, and laboratory testing of representative structural element specimens. Test results obtained on full-scale panels and structural element specimens indicate that both of the fabrication methods investigated are suitable for primary structural applications on future civil and military supersonic cruise aircraft.
Experimental and analytical research on the aerodynamics of wind driven turbines. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrbach, C.; Wainauski, H.; Worobel, R.
1977-12-01
This aerodynamic research program was aimed at providing a reliable, comprehensive data base on a series of wind turbine models covering a broad range of the prime aerodynamic and geometric variables. Such data obtained under controlled laboratory conditions on turbines designed by the same method, of the same size, and tested in the same wind tunnel had not been available in the literature. Moreover, this research program was further aimed at providing a basis for evaluating the adequacy of existing wind turbine aerodynamic design and performance methodology, for assessing the potential of recent advanced theories and for providing a basis for further method development and refinement.
Reduction of Unsteady Forcing in a Vaned, Contra-Rotating Transonic Turbine Configuration
NASA Technical Reports Server (NTRS)
Clark, John
2010-01-01
HPT blade unsteadiness in the presence of a downstream vane consistent with contra-rotation is characterized by strong interaction at the first harmonic of downstream vane passing. An existing stage-and-one-half transonic turbine rig design was used as a baseline to investigate means of reducing this blade-vane interaction. The methods assessed included aerodynamic shaping of the HPT blades, 3D stacking of the downstream vane, and steady pressure-side blowing. Of the methods assessed, a combination of vane bowing and steady pressure-side blowing produced the most favorable result. Transonic turbine experiments are planned to assess predictive accuracy for the baseline turbine and any design improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method yield more efficient performance than existing methods. PMID:23202190
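The energy-minimizing clustering idea above can be sketched with the common first-order radio model: pick as cluster head the node that minimizes total intra-cluster transmission energy. The radio constants and node layout below are generic textbook demo values, not the parameters or algorithm from the paper.

```python
import math

E_ELEC = 50e-9      # J/bit, electronics energy (textbook first-order model)
EPS_AMP = 100e-12   # J/bit/m^2, free-space amplifier energy

def tx_energy(bits, dist):
    """Energy to transmit `bits` over distance `dist` (free-space term)."""
    return bits * (E_ELEC + EPS_AMP * dist ** 2)

def best_cluster_head(nodes, bits=4000):
    """Pick the node minimising total intra-cluster transmission energy."""
    def total_cost(head):
        hx, hy = head
        return sum(tx_energy(bits, math.hypot(x - hx, y - hy))
                   for (x, y) in nodes if (x, y) != head)
    return min(nodes, key=total_cost)

# A non-uniform layout: three clustered nodes and one outlier.
nodes = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(best_cluster_head(nodes))
```

A multi-tiered scheme like the paper's would repeat this kind of cost minimization per tier and rotate heads to balance residual energy; this sketch shows only the single-cluster cost criterion.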
Sutton, Deanna A
2007-09-01
The recovery of Coccidioides spp. by culture and confirmation utilizing the AccuProbe nucleic acid hybridization method by GenProbe remain the definitive diagnostic method. Biosafety considerations from specimen collection through culture confirmation in the mycology laboratory are critical, as acquisition of coccidioidomycosis by laboratory workers is well documented. The designation of Coccidioides spp. as select agents of potential bioterrorism has mandated strict regulation of their transport and inventory. The genus appears generally susceptible, in vitro, although no defined breakpoints exist. Susceptibility testing may assist in documenting treatment failures.
The acoustical design of vehicles-a challenge for qualitative evaluation
NASA Astrophysics Data System (ADS)
Schulte-Fortkamp, Brigitte; Genuit, Klaus; Fiebig, Andre
2005-09-01
Whenever the acoustical design of vehicles is explored, the crucial question about the appropriate method of evaluation arises. Research shows that not only acoustic but also non-acoustic parameters have a major influence on the way sounds are evaluated. Therefore, new methods of evaluation have to be implemented. Methods are needed which give the opportunity to test the quality of the given ambience and to register the effects and evaluations in their functional interdependence as well as the influence of personal and contextual factors. Moreover, new methods have to give insight into processes of evaluation and their contextual parameters. In other words, the task of evaluating acoustical ambiences consists of designating a set of social, psychological, and cultural conditions which are important to determine particular individual and collective behavior, attitudes, and also emotions relative to the given ambience. However, no specific recommendations exist yet which comprise particular descriptions of how to assess those specific sound effects. That is why there is a need to develop alternative methods of evaluation with which the effects of acoustical ambiences can be better predicted. A method of evaluation will be presented which incorporates a new sensitive approach for the evaluation of vehicle sounds.
Lahmann, John M; Benson, James D; Higgins, Adam Z
2018-02-01
For more than fifty years the human red blood cell (RBC) has been a widely studied model for transmembrane mass transport. Existing literature spans myriad experimental designs with varying results and physiologic interpretations. In this review, we examine the kinetics and mechanisms of membrane transport in the context of RBC cryopreservation. We include a discussion of the pathways for water and glycerol permeation through the cell membrane and the implications for mathematical modeling of the membrane transport process. In particular, we examine the concentration dependence of water and glycerol transport and provide equations for estimating permeability parameters as a function of concentration based on a synthesis of literature data. This concentration-dependent transport model may allow for design of improved methods for post-thaw removal of glycerol from cryopreserved blood. More broadly, the consideration of the concentration dependence of membrane permeability parameters may be important for other cell types as well, especially for design of methods for equilibration with the highly concentrated solutions used for vitrification. Copyright © 2017 Elsevier Inc. All rights reserved.
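The two-parameter membrane transport formalism this review builds on can be sketched with constant Lp and Ps; the paper's contribution is making these parameters concentration-dependent, which is omitted here. All values below are arbitrary dimensionless demo numbers, not fitted RBC parameters.

```python
def simulate_2p(lp=1.0, ps=0.5, area=1.0, v_w=1.0, n_s=0.0, n_non=0.3,
                m_ext_non=0.3, m_ext_s=1.0, dt=1e-3, steps=1000):
    """Forward-Euler integration of the two-parameter (2P) transport model.

    v_w  : osmotically active water volume
    n_s  : moles of permeating solute (e.g. glycerol) inside the cell
    n_non: fixed intracellular non-permeating osmoles
    """
    for _ in range(steps):
        m_int_s = n_s / v_w        # intracellular permeating osmolality
        m_int_non = n_non / v_w    # intracellular non-permeating osmolality
        # Water follows the total osmotic gradient; solute follows its own.
        dv = -lp * area * ((m_ext_non + m_ext_s) - (m_int_non + m_int_s))
        dn = ps * area * (m_ext_s - m_int_s)
        v_w += dv * dt
        n_s += dn * dt
    return v_w, n_s

# Abrupt exposure to a permeating solute: the cell first shrinks rapidly as
# water leaves, then slowly re-swells as the solute loads.
print(simulate_2p(steps=200))    # early, still shrinking
print(simulate_2p(steps=1000))   # near the volume minimum
```

This shrink-swell response is the behavior that glycerol addition and post-thaw removal protocols must keep within osmotic tolerance limits.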
NASA Astrophysics Data System (ADS)
So, Sung-Sau; Karplus, Martin
2001-07-01
Glycogen phosphorylase (GP) is an important enzyme that regulates blood glucose level and a key therapeutic target for the treatment of type II diabetes. In this study, a number of potential GP inhibitors are designed with a variety of computational approaches. They include the applications of MCSS, LUDI and CoMFA to identify additional fragments that can be attached to existing lead molecules; the use of 2D and 3D similarity-based QSAR models (HQSAR and SMGNN) and of the LUDI program to identify novel molecules that may bind to the glucose binding site. The designed ligands are evaluated by a multiple screening method, which is a combination of commercial and in-house ligand-receptor binding affinity prediction programs used in a previous study (So and Karplus, J. Comp.-Aid. Mol. Des., 13 (1999), 243-258). Each method is used at an appropriate point in the screening, as determined by both the accuracy of the calculations and the computational cost. A comparison of the strengths and weaknesses of the ligand design approaches is made.
Scene text recognition in mobile applications by character descriptor and structure configuration.
Yi, Chucai; Tian, Yingli
2014-07-01
Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... behavior of items relied on for safety. (b) Facility and system design and facility layout must be based on... existing facilities. (a) Baseline design criteria. Each prospective applicant or licensee shall address the following baseline design criteria in the design of new facilities. Each existing licensee shall address the...
Ye, Yanmei; Wu, Cifang; Cheng, Chengbiao; Qiu, Lingzhang; Huang, Shengyu; Zheng, Ruihui
2002-09-01
The concept and characteristics of engineering designs for sustainable agricultural land consolidation projects were discussed in this paper. Principles, basic methods and procedures of engineering designs for agricultural land consolidation projects were put forward, and were successfully adopted for designing agricultural land consolidation in the Xuemeiyang region of Changtai County, including diversity designs for sustainable land use; engineering designs for soil improvement, roads, ditches and drains that protect existing animal habitats; and the design of ecological shelter-forests in farmland. Moreover, from sustainable economic, ecological and social points of view, the results of these engineering designs were evaluated on the basis of fourteen important indexes. After these engineering designs were carried out, the eco-environments and agricultural production conditions were significantly improved, and farm income in the planned regions increased.
MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR.
Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo
2015-11-16
Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. This approach becomes impractical when many target sequences must satisfy the same stringent, allele-invariant constraints, as in quantitative PCR (qPCR). To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable, and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be conveniently used for experiments requiring primer design, especially real-time qPCR. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
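The kind of single-primer filtering constraints that a pipeline like MRPrimer applies before pairing and ranking can be sketched as follows. The thresholds and the Wallace-rule melting temperature are common rule-of-thumb choices, not the exact constraints from the paper, and the homology/specificity test is omitted entirely.

```python
def gc_content(seq):
    """Fraction of G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def melting_temp(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C); adequate for short primers."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def is_valid_primer(seq, length=(19, 23), gc=(0.4, 0.6), tm=(58, 62)):
    """Check a candidate primer against length, GC-content and Tm windows."""
    return (length[0] <= len(seq) <= length[1]
            and gc[0] <= gc_content(seq) <= gc[1]
            and tm[0] <= melting_temp(seq) <= tm[1])

print(is_valid_primer("ATGCGTACCGTTAGCATGCA"))   # balanced 20-mer
print(is_valid_primer("ATATATATATATATATATAT"))   # fails the GC window
```

In a MapReduce setting, a filter like this runs in the map phase over every candidate substring, with pairing constraints (Tm difference, product size) and specificity checks handled in later rounds.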
Project Cyclops: a Design Study of a System for Detecting Extraterrestrial Intelligent Life
NASA Technical Reports Server (NTRS)
1972-01-01
The requirements in hardware, manpower, time and funding to conduct a realistic effort aimed at detecting the existence of extraterrestrial intelligent life are examined. The methods used are limited to present or near term future state-of-the-art techniques. Subjects discussed include: (1) possible methods of contact, (2) communication by electromagnetic waves, (3) antenna array and system facilities, (4) antenna elements, (5) signal processing, (6) search strategy, and (7) radio and radar astronomy.
A SINDA thermal model using CAD/CAE technologies
NASA Technical Reports Server (NTRS)
Rodriguez, Jose A.; Spencer, Steve
1992-01-01
The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre and post processor and the ability to do interdisciplinary analysis, will be described.
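The finite-difference style of lumped-node thermal analysis described above can be illustrated with a minimal explicit 1-D conduction sketch. The geometry, diffusivity, and boundary temperatures are arbitrary demo values, unrelated to SINDA's actual input format.

```python
def conduct_1d(temps, alpha=0.1, dt=0.1, dx=1.0, steps=100):
    """Explicit FTCS update of a 1-D conduction network.

    The end nodes are held at fixed temperature (boundary nodes).
    Stable for the defaults because alpha*dt/dx**2 = 0.01 <= 0.5.
    """
    t = list(temps)
    r = alpha * dt / dx ** 2
    for _ in range(steps):
        new = t[:]                       # boundary nodes stay fixed
        for i in range(1, len(t) - 1):
            new[i] = t[i] + r * (t[i - 1] - 2 * t[i] + t[i + 1])
        t = new
    return t

# A bar held at 100 on the left and 0 on the right relaxes toward the
# linear steady-state profile 100, 75, 50, 25, 0.
print(conduct_1d([100, 50, 50, 50, 0], steps=5000))
```

In a SINDA-style workflow, the CAD/CAE step generates the node volumes and conductor values for a network like this instead of hand-computed `r` factors.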
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
Laboratory investigations of earthquake dynamics
NASA Astrophysics Data System (ADS)
Xia, Kaiwen
In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, Polycarbonate) held together by uniaxial compression. As a unique unit of the experimental design, a controlled exploding wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., pre-existing fault, tectonic loading, and triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity than other existing experimental methods. Using this experimental approach, we have investigated several problems: dynamics of earthquake faulting occurring along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, sub-Rayleigh to supershear rupture transition, crack-like to pulse-like rupture transition, self-healing (Heaton) pulse, and rupture directionality.
NASA Astrophysics Data System (ADS)
Au, How Meng
The aircraft design process traditionally starts with a given set of top-level requirements. These requirements can be aircraft performance related, such as the fuel consumption, cruise speed, or takeoff field length, or aircraft geometry related, such as the cabin height or cabin volume. This thesis proposes a new aircraft design process in which some of the top-level requirements are not explicitly specified. Instead, these previously specified parameters are now determined through the use of the Price-Per-Value-Factor (PPVF) index. This design process is well suited for design projects where general consensus on the top-level requirements does not exist. One example is the design of small commuter airliners. The above-mentioned value factor comprises productivity, cabin volume, cabin height, cabin pressurization, mission fuel consumption, and field length, each weighted to a different exponent. The relative magnitude and positive/negative signs of these exponents are in agreement with general experience. The value factors of commuter aircraft are shown to have improved over a period of four decades. In addition, the purchase price is shown to vary linearly with the value factor. The initial aircraft sizing process can be manpower intensive if the calculations are done manually. By incorporating automation into the process, the design cycle can be shortened considerably. The Fortran program functions and subroutines in this dissertation, in addition to the design and optimization methodologies described above, contribute to the reduction of manpower required for the initial sizing process. By combining the new design process mentioned above and the PPVF as the objective function, an optimization study is conducted on the design of a 20-seat regional jet. Handbook methods for aircraft design are written into a Fortran code. A genetic algorithm is used as the optimization scheme.
The result of the optimization shows that aircraft designed to this PPVF index can be competitive compared to existing turboprop commuter aircraft. The process developed can be applied to other classes of aircraft with the designer modifying the cost function based upon the design goals.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
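The sequential selection idea above, a mixture of link-tracing from an active set and random selection, can be sketched as follows. The mixture weight `p_link`, the toy network, and the "every sampled unit stays active" rule are hypothetical illustrations; the paper's designs also include design-unbiased inference via resampling, which is omitted here.

```python
import random

def adaptive_sample(graph, n, p_link=0.7, seed=0):
    """Sequentially draw n units, mixing link-tracing with random selection.

    graph: dict mapping each unit to a list of linked units.
    """
    rng = random.Random(seed)
    nodes = list(graph)
    sample, active = [], set()
    while len(sample) < n:
        # Links leading out of the active set to not-yet-sampled units.
        frontier = [v for u in active for v in graph[u] if v not in sample]
        if frontier and rng.random() < p_link:
            nxt = rng.choice(frontier)       # adaptive link-tracing draw
        else:
            nxt = rng.choice([v for v in nodes if v not in sample])
        sample.append(nxt)
        active.add(nxt)      # simple rule: every new unit stays active
    return sample

graph = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3], 5: [], 6: []}
print(adaptive_sample(graph, 4))
```

Fixing `n` directly is what gives this family control over sample size, and tuning `p_link` controls the proportion of effort allocated to adaptive selections, the two advantages the abstract highlights.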
Evaluating the utility of two gestural discomfort evaluation methods
Son, Minseok; Jung, Jaemoon; Park, Woojin
2017-01-01
Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016
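The agreement analysis described above reduces to correlating per-gesture discomfort scores from each candidate method against the prolonged-use baseline. Here is a generic Pearson correlation sketch; the two rating vectors are invented demo numbers, not the study's data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-gesture scores: posture-based evaluation vs. discomfort
# ratings collected after actual prolonged, repetitive use.
posture_scores = [3, 5, 2, 6, 4]
baseline_ratings = [2.5, 5.5, 2.0, 6.5, 3.5]
print(round(pearson(posture_scores, baseline_ratings), 3))
```

A high coefficient like the one this toy data produces is what "good agreement with the baseline" means operationally in such a validation.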
CS_TOTR: A new vertex centrality method for directed signed networks based on status theory
NASA Astrophysics Data System (ADS)
Ma, Yue; Liu, Min; Zhang, Peng; Qi, Xingqin
Measuring the importance (or centrality) of vertices in a network is a significant topic in complex network analysis, with applications in diverse domains such as disease control, the spread of rumors, and viral marketing. Existing studies mainly focus on social networks with only positive (or friendship) relations, while signed networks, which also contain negative (or enemy) relations, are seldom studied. Various signed networks commonly exist in the real world, e.g. networks encoding friendship/enmity, love/hate or trust/mistrust relationships. In this paper, we propose a new centrality method named CS_TOTR to rank vertices in directed signed networks. To design this new method, we use the “status theory” for signed networks, and also adopt the vertex ranking algorithm for a tournament and the topological sorting algorithm for a general directed graph. We apply this new centrality method to the famous Sampson Monastery dataset and obtain a convincing result that shows its validity.
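Two of the ingredients the abstract names, the status-theory reading of signed edges and topological sorting, can be sketched together. This is a hedged illustration only: the full CS_TOTR method (tournament ranking, handling of cycles and ties) is more involved, and this sketch assumes the reduced graph is acyclic.

```python
from collections import deque

def status_order(pos_edges, neg_edges):
    """Order vertices from lowest to highest status in a signed digraph.

    Under status theory, a positive edge u->v asserts that v outranks u,
    while a negative edge u->v asserts that u outranks v; the latter is
    rewritten as a positive edge v->u, after which a topological sort
    (Kahn's algorithm) yields a status ordering.
    """
    edges = list(pos_edges) + [(v, u) for (u, v) in neg_edges]
    verts = {x for e in edges for x in e}
    succ = {v: [] for v in verts}
    indeg = {v: 0 for v in verts}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(sorted(v for v in verts if indeg[v] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order  # lowest status first

# a->b (+) and b->c (+) mean b outranks a and c outranks b; c->a (-)
# means c outranks a, so the status order is a < b < c.
print(status_order([("a", "b"), ("b", "c")], [("c", "a")]))
```

Real signed networks are rarely acyclic, which is where the tournament-ranking component of CS_TOTR would take over; this sketch stops at the consistent case.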
Barcode extension for analysis and reconstruction of structures
NASA Astrophysics Data System (ADS)
Myhrvold, Cameron; Baym, Michael; Hanikel, Nikita; Ong, Luvena L.; Gootenberg, Jonathan S.; Yin, Peng
2017-03-01
Collections of DNA sequences can be rationally designed to self-assemble into predictable three-dimensional structures. The geometric and functional diversity of DNA nanostructures created to date has been enhanced by improvements in DNA synthesis and computational design. However, existing methods for structure characterization typically image the final product or laboriously determine the presence of individual, labelled strands using gel electrophoresis. Here we introduce a new method of structure characterization that uses barcode extension and next-generation DNA sequencing to quantitatively measure the incorporation of every strand into a DNA nanostructure. By quantifying the relative abundances of distinct DNA species in product and monomer bands, we can study the influence of geometry and sequence on assembly. We have tested our method using 2D and 3D DNA brick and DNA origami structures. Our method is general and should be extensible to a wide variety of DNA nanostructures.
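The quantification step described above, turning sequencing read counts from the product and monomer bands into per-strand incorporation estimates, can be sketched as follows. The normalization scheme and the counts are invented demo choices, not the paper's exact analysis pipeline.

```python
def incorporation_fractions(product_counts, monomer_counts):
    """Estimate per-strand incorporation from barcode read counts.

    Each band's counts are normalized to relative abundances, then each
    strand is scored by the fraction of its abundance found in the
    product band.
    """
    p_tot = sum(product_counts.values())
    m_tot = sum(monomer_counts.values())
    out = {}
    for strand in product_counts:
        p = product_counts[strand] / p_tot
        m = monomer_counts.get(strand, 0) / m_tot
        out[strand] = p / (p + m) if (p + m) else 0.0
    return out

# Invented read counts: s1 and s2 incorporate well, s3 mostly stays monomeric.
product = {"s1": 900, "s2": 850, "s3": 50}
monomer = {"s1": 100, "s2": 150, "s3": 950}
fracs = incorporation_fractions(product, monomer)
print({k: round(v, 2) for k, v in sorted(fracs.items())})
```

Comparing such per-strand scores across design variants is what lets the method attribute poor assembly to particular positions or sequences in the structure.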
Barcode extension for analysis and reconstruction of structures
Myhrvold, Cameron; Baym, Michael; Hanikel, Nikita; Ong, Luvena L; Gootenberg, Jonathan S; Yin, Peng
2017-01-01
Collections of DNA sequences can be rationally designed to self-assemble into predictable three-dimensional structures. The geometric and functional diversity of DNA nanostructures created to date has been enhanced by improvements in DNA synthesis and computational design. However, existing methods for structure characterization typically image the final product or laboriously determine the presence of individual, labelled strands using gel electrophoresis. Here we introduce a new method of structure characterization that uses barcode extension and next-generation DNA sequencing to quantitatively measure the incorporation of every strand into a DNA nanostructure. By quantifying the relative abundances of distinct DNA species in product and monomer bands, we can study the influence of geometry and sequence on assembly. We have tested our method using 2D and 3D DNA brick and DNA origami structures. Our method is general and should be extensible to a wide variety of DNA nanostructures. PMID:28287117
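The per-strand quantification the abstract describes (relative abundance in product versus monomer bands) reduces, at its simplest, to a ratio of read counts. A minimal sketch with hypothetical strand names and counts; the paper's actual sequencing pipeline is not reproduced here:

```python
# Hedged sketch: per-strand incorporation from barcode read counts.
# Strand names and counts are hypothetical placeholders.
def incorporation_fractions(product_counts, monomer_counts):
    """For each strand, the fraction of its reads found in the product band."""
    fractions = {}
    for strand in product_counts:
        p = product_counts[strand]
        m = monomer_counts.get(strand, 0)
        fractions[strand] = p / (p + m) if (p + m) > 0 else 0.0
    return fractions

product = {"s1": 900, "s2": 450, "s3": 10}
monomer = {"s1": 100, "s2": 450, "s3": 990}
print(incorporation_fractions(product, monomer))
```

A strand that assembles well (s1) shows a high fraction; one left mostly as monomer (s3) shows a low fraction.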
Robust Takagi-Sugeno fuzzy control for fractional order hydro-turbine governing system.
Wang, Bin; Xue, Jianyi; Wu, Fengjiao; Zhu, Delan
2016-11-01
A robust fuzzy control method for the fractional order hydro-turbine governing system (FOHGS) in the presence of random disturbances is investigated in this paper. Firstly, the mathematical model of the FOHGS is introduced and, based on Takagi-Sugeno (T-S) fuzzy rules, its generalized T-S fuzzy model is presented. Secondly, based on fractional order Lyapunov stability theory, a novel T-S fuzzy control method is designed for the stability control of the FOHGS. Thirdly, a relatively loose sufficient stability condition is acquired, which can be transformed via the Schur complement into a group of linear matrix inequalities (LMIs); a strict mathematical derivation is given. Furthermore, the control method can resist random disturbances, demonstrating good robustness. Simulation results indicate that the designed fractional order T-S fuzzy control scheme works well compared with the existing method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
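The LMI-based stability argument above rests on Lyapunov feasibility. As a hedged illustration only, the following checks the classical integer-order Lyapunov condition for a single hypothetical linear subsystem; the fractional-order calculus and fuzzy-blending machinery of the paper are not reproduced:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hedged sketch of the Lyapunov feasibility idea behind T-S fuzzy LMI
# conditions, for one hypothetical integer-order linear subsystem A.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])  # illustrative stable subsystem matrix
Q = np.eye(2)
# Solve A^T P + P A = -Q; a positive definite P certifies stability.
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print("P positive definite:", bool(np.all(eigs > 0)))
```

In the T-S fuzzy setting, an analogous condition must hold for a common P across all rule subsystems, which is what the group of LMIs encodes.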
40 CFR 165.87 - Design and capacity requirements for existing structures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... existing containment structure: (1) The containment structure must be constructed of steel, reinforced... existing structures. 165.87 Section 165.87 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Structures § 165.87 Design and capacity requirements for existing structures. (a) For all existing...
Overlapping illusions by transformation optics without any negative refraction material.
Sun, Fei; He, Sailing
2016-01-11
A novel method to achieve an overlapping illusion without any negative-refractive-index material is introduced, with the help of an optic-null medium (ONM) designed by an extreme spatial stretching transformation. Unlike previous methods of achieving such an optical illusion by transformation optics (TO), our method can achieve power combination and reshape the radiation pattern at the same time. Unlike overlapping illusions built from negative-refractive-index materials, our method is not sensitive to material losses. Other advantages over existing methods are discussed. Numerical simulations are given to verify the performance of the proposed devices.
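For context, the standard transformation-optics relations (the generic textbook form, not the paper's specific ONM derivation) map a coordinate transformation with Jacobian Λ to the transformed material parameters:

```latex
\varepsilon'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\varepsilon^{ij}}{\det\Lambda},
\qquad
\mu'^{\,i'j'} = \frac{\Lambda^{i'}_{\ i}\,\Lambda^{j'}_{\ j}\,\mu^{ij}}{\det\Lambda},
\qquad
\Lambda^{i'}_{\ i} = \frac{\partial x'^{\,i'}}{\partial x^{i}}
```

An extreme stretching transformation drives det Λ toward zero along one direction, which is what produces the highly anisotropic optic-null medium.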
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. 
The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
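The "compact representations" referred to above are vectors of PCE coefficients, from which response statistics follow in closed form. A minimal sketch with hypothetical coefficients for a 1D probabilists' Hermite basis:

```python
import numpy as np

# Hedged sketch: once a response u is represented by a Hermite PCE,
# u ~ sum_i c_i psi_i(xi), its mean and variance follow directly from the
# coefficients. Coefficients and basis norms below are hypothetical.
c = np.array([1.2, 0.3, -0.1, 0.05])    # PCE coefficients c_0..c_3
norms = np.array([1.0, 1.0, 2.0, 6.0])  # <psi_i^2> for psi_i = He_i, i! norms
mean = c[0]                              # E[u] = c_0 (psi_0 = 1)
variance = np.sum(c[1:]**2 * norms[1:])  # Var[u] = sum_{i>=1} c_i^2 <psi_i^2>
print(mean, variance)
```

Because mean and variance are polynomial in the coefficients, their design sensitivities reduce to sensitivities of the coefficients themselves, which is the efficiency the thesis exploits.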
Guo, Jia; Meakin, James A; Jezzard, Peter; Wong, Eric C
2015-03-01
Velocity-selective arterial spin labeling (VSASL) tags arterial blood on a velocity-selective (VS) basis and eliminates the tagging/imaging gap and associated transit delay sensitivity observed in other ASL tagging methods. However, the flow-weighting gradient pulses in VS tag preparation can generate eddy currents (ECs), which may erroneously tag the static tissue and create artificial perfusion signal, compromising the accuracy of perfusion quantification. A novel VS preparation design is presented using an eight-segment B1-insensitive rotation with symmetric radio frequency and gradient layouts (sym-BIR-8), combined with delays after gradient pulses, to optimally reduce ECs of a wide range of time constants while maintaining B0 and B1 insensitivity. Bloch simulation, phantom, and in vivo experiments were carried out to determine the robustness of the new and existing pulse designs to ECs and to B0 and B1 inhomogeneity. VSASL with reduced EC sensitivity across a wide range of EC time constants was achieved with the proposed sym-BIR-8 design, and the accuracy of cerebral blood flow measurement was improved. The sym-BIR-8 design performed the most robustly among the existing VS tagging designs and should benefit studies using VS preparation with improved accuracy and reliability. © 2014 Wiley Periodicals, Inc.
Design of structurally distinct proteins using strategies inspired by evolution
Jacobs, T. M.; Williams, B.; Williams, T.; ...
2016-05-06
Natural recombination combines pieces of preexisting proteins to create new tertiary structures and functions. In this paper, we describe a computational protocol, called SEWING, which is inspired by this process and builds new proteins from connected or disconnected pieces of existing structures. Helical proteins designed with SEWING contain structural features absent from other de novo designed proteins and, in some cases, remain folded at more than 100°C. High-resolution structures of the designed proteins CA01 and DA05R1 were solved by x-ray crystallography (2.2 angstrom resolution) and nuclear magnetic resonance, respectively, and there was excellent agreement with the design models. Finally, this method provides a new strategy to rapidly create large numbers of diverse and designable protein scaffolds.
Developing the DESCARTE Model: The Design of Case Study Research in Health Care.
Carolan, Clare M; Forbat, Liz; Smith, Annetta
2016-04-01
Case study is a long-established research tradition which predates the recent surge in mixed-methods research. Although a myriad of nuanced definitions of case study exist, seminal case study authors agree that the use of multiple data sources typify this research approach. The expansive case study literature demonstrates a lack of clarity and guidance in designing and reporting this approach to research. Informed by two reviews of the current health care literature, we posit that methodological description in case studies principally focuses on description of case study typology, which impedes the construction of methodologically clear and rigorous case studies. We draw from the case study and mixed-methods literature to develop the DESCARTE model as an innovative approach to the design, conduct, and reporting of case studies in health care. We examine how case study fits within the overall enterprise of qualitatively driven mixed-methods research, and the potential strengths of the model are considered. © The Author(s) 2015.
Geuna, S
2000-11-20
Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.
Empirical OPC rule inference for rapid RET application
NASA Astrophysics Data System (ADS)
Kulkarni, Anand P.
2006-10-01
A given technological node (45 nm, 65 nm) can be expected to process thousands of individual designs. Iterative methods applied at the node consume valuable days in determining proper placement of OPC features, and in manufacturing and testing mask correspondence to wafer patterns in a trial-and-error fashion for each design. Repeating this fabrication process for each individual design is time-consuming and expensive. We present a novel technique which sidesteps the requirement to iterate through the model-based OPC analysis and pattern verification cycle on subsequent designs at the same node. Our approach relies on the inference of rules from a correct pattern at the wafer surface as it relates to the OPC and pre-OPC pattern layout files. We begin with an offline phase where we obtain a "gold standard" design file that has been fab-tested at the node, with a prepared, post-OPC layout file that corresponds to the intended on-wafer pattern. We then run an offline analysis to infer the rules to be used in this method. During the analysis, our method implicitly identifies contextual OPC strategies for optimal placement of RET features on any design at that node. Using these strategies, we can apply OPC to subsequent designs at the same node with accuracy comparable to the original design file but significantly smaller expected runtimes. The technique promises to offer a rapid and accurate complement to existing RET application strategies.
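The rule-inference idea can be caricatured as learning an average correction per edge context from one gold pre/post-OPC pair and replaying it on a new design. Everything below (context labels, coordinates, biases) is a hypothetical stand-in for real layout data, not the paper's actual algorithm:

```python
from collections import defaultdict

# Hedged toy sketch of OPC rule inference: learn, from one "gold" design, how
# much each local edge context was displaced by OPC, then apply the learned
# bias to edges of a new design at the same node.
def infer_rules(pre_edges, post_edges):
    """Average OPC displacement per edge context (e.g. 'dense', 'iso')."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (ctx, x_pre), (_, x_post) in zip(pre_edges, post_edges):
        sums[ctx] += x_post - x_pre
        counts[ctx] += 1
    return {ctx: sums[ctx] / counts[ctx] for ctx in sums}

def apply_rules(pre_edges, rules):
    return [(ctx, x + rules.get(ctx, 0.0)) for ctx, x in pre_edges]

gold_pre = [("dense", 100.0), ("iso", 200.0), ("dense", 300.0)]
gold_post = [("dense", 102.0), ("iso", 195.0), ("dense", 304.0)]
rules = infer_rules(gold_pre, gold_post)
print(apply_rules([("iso", 50.0)], rules))  # iso edges pulled in by 5
```

Real contexts would encode local pattern geometry rather than a single label, but the replay structure (learn offline once, apply cheaply per design) is the point.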
Prediction of forces and moments for hypersonic flight vehicle control effectors
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.; Long, Lyle N.; Guilmette, Neal; Pagano, Peter
1993-01-01
This research project includes three distinct phases. For completeness, all three phases of the work are briefly described in this report. The goal was to develop methods of predicting flight control forces and moments for hypersonic vehicles which could be used in a preliminary design environment. The first phase included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicated the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was conducted in which a better understanding was sought for the reasons behind the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. This second phase involved using computational fluid dynamics methods to examine the flow fields in detail. Through these detailed predictions, the deficiencies in the simple surface inclination methods were determined. In the third phase of this work, an improvement to the surface inclination methods was developed. This used a novel method for including viscous effects by modifying the geometry to include the viscous/shock layer.
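A local surface inclination method of the kind assessed here is, in its simplest modified-Newtonian form, a one-line pressure rule. This is the generic textbook relation, not the report's specific implementation:

```python
import math

# Hedged sketch of a local surface-inclination method: modified Newtonian
# theory assigns Cp = Cp_max * sin^2(theta) on windward panels (theta is the
# local flow inclination angle) and ~0 on shadowed panels.
def modified_newtonian_cp(theta_rad, cp_max=2.0):
    if theta_rad <= 0.0:  # shadowed panel
        return 0.0
    return cp_max * math.sin(theta_rad) ** 2

print(modified_newtonian_cp(math.radians(30)))  # 0.5 for cp_max = 2
```

Summing panel pressures over a control surface gives the force and moment estimates; the deficiencies discussed above arise precisely where such purely local rules ignore viscous and shock-layer effects.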
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as provide some insights into the design of robust networks.
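The degree-preserving rewiring loop described above can be sketched as a double-edge swap accepted only when attack robustness does not decrease, with recent swaps held tabu. The graph, attack size, and iteration counts below are illustrative only, and the robustness measure (largest component after a high-degree attack) is one common choice, not necessarily the paper's:

```python
import random

def robustness(adj, n_remove):
    """Size of the largest component after deleting the n_remove highest-degree nodes."""
    removed = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:n_remove])
    seen, best = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def tabu_rewire(adj, n_remove=1, iters=200, tabu_len=20, seed=0):
    """Degree-preserving Tabu Search: swap a-b, c-d -> a-d, c-b if robustness improves."""
    rng = random.Random(seed)
    tabu, score = [], robustness(adj, n_remove)
    for _ in range(iters):
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue  # would create a self-loop or multi-edge
        move = frozenset([(a, b), (c, d)])
        if move in tabu:
            continue
        adj[a].discard(b); adj[b].discard(a); adj[c].discard(d); adj[d].discard(c)
        adj[a].add(d); adj[d].add(a); adj[c].add(b); adj[b].add(c)
        new = robustness(adj, n_remove)
        if new >= score:
            score = new
            tabu.append(move)
            tabu = tabu[-tabu_len:]
        else:  # revert the swap
            adj[a].discard(d); adj[d].discard(a); adj[c].discard(b); adj[b].discard(c)
            adj[a].add(b); adj[b].add(a); adj[c].add(d); adj[d].add(c)
    return score

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(tabu_rewire(ring, n_remove=1, iters=50))
```

Every swap preserves all node degrees by construction, so the degree sequence constraint from the abstract is maintained throughout.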
NASA Astrophysics Data System (ADS)
Tuckness, D. G.; Jost, B.
1995-08-01
Current knowledge of the lunar gravity field is presented. The various methods used in determining these gravity fields are investigated and analyzed. It will be shown that weaknesses exist in the current models of the lunar gravity field. The dominant part of this weakness is caused by the lack of lunar tracking data information (farside, polar areas), which makes modeling the total lunar potential difficult. Comparisons of the various lunar models reveal an agreement in the low-order coefficients of the Legendre polynomials expansions. However, substantial differences in the models can exist in the higher-order harmonics. The main purpose of this study is to assess today's lunar gravity field models for use in tomorrow's lunar mission designs and operations.
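The "Legendre polynomial expansions" being compared are spherical-harmonic gravity models of the standard form (generic geodesy notation, not a specific lunar model):

```latex
U(r,\phi,\lambda) = \frac{GM}{r}\left[1 + \sum_{n=2}^{\infty}\sum_{m=0}^{n}
\left(\frac{R}{r}\right)^{n} P_{nm}(\sin\phi)\,
\bigl(C_{nm}\cos m\lambda + S_{nm}\sin m\lambda\bigr)\right]
```

Here R is the reference lunar radius and C_{nm}, S_{nm} are the harmonic coefficients; the agreement noted above is in the low-degree C_{nm}, S_{nm}, while the sparse farside tracking leaves the high-degree terms poorly constrained.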
The development of a primary dental care outreach course.
Waterhouse, P; Maguire, A; Tabari, D; Hind, V; Lloyd, J
2008-02-01
The aim of this work was to develop the first north-east based primary dental care outreach (PDCO) course for clinical dental undergraduate students at Newcastle University. The process of course design is described; it involved a review of the existing Bachelor of Dental Surgery (BDS) degree course against previously published learning outcomes. Areas were identified where the existing BDS course did not fully meet these outcomes. This was followed by setting the PDCO course aims and objectives, intended learning outcomes, curriculum and structure. The educational strategy and methods of teaching and learning were subsequently developed, together with a strategy for overall quality control of the teaching and learning experience. The newly developed curriculum was aligned with appropriate student assessment methods, including summative, formative and ipsative elements.
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely and engines for new aircraft are progressively required to operate under more demanding technological and environmental requirements. Designs to effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to an open-ended set of existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat-transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP).
The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission, coupled structural/thermal analysis, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the proposed paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist to increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
Mineral-Resource Assessment of Northern Nye County, Nevada - A Progress Report
Ludington, Steve; John, David A.; Muntean, John L.; Hanson, Andrew D.; Castor, Stephen B.; Henry, Christopher D.; Wintzer, Niki; Cline, Jean S.; Simon, Adam C.
2009-01-01
The U.S. Geological Survey (USGS), University of Nevada, Las Vegas (UNLV), and Nevada Bureau of Mines and Geology (NBMG), which is a part of the University of Nevada, Reno (UNR), have completed the first year of data collection and analysis in preparation for a new mineral- and energy-resource assessment of northern Nye County, Nevada. This report provides information about work completed before October 1, 2009. Existing data are being compiled, including geology, geochemistry, geophysics, and mineral-deposit information. Field studies are underway, which are primarily designed to address issues raised during the review of existing information. In addition, new geochemical studies are in progress, including reanalyzing existing stream-sediment samples with modern methods, and analyzing metalliferous black shales.
Radiation shielding for gamma stereotactic radiosurgery units
2007-01-01
Shielding calculations for gamma stereotactic radiosurgery units are complicated by the fact that the radiation is highly anisotropic. Shielding design for these devices is unique. Although manufacturers will answer questions about the data that they provide for shielding evaluation, they will not perform calculations for customers. More than 237 such units are now installed in centers worldwide. Centers installing a gamma radiosurgery unit find themselves in the position of having to either invent or reinvent a method for performing shielding design. This paper introduces a rigorous and conservative method for barrier design for gamma stereotactic radiosurgery treatment rooms. This method should be useful to centers planning either to install a new unit or to replace an existing unit. The method described here is consistent with the principles outlined in Report No. 151 from the U.S. National Council on Radiation Protection and Measurements. In as little as 1 hour, a simple electronic spreadsheet can be set up, which will provide radiation levels on planes parallel to the barriers and 0.3 m outside the barriers. PACS numbers: 87.53.Ly, 87.56.By, 87.52.Tr
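An NCRP Report No. 151-style barrier calculation of the kind such a spreadsheet would implement reduces to a required transmission factor converted into tenth-value layers. All numbers below are illustrative placeholders, not vendor or NCRP data, and the real method must also handle the unit's anisotropy:

```python
import math

# Hedged sketch of a generic NCRP 151-style barrier sizing:
# required transmission B = P d^2 / (W U T), thickness from tenth-value layers.
def barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """P: design dose limit (Sv/wk); d: distance to occupied point (m);
    W: workload (Gy/wk at 1 m); U: use factor; T: occupancy factor;
    tvl1/tvle: first and equilibrium tenth-value layers (cm)."""
    B = P * d**2 / (W * U * T)
    n = -math.log10(B)                      # number of tenth-value layers
    return tvl1 + max(n - 1.0, 0.0) * tvle  # barrier thickness in cm

t = barrier_thickness(P=1e-4, d=5.0, W=300.0, U=1.0, T=1.0, tvl1=21.0, tvle=21.0)
print(round(t, 1))
```

Evaluating this on a grid of points 0.3 m outside each barrier plane reproduces the spreadsheet behavior the abstract describes.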
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
Smith, Justin D.
2013-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
Dellal, George; Peterson, Laura E; Provost, Lloyd; Gloor, Peter A; Fore, David Livingstone; Margolis, Peter A
2018-01-01
Background: Our health care system fails to deliver necessary results, and incremental system improvements will not deliver needed change. Learning health systems (LHSs) are seen as a means to accelerate outcomes, improve care delivery, and further clinical research; yet, few such systems exist. We describe the process of codesigning, with all relevant stakeholders, an approach for creating a collaborative chronic care network (C3N), a peer-produced networked LHS. Objective: The objective of this study was to report the methods used, with a diverse group of stakeholders, to translate the idea of a C3N to a set of actionable next steps. Methods: The setting was ImproveCareNow, an improvement network for pediatric inflammatory bowel disease. In collaboration with patients and families, clinicians, researchers, social scientists, technologists, and designers, C3N leaders used a modified idealized design process to develop a design for a C3N. Results: Over 100 people participated in the design process that resulted in (1) an overall concept design for the ImproveCareNow C3N, (2) a logic model for bringing about this system, and (3) 13 potential innovations likely to increase awareness and agency, make it easier to collect and share information, and enhance collaboration that could be tested collectively to bring about the C3N. Conclusions: We demonstrate methods that resulted in a design that has the potential to transform the chronic care system into an LHS. PMID:29472173
NASA Technical Reports Server (NTRS)
Katti, Romney R. (Inventor); Stadler, Henry L. (Inventor); Wu, Jiin-chuan (Inventor)
1995-01-01
A new read gate design for the vertical Bloch line (VBL) memory is disclosed which offers a larger operating margin than the existing read gate designs. In the existing read gate designs, a current is applied to all the stripes. The stripes that contain a VBL pair are chopped, while the stripes that do not contain a VBL pair are not chopped. The information is then detected by inspecting the presence or absence of the bubble. The margin of the chopping current amplitude is very small, and sometimes non-existent. A new method of reading the vertical Bloch line memory is also disclosed. Instead of using the wall chirality to separate the two binary states, the spatial deflection of the stripe head is used. Also disclosed herein is a compact memory which uses VBL memory technology for providing data storage. A three-dimensional arrangement in the form of stacks of VBL memory layers is used to achieve high volumetric storage density. A high data transfer rate is achieved by operating all the layers in parallel. Using Hall effect sensing and optical sensing via the Faraday effect to access the data from within the three-dimensional packages, an even higher data transfer rate can be achieved due to parallel operation within each layer.
NASA Astrophysics Data System (ADS)
Nusawardhana
2007-12-01
Recent developments indicate a changing perspective on how systems or vehicles should be designed. Such transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely differ from the traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimizing policies but also seek the proper system or vehicle to operate under those policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of a new system, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results on optimal nonlinear dynamic optimization problems are rare.
The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems, with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of the dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.
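The simultaneous-perturbation parameter-tuning principle mentioned above is commonly implemented as SPSA: two loss measurements per iteration estimate the whole gradient. A hedged sketch on a toy quadratic, with standard Spall-style gain schedules (all values illustrative, not the thesis's tuning):

```python
import random

# Hedged sketch of simultaneous perturbation stochastic approximation (SPSA):
# estimate the gradient from two loss evaluations along a random direction.
def spsa(loss, theta, iters=2000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=1):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k**alpha
        ck = c / k**gamma
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher perturbation
        up = [t + ck * d for t, d in zip(theta, delta)]
        dn = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(up) - loss(dn)) / (2.0 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Toy objective with minimum at (1, -2)
f = lambda th: (th[0] - 1.0)**2 + (th[1] + 2.0)**2
print(spsa(f, [5.0, 5.0]))
```

Only two loss evaluations are needed per step regardless of dimension, which is what makes the method attractive when decisions are scored by expensive simulation rather than a model.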
NASA Astrophysics Data System (ADS)
Verlinde, Christophe L. M. J.; Rudenko, Gabrielle; Hol, Wim G. J.
1992-04-01
A modular method for pursuing structure-based inhibitor design in the framework of a design cycle is presented. The approach entails four stages: (1) a design pathway is defined in the three-dimensional structure of a target protein; (2) this pathway is divided into subregions; (3) complementary building blocks, also called fragments, are designed in each subregion; complementarity is defined in terms of shape, hydrophobicity, hydrogen bond properties and electrostatics; and (4) fragments from different subregions are linked into potential lead compounds. Stages (3) and (4) are qualitatively guided by force-field calculations. In addition, the designed fragments serve as entries for retrieving existing compounds from chemical databases. This linked-fragment approach has been applied in the design of potentially selective inhibitors of triosephosphate isomerase from Trypanosoma brucei, the causative agent of sleeping sickness.
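Stage (4), linking fragments from different subregions into candidate leads, can be caricatured as scored enumeration. Fragments, subregion names, and scores below are hypothetical placeholders, not real chemistry or the authors' force-field guidance:

```python
from itertools import product

# Hedged toy sketch of linked-fragment lead generation: pick one fragment per
# subregion and keep combinations whose summed complementarity score passes a
# cutoff. All names and scores are illustrative.
subregions = {
    "S1": [("frag_A", 1.2), ("frag_B", 0.4)],
    "S2": [("frag_C", 0.9), ("frag_D", 1.1)],
    "S3": [("frag_E", 0.7)],
}

def link_fragments(subregions, min_score=2.5):
    leads = []
    for combo in product(*subregions.values()):
        names = [f for f, _ in combo]
        score = sum(s for _, s in combo)
        if score >= min_score:
            leads.append((names, round(score, 2)))
    return leads

print(link_fragments(subregions))
```

In the actual design cycle, the surviving combinations would then be used as queries against chemical databases and refined with force-field calculations, as the abstract describes.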
Sequence Bundles: a novel method for visualising, discovering and exploring sequence motifs
2014-01-01
Background: We introduce Sequence Bundles--a novel data visualisation method for representing multiple sequence alignments (MSAs). We identify and address key limitations of existing bioinformatics data visualisation methods (i.e. the Sequence Logo) by enabling Sequence Bundles to give salient visual expression to sequence motifs and other data features, which would otherwise remain hidden. Methods: For the development of Sequence Bundles we employed research-led information design methodologies. Sequences are encoded as uninterrupted, semi-opaque lines plotted on a 2-dimensional reconfigurable grid. Each line represents a single sequence. The thickness and opacity of the stack at each residue in each position indicate the level of conservation, and the lines' curved paths expose patterns in correlation and functionality. Several MSAs can be visualised in a composite image. The Sequence Bundles method is designed to favour a tangible, continuous and intuitive display of information. Results: We have developed a software demonstration application for generating a Sequence Bundles visualisation of MSAs provided for the BioVis 2013 redesign contest. A subsequent exploration of the visualised line patterns allowed for the discovery of a number of interesting features in the dataset. Reported features include the extreme conservation of sequences displaying a specific residue and bifurcations of the consensus sequence. Conclusions: Sequence Bundles is a novel method for the visualisation of MSAs and the discovery of sequence motifs. It can aid in generating new insight and hypothesis making. Sequence Bundles is well disposed for future implementation as interactive visual analytics software, which can complement existing visualisation tools. PMID:25237395
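One plausible way to compute the per-column "level of conservation" that would set line thickness and opacity is entropy-based; the actual Sequence Bundles weighting may differ, and the MSA below is a toy example:

```python
import math
from collections import Counter

# Hedged sketch: per-column conservation of an MSA as 1 - H/H_max, where H is
# the Shannon entropy of the residue distribution in that column.
def column_conservation(msa):
    """Return one conservation score in [0, 1] per alignment column."""
    scores = []
    for col in zip(*msa):
        counts = Counter(col)
        n = len(col)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        if len(counts) > 1:
            scores.append(1.0 - h / math.log2(len(counts)))
        else:
            scores.append(1.0)  # fully conserved column
    return scores

msa = ["ACDE", "ACDF", "ACEF"]
print(column_conservation(msa))
```

Fully conserved columns score 1.0 and would render as a thick, opaque bundle; variable columns score lower and fan out into fainter lines.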