Science.gov

Sample records for adaptive design methods

  1. An adaptive two-stage dose-response design method for establishing Proof of Concept

    PubMed Central

    Franchetti, Yoko; Anderson, Stewart J.; Sampson, Allan R.

    2013-01-01

    We propose an adaptive two-stage dose-response design where a pre-specified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship, or proof of concept (PoC), via model-associated statistics. The stage-wise test results are then combined to establish ‘global’ PoC using a conditional error function. Our simulation studies showed that the proposed design achieves good and more robust power than conventional fixed designs. PMID:23957520
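
    The abstract does not spell out the combination rule; a common choice for merging stage-wise tests (and one compatible with conditional-error approaches) is the weighted inverse-normal combination, sketched below with illustrative equal weights — a minimal sketch, not the authors' exact procedure.

```python
from math import sqrt
from statistics import NormalDist

def combine_stages(p1, p2, w1=0.5, w2=0.5):
    """Weighted inverse-normal combination of stage-wise one-sided
    p-values: z = sqrt(w1)*z1 + sqrt(w2)*z2, with w1 + w2 = 1."""
    nd = NormalDist()
    z = sqrt(w1) * nd.inv_cdf(1.0 - p1) + sqrt(w2) * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)  # combined p-value for 'global' PoC

# Two borderline stages combine into much stronger evidence.
p_combined = combine_stages(0.05, 0.05)
```

    Because the stage-wise weights are fixed in advance, the combined test controls the type I error even when the second stage is adapted using first-stage data.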

  2. A Review on Effectiveness and Adaptability of the Design-Build Method

    NASA Astrophysics Data System (ADS)

    Kudo, Masataka; Miyatake, Ichiro; Baba, Kazuhito; Yokoi, Hiroyuki; Fueta, Toshiharu

    In the Ministry of Land, Infrastructure, Transport and Tourism (MLIT), various approaches have been taken for efficient implementation of public works projects, one of which is the ongoing use of the design-build method on a trial basis as a means to utilize the technical skills and knowledge of private companies. In 2005, MLIT further introduced the advanced technical proposal type, a kind of comprehensive evaluation method, as part of its efforts to improve tendering and contracting systems. Meanwhile, although positive effects of the design-build method have been reported, they have not been widely published, which may be one reason that the number of MLIT projects using the design-build method is declining year by year. In this context, this paper presents the results of a study on the extent of flexibility allowed in the process and design (proposal) of public works projects, together with follow-up surveys of actual test case projects, conducted as basic research to examine measures to expand and promote the use of the design-build method. The study cases were selected from tunnel construction projects using the shield tunneling method to develop common utility ducts, and from bridge construction projects ordering superstructure and substructure work in a single contract. In reviewing the results, permanent structures and temporary installations were examined separately, and the effectiveness and adaptability of the design-build method was discussed for each.

  3. A texture-analysis-based design method for self-adaptive focus criterion function.

    PubMed

    Liang, Q; Qu, Y F

    2012-05-01

    Autofocusing (AF) criterion functions are critical to the performance of a passive autofocusing system in automatic video microscopy. Most autofocusing criterion functions proposed to date depend on the imaging system and on the image captured of the object being focused or ranged. This dependence destabilizes the performance of the system when the criterion functions are applied to objects with different characteristics. In this paper, a new design method for autofocusing criterion functions is introduced. The method enables the system to determine the texture direction of the object; based on this information, an optimal focus criterion function specific to that texture direction is designed, avoiding the blind use of autofocusing functions that perform poorly on certain surfaces and can even cause the whole process to fail. In this way, we improve the self-adaptability, robustness, reliability, and focusing accuracy of the algorithm. First, the grey-level co-occurrence matrices of real-time images are calculated in four directions. Next, the contrast values of the four matrices are computed and compared; the result reflects the directional information of the measured object surface. Finally, with this directional information, an adaptive criterion function is constructed. To demonstrate the effectiveness of the new focus algorithm, we conducted experiments on different texture surfaces and compared the results with those obtained by existing algorithms. The proposed algorithm performs excellently on the different measured objects.
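
    As a rough illustration of the directional step only (the paper's exact normalisation and criterion construction are not given in the abstract), a pure-Python sketch of four-direction GLCM contrast might look like:

```python
def glcm_contrast(img, offset):
    """Contrast of the grey-level co-occurrence matrix (GLCM) for one
    pixel offset; img is a 2D list of integer grey levels."""
    dr, dc = offset
    rows, cols = len(img), len(img[0])
    total, pairs = 0.0, 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                total += (img[r][c] - img[r2][c2]) ** 2
                pairs += 1
    return total / pairs  # mean squared grey-level difference

# Vertical stripes: strong variation across columns, none along them.
stripes = [[0, 1, 0, 1]] * 4
offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
contrast = {ang: glcm_contrast(stripes, off) for ang, off in offsets.items()}
texture_direction = min(contrast, key=contrast.get)  # least variation: 90 deg
```

    Comparing the four contrast values identifies the dominant texture direction, which is the information the adaptive criterion function is then built from.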

  4. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
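
    The play-the-winner rule described above can be sketched in a few lines; the treatment labels and outcome sequence below are illustrative:

```python
def play_the_winner(outcomes, start=0):
    """Play-the-winner assignment: keep the current treatment after a
    success, switch to the other after a failure.
    outcomes: per-subject results in order (True = success)."""
    assignments = []
    current = start  # in practice the starting arm is randomized
    for success in outcomes:
        assignments.append(current)
        if not success:
            current = 1 - current  # failure: switch treatments
    return assignments

seq = play_the_winner([True, True, False, True, False, True], start=0)
# seq == [0, 0, 0, 1, 1, 0]: the arm switches after each failure
```

    The design is adaptive in exactly the sense discussed: each assignment depends on the cumulative information from earlier subjects.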

  5. Exploring adaptations to climate change with stakeholders: A participatory method to design grassland-based farming systems.

    PubMed

    Sautier, Marion; Piquet, Mathilde; Duru, Michel; Martin-Clouaire, Roger

    2017-05-15

    Research is expected to produce knowledge, methods and tools to enhance stakeholders' adaptive capacity by helping them to anticipate and cope with the effects of climate change at their own level. Farmers face substantial challenges from climate change, from changes in the average temperatures and the precipitation regime to an increased variability of weather conditions and the frequency of extreme events. Such changes can have dramatic consequences for many types of agricultural production systems such as grassland-based livestock systems for which climate change influences the seasonality and productivity of fodder production. We present a participatory design method called FARMORE (FARM-Oriented REdesign) that allows farmers to design and evaluate adaptations of livestock systems to future climatic conditions. It explicitly considers three climate features in the design and evaluation processes: climate change, climate variability and the limited predictability of weather. FARMORE consists of a sequence of three workshops for which a pre-existing game-like platform was adapted. Various year-round forage production and animal feeding requirements must be assembled by participants with a computerized support system. In workshop 1, farmers aim to produce a configuration that satisfies an average future weather scenario. They refine or revise the previous configuration by considering a sample of the between-year variability of weather in workshop 2. In workshop 3, they explicitly take the limited predictability of weather into account. We present the practical aspects of the method based on four case studies involving twelve farmers from Aveyron (France), and illustrate it through an in-depth description of one of these case studies with three dairy farmers. 
The case studies show how workshop sequencing (1) supports a design process that progressively accommodates the complexity of real management contexts by enlarging considerations of climate change

  6. Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions

    ERIC Educational Resources Information Center

    Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Gregory A.; Waxmonsky, James G.; Yu, Jihnhee; Murphy, Susan A.

    2012-01-01

    In recent years, research in the area of intervention development has been shifting from the traditional fixed-intervention approach to "adaptive interventions," which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence…

  7. Parallel multilevel adaptive methods

    NASA Technical Reports Server (NTRS)

    Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.

    1989-01-01

    The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC) targeted for the Navier-Stokes Computer is discussed. Results of initial timing tests of AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are presented.

  8. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  9. Accelerated Adaptive Integration Method

    PubMed Central

    2015-01-01

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  10. Model-Based Design Methods for Adaptive E-Learning Environments

    ERIC Educational Resources Information Center

    Adonis, Anastase; Drira, Khalil

    2007-01-01

    Purpose: This paper aims to provide a methodological roadmap for the next generation of e-learning environments. Design/methodology/approach: The paper considers a survey of recent publications (1995-2002) that aim to provide practical and theoretical indications and advice, coupled with practical experimentations. Findings: The paper…

  11. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  12. Advances in Adaptive Control Methods

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2009-01-01

    This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.

  13. Study adaptation, design, and methods of a web-based PTSD intervention for women Veterans.

    PubMed

    Lehavot, Keren; Litz, Brett; Millard, Steven P; Hamilton, Alison B; Sadler, Anne; Simpson, Tracy

    2017-02-01

    Women Veterans are a rapidly growing population with high risk of exposure to potentially traumatizing events and PTSD diagnoses. Despite the dissemination of evidence-based treatments for PTSD in the VA, most women Veteran VA users underutilize these treatments. Web-based PTSD treatment has the potential to reach and engage women Veterans with PTSD who do not receive treatment in VA settings. Our objective is to modify and evaluate Delivery of Self Training and Education for Stressful Situations (DESTRESS), a web-based cognitive-behavioral intervention for PTSD, to target PTSD symptoms among women Veterans. The specific aims are to: (1) obtain feedback about DESTRESS, particularly on its relevance and sensitivity to women, using semi-structured interviews with expert clinicians and women Veterans with PTSD, and make modifications based on this feedback; (2) conduct a pilot study to finalize study procedures and make further refinements to the intervention; and (3) conduct a randomized clinical trial (RCT) evaluating a revised, telephone-assisted DESTRESS compared to telephone monitoring only. We describe the results from the first two aims, and the study design and procedures for the ongoing RCT. This line of research has the potential to result in a gender-sensitive, empirically based, online treatment option for women Veterans with PTSD.

  14. An adaptive method with weight matrix as a function of the state to design the rotatory flexible system control law

    NASA Astrophysics Data System (ADS)

    Souza, Luiz C. G.; Bigot, P.

    2016-10-01

    One of the best-known techniques of optimal control is the Linear Quadratic Regulator (LQR). The method was originally applied only to linear systems but has been generalized to non-linear systems through the State Dependent Riccati Equation (SDRE) technique. One advantage of SDRE is that weight-matrix selection works as in LQR, except that the weights need not be constant: they can be state dependent, which gives additional flexibility in designing the control law. There are many applications of SDRE in simulation and real-time control, but the weights are generally chosen constant, so no advantage is taken of this flexibility. This work shows through simulation that a state-dependent weight matrix can improve SDRE control performance. The system is a non-linear flexible rotatory beam. A brief first part explains SDRE theory and details the non-linear model. The influence of the SDRE weight matrix Q associated with the state is then analyzed to gain insight for assuming a state-dependent law. Finally, these laws are tested and compared to a constant weight matrix Q. Based on the simulation results, we conclude by showing the benefits of using an adaptive weight Q rather than a constant one.
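
    The paper's plant is a flexible rotatory beam; as a stand-in, the scalar sketch below shows only the core idea of re-solving the Riccati equation with a state-dependent weight q(x) at every step. All coefficients and the weight law are illustrative assumptions, not taken from the paper.

```python
from math import sqrt

def sdre_gain(a, b, q, r):
    """Positive root of the scalar algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0, returned as the feedback gain k = b*p/r."""
    p = r * (a + sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

def q_of_state(x):
    """Illustrative state-dependent weight: penalize large deflections harder."""
    return 1.0 + 10.0 * x * x

# Simulate x' = a*x + b*u with u = -k(x)*x, re-solving the Riccati
# equation at every step using the state-dependent weight.
a, b, r, dt = 0.5, 1.0, 1.0, 0.01
x = 2.0
for _ in range(1000):
    k = sdre_gain(a, b, q_of_state(x), r)
    u = -k * x
    x += dt * (a * x + b * u)
```

    Far from the origin the large weight produces aggressive feedback; near the origin the gain relaxes toward the constant-Q LQR value, which is the flexibility the paper exploits.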

  15. Designing Training for Temporal and Adaptive Transfer: A Comparative Evaluation of Three Training Methods for Process Control Tasks

    ERIC Educational Resources Information Center

    Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina

    2010-01-01

    Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication…

  16. From vision to action: roadmapping as a strategic method and tool to implement climate change adaptation - the example of the roadmap 'water sensitive urban design 2020'.

    PubMed

    Hasse, J U; Weingaertner, D E

    2016-01-01

    As the central product of the BMBF-KLIMZUG-funded Joint Network and Research Project (JNRP) 'dynaklim - Dynamic adaptation of regional planning and development processes to the effects of climate change in the Emscher-Lippe region (North Rhine-Westphalia, Germany)', the Roadmap 2020 'Regional Climate Adaptation' has been developed by the various regional stakeholders and institutions, containing specific regional scenarios, strategies and adaptation measures applicable throughout the region. This paper presents the method, elements and main results of this regional roadmap process using the example of the thematic sub-roadmap 'Water Sensitive Urban Design 2020'. With a focus on the process support tool 'KlimaFLEX', one of the main adaptation measures of the WSUD 2020 roadmap, typical challenges for integrated climate change adaptation such as scattered knowledge, knowledge gaps and divided responsibilities, but also potential solutions and promising opportunities for urban development and urban water management, are discussed. With the roadmap and the related tool, the relevant stakeholders of the Emscher-Lippe region have jointly developed important prerequisites to integrate their knowledge, to clarify vulnerabilities, adaptation goals, responsibilities and interests, and to coordinate with foresight the measures, resources, priorities and schedules needed for efficient joint urban planning, well-grounded decision-making under continued uncertainty, and step-by-step implementation of adaptation measures from now on.

  17. Adaptive clinical trial designs in oncology

    PubMed Central

    Zang, Yong; Lee, J. Jack

    2015-01-01

    Adaptive designs have become popular in clinical trial and drug development. Unlike traditional trial designs, adaptive designs use accumulating data to modify the ongoing trial without undermining the integrity and validity of the trial. As a result, adaptive designs provide a flexible and effective way to conduct clinical trials. The designs have potential advantages of improving the study power, reducing sample size and total cost, treating more patients with more effective treatments, identifying efficacious drugs for specific subgroups of patients based on their biomarker profiles, and shortening the time for drug development. In this article, we review adaptive designs commonly used in clinical trials and investigate several aspects of the designs, including the dose-finding scheme, interim analysis, adaptive randomization, biomarker-guided randomization, and seamless designs. For illustration, we provide examples of real trials conducted with adaptive designs. We also discuss practical issues from the perspective of using adaptive designs in oncology trials. PMID:25811018

  18. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems

    SciTech Connect

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-08-01

    Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be constructed adaptively at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.

  19. ERIS adaptive optics system design

    NASA Astrophysics Data System (ADS)

    Marchetti, Enrico; Le Louarn, Miska; Soenke, Christian; Fedrigo, Enrico; Madec, Pierre-Yves; Hubin, Norbert

    2012-07-01

    The Enhanced Resolution Imager and Spectrograph (ERIS) is the next-generation instrument planned for the Very Large Telescope (VLT) and the Adaptive Optics Facility (AOF). It is an AO-assisted instrument that will make use of the Deformable Secondary Mirror and the new Laser Guide Star Facility (4LGSF), and it is planned for the Cassegrain focus of telescope UT4. The project is currently in Phase A, awaiting approval to continue to the next phases. The adaptive optics system of ERIS will include two wavefront sensors (WFS) to maximize the coverage of the proposed science cases. The first is a high-order 40x40 Pyramid WFS (PWFS) for on-axis Natural Guide Star (NGS) observations. The second is a high-order 40x40 Shack-Hartmann WFS for single Laser Guide Star (LGS) observations. The PWFS, with appropriate sub-aperture binning, will also serve as the low-order NGS WFS in support of the LGS mode, with a field-of-view patrolling capability of 2 arcmin diameter. Both WFSs will be equipped with the very low read-out noise CCD220-based camera developed for the AOF. Real-time reconstruction and control is provided by a SPARTA real-time platform adapted to support both WFS modes. In this paper we present the ERIS AO system in all its main aspects: opto-mechanical design, real-time computer design, and control and calibration strategy. Particular emphasis is given to the system performance obtained via dedicated numerical simulations.

  20. Some challenges with statistical inference in adaptive designs.

    PubMed

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have attracted a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs, which allow modification of the sample size or related statistical information, and adaptive selection designs, which allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and, most importantly, the increased statistical uncertainty may be impossible to assess.

  1. An adaptive level set method

    SciTech Connect

    Milne, Roger Brent

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  2. Method of adaptive artificial viscosity

    NASA Astrophysics Data System (ADS)

    Popov, I. V.; Fryazinov, I. V.

    2011-09-01

    A new finite-difference method for the numerical solution of the gas dynamics equations is proposed. The method is a uniform, monotone finite-difference scheme with second-order approximation in time and space outside the regions of shock and compression waves. It is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper the method is analyzed for 2D geometry. Test computations of moving contact discontinuities, moving shock waves, and the breakup of discontinuities are demonstrated.
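
    The paper's exact AAV formulation is not reproduced in the abstract; a toy 1D sketch of the switch idea, using Burgers' equation as a stand-in for the gas dynamics system, might look like the following (the viscosity scaling is an illustrative assumption):

```python
def burgers_aav_step(u, dx, dt, c_visc=1.0):
    """One explicit step for inviscid Burgers u_t + u*u_x = 0 with an
    adaptive artificial viscosity switch: viscosity is added only where
    the flow is compressing (du/dx < 0), i.e. where shocks can form."""
    new = u[:]
    for i in range(1, len(u) - 1):
        dudx = (u[i + 1] - u[i - 1]) / (2.0 * dx)
        # first-order upwind convection (the flow here is non-negative)
        conv = u[i] * (u[i] - u[i - 1]) / dx
        # adaptive artificial viscosity, active only in compression zones
        mu = c_visc * dx * dx * abs(dudx) if dudx < 0.0 else 0.0
        diff = mu * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dx * dx)
        new[i] = u[i] - dt * conv + dt * diff
    return new

# A right-moving step (shock): the added viscosity keeps the front monotone.
u = [1.0] * 10 + [0.0] * 10
dx, dt = 0.1, 0.02
for _ in range(20):
    u = burgers_aav_step(u, dx, dt)
```

    Because the viscosity vanishes in smooth expansion regions, the scheme stays sharp there while suppressing oscillations at the shock, which is the point of making the viscosity adaptive.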

  3. Teacher-Led Design of an Adaptive Learning Environment

    ERIC Educational Resources Information Center

    Mavroudi, Anna; Hadzilacos, Thanasis; Kalles, Dimitris; Gregoriades, Andreas

    2016-01-01

    This paper discusses a requirements engineering process that exemplifies teacher-led design in the case of an envisioned system for adaptive learning. Such a design poses various challenges and still remains an open research issue in the field of adaptive learning. Starting from a scenario-based elicitation method, the whole process was highly…

  4. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the regions of the mesh that are estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement), or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of (a) reliable a posteriori error estimates, (b) hierarchical elements, and (c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations.
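
    As a toy illustration of h-refinement driven by an a posteriori error estimate (not the parallel package described in the poster), consider a 1D mesh refined wherever linear interpolation is inaccurate:

```python
def refine(xs, f, tol):
    """One h-refinement pass: estimate the linear-interpolation error on
    each interval from the midpoint value, and split intervals whose
    estimate exceeds tol."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        mid = 0.5 * (a + b)
        err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # a posteriori estimate
        if err > tol:
            out.append(mid)  # h-refinement: halve the element
        out.append(b)
    return out

# f(x) = x**4 varies fastest near x = 1, so the mesh concentrates there.
f = lambda x: x ** 4
xs = [i / 4 for i in range(5)]
for _ in range(3):
    xs = refine(xs, f, 1e-3)
```

    After a few passes the elements near x = 1 are much smaller than those near x = 0, mirroring how a device mesh stays fine near thin doped layers and coarsens where fields vary slowly.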

  5. Designing for Productive Adaptations of Curriculum Interventions

    ERIC Educational Resources Information Center

    Debarger, Angela Haydel; Choppin, Jeffrey; Beauvineau, Yves; Moorthy, Savitha

    2013-01-01

    Productive adaptations at the classroom level are evidence-based curriculum adaptations that are responsive to the demands of a particular classroom context and still consistent with the core design principles and intentions of a curriculum intervention. The model of design-based implementation research (DBIR) offers insights into complexities and…

  6. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to implementing adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.

  7. A Tutorial on Adaptive Design Optimization

    PubMed Central

    Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.

    2013-01-01

    Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275

  8. Robust design of configurations and parameters of adaptable products

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua

    2014-03-01

    An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact through the replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods are introduced to model the different product configuration candidates in design, and the different product configuration states in operation, that satisfy design requirements. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration of the adaptable product and its parameter values. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.

  9. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

A fast coeff_token decoding method based on new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and delay in operations. Recently, a new coeff_token variable-length decoding method has been suggested to achieve memory access reduction. However, it still requires a large portion of the total memory access in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of codewords in variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.
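
The single-access idea can be illustrated with a toy variable-length code: by padding every codeword out to the table's maximum length, each fixed-width bit window maps directly to a (symbol, length) pair, so reconstructing a coeff_token-style element costs exactly one table read. The codebook below is invented for illustration and is not one of the actual H.264 VLC tables.

```python
# Toy prefix code standing in for one coeff_token VLC table:
# codeword -> (total_coeffs, trailing_ones)
CODEBOOK = {
    "1": (0, 0),
    "01": (1, 1),
    "001": (2, 2),
    "0001": (3, 3),
}
MAX_LEN = max(len(c) for c in CODEBOOK)

# Precompute: every MAX_LEN-bit window maps directly to (symbol, length),
# so the decoder reads the table exactly once per decoded element.
LUT = {}
for code, sym in CODEBOOK.items():
    pad = MAX_LEN - len(code)
    for i in range(2 ** pad):            # all completions of the prefix
        window = (code + format(i, f"0{pad}b")) if pad else code
        LUT[window] = (sym, len(code))

def decode(bits):
    """Decode a bitstring of valid codewords; one LUT access per symbol."""
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos:pos + MAX_LEN].ljust(MAX_LEN, "0")
        sym, length = LUT[window]
        out.append(sym)
        pos += length
    return out
```

A tree-walking decoder would touch memory once per bit; here `decode("101001")` resolves three symbols with three lookups total.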

  10. Adaptive Design of Confirmatory Trials: Advances and Challenges

    PubMed Central

    Lai, Tze Leung; Lavori, Philip W.; Tsang, Ka Wai

    2015-01-01

    The past decade witnessed major developments in innovative designs of confirmatory clinical trials, and adaptive designs represent the most active area of these developments. We give an overview of the developments and associated statistical methods in several classes of adaptive designs of confirmatory trials. We also discuss their statistical difficulties and implementation challenges, and show how these problems are connected to other branches of mainstream Statistics, which we then apply to resolve the difficulties and bypass the bottlenecks in the development of adaptive designs for the next decade. PMID:26079372

  11. Valuation of design adaptability in aerospace systems

    NASA Astrophysics Data System (ADS)

    Fernandez Martin, Ismael

As more information is brought into early stages of the design, more pressure is put on engineers to produce a reliable, high-quality, and financially sustainable product. Unfortunately, requirements established at the beginning of a new project by customers, and the environment that surrounds them, continue to change in unpredictable ways. The risk of designing a system that may become obsolete during early stages of production is currently tackled by the use of robust design simulation, a method that allows engineers to simultaneously explore a plethora of design alternatives and requirements with the intention of accounting for uncertain factors in the future. Whereas this design technique has proven to be quite an improvement in design methods, under certain conditions it fails to account for the change of uncertainty over time and the intrinsic value embedded in the system when certain design features are activated. This thesis introduces the concepts of adaptability and real options to manage foreseeable risk under uncertainty at early design stages. The method described herein allows decision-makers to foresee the financial impact of their decisions at the design level, as well as the final exposure to risk. In this thesis, cash flow models, traditionally used to obtain the forecast of a project's value over the years, were replaced with surrogate models that are capable of showing fluctuations in value every few days. This allowed a better implementation of real options valuation, optimization, and strategy selection. Through the option analysis model, an optimization exercise allows the user to obtain the best implementation strategy in the face of uncertainty as well as the overall value of the design feature. Here, implementation strategy refers to the decision to include a new design feature in the system after the design has been finalized, but before the end of its production life. The ability to do this in a cost-efficient manner after the system

  12. Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.

    2005-01-01

This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural network based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio controlled miniature aerial vehicle are presented to illustrate the performance of the neural network based adaptation.

  13. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

In this work I present a new type of controller, an adaptive tracking controller that employs dynamic optimization to optimize the current value of the controller action, for the temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified as a steady state and a design specification. Later we make a correction in the dynamics, where the material balance is manipulated to use feed concentration as a system parameter as an adaptive control measure, in order to avoid actuator saturation for the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation to form an adaptive control measure.

  14. Design optimization of system level adaptive optical performance

    NASA Astrophysics Data System (ADS)

    Michels, Gregory J.; Genberg, Victor L.; Doyle, Keith B.; Bisson, Gary R.

    2005-09-01

By linking predictive methods from multiple engineering disciplines, engineers are able to compute more meaningful predictions of a product's performance. By coupling mechanical and optical predictive techniques, mechanical design can be performed to optimize optical performance. This paper demonstrates how mechanical design optimization using system level optical performance can be used in the development of the design of a high precision adaptive optical telescope. While mechanical design parameters are treated as the design variables, the objective function is taken to be the adaptively corrected optical imaging performance of an orbiting two-mirror telescope.

  15. Adaptive Automation Design and Implementation

    DTIC Science & Technology

    2015-09-17

function instantiation. Finally, we develop five analysis tools for isolating effective AA points within a human-machine system. A function is an action... analysis tools allowing designers to identify points within a function network where the transitions between human and machine entities can facilitate... based on the four stages of human information processing: sensory processing, perception/working memory, decision making, and response selection

  16. Optimal Design of Item Banks for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Stocking, Martha L.; Swanson, Len

    1998-01-01

    Applied optimal design methods to the item-bank design of adaptive testing for continuous testing situations using a version of the weighted-deviations model (M. Stocking and L. Swanson, 1993) in a simulation. Independent and overlapping item banks used items more efficiently than did a large item bank. (SLD)

  17. Adaptive envelope protection methods for aircraft

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Suraj

Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, the envelope protection methods that have gained the most acceptance translate limit proximity information into its equivalent in the control channel. Envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state and transient response critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control
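
The steady-state ("dynamic trim") style of limit prediction mentioned in the abstract can be sketched for a first-order limit parameter: if the parameter obeys y' = a·y + b·u with a < 0, its steady-state response to a constant control u is y_ss = -b·u/a, so the control limit is simply the u at which y_ss reaches the envelope boundary. The dynamics and numbers below are invented for illustration, not taken from the thesis.

```python
def control_limit(a, b, y_max):
    """Largest constant control whose steady-state response stays within
    the limit boundary y_max, for the first-order model y' = a*y + b*u."""
    return -a * y_max / b

def simulate(a, b, u, steps=2000, dt=0.01):
    """Forward-Euler response of the limit parameter to a constant control."""
    y = 0.0
    for _ in range(steps):
        y += (a * y + b * u) * dt
    return y

a, b, y_max = -2.0, 4.0, 1.0        # invented stable limit-parameter model
u_lim = control_limit(a, b, y_max)  # control channel equivalent of the limit
```

Driving the model at `u_lim` settles exactly on the boundary, while any smaller control stays inside it; transient-response-critical parameters, as the abstract notes, need more than this steady-state map.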

  18. Stable adaptive control using new critic designs

    NASA Astrophysics Data System (ADS)

    Werbos, Paul J.

    1999-03-01

Classical adaptive control proves total-system stability for control of linear plants, but only for plants meeting very restrictive assumptions. Approximate Dynamic Programming (ADP) has the potential, in principle, to ensure stability without such tight restrictions. It also offers nonlinear and neural extensions for optimal control, with empirically supported links to what is seen in the brain. However, the relevant ADP methods in use today--TD, HDP, DHP, GDHP--and the Galerkin-based versions of these all have serious limitations when used here as parallel distributed real-time learning systems; either they do not possess quadratic unconditional stability (to be defined) or they lead to incorrect results in the stochastic case. (ADAC or Q-learning designs do not help.) After explaining these conclusions, this paper describes new ADP designs which overcome these limitations. It also addresses the Generalized Moving Target problem, a common family of static optimization problems, and describes a way to stabilize large-scale economic equilibrium models, such as the old long-term energy model of DOE.

  19. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
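
The regressor-plus-selector loop can be sketched with stdlib Python, substituting a crude nearest-neighbour regressor for the machine-learning model in the paper (its "uncertainty" is just the distance to the nearest measured point). Everything here, including the toy property function and the candidate set, is invented for illustration.

```python
import random

def property_of(x):
    # Hidden ground truth a real loop would obtain from DFT or experiment.
    return -(x - 0.73) ** 2            # best "material" at x = 0.73

candidates = [i / 100 for i in range(101)]

def predict(x, data):
    """1-NN regressor: prediction = value at the nearest measured point,
    uncertainty = distance to it (a crude stand-in for a GP)."""
    xn, yn = min(data, key=lambda p: abs(p[0] - x))
    return yn, abs(x - xn)

def design_loop(kappa, iterations, seed=0):
    rng = random.Random(seed)
    data = [(x, property_of(x)) for x in rng.sample(candidates, 3)]
    for _ in range(iterations):
        measured = {x for x, _ in data}
        unmeasured = [x for x in candidates if x not in measured]
        # Selector: maximize prediction + kappa * uncertainty (UCB-style);
        # kappa = 0 recovers a purely greedy, uncertainty-blind selector.
        def acquisition(x):
            pred, unc = predict(x, data)
            return pred + kappa * unc
        x_next = max(unmeasured, key=acquisition)
        data.append((x_next, property_of(x_next)))   # "run the experiment"
    return max(y for _, y in data)                   # best material found

best_ucb = design_loop(kappa=1.0, iterations=15)
best_greedy = design_loop(kappa=0.0, iterations=15)
```

The two runs contrast an uncertainty-aware selector with a greedy one, mirroring the paper's finding that selectors using prediction uncertainty tend to reach good materials in fewer iterations.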

  20. Adaptive control design for hysteretic smart systems

    NASA Astrophysics Data System (ADS)

    Fan, Xiang; Smith, Ralph C.

    2009-03-01

Ferroelectric and ferromagnetic actuators are being considered for a range of industrial, aerospace, aeronautic and biomedical applications due to their unique transduction capabilities. However, they also exhibit hysteretic and nonlinear behavior that must be accommodated in models and control designs. If uncompensated, these effects can yield reduced system performance and, in the worst case, can produce unpredictable behavior of the control system. One technique for control design is to approximately linearize the actuator dynamics using an adaptive inverse compensator that is also able to accommodate model uncertainties and error introduced by the inverse algorithm. This paper describes the design of an adaptive inverse control technique based on the homogenized energy model for hysteresis. The resulting inverse filter is incorporated into an L1 control framework to provide a robust control algorithm capable of high-speed, high-accuracy tracking in the presence of actuator hysteresis and nonlinearities. Properties of the control design are illustrated through numerical examples.

  1. Flexible receiver adapter formal design review

    SciTech Connect

    Krieg, S.A.

    1995-06-13

This memo summarizes the results of the Formal (90%) Design Review process and meetings held to evaluate the design of the Flexible Receiver Adapters, support platforms, and associated equipment. The equipment is part of the Flexible Receiver System used to remove, transport, and store long-length contaminated equipment and components from both the double- and single-shell underground storage tanks at the 200 Area tank farms.

  2. An overview of the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project.

    PubMed

    Meurer, William J; Lewis, Roger J; Tagle, Danilo; Fetters, Michael D; Legocki, Laurie; Berry, Scott; Connor, Jason; Durkalski, Valerie; Elm, Jordan; Zhao, Wenle; Frederiksen, Shirley; Silbergleit, Robert; Palesch, Yuko; Berry, Donald A; Barsan, William G

    2012-10-01

    Randomized clinical trials, which aim to determine the efficacy and safety of drugs and medical devices, are a complex enterprise with myriad challenges, stakeholders, and traditions. Although the primary goal is scientific discovery, clinical trials must also fulfill regulatory, clinical, and ethical requirements. Innovations in clinical trials methodology have the potential to improve the quality of knowledge gained from trials, the protection of human subjects, and the efficiency of clinical research. Adaptive clinical trial methods represent a broad category of innovations intended to address a variety of long-standing challenges faced by investigators, such as sensitivity to previous assumptions and delayed identification of ineffective treatments. The implementation of adaptive clinical trial methods, however, requires greater planning and simulation compared with a more traditional design, along with more advanced administrative infrastructure for trial execution. The value of adaptive clinical trial methods in exploratory phase (phase 2) clinical research is generally well accepted, but the potential value and challenges of applying adaptive clinical trial methods in large confirmatory phase clinical trials are relatively unexplored, particularly in the academic setting. In the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) project, a multidisciplinary team is studying how adaptive clinical trial methods could be implemented in planning actual confirmatory phase trials in an established, National Institutes of Health-funded clinical trials network. The overarching objectives of ADAPT-IT are to identify and quantitatively characterize the adaptive clinical trial methods of greatest potential value in confirmatory phase clinical trials and to elicit and understand the enthusiasms and concerns of key stakeholders that influence their willingness to try these innovative strategies.

  3. A new orientation-adaptive interpolation method.

    PubMed

    Wang, Qing; Ward, Rabab Kreidieh

    2007-04-01

    We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.

  4. The Method of Adaptive Comparative Judgement

    ERIC Educational Resources Information Center

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
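
The mechanics can be shown in a minimal simulation: scores are fitted to pairwise judgements with an Elo-style approximation to the Thurstone/Bradley-Terry model, and pairing is "adaptive" in that it prefers pieces of work whose current score estimates are closest. The noiseless simulated judge and all parameters are invented for illustration.

```python
import math
import random

def acj(true_quality, judgements=400, k=0.1, seed=1):
    rng = random.Random(seed)
    n = len(true_quality)
    score = [0.0] * n                  # estimated quality parameters
    for _ in range(judgements):
        # Adaptive pairing: sample a few candidate pairs, keep the one
        # whose current score estimates are closest (most informative).
        pairs = [rng.sample(range(n), 2) for _ in range(5)]
        a, b = min(pairs, key=lambda p: abs(score[p[0]] - score[p[1]]))
        # A (simulated, noiseless) judge decides which work is better.
        winner, loser = (a, b) if true_quality[a] > true_quality[b] else (b, a)
        # Bradley-Terry win probability, then an Elo-style zero-sum update.
        expected = 1 / (1 + math.exp(score[loser] - score[winner]))
        score[winner] += k * (1 - expected)
        score[loser] -= k * (1 - expected)
    return score

quality = [0.1, 0.9, 0.5, 0.3, 0.7]    # hidden "true" quality of 5 scripts
scores = acj(quality)
ranking = sorted(range(len(quality)), key=lambda i: scores[i], reverse=True)
```

Because each update is zero-sum, the scores stay centred on zero; with enough judgements the ranking approaches the true quality order without any test being marked.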

  5. Numerical design of an adaptive aileron

    NASA Astrophysics Data System (ADS)

    Amendola, Gianluca; Dimino, Ignazio; Concilio, Antonio; Magnifico, Marco; Pecora, Rosario

    2016-04-01

The study herein described is aimed at investigating the feasibility of an innovative full-scale camber morphing aileron device. In the framework of the "Adaptive Aileron" project, an international cooperation between Italy and Canada, this goal was carried out with the integration of different morphing concepts in a wing-tip prototype. As widely demonstrated in recent European projects such as Clean Sky JTI and SARISTU, wing trailing edge morphing may lead to significant drag reduction (up to 6%) in off-design flight points by adapting chord-wise camber variations in cruise to compensate for A/C weight reduction following fuel consumption. Those researches focused on the flap region as the most immediate solution to implement structural adaptations. However, there is also a growing interest in extending morphing functionalities to the aileron region while preserving its main functionality in controlling aircraft directional stability. In fact, the external region of the wing seems to be the most effective in producing "lift over drag" improvements by morphing. Thus, the objective of the presented research is to achieve a certain drag reduction in off-design flight points by adapting wing shape and lift distribution following static deflections. In perspective, the developed device could also be used as a load alleviation system to reduce gust effects, augmenting its frequency bandwidth. In this paper, the preliminary design of the adaptive aileron is first presented, assessed on the basis of the external aerodynamic loads. The primary structure is made of 5 segmented ribs, distributed along 4 bays, each split into three consecutive parts, connected with spanwise stringers. The aileron shape modification is then implemented by means of an actuation system, based on a classical quick-return mechanism, suitably adapted for the presented application. Finite element analyses were performed for properly sizing the load-bearing structure and actuation systems and for

  6. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
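
The core SA loop is compact enough to sketch on a toy multimodal objective, a stand-in for the rough, numerically generated design objectives the abstract describes; the objective, cooling schedule, and step size here are invented for illustration.

```python
import math
import random

def objective(x):
    # 1-D multimodal toy objective with many local minima.
    return x * x + 10 * math.sin(3 * x)

def anneal(x0, steps=2000, t0=5.0, seed=42):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6       # linear cooling schedule
        x_new = x + rng.gauss(0, 0.5)            # random neighbor move
        f_new = objective(x_new)
        # Accept downhill moves always; uphill moves with Boltzmann
        # probability exp(-df/t), so early (hot) phases escape local minima.
        if f_new < fx or rng.random() < math.exp((fx - f_new) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

x_opt, f_opt = anneal(x0=4.0)
```

Starting from a poor design (x0 = 4.0, stuck near a local basin for greedy descent), the annealed best-so-far can only improve on the starting value, and is bounded below by the objective's global floor.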

  7. The Potential of Adaptive Design in Animal Studies.

    PubMed

    Majid, Arshad; Bae, Ok-Nam; Redgrave, Jessica; Teare, Dawn; Ali, Ali; Zemke, Daniel

    2015-10-12

    Clinical trials are the backbone of medical research, and are often the last step in the development of new therapies for use in patients. Prior to human testing, however, preclinical studies using animal subjects are usually performed in order to provide initial data on the safety and effectiveness of prospective treatments. These studies can be costly and time consuming, and may also raise concerns about the ethical treatment of animals when potentially harmful procedures are involved. Adaptive design is a process by which the methods used in a study may be altered while it is being conducted in response to preliminary data or other new information. Adaptive design has been shown to be useful in reducing the time and costs associated with clinical trials, and may provide similar benefits in preclinical animal studies. The purpose of this review is to summarize various aspects of adaptive design and evaluate its potential for use in preclinical research.

  8. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    SciTech Connect

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  9. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally-increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
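
The "simple adaptive augmentation" idea can be sketched with a scalar model-reference example: a single adaptive gain on the tracking error is adjusted by an MIT-rule update so the plant follows a reference model. The plant, reference model, and gains below are invented for illustration and are not the flight controllers described above.

```python
def mrac(a_true=-1.0, b=2.0, a_m=-4.0, gamma=5.0, dt=0.001, steps=20000):
    """Scalar model-reference adaptive control sketch (MIT-rule update)."""
    x, x_m, k = 0.0, 0.0, 0.0       # plant state, reference state, adaptive gain
    r = 1.0                          # constant reference command
    for _ in range(steps):
        u = k * (r - x)                       # error feedback with adapted gain
        e = x - x_m                           # model-following error
        k += -gamma * e * (r - x) * dt        # MIT-rule adaptation law
        x += (a_true * x + b * u) * dt        # plant: x' = a*x + b*u
        x_m += a_m * (x_m - r) * dt           # reference model tracks r
    return x, x_m, k

x, x_m, k = mrac()
```

The gain starts at zero (pure baseline behavior) and grows only as the model-following error demands, which is the appeal of augmentation-style designs: with no error, the adaptive element stays dormant.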

  10. Team-Centered Perspective for Adaptive Automation Design

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III

    2003-01-01

    Automation represents a very active area of human factors research. The journal, Human Factors, published a special issue on automation in 1985. Since then, hundreds of scientific studies have been published examining the nature of automation and its interaction with human performance. However, despite a dramatic increase in research investigating human factors issues in aviation automation, there remain areas that need further exploration. This NASA Technical Memorandum describes a new area of automation design and research, called adaptive automation. It discusses the concepts and outlines the human factors issues associated with the new method of adaptive function allocation. The primary focus is on human-centered design, and specifically on ensuring that adaptive automation is from a team-centered perspective. The document shows that adaptive automation has many human factors issues common to traditional automation design. Much like the introduction of other new technologies and paradigm shifts, adaptive automation presents an opportunity to remediate current problems but poses new ones for human-automation interaction in aerospace operations. The review here is intended to communicate the philosophical perspective and direction of adaptive automation research conducted under the Aerospace Operations Systems (AOS), Physiological and Psychological Stressors and Factors (PPSF) project.

  11. Adaptive strategies for materials design using uncertainties

    SciTech Connect

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  12. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.

  13. Adaptive Strategies for Materials Design using Uncertainties

    PubMed Central

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-01

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties. PMID:26792532

14. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
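
The MMAC mechanism can be sketched in a few lines: run a bank of candidate models, convert each model's one-step residual into a posterior probability via a Bayes update, and blend the per-model control gains by those probabilities. The plant, residual sequences, gains, and noise level below are invented for illustration, not the F-8C design.

```python
import math

def mmac_step(residuals, priors, sigma=1.0):
    """Bayes update of model probabilities from one-step filter residuals,
    assuming Gaussian residuals with standard deviation sigma."""
    likes = [math.exp(-r * r / (2 * sigma ** 2)) for r in residuals]
    post = [l * p for l, p in zip(likes, priors)]
    z = sum(post)
    return [p / z for p in post]

def blended_gain(gains, probs):
    """Probability-weighted control gain (the MMAC-style control blend)."""
    return sum(g * p for g, p in zip(gains, probs))

# Model 0 matches the plant (small residuals); model 1 does not.
probs = [0.5, 0.5]
for r0, r1 in [(0.1, 2.0), (0.2, 1.8), (0.05, 2.2)]:
    probs = mmac_step([r0, r1], probs)

gain = blended_gain([-1.5, -4.0], probs)
```

After a few steps the probability mass concentrates on the model whose Kalman-filter-style residuals stay small, so the blended gain converges to that model's gain.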

  15. Dual adaptive control: Design principles and applications

    NASA Technical Reports Server (NTRS)

    Mookerjee, Purusottam

    1988-01-01

    The design of an actively adaptive dual controller based on an approximation of the stochastic dynamic programming equation for a multi-step horizon is presented. A dual controller that can enhance identification of the system while controlling it at the same time is derived for multi-dimensional problems. This dual controller uses sensitivity functions of the expected future cost with respect to the parameter uncertainties. A passively adaptive cautious controller and the actively adaptive dual controller are examined. In many instances, the cautious controller is seen to turn off while the latter avoids the turn-off of the control and the slow convergence of the parameter estimates, characteristic of the cautious controller. The algorithms have been applied to a multi-variable static model which represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. Monte Carlo comparisons based on parametric and nonparametric statistical analysis indicate the superiority of the dual controller over the baseline controller.

  16. Adaptive optical antennas: design and evaluation

    NASA Astrophysics Data System (ADS)

    Weyrauch, Thomas; Vorontsov, Mikhail A.; Carhart, Gary W.; Simonova, Galina V.; Beresnev, Leonid A.; Polnau, Ernst E.

    2007-09-01

    We present the design and evaluation of compact adaptive optical antennas with aperture diameters of 16 mm and 100 mm for 5 Gbit/s-class free-space optical communication systems. The antennas provide a bi-directional optically transparent link between fiber-optical wavelength-division multiplex systems and allow for mitigation of atmospheric-turbulence-induced wavefront phase distortions with adaptive optics components. Beam steering is implemented in the antennas either with mirrors on novel tip/tilt platforms or a fiber-tip positioning system, both enabling operation bandwidths of more than 1 kHz. Bimorph piezoelectrically actuated deformable mirrors are used for low-order phase-distortion compensation. An imaging system is integrated in the antennas for coarse pointing and tracking. Beam steering and wavefront control are based on blind maximization of the received signal level using a stochastic parallel gradient descent algorithm. The adaptive optics control architecture allows the use of feedback signals provided locally within each transceiver system and remotely by the opposite transceiver system via an RF link. First atmospheric compensation results from communication experiments over a 250 m near-ground propagation path are presented.
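
    The beam-steering and wavefront loops above rely on stochastic parallel gradient descent (SPGD): all control channels are perturbed simultaneously, and the controls are nudged along the perturbation in proportion to the observed change in the metric. A toy sketch with a quadratic stand-in for received signal power; the metric, gains, and two-channel setup are illustrative assumptions, not the antenna's actual parameters:

```python
import random

def spgd_maximize(metric, u, gain=2.0, delta=0.05, iters=300, seed=1):
    """SPGD: apply a random +/- delta perturbation to every channel at once,
    measure the metric on both sides, and step along d * (J+ - J-)."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(iters):
        d = [delta if rng.random() < 0.5 else -delta for _ in u]
        j_plus = metric([ui + di for ui, di in zip(u, d)])
        j_minus = metric([ui - di for ui, di in zip(u, d)])
        dj = j_plus - j_minus
        # two-sided difference gives a stochastic gradient estimate along d
        u = [ui + gain * dj * di for ui, di in zip(u, d)]
    return u

# toy stand-in for received signal power: peaks at steering setting (0.3, -0.2)
metric = lambda u: -((u[0] - 0.3) ** 2 + (u[1] + 0.2) ** 2)
u_opt = spgd_maximize(metric, [0.0, 0.0])
```

    In the real antenna the metric is the physically measured signal level (local or relayed over the RF link); here it is a closed-form surrogate so the sketch is self-checking.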

  17. Adaptive enrichment designs for clinical trials.

    PubMed

    Simon, Noah; Simon, Richard

    2013-09-01

    Modern medicine has graduated from broad spectrum treatments to targeted therapeutics. New drugs recognize the recently discovered heterogeneity of many diseases previously considered to be fairly homogeneous. These treatments attack specific genetic pathways which are only dysregulated in some smaller subset of patients with the disease. Often this subset is only rudimentarily understood until well into large-scale clinical trials. As such, standard practice has been to enroll a broad range of patients and run post hoc subset analysis to determine those who may particularly benefit. This unnecessarily exposes many patients to hazardous side effects, and may vastly decrease the efficiency of the trial (especially if only a small subset of patients benefit). In this manuscript, we propose a class of adaptive enrichment designs that allow the eligibility criteria of a trial to be adaptively updated during the trial, restricting entry to patients likely to benefit from the new treatment. We show that our designs both preserve the type 1 error, and in a variety of cases provide a substantial increase in power.

  18. Reflections on the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) Process—Findings from a Qualitative Study

    PubMed Central

    Guetterman, Timothy C.; Fetters, Michael D.; Legocki, Laurie J.; Mawocha, Samkeliso; Barsan, William G.; Lewis, Roger J.; Berry, Donald A.; Meurer, William J.

    2015-01-01

    Context The context for this study was the Adaptive Designs Advancing Promising Treatments Into Trials (ADAPT-IT) project, which aimed to incorporate flexible adaptive designs into pivotal clinical trials and to conduct an assessment of the trial development process. Little research provides guidance to academic institutions in planning adaptive trials. Objectives The purpose of this qualitative study was to explore the perspectives and experiences of stakeholders as they reflected on the interactive ADAPT-IT adaptive design development process, and to understand their perspectives regarding lessons learned about the design of the trials and trial development. Materials and methods We conducted semi-structured interviews with ten key stakeholders and observations of the process. We employed qualitative thematic text data analysis to reduce the data into themes about the ADAPT-IT project and adaptive clinical trials. Results The qualitative analysis revealed four themes: education of the project participants, how the process evolved with participant feedback, procedures that could enhance the development of other trials, and education of the broader research community. Discussion and conclusions While participants became more likely to consider flexible adaptive designs, additional education is needed to both understand the adaptive methodology and articulate it when planning trials. PMID:26622163

  19. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results We performed the initial stages of characterizing our system

  20. Multi-level participatory design of land use policies in African drylands: a method to embed adaptability skills of drylands societies in a policy framework.

    PubMed

    d'Aquino, Patrick; Bah, Alassane

    2014-01-01

    The participatory modelling method described here focuses on how to enable stakeholders to incorporate their own perception of environmental uncertainty, and how to deal with it, into the design of innovative environmental policies. This "self-design" approach uses role-playing games and agent-based modelling to let participants design their own conceptual framing, and thus their own modelling supports, of the issues. The method has a multi-scale focus, on the one hand to enable the whole multi-scale Sahelian logic to be expressed, and on the other hand to encourage the players to deal with possible region-wide changes implied by their "local" policy objectives. This multi-level participatory design of land use policies has been under experimentation in Senegal since 2008 in different local and national arenas. The process has resulted in the "self-design" of a qualitative and relatively simple model of Sahelian uncertainty, which can be played as a role-playing game as well as run as a computerized model. Results include perceptible autonomous organisational learning at the local level. Participants were also able to incorporate their own ideas for new rules for access to resources. They designed innovative collective rules, and organised follow-up and monitoring of these new land uses. Moreover, meaningful ideas for environmental policies are beginning to take shape. This work raises the epistemological question of what is meant by the term "indigenous knowledge" in environmental management, ranging from knowledge based on practical experience being included in the scholar's framing of knowledge, to a legitimate local ability to contextualize and re-arrange scientific expertise, to profoundly different worldviews which do not match ours.

  1. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.
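
    The paper's clustering stage and boosting weights are not reproduced here, but the core loop, iteratively adding source-domain samples that help a small target-domain validation set, can be sketched with a deliberately simple 1-D classifier. All data below are hypothetical:

```python
def classify(train, x):
    """1-D nearest-class-mean classifier (deliberately simple)."""
    feats = {}
    for f, y in train:
        feats.setdefault(y, []).append(f)
    return min(feats, key=lambda y: abs(sum(feats[y]) / len(feats[y]) - x))

def accuracy(train, data):
    return sum(classify(train, x) == y for x, y in data) / len(data)

def select_source_samples(target_train, validation, source_pool, rounds=10):
    """Greedy analogue of DAB's source-sample selection: each round, add the
    source-domain sample that most improves validation accuracy; stop when
    no remaining sample helps."""
    train, pool = list(target_train), list(source_pool)
    best_acc = accuracy(train, validation)
    for _ in range(min(rounds, len(pool))):
        scored = [(accuracy(train + [s], validation), s) for s in pool]
        acc, s = max(scored)
        if acc <= best_acc:
            break
        train.append(s)
        pool.remove(s)
        best_acc = acc
    return train, best_acc

# hypothetical 1-D data: scarce target-domain labels, shifted source domain
target_train = [(0.4, 0), (1.2, 1)]
validation = [(0.9, 0), (1.1, 1), (0.1, 0), (1.9, 1)]
source_pool = [(0.0, 0), (1.8, 1), (2.5, 1)]
selected, acc = select_source_samples(target_train, validation, source_pool)
```

    Here one source sample improves validation accuracy and is kept; the others would hurt the target-domain boundary and are rejected.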

  2. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
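
    The top-down/bottom-up combination itself is straightforward to sketch; here a fixed frontier-size threshold stands in for the paper's regression-predicted switching point:

```python
def hybrid_bfs(adj, source, alpha=0.25):
    """Direction-optimizing BFS sketch. Expand top-down while the frontier is
    small; switch to bottom-up once it exceeds alpha * |V| (a stand-in for a
    predicted optimal switching point)."""
    n = len(adj)
    dist = [-1] * n
    dist[source] = 0
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        if len(frontier) <= alpha * n:
            # top-down: scan edges out of the frontier
            nxt = {v for u in frontier for v in adj[u] if dist[v] == -1}
        else:
            # bottom-up: each unvisited vertex looks for a parent in the frontier
            nxt = {v for v in range(n) if dist[v] == -1
                   and any(u in frontier for u in adj[v])}
        for v in nxt:
            dist[v] = level
        frontier = nxt
    return dist

# small undirected graph given as adjacency lists
adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
levels = hybrid_bfs(adj, 0)
```

    The bottom-up pass pays off when the frontier covers much of the graph, since most unvisited vertices then find a parent after checking only a few edges.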

  3. Parameter Plane Design Method

    DTIC Science & Technology

    1989-03-01

    In this thesis a control systems analysis package is developed using parameter plane methods. It is an interactive... designer is able to choose values of the parameters which provide a good compromise between cost and dynamic behavior.

  4. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach, in which a function containing measures of grid smoothness, orthogonality, and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring points in high-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow properties by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
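
    The equidistribution law behind the first step is easy to illustrate in one dimension: new grid points are placed so that each cell carries an equal share of the integrated weight function. A sketch, where the weight profile is a made-up stand-in for a flow-solution error measure:

```python
def equidistribute(x, w, n_new):
    """1-D equidistribution: choose n_new points so each new cell carries an
    equal share of the integrated weight w (piecewise-linear, trapezoid rule)."""
    # cumulative weight along the old grid
    cum = [0.0]
    for i in range(1, len(x)):
        cum.append(cum[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    total = cum[-1]
    new_pts = []
    for k in range(n_new):
        target = total * k / (n_new - 1)
        # invert the piecewise-linear cumulative weight
        i = max(j for j in range(len(cum)) if cum[j] <= target)
        if i == len(x) - 1:
            new_pts.append(x[-1])
        else:
            frac = (target - cum[i]) / (cum[i + 1] - cum[i])
            new_pts.append(x[i] + frac * (x[i + 1] - x[i]))
    return new_pts

# weight peaked near x = 0.5 draws points toward the "steep solution" region
old = [i / 10 for i in range(11)]
wts = [1.0 + 9.0 * (1 if abs(xi - 0.5) < 0.15 else 0) for xi in old]
new_pts = equidistribute(old, wts, 11)
```

    Points cluster where the weight is large, which is exactly the behavior the adaptive weighting mesh is meant to induce.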

  5. HIV-1 vaccines and adaptive trial designs.

    PubMed

    Corey, Lawrence; Nabel, Gary J; Dieffenbach, Carl; Gilbert, Peter; Haynes, Barton F; Johnston, Margaret; Kublin, James; Lane, H Clifford; Pantaleo, Giuseppe; Picker, Louis J; Fauci, Anthony S

    2011-04-20

    Developing a vaccine against the human immunodeficiency virus (HIV) poses an exceptional challenge. There are no documented cases of immune-mediated clearance of HIV from an infected individual, and no known correlates of immune protection. Although nonhuman primate models of lentivirus infection have provided valuable data about HIV pathogenesis, such models do not predict HIV vaccine efficacy in humans. The combined lack of a predictive animal model and undefined biomarkers of immune protection against HIV necessitate that vaccines to this pathogen be tested directly in clinical trials. Adaptive clinical trial designs can accelerate vaccine development by rapidly screening out poor vaccines while extending the evaluation of efficacious ones, improving the characterization of promising vaccine candidates and the identification of correlates of immune protection.

  6. Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.

    PubMed

    Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue

    2017-04-01

    Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required, such that NMF generates desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF in learning sparse representation can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained by using a data-related approach, so it matches well with the underlying products, implying a superior faithfulness of the representations. Besides, due to the usage of an adaptive selection scheme to this factor, the sparseness of the products can be separately constrained, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is processed through solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
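
    The adaptive selection scheme of Ans-NMF is not reproduced here, but the underlying nonsmooth factorization V ~ W S H, with smoothing matrix S = (1 - theta) * I + (theta / r) * ones, can be sketched with standard multiplicative updates in which S is absorbed into one factor while the other is updated. This is a baseline sketch of nonsmooth NMF, not the authors' algorithm:

```python
import random

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def frob_err(V, W, S, H):
    R = matmul(matmul(W, S), H)
    return sum((v - x) ** 2 for rv, rr in zip(V, R)
               for v, x in zip(rv, rr)) ** 0.5

def nsnmf(V, r, theta=0.5, iters=200, seed=0, eps=1e-9):
    """Nonsmooth NMF via Lee-Seung multiplicative updates: update H with
    W' = W S fixed, then W with H' = S H fixed; updates keep factors
    nonnegative and do not increase the reconstruction error."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    S = [[(1 - theta) * (i == j) + theta / r for j in range(r)] for i in range(r)]
    for _ in range(iters):
        WS = matmul(W, S)
        WSt = [list(c) for c in zip(*WS)]
        num, den = matmul(WSt, V), matmul(matmul(WSt, WS), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        SH = matmul(S, H)
        SHt = [list(c) for c in zip(*SH)]
        num, den = matmul(V, SHt), matmul(W, matmul(SH, SHt))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, S, H

# small nonnegative matrix with approximate rank-2 block structure
V = [[1.0, 0.9, 0.1, 0.0],
     [0.9, 1.0, 0.0, 0.1],
     [0.1, 0.0, 1.0, 0.9],
     [0.0, 0.1, 0.9, 1.0]]
W0, S, H0 = nsnmf(V, 2, iters=0)
W, _, H = nsnmf(V, 2, iters=200)
err0, err = frob_err(V, W0, S, H0), frob_err(V, W, S, H)
```

    Larger theta pushes more energy into the smoothing matrix, which in turn forces sparser W and H; theta = 0 recovers plain NMF.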

  7. Control system design method

    DOEpatents

    Wilson, David G [Tijeras, NM; Robinett, III, Rush D.

    2012-02-21

    A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.

  8. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  9. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  10. An Overview of the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) Project

    PubMed Central

    Meurer, William J.; Lewis, Roger J.; Tagle, Danilo; Fetters, Michael D; Legocki, Laurie; Berry, Scott; Connor, Jason; Durkalski, Valerie; Elm, Jordan; Zhao, Wenle; Frederiksen, Shirley; Silbergleit, Robert; Palesch, Yuko; Berry, Donald A.; Barsan, William G.

    2013-01-01

    Randomized clinical trials, which aim to determine the efficacy and safety of drugs and medical devices, are a complex enterprise with myriad challenges, stakeholders, and traditions. While the primary goal is scientific discovery, clinical trials must also fulfill regulatory, clinical, and ethical requirements. Innovations in clinical trials methodology have the potential to improve the quality of knowledge gained from trials, the protection of human subjects, and the efficiency of clinical research. Adaptive clinical trial (ACT) methods represent a broad category of innovations intended to address a variety of long-standing challenges faced by investigators, such as sensitivity to prior assumptions and delayed identification of ineffective treatments. The implementation of ACT methods, however, requires greater planning and simulation compared to a more traditional design, along with more advanced administrative infrastructure for trial execution. The value of ACT methods in exploratory phase (phase II) clinical research is generally well accepted, but the potential value and challenges of applying ACT methods in large confirmatory phase clinical trials is relatively unexplored, particularly in the academic setting. In the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) project, a multidisciplinary team is studying how ACT methods could be implemented in planning actual confirmatory phase trials in an established, NIH funded clinical trials network. The overarching objectives of ADAPT-IT are to identify and quantitatively characterize the ACT methods of greatest potential value in confirmatory phase clinical trials, and to elicit and understand the enthusiasms and concerns of key stakeholders that influence their willingness to try these innovative strategies. PMID:22424650

  11. An Adaptive Staggered Dose Design for a Normal Endpoint.

    PubMed

    Wu, Joseph; Menon, Sandeep; Chang, Mark

    2015-01-01

    In a clinical trial where several doses are compared to a control, a multi-stage design that combines both the selection of the best dose and the confirmation of this selected dose is desirable. An example is the two-stage drop-the-losers or pick-the-winner design, where inferior doses are dropped after an interim analysis. Selection of target dose(s) can be based on ranking of observed effects, hypothesis testing with adjustment for multiplicity, or other criteria at interim stages. A number of methods have been proposed and have made significant gains in trial efficiency. However, many of these designs start off with equal allocation across all doses and do not consider prioritizing the doses using existing dose-response information. We propose an adaptive staggered dose procedure that allows explicit prioritization of doses and applies an error-spending scheme that favors doses with assumed better responses. This design starts off with only a subset of the doses and adaptively adds new doses depending on interim results. Using simulation, we show that this design achieves higher statistical power than the drop-the-losers design given strong prior information on the dose response.

  12. Designing and Implementing Effective Adapted Physical Education Programs

    ERIC Educational Resources Information Center

    Kelly, Luke E.

    2011-01-01

    "Designing and Implementing Effective Adapted Physical Education Programs" was written to assist adapted and general physical educators who are dedicated to ensuring that the physical and motor needs of all their students are addressed in physical education. While it is anticipated that adapted physical educators, where available, will typically…

  13. Design for an Adaptive Library Catalog.

    ERIC Educational Resources Information Center

    Buckland, Michael K.; And Others

    1992-01-01

    Describes OASIS, a prototype adaptive online catalog implemented as a front end to the University of California MELVYL catalog. Topics addressed include the concept of adaptive retrieval systems, strategic search commands, feedback, prototyping using a front-end, the problem of excessive retrieval, commands to limit or increase search results, and…

  14. Designing and Generating Educational Adaptive Hypermedia Applications

    ERIC Educational Resources Information Center

    Retalis, Symeon; Papasalouros, Andreas

    2005-01-01

    Educational Adaptive Hypermedia Applications (EAHA) provide personalized views on the learning content to individual learners. They also offer adaptive sequencing (navigation) over the learning content based on rules that stem from the user model requirements and the instructional strategies. EAHA are gaining the focus of the research community as…

  15. Developing new online calibration methods for multidimensional computerized adaptive testing.

    PubMed

    Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua

    2017-02-01

    Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. Like all other formats of CAT, item replenishment is an essential part of MCAT for its item bank maintenance and management, which governs retiring overexposed or obsolete items over time and replacing them with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.

  16. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  17. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  18. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  19. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
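
    The Lyapunov-based time-domain procedure can be illustrated on a scalar example: a first-order plant tracking a first-order reference model, with the controller gains adapted by the Lyapunov-derived laws. All numbers are illustrative; this is a sketch of the standard scalar MRAC construction, not the report's design:

```python
def mrac_first_order(a=-1.0, b=2.0, am=-4.0, bm=4.0, gamma=2.0,
                     dt=1e-3, steps=30000):
    """Model-reference adaptive control of xdot = a*x + b*u with
    u = kx*x + kr*r; the gains adapt via the Lyapunov-derived laws
    kx' = -gamma*e*x, kr' = -gamma*e*r, where e = x - xm is the
    model-following error."""
    x = xm = kx = kr = 0.0
    r = 1.0                              # constant reference command
    errs = []
    for _ in range(steps):
        e = x - xm
        u = kx * x + kr * r
        x += dt * (a * x + b * u)        # plant (Euler step)
        xm += dt * (am * xm + bm * r)    # reference model
        kx += dt * (-gamma * e * x)      # adaptation laws
        kr += dt * (-gamma * e * r)
        errs.append(abs(e))
    return errs

errs = mrac_first_order()
```

    The Lyapunov function V = e²/2 + (b/2γ)(k̃x² + k̃r²) is non-increasing along trajectories, which is what guarantees the tracking error decays even while the gains are still adapting.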

  20. Design of an adaptive controller for a telerobot manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1989-01-01

    The design of a joint-space adaptive control scheme is presented for controlling the slave arm motion of a dual-arm telerobot system developed at Goddard Space Flight Center (GSFC) to study telerobotic operations in space. Each slave arm of the dual-arm system is a kinematically redundant manipulator with 7 degrees of freedom (DOF). Using the concept of model reference adaptive control (MRAC) and the Lyapunov direct method, an adaptation algorithm is derived which adjusts the PD controller gains of the control scheme. The development of the adaptive control scheme assumes that the slave arm motion is non-compliant and slowly varying. The implementation of the derived control scheme does not need the computation of the manipulator dynamics, which makes the control scheme sufficiently fast for real-time applications. A computer simulation study performed for the 7-DOF slave arm shows that the developed control scheme can efficiently adapt to sudden changes in payload while tracking various test trajectories, such as ramps or sinusoids, with negligible position errors.

  1. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  2. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.

  3. Adaptive strategies in designing the simultaneous global drug development program.

    PubMed

    Yuan, Zhilong; Chen, Gang; Huang, Qin

    2016-01-01

Many methods have been proposed to account for the potential impact of ethnic/regional factors when extrapolating results from multiregional clinical trials (MRCTs) to targeted ethnic (TE) patients, i.e., "bridging." Most of them either focused on TE patients in the MRCT (i.e., internal bridging) or on a separate local clinical trial (LCT) (i.e., external bridging). Huang et al. (2012) integrated both bridging concepts in their method for the Simultaneous Global Drug Development Program (SGDDP), which designs both the MRCT and the LCT prospectively and combines patients in both trials by ethnic origin, i.e., TE vs. non-TE (NTE). The weighted Z test was used to combine information from TE and NTE patients to test with statistical rigor whether a new treatment is effective in the TE population. Practically, the MRCT is often completed before the LCT. Thus, to increase the power for the SGDDP and/or obtain more informative data in TE patients, we may use the final results from the MRCT to re-evaluate initial assumptions (e.g., effect sizes, variances, weight), and modify the LCT accordingly. We discuss various adaptive strategies for the LCT such as sample size reassessment, population enrichment, endpoint change, and dose adjustment. As an example, we extend a popular adaptive design method to re-estimate the sample size for the LCT, and illustrate it for a normally distributed endpoint.
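The weighted Z test mentioned in the abstract can be sketched in a few lines; the weight w and the trial numbers below are hypothetical, and the square-root weights are what keep the combined statistic standard normal under the null.

```python
import math

def weighted_z(z_te, z_nte, w):
    """Combine TE and NTE evidence; w is the pre-specified weight on TE.
    sqrt(w)^2 + sqrt(1-w)^2 = 1 preserves the N(0,1) null distribution."""
    return math.sqrt(w) * z_te + math.sqrt(1.0 - w) * z_nte

def z_stat(mean_diff, sd, n_per_arm):
    """Two-sample z statistic for a normally distributed endpoint."""
    return mean_diff / (sd * math.sqrt(2.0 / n_per_arm))

z_te = z_stat(0.4, 1.0, 50)     # hypothetical TE-patient result
z_nte = z_stat(0.5, 1.0, 200)   # hypothetical NTE-patient result
z_comb = weighted_z(z_te, z_nte, w=0.5)
reject = z_comb > 1.645         # one-sided alpha = 0.05
```

In the adaptive setting the paper describes, quantities such as the LCT sample size feeding z_te could be re-estimated from the completed MRCT before the LCT is run.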

  4. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and the effective speed improvement from using a parallel processor falls as the number of idle processors grows.
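As a sketch of the communication pattern described above (not the authors' code), consider a backward-Euler step for the 1D heat equation solved by Jacobi iteration on a grid split across two simulated "processors"; the edge-value swap at the top of each sweep stands in for the message passing between neighboring processors.

```python
import numpy as np

# Implicit (backward-Euler) heat step, (1+2r)u'_i - r(u'_{i-1} + u'_{i+1}) = u_i,
# solved by Jacobi sweeps on a 1D grid split between two simulated "processors".
# Boundary values are held at zero; r and the grid size are illustrative.
r, n = 0.5, 40
x = np.arange(1, n + 1) / (n + 1)
u_old = np.sin(np.pi * x)                             # interior points, old time level
halves = [u_old[:n//2].copy(), u_old[n//2:].copy()]   # domain decomposition
for _ in range(200):
    g_left = halves[0][-1]                            # "sent" to the right processor
    g_right = halves[1][0]                            # "sent" to the left processor
    extL = np.concatenate(([0.0], halves[0], [g_right]))   # ghost points attached
    extR = np.concatenate(([g_left], halves[1], [0.0]))
    halves = [
        (u_old[:n//2] + r * (extL[:-2] + extL[2:])) / (1 + 2*r),
        (u_old[n//2:] + r * (extR[:-2] + extR[2:])) / (1 + 2*r),
    ]
u_new = np.concatenate(halves)
```

Each sweep needs only one neighbor value per interface, which is exactly the message a real distributed implementation would exchange; the idle time the abstract warns about arises while processors wait for these exchanges.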

  5. Method for Design Rotation

    DTIC Science & Technology

    1993-08-01

desirability of a rotation as a function of the set of planar angles. Criteria for the symmetry of the design (such as the same set of factor levels for...P is -1. Hence there is no theoretical problem in obtaining rotations of a design; there are only the practical questions: Why rotate a design? And...star points, which can be represented in shorthand notation by the permutations of (±1, 0, ..., 0), and (c) factorial points, which are a two-level

  6. Optical Design for Extremely Large Telescope Adaptive Optics Systems

    SciTech Connect

    Bauman, Brian J.

    2003-01-01

Designing an adaptive optics (AO) system for extremely large telescopes (ELT's) will present new optical engineering challenges. Several of these challenges are addressed in this work, including first-order design of multi-conjugate adaptive optics (MCAO) systems, pyramid wavefront sensors (PWFS's), and laser guide star (LGS) spot elongation. MCAO systems need to be designed in consideration of various constraints, including deformable mirror size and correction height. The y, ȳ method of first-order optical design is a graphical technique that uses a plot with marginal and chief ray heights as coordinates; the optical system is represented as a segmented line. This method is shown to be a powerful tool in designing MCAO systems. From these analyses, important conclusions about configurations are derived. PWFS's, which offer an alternative to Shack-Hartmann (SH) wavefront sensors (WFS's), are envisioned as the workhorse of layer-oriented adaptive optics. Current approaches use a 4-faceted glass pyramid to create a WFS analogous to a quad-cell SH WFS. PWFS's and SH WFS's are compared and some newly-considered similarities and PWFS advantages are presented. Techniques to extend PWFS's are offered: First, PWFS's can be extended to more pixels in the image by tiling pyramids contiguously. Second, pyramids, which are difficult to manufacture, can be replaced by less expensive lenslet arrays. An approach is outlined to convert existing SH WFS's to PWFS's for easy evaluation of PWFS's. Also, a demonstration of PWFS's in sensing varying amounts of an aberration is presented. For ELT's, the finite altitude and finite thickness of LGS's means that the LGS will appear elongated from the viewpoint of subapertures not directly under the telescope. Two techniques for dealing with LGS spot elongation in SH WFS's are presented. One method assumes that the laser will be pulsed and uses a segmented micro-electromechanical system (MEMS) to track the LGS light subaperture by

  7. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  8. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  9. Adaptive filtering for the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Marié, Simon; Gloerfelt, Xavier

    2017-03-01

In this study, a new selective filtering technique is proposed for the Lattice Boltzmann Method. This technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes the latter coefficient dependent on the shear stress in order to restrict the use of the spatial filtering technique to sheared regions where numerical instabilities may occur. Different parameters are tested on 2D test cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to the use of a standard subgrid-scale model, and show significant improvements, in particular for low-order filters consistent with the LBM stencil.
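A 1D caricature of the idea (the paper works within the lattice Boltzmann framework; the shear proxy, the mapping to σ, and the 3-point stencil here are all illustrative): the filter strength follows the local gradient, so damping concentrates where the field is sheared.

```python
import numpy as np

# Selective spatial filter whose coefficient sigma adapts to a local shear
# proxy |u_{i+1} - u_i|: smoothing is strong where gradients are large and
# vanishes where the field is smooth. Stencil and mapping are illustrative.
def adaptive_filter(u, sigma_max=1.0):
    shear = np.abs(np.roll(u, -1) - u)                  # local shear proxy
    sigma = sigma_max * shear / (shear.max() + 1e-12)   # sigma in [0, sigma_max]
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)      # 3-point stencil
    return u + 0.25 * sigma * lap                       # damps grid-scale oscillations

i = np.arange(128)
x = 2.0 * np.pi * i / 128
u = np.sin(x) + 0.2 * (-1.0) ** i    # smooth field plus grid-scale noise
uf = adaptive_filter(u)
```

The grid-scale (Nyquist) component is strongly attenuated while the smooth sine component passes through nearly unchanged, which is the selective behavior the static filter lacks.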

  10. Incorporation of Content Balancing Requirements in Stratification Designs for Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai

    2003-01-01

    Studied three stratification designs for computerized adaptive testing in conjunction with three well-developed content balancing methods. Simulation study results show substantial differences in item overlap rate and pool utilization among different methods. Recommends an optimal combination of stratification design and content balancing method.…

  11. A two-dimensional adaptive mesh generation method

    NASA Astrophysics Data System (ADS)

    Altas, Irfan; Stephenson, John W.

    1991-05-01

The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of differential equations, both by classical difference formulas designed for uniform meshes and by the present difference formulas, is illustrated through application of the method to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.

  12. An adaptive penalty method for DIRECT algorithm in engineering optimization

    NASA Astrophysics Data System (ADS)

    Vilaça, Rita; Rocha, Ana Maria A. C.

    2012-09-01

The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method within the DIRECT algorithm, in which the constraints that are more difficult to satisfy receive relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
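The transformation and the adaptive part can be sketched as follows; the toy problem, the doubling rule, and the grid minimizer (standing in for DIRECT) are illustrative, not the paper's implementation.

```python
import numpy as np

# Penalty transformation: F(x) = f(x) + sum_j mu_j * max(0, g_j(x)), with the
# weight of any constraint still violated at the incumbent doubled between
# rounds. A dense grid search stands in for the DIRECT global optimizer.
def penalized(f, gs, mu):
    return lambda x: f(x) + sum(m * max(0.0, g(x)) for m, g in zip(mu, gs))

f = lambda x: (x - 3.0) ** 2           # minimize subject to x <= 2
gs = [lambda x: x - 2.0]               # constraint in g(x) <= 0 form
mu = [1.0]
xs = np.linspace(-1.0, 5.0, 6001)
for _ in range(20):                    # adaptive penalty loop
    F = penalized(f, gs, mu)
    x_best = xs[np.argmin([F(x) for x in xs])]
    mu = [m * 2.0 if g(x_best) > 1e-9 else m for m, g in zip(mu, gs)]
# x_best approaches the constrained optimum x = 2
```

With the initial weight the unconstrained minimum sits at x = 2.5 (infeasible); doubling the violated constraint's weight once pushes the minimizer onto the boundary, after which the weight stays put.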

  13. Adaptive design lessons from professional architects

    NASA Astrophysics Data System (ADS)

    Geiger, Ray W.; Snell, J. T.

    1993-09-01

Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers in a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and future-oriented design practice. Discussion covers case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.

  14. Adaptive and Adaptable Automation Design: A Critical Review of the Literature and Recommendations for Future Research

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kaber, David B.

    2006-01-01

This report presents a review of the literature on approaches to adaptive and adaptable task/function allocation and adaptive interface technologies for effective human management of complex systems that are likely to be issues for the Next Generation Air Transportation System, and a focus of research under the Aviation Safety Program, Integrated Intelligent Flight Deck Project. Contemporary literature retrieved from an online database search is summarized and integrated. The major topics include the effects of delegation-type adaptable automation on human performance, workload, and situation awareness; the effectiveness of various automation invocation philosophies and strategies for function allocation in adaptive systems; the role of user modeling in adaptive interface design; and the performance implications of adaptive interface technology.

  15. Optimal adaptive sequential designs for crossover bioequivalence studies.

    PubMed

    Xu, Jialin; Audet, Charles; DiLiberti, Charles E; Hauck, Walter W; Montague, Timothy H; Parr, Alan F; Potvin, Diane; Schuirmann, Donald J

    2016-01-01

    In prior works, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70-143% within each of two ranges of intra-subject coefficient of variation (10-30% and 30-55%). These designs also introduce a futility decision for stopping the study after the first stage if there is sufficiently low likelihood of meeting bioequivalence criteria if the second stage were completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of type I error and maintained power at ≥ 80%, their average sample sizes were similar to or less than those of conventional single stage designs.

  16. An Adaptive Hybrid Genetic Algorithm for Improved Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Espinoza, F. P.; Minsker, B. S.; Goldberg, D. E.

    2001-12-01

    Identifying optimal designs for a groundwater remediation system is computationally intensive, especially for complex, nonlinear problems such as enhanced in situ bioremediation technology. To improve performance, we apply a hybrid genetic algorithm (HGA), which is a two-step solution method: a genetic algorithm (GA) for global search using the entire population and then a local search (LS) to improve search speed for only a few individuals in the population. We implement two types of HGAs: a non-adaptive HGA (NAHGA), whose operations are invariant throughout the run, and a self-adaptive HGA (SAHGA), whose operations adapt to the performance of the algorithm. The best settings of the two HGAs for optimal performance are then investigated for a groundwater remediation problem. The settings include the frequency of LS with respect to the normal GA evaluation, probability of individual selection for LS, evolution criterion for LS (Lamarckian or Baldwinian), and number of local search iterations. A comparison of the algorithms' performance under different settings will be presented.
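A minimal hybrid-GA sketch of the kind of loop studied here (the objective, population size, rates, and local-search settings are invented; the paper's SAHGA additionally adapts such settings on-line): a real-coded GA with a Lamarckian hill-climbing local search applied to a few individuals per generation.

```python
import numpy as np

# Hybrid GA: global GA search over the whole population, plus a Lamarckian
# hill-climbing local search (LS) on the few best children each generation.
rng = np.random.default_rng(1)
f = lambda x: np.sum(x**2, axis=-1)             # stand-in objective (minimize)

def local_search(x, step=0.1, iters=10):
    for _ in range(iters):                       # simple stochastic hill climber
        cand = x + rng.normal(0.0, step, x.shape)
        if f(cand) < f(x):
            x = cand
    return x                                     # Lamarckian: improvement written back

pop = rng.uniform(-5.0, 5.0, (30, 2))
for gen in range(40):
    parents = pop[np.argsort(f(pop))[:10]]       # truncation selection
    pop = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.3, (30, 2))
    for i in np.argsort(f(pop))[:3]:             # LS on the 3 best only
        pop[i] = local_search(pop[i])
best = f(pop).min()
```

The settings the abstract lists map directly onto this loop: how often LS runs, how many individuals it touches, whether improvements are written back (Lamarckian) or only used for fitness (Baldwinian), and how many LS iterations are allowed.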

  17. An adaptive multimeme algorithm for designing HIV multidrug therapies.

    PubMed

    Neri, Ferrante; Toivanen, Jari; Cascella, Giuseppe Leonardo; Ong, Yew-Soon

    2007-01-01

This paper proposes a period representation for modeling multidrug HIV therapies and an Adaptive Multimeme Algorithm (AMmA) for designing the optimal therapy. The period representation offers benefits in terms of flexibility and reduction in dimensionality compared to the binary representation. The AMmA is a memetic algorithm which employs a list of three local searchers adaptively activated by an evolutionary framework. These local searchers, having different features according to the exploration logic and the pivot rule, have the role of exploring the decision space from different and complementary perspectives and, thus, assisting the standard evolutionary operators in the optimization process. Furthermore, the AMmA makes use of an adaptation mechanism which dynamically sets the algorithmic parameters in order to prevent stagnation and premature convergence. The numerical results demonstrate that the application of the proposed algorithm leads to very efficient medication schedules which quickly stimulate a strong immune response to HIV. The earlier termination of the medication schedule leads to fewer unpleasant side effects for the patient from strong antiretroviral therapy. A numerical comparison shows that the AMmA is more efficient than three popular metaheuristics. Finally, a statistical test based on the calculation of the tolerance interval confirms the superiority of the AMmA over the other methods for the problem under study.

  18. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s-plane) and those where the design is done in the discrete domain (or z-plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable only above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
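For the first category (design in s, then discretize), a standard step is the bilinear (Tustin) substitution s → (2/T)(z−1)/(z+1); here is a sketch for a first-order lag compensator, with illustrative numbers (the report compares several such mappings, not this one specifically).

```python
# Tustin discretization of the first-order lag C(s) = a/(s + a):
# substituting s = (2/T)(z-1)/(z+1) gives
#   C(z) = b0 (1 + z^-1) / (1 + a1 z^-1).
def tustin_first_order_lag(a, T):
    k = 2.0 / T
    b0 = a / (k + a)
    a1 = (a - k) / (k + a)
    return b0, a1

a, T = 10.0, 0.01             # 10 rad/s lag, 100 Hz sample rate (illustrative)
b0, a1 = tustin_first_order_lag(a, T)
# resulting difference equation: y[n] = -a1*y[n-1] + b0*(u[n] + u[n-1])
dc_gain = 2.0 * b0 / (1.0 + a1)    # evaluate C(z) at z = 1
```

The Tustin map preserves the unity DC gain of the continuous compensator exactly, one reason s-plane designs discretized this way hold up at moderate sample rates.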

  19. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  20. Towards Individualized Online Learning: The Design and Development of an Adaptive Web Based Learning Environment

    ERIC Educational Resources Information Center

    Inan, Fethi A.; Flores, Raymond; Ari, Fatih; Arslan-Ari, Ismahan

    2011-01-01

The purpose of this study was to document the design and development of an adaptive system which individualizes instruction (content, interfaces, instructional strategies, and resources) based on two factors, namely student motivation and prior knowledge levels. Combining adaptive hypermedia methods with strategies proposed by…

  1. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
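The adaptive LMS approach to time-delay estimation can be sketched as follows (signals, delay, filter length, and step size are invented): adapt an FIR filter that maps one sensor's output to the other's; after convergence, the index of the dominant tap is the delay estimate.

```python
import numpy as np

# LMS time-delay estimation sketch: sensor 2 is a delayed copy of sensor 1,
# and an adaptive FIR filter learns the mapping between them. The tap with
# the largest magnitude marks the delay in samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)          # sensor 1 (white test signal)
true_delay = 5
d = np.roll(x, true_delay)             # sensor 2: delayed copy
N = 16                                 # filter length (taps 0..15)
w = np.zeros(N)
mu = 0.01                              # LMS step size
for n in range(N, len(x)):
    u = x[n - np.arange(N)]            # most recent N input samples
    e = d[n] - w @ u                   # prediction error
    w += mu * e * u                    # LMS update
est_delay = int(np.argmax(np.abs(w)))
```

With a white input the weight vector converges toward a unit impulse at the true lag, which is the time-domain counterpart of the sharpened correlation peak the Roth processor produces.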

  2. Clothing adaptations: the occupational therapist and the clothing designer collaborate.

    PubMed

    White, L W; Dallas, M J

    1977-02-01

    An occupational therapist and a clothing designer collaborated in solving the dressing problem of a child with multiple amputations. The dressing problems were identified and solutions for clothing adaptations relating to sleeves, closures, fasteners, fit, and design were incorporated into two test garments. Evaluation of the garments was based on ease in dressing and undressing, the effect on movement and mobility, the construction techniques, and their appearance. A description is given of the pattern adjustments, and considerations for clothing adaptations or selection or both are discussed. These clothing adaptations can be generalized to a wider population of handicapped persons.

  3. A novel adaptive force control method for IPMC manipulation

    NASA Astrophysics Data System (ADS)

    Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao

    2012-07-01

IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, and bio-manipulation. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects, such as fixing cells with an IPMC gripper, is limited by the contact force. Like most EAPs, IPMC exhibits a creep phenomenon: the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.

  4. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.

  5. Adaptive numerical methods for partial differential equations

    SciTech Connect

Colella, P.

    1995-07-01

This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
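The flag-and-refine step can be sketched in a few lines; the gradient indicator, tolerance, and test profile below are invented stand-ins for the generalized error criterion mentioned in the review.

```python
import numpy as np

# Flagging step of AMR: mark cells where an error indicator (here a crude
# gradient estimate) exceeds a tolerance; a finer grid would then be
# overlaid on the flagged cells only. All parameters are illustrative.
def flag_cells(u, dx, tol):
    indicator = np.abs(np.diff(u)) / dx          # per-cell gradient estimate
    return np.flatnonzero(indicator > tol)       # cells needing a finer patch

x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)                    # sharp front at x = 0.5
flags = flag_cells(u, dx=0.01, tol=5.0)
```

Only the handful of cells straddling the front are flagged, which is what lets AMR concentrate resolution without refining the whole domain.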

  6. Dynamics of adaptive structures: Design through simulations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, S.

    1993-01-01

The use of a helical bi-morph actuator/sensor concept mimicking the change of helical waveform in bacterial flagella is perhaps the first application of bacterial motions (living species) to the longitudinal deployment of space structures. However, no dynamical analysis was performed to explain the waveform-change mechanism. The objective is to review various deployment concepts from the dynamics point of view and to introduce dynamical considerations from the outset as part of the design process. Specifically, the impact of incorporating combined static mechanisms and dynamic design considerations on deployment performance during the reconfiguration stage is studied in terms of improved controllability, maneuvering duration, and a joint singularity index. It is shown that intermediate configurations during articulation play an important role in improved joint mechanism design and overall structural deployability.

  7. Robust Engineering Designs for Infrastructure Adaptation to a Changing Climate

    NASA Astrophysics Data System (ADS)

    Samaras, C.; Cook, L.

    2015-12-01

Infrastructure systems are expected to be functional, durable and safe over long service lives - 50 to over 100 years. Observations and models of climate science show that greenhouse gas emissions resulting from human activities have changed climate, weather and extreme events. Projections of future changes (albeit with uncertainties caused by inadequacies of current climate/weather models) can be made based on scenarios for future emissions, but actual future emissions are themselves uncertain. Most current engineering standards and practices for infrastructure assume that the probabilities of future extreme climate and weather events will match those of the past. Climate science shows that this assumption is invalid, but is unable, at present, to define these probabilities over the service lives of existing and new infrastructure systems. Engineering designs, plans, and institutions and regulations will need to be adaptable for a range of future conditions (conditions of climate, weather and extreme events, as well as changing societal demands for infrastructure services). For their current and future projects, engineers should: involve all stakeholders (owners, financers, insurance, regulators, affected public, climate/weather scientists, etc.) in key decisions; use low-regret, adaptive strategies, such as robust decision making and the observational method, comply with relevant standards and regulations, and exceed their requirements where appropriate; and publish design studies and performance/failure investigations to extend the body of knowledge for advancement of practice. The engineering community should conduct observational and modeling research with climate/weather/social scientists and the concerned communities, and account rationally for climate change in revised engineering standards and codes. This presentation describes initial research on decision-making under uncertainty for climate-resilient infrastructure design.

  8. Adaptive filter design using recurrent cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Chen, Li-Yang; Yeung, Daniel S

    2010-07-01

A novel adaptive filter is proposed using a recurrent cerebellar model articulation controller (CMAC). The proposed locally recurrent, globally feedforward recurrent CMAC (RCMAC) has the favorable properties of small size, good generalization, rapid learning, and dynamic response, making it more suitable for high-speed signal processing. To provide fast training, an efficient parameter learning algorithm based on the normalized gradient descent method is presented, in which the learning rates are adapted on-line. A Lyapunov function is then utilized to derive the conditions on the adaptive learning rates, so the stability of the filtering error can be guaranteed. To demonstrate the performance of the proposed adaptive RCMAC filter, it is applied to a nonlinear channel equalization system and an adaptive noise cancelation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.

  9. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  10. Design of a digital adaptive control system for reentry vehicles.

    NASA Technical Reports Server (NTRS)

    Picon-Jimenez, J. L.; Montgomery, R. C.; Grigsby, L. L.

    1972-01-01

    The flying qualities of atmospheric reentry vehicles experience considerable variations due to the wide changes in flight conditions characteristic of reentry trajectories. A digital adaptive control system has been designed to modify the vehicle's dynamic characteristics and to provide desired flying qualities for all flight conditions. This adaptive control system consists of a finite-memory identifier which determines the vehicle's unknown parameters, and a gain computer which calculates feedback gains to satisfy flying quality requirements.
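
The identifier-plus-gain-computer structure can be sketched for a scalar plant: a sliding-window (finite-memory) least-squares identifier estimates the unknown parameters, and the gain computer places the closed-loop pole. All gains and parameter values here are illustrative, not from the paper:

```python
import numpy as np

def identify_window(x_win, u_win):
    """Finite-memory identifier: least-squares (a, b) for
    x[k+1] = a*x[k] + b*u[k] over a sliding data window."""
    Phi = np.column_stack([x_win[:-1], u_win])
    return np.linalg.lstsq(Phi, x_win[1:], rcond=None)[0]

def gain_for_pole(a, b, pole=0.5):
    """Gain computer: u = -K*x places the closed-loop pole a - b*K at `pole`."""
    return (a - pole) / b

rng = np.random.default_rng(1)
a_true, b_true = 1.2, 0.4        # unstable open-loop plant
window = 10
x, u_hist, K = [1.0], [], 0.0
for k in range(60):
    u = -K * x[-1] + 0.01 * rng.standard_normal()   # probing noise aids ID
    u_hist.append(u)
    x.append(a_true * x[-1] + b_true * u)
    if k >= window:
        a_hat, b_hat = identify_window(np.array(x[-window - 1:]),
                                       np.array(u_hist[-window:]))
        K = gain_for_pole(a_hat, b_hat)
x = np.array(x)
```

Once the window fills, the identifier recovers (a, b), the computed gain stabilizes the plant, and the state decays toward the noise floor.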

  11. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
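
A minimal sketch of the adaptive allocation idea, assuming second-stage effort is directed in proportion to first-stage counts (the allocation rule here is illustrative, not the authors' exact design):

```python
import numpy as np

def allocate_second_stage(first_counts, budget, min_units=1):
    """Adaptive allocation: every primary unit gets `min_units` secondary
    units; the remaining budget is split in proportion to first-stage
    counts, directing extra effort to apparently high-abundance units."""
    first_counts = np.asarray(first_counts, dtype=float)
    alloc = np.full(len(first_counts), min_units)
    remaining = budget - alloc.sum()
    if remaining > 0 and first_counts.sum() > 0:
        shares = np.floor(remaining * first_counts / first_counts.sum()).astype(int)
        alloc += shares
        # Hand out any leftover units to the highest-count primary units.
        for i in np.argsort(first_counts)[::-1][: int(remaining - shares.sum())]:
            alloc[i] += 1
    return alloc

# Five primary units surveyed in stage one; 25 second-stage units available.
alloc = allocate_second_stage([0, 12, 3, 0, 5], budget=25)
```

The unit with the largest first-stage count receives the largest share of the second-stage effort, while every unit retains the minimum coverage.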

  12. Principles and Methods of Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…

  13. An optimisation method for complex product design

    NASA Astrophysics Data System (ADS)

    Li, Ni; Yi, Wenqing; Bi, Zhuming; Kong, Haipeng; Gong, Guanghong

    2013-11-01

    Designing a complex product such as an aircraft usually requires both qualitative and quantitative data and reasoning. To assist the design process, a critical issue is how to represent qualitative data and utilise it in the optimisation. In this study, a new method is proposed for the optimal design of complex products: to make full use of available data, information and knowledge, qualitative reasoning is integrated into the optimisation process. The transformation and fusion of qualitative and quantitative data are achieved via fuzzy set theory and a cloud model. To shorten the design process, parallel computing is implemented to solve the formulated optimisation problems, and a parallel adaptive hybrid algorithm (PAHA) is proposed. The performance of the new algorithm has been verified by comparing its results with those of two other existing algorithms. Further, PAHA has been applied to determine the shape parameters of an aircraft model for aerodynamic optimisation purposes.

  14. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.

  15. A Method for Severely Constrained Item Selection in Adaptive Testing.

    ERIC Educational Resources Information Center

    Stocking, Martha L.; Swanson, Len

    1993-01-01

    A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
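
The weighted deviations idea can be sketched for a 1PL item bank: each candidate item is scored by its Fisher information minus a weighted penalty for the constraint violations its selection would create. This is a simplified scalar version for illustration, not the full model and algorithm of the paper:

```python
import numpy as np

def item_info(b, theta):
    """Fisher information of a 1PL item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

def weighted_deviation_pick(theta, difficulties, categories, counts,
                            targets, weights, administered):
    """Choose the item maximizing information minus the weighted sum of
    content-constraint deviations the choice would create."""
    best, best_score = None, -np.inf
    for i, (b, c) in enumerate(zip(difficulties, categories)):
        if i in administered:
            continue
        new_counts = dict(counts)
        new_counts[c] = new_counts.get(c, 0) + 1
        dev = sum(w * max(0, new_counts.get(cat, 0) - targets[cat])
                  for cat, w in weights.items())
        score = item_info(b, theta) - dev
        if score > best_score:
            best, best_score = i, score
    return best

difficulties = [0.0, 0.1, -0.1, 2.0]
categories   = ["alg", "alg", "geom", "geom"]
targets  = {"alg": 1, "geom": 2}
weights  = {"alg": 1.0, "geom": 1.0}
# One algebra item already administered, so the algebra quota is filled.
picked = weighted_deviation_pick(0.0, difficulties, categories,
                                 {"alg": 1}, targets, weights, {0})
```

Item 1 is nearly as informative as item 2, but selecting it would overfill the algebra quota, so the weighted-deviation score favors the well-targeted geometry item instead.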

  16. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the computed stress intensity factors to analytical results.

  17. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the Experimental Physics and Industrial Control System (EPICS). The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  18. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the Experimental Physics and Industrial Control System (EPICS). The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET.

  19. Adaptive finite element methods in electrochemistry.

    PubMed

    Gavaghan, David J; Gillow, Kathryn; Süli, Endre

    2006-12-05

    In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.
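
The error-bound-driven adaptation loop can be sketched in one dimension: an a posteriori indicator per element drives local bisection until a prescribed tolerance is met. The midpoint-deviation indicator below is illustrative; the paper derives a rigorous bound on the error in the current:

```python
import numpy as np

def refine_to_tolerance(f, a, b, tol, max_iter=30):
    """Adaptive h-refinement sketch: the a posteriori indicator for each
    element is the deviation of f at the midpoint from the linear
    interpolant; elements over `tol` are bisected."""
    nodes = np.linspace(a, b, 5)
    for _ in range(max_iter):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        if eta.max() < tol:
            break
        nodes = np.sort(np.concatenate([nodes, mids[eta >= tol]]))
    return nodes, eta.max()

# Steep profile near x = 0, loosely mimicking an edge-effect singularity.
f = lambda x: np.arctan(50 * x)
nodes, err = refine_to_tolerance(f, -1.0, 1.0, tol=1e-3)
```

The loop clusters small elements around the steep region while the flat regions keep their original coarse elements, which is exactly the behavior wanted at an electrode edge.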

  20. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  1. Bayesian response-adaptive designs for basket trials.

    PubMed

    Ventz, Steffen; Barry, William T; Parmigiani, Giovanni; Trippa, Lorenzo

    2017-02-17

    We develop a general class of response-adaptive Bayesian designs using hierarchical models, and provide open source software to implement them. Our work is motivated by recent master protocols in oncology, where several treatments are investigated simultaneously in one or multiple disease types, and treatment efficacy is expected to vary across biomarker-defined subpopulations. Adaptive trials such as I-SPY-2 (Barker et al., 2009) and BATTLE (Zhou et al., 2008) are special cases within our framework. We discuss the application of our adaptive scheme to two distinct research goals. The first is to identify a biomarker subpopulation for which a therapy shows evidence of treatment efficacy, and to exclude other subpopulations for which such evidence does not exist. This leads to a subpopulation-finding design. The second is to identify, within biomarker-defined subpopulations, a set of cancer types for which an experimental therapy is superior to the standard-of-care. This goal leads to a subpopulation-stratified design. Using simulations constructed to faithfully represent ongoing cancer sequencing projects, we quantify the potential gains of our proposed designs relative to conventional non-adaptive designs.
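
One common form of response-adaptive Bayesian allocation, assuming a simple Beta-Bernoulli model per arm (a sketch of the general idea, not the hierarchical design of the paper):

```python
import numpy as np

def thompson_allocation(successes, failures, n_draws=10000, rng=None):
    """Response-adaptive allocation sketch: each arm's response rate gets a
    Beta(1+s, 1+f) posterior; the allocation probability of an arm is the
    Monte Carlo estimate of the posterior probability it is the best arm."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(successes)[:, None]
    f = np.asarray(failures)[:, None]
    draws = rng.beta(1 + s, 1 + f, size=(len(successes), n_draws))
    best = np.argmax(draws, axis=0)
    return np.bincount(best, minlength=len(successes)) / n_draws

# Arm 1 has shown more responses, so it should receive most new patients.
probs = thompson_allocation(successes=[3, 12], failures=[9, 4])
```

As response data accumulate, allocation probabilities shift toward the apparently superior arm, which is the mechanism that lets such designs drop poorly performing subpopulations.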

  2. Adaptive methods, rolling contact, and nonclassical friction laws

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1989-01-01

    Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.

  3. Adaptive methods: when and how should they be used in clinical trials?

    PubMed

    Porcher, Raphaël; Lecocq, Brigitte; Vray, Muriel

    2011-01-01

    Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to possibly modify certain aspects without compromising the validity and integrity of the said trial. Compared to more traditional trials, in theory, adaptive designs allow the same information to be generated but in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints, in particular of a logistic nature, that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations in terms of experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be determined well in advance.

  4. Designing Adaptable Ships: Modularity and Flexibility in Future Ship Designs

    DTIC Science & Technology

    2016-01-01

    …integrated into the new design while reducing the construction cost of the ship. Recommendations: We offer both short-term, ship-specific recommendations and…

  5. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time adaptive force/position control is achieved by use of feedforward and feedback controllers, the feedforward controller being the inverse of the linearized model of robot dynamics and containing only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
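
The feedforward-plus-feedback structure can be sketched on a one-joint model: inverse-dynamics (double-derivative) feedforward with an imperfect mass estimate, plus PID feedback for robust tracking. The model and gains below are illustrative, not taken from the patent:

```python
import numpy as np

def track(m_true=2.0, m_hat=1.8, dt=1e-3, T=5.0):
    """Feedforward (inverse-dynamics, double-derivative) plus PID feedback
    tracking on a single-joint model  m*qdd = u.  The feedforward term uses
    the imperfect model m_hat; the PID loop removes the residual error."""
    kp, ki, kd = 400.0, 200.0, 40.0
    n = int(T / dt)
    t = np.arange(n) * dt
    qd, qdd_d = np.sin(t), -np.sin(t)   # desired position / acceleration
    q = v = integ = 0.0
    err_log = np.zeros(n)
    for k in range(n):
        e = qd[k] - q
        ed = np.cos(t[k]) - v           # desired velocity minus actual
        integ += e * dt
        u = m_hat * qdd_d[k] + kp * e + kd * ed + ki * integ
        a = u / m_true                  # true plant dynamics
        v += a * dt
        q += v * dt
        err_log[k] = e
    return err_log

err = track()
```

Even with a 10% mass-model error in the feedforward path, the feedback loop drives the steady-state tracking error to a small fraction of the trajectory amplitude.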

  6. Simple adaptive control system design for a quadrotor with an internal PFC

    SciTech Connect

    Mizumoto, Ikuro; Nakamura, Takuto; Kumon, Makoto; Takagi, Taro

    2014-12-10

    The paper deals with an adaptive control system design problem for a four-rotor helicopter, or quadrotor. A simple adaptive control design scheme with a parallel feedforward compensator (PFC) in the internal loop of the considered quadrotor is proposed based on the backstepping strategy. As is well known, the backstepping control strategy is one of the advanced control strategies for nonlinear systems. However, the control algorithm becomes complex if the system has a high relative degree. We show that one can skip some design steps of the backstepping method by introducing a PFC in the inner loop of the considered quadrotor, so that the structure of the obtained controller is simplified and a high-gain-based adaptive feedback control system can be designed. The effectiveness of the proposed method is confirmed through numerical simulations.

  7. An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)

    DTIC Science & Technology

    2011-04-13

    Giraldo and Volkmar Wirth. …One important question for each adaptive numerical model is: how accurate is the adaptive method? … this criterion is used later for some sensitivity studies. These studies include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region.

  8. Adaptive multibeam phased array design for a Spacelab experiment

    NASA Technical Reports Server (NTRS)

    Noji, T. T.; Fass, S.; Fuoco, A. M.; Wang, C. D.

    1977-01-01

    The parametric tradeoff analyses and design for an Adaptive Multibeam Phased Array (AMPA) for a Spacelab experiment are described. This AMPA Experiment System was designed with particular emphasis on maximizing channel capacity and minimizing implementation and cost impacts for future austere maritime and aeronautical users, operating with a low-gain hemispherical-coverage antenna element, low effective radiated power, and low antenna gain-to-system noise temperature ratio.

  9. Adaptive Control Law Design for Model Uncertainty Compensation

    DTIC Science & Technology

    1989-06-14

    AD-A211 712; WRDC-TR-89-3061. Adaptive Control Law Design for Model Uncertainty Compensation. J. E. Sorrells, Dynetics, Inc., 1000 Explorer Blvd. … controllers designed using Dynetics' innovative approach were able to equal or surpass the STR and MRAC controllers in terms of performance robustness

  10. Examining Teacher Thinking: Constructing a Process to Design Curricular Adaptations.

    ERIC Educational Resources Information Center

    Udvari-Solner, Alice

    1996-01-01

    This description of a curricular adaptation decision-making process focuses on tenets of reflective practice as teachers design instruction for students in heterogeneous classrooms. A case example illustrates how an elementary teaching team transformed lessons to accommodate a wide range of learners in a multiage first- and second-grade classroom.…

  11. Instructional Design and Adaptation Issues in Distance Learning Via Satellite.

    ERIC Educational Resources Information Center

    Thach, Liz

    1995-01-01

    Discusses a qualitative research study conducted in a distance-learning environment using satellite delivery. Describes the instructional design changes and adaptation strategies that faculty and professionals used to be successful in satellite-delivery learning situations. (Author/AEF)

  12. Transient analysis of an adaptive system for optimization of design parameters

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.

  13. Direct and Inverse Problems of Item Pool Design for Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2009-01-01

    The recent literature on computerized adaptive testing (CAT) has developed methods for creating CAT item pools from a large master pool. Each CAT pool is designed as a set of nonoverlapping forms reflecting the skill levels of an assumed population of test takers. This article presents a Monte Carlo method to obtain these CAT pools and discusses…

  14. Design method of supercavitating pumps

    NASA Astrophysics Data System (ADS)

    Kulagin, V.; Likhachev, D.; Li, F. C.

    2016-05-01

    The problem of designing an effective supercavitating (SC) pump is solved, and the optimum load distribution along the radius of the blade is found, taking into account clearance, degree of cavitation development, influence of the finite number of blades, and centrifugal forces. Sufficient accuracy can be obtained using the equivalent flat SC-grid for the design of any SC-mechanisms, applying the “grid effect” coefficient and substituting the skewed flow calculated for grids of flat plates with infinite attached cavitation caverns. This article gives the universal design method and provides an example of SC-pump design.

  15. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
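
The short-time-bin triggering idea can be sketched as a Poisson threshold test per bin (a simplified gross-count sketch; the patented system works with full spectral data, and all rates below are made up):

```python
import numpy as np

def triggered_bins(counts, background_rate, bin_s=0.1, n_sigma=5.0):
    """Flag time bins whose gross counts exceed the expected background by
    n_sigma Poisson standard deviations; short (~100 ms) bins let a source
    moving past the detector stand out before it averages away."""
    mu = background_rate * bin_s
    threshold = mu + n_sigma * np.sqrt(mu)
    return np.flatnonzero(np.asarray(counts) > threshold)

rng = np.random.default_rng(2)
counts = rng.poisson(30.0, size=600)      # 300 cps background, 0.1 s bins
counts[250:253] += 80                     # brief pass-by of a source
hits = triggered_bins(counts, background_rate=300.0)
```

With a 5-sigma threshold, the three spiked bins trigger while the 600 background bins essentially never do, illustrating why fine time binning matters for fast-moving sources.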

  16. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  17. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  18. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux-difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  19. Individually designed PALs vs. power optimized PALs adaptation comparison.

    PubMed

    Muždalo, Nataša Vujko; Mihelčič, Matjaž

    2015-03-01

    The practice shows that in everyday life we encounter an ever-growing demand for better visual acuity at all viewing distances. The presbyopic population needs correction for far, near and intermediate distances with different dioptric powers, and PAL lenses seem to be a comfortable solution. The object of the present study is the analysis of the factors determining adaptation to progressive addition lenses (PAL) by first-time users. Only novice test persons were chosen in order to avoid bias from a previously worn particular lens design. For optimal results with this type of lens, several individual parameters must be considered: correct refraction, precise ocular and facial measures, and proper mounting of lenses into the frame. Nevertheless, first-time wearers encounter various difficulties in the process of adapting to this type of glasses, and adaptation time differs greatly between individual users. The question that arises is how much the individual parameters really affect the ease of adaptation and comfort when wearing progressive glasses. To clarify this, in the present study, individual PAL lenses--Rodenstock's Impression FreeSign (with inclusion of all parameters related to the user's eye and spectacle frame: prescription, pupillary distance, fitting height, back vertex distance, pantoscopic angle and curvature of the frame)--were compared to power-optimized PALs--Rodenstock's Multigressiv MyView (respecting only prescription power and pupillary distance). The adaptation process was monitored over a period of four weeks. The collected results represent scores of users' subjective impressions, where the users themselves rated their adaptation to new progressive glasses and the degree of subjective visual impression. The results show that adaptation to fully individually fitted PALs is easier and quicker.
The information obtained from users is valuable in everyday optometry practice because along with the manufacturer's specifications, the user's experience can

  20. Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing

    SciTech Connect

    Sen, Satyabrata; Glover, Charles Wayne

    2012-01-01

    We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
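
The optimal-weight computation can be sketched with synthetic covariances: for a rank-one target model, the dominant generalized eigenvector of the target and interference covariance matrices reduces to w ∝ R⁻¹s. The steering vectors and power levels below are invented for illustration:

```python
import numpy as np

def stap_weights(R, s):
    """Max-SINR weights w ∝ R^{-1} s: the dominant generalized eigenvector
    of the rank-one target covariance s s^H against the interference-plus-
    noise covariance R."""
    w = np.linalg.solve(R, s)
    return w / np.linalg.norm(w)

n = 8
s = np.exp(1j * np.pi * 0.30 * np.arange(n))   # target space-time steering
J = np.exp(1j * np.pi * 0.75 * np.arange(n))   # interferer steering
R = 0.1 * np.eye(n) + 10.0 * np.outer(J, J.conj())   # strong interferer
w = stap_weights(R, s)

def sinr(v):
    return float(np.abs(v.conj() @ s) ** 2 / np.real(v.conj() @ R @ v))
```

The adapted weights place a null on the interferer and give an order-of-magnitude SINR gain over the unadapted matched filter s.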

  1. Hybrid Adaptive Ray-Moment Method (HARM2): A highly parallel method for radiation hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.

    2017-02-01

    We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10³) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.

  2. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  3. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  4. Design of suboptimal adaptive filter for stochastic systems

    NASA Astrophysics Data System (ADS)

    Ahn, Jun Il; Shin, Vladimir

    2005-12-01

    In this paper, the problem of estimating the system state for linear discrete-time systems with uncertainties is considered. In [1], [2], we proposed the fusion formula (FF) for an arbitrary number of correlated and uncorrelated estimates. The FF is applied to the detection and filtering problem. A new suboptimal adaptive filter with a parallel structure is herein proposed. As a consequence of the parallel structure of the proposed filter, parallel computers can be used for its implementation. A lower computational complexity and lower memory demand are achieved with the proposed filter than with the optimal adaptive Lainiotis-Kalman filter. An example demonstrates the accuracy of the new filter.
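
A generic inverse-covariance fusion of two uncorrelated estimates illustrates the general idea (a sketch only; the authors' fusion formula for correlated estimates is not reproduced here):

```python
import numpy as np

def fuse(est1, P1, est2, P2):
    """Minimum-variance fusion of two uncorrelated unbiased estimates:
    weights are inverse covariances (scalar case: inverse variances)."""
    P1, P2 = np.atleast_2d(P1), np.atleast_2d(P2)
    W = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))  # fused covariance
    fused = W @ (np.linalg.inv(P1) @ np.atleast_1d(est1)
                 + np.linalg.inv(P2) @ np.atleast_1d(est2))
    return fused, W

# Two scalar estimates of the same state, with variances 0.5 and 1.0.
fused, P = fuse(1.0, 0.5, 2.0, 1.0)
```

The fused estimate lands closer to the lower-variance input, and the fused variance (1/3) is smaller than either input variance, which is what makes a parallel bank of such filters attractive.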

  5. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without other forms of additional attribute information. The equal-weight simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and determined automatically within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method significantly improves matching performance.
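
The abstract's idea of treating the GMM outlier weight as an unknown estimated inside EM can be illustrated with a toy 1-D example. This is a hypothetical sketch, not the authors' algorithm: there is no spatial transform or attribute information here, and the function name, `sigma2`, and the uniform-component `volume` are all assumptions.

```python
import numpy as np

def em_outlier_weight(X, Y, sigma2=0.05, w0=0.5, iters=30, volume=20.0):
    """Estimate the GMM outlier weight w by EM: each point of X is explained
    either by a Gaussian centred on some point of Y (equal mixing 1/M) or by
    a uniform outlier component over `volume`; w is re-estimated each M-step."""
    w = w0
    for _ in range(iters):
        # E-step: responsibility of the uniform outlier component per point
        d2 = (X[:, None] - Y[None, :]) ** 2
        g = np.exp(-d2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        p_gmm = (1 - w) * g.mean(axis=1)
        p_out = w / volume
        r_out = p_out / (p_gmm + p_out)          # P(outlier | x_n)
        # M-step: w becomes the expected fraction of outliers
        w = r_out.mean()
    return w

# inliers near the model points plus two gross outliers (2 of 5 points)
Y = np.array([0.0, 1.0, 2.0])
X = np.concatenate([Y + 0.01, [10.0, -7.0]])
w_hat = em_outlier_weight(X, Y)
```

The estimated weight settles near the true outlier fraction (0.4 here) instead of being fixed by hand, which is the behaviour the adaptive CPD method seeks.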

  6. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
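
A minimal sketch of a weight function with cross-block normalization, as discussed above. The gradient-magnitude indicator and the global normalization are illustrative assumptions; practical weight functions mimic the local truncation error and combine several flow features.

```python
import numpy as np

def adaptation_weights(blocks):
    """Gradient-magnitude weight function, normalised by the *global*
    maximum so that clustering is consistent across multi-block grids."""
    grads = [np.abs(np.gradient(u)) for u in blocks]
    gmax = max(g.max() for g in grads)
    return [g / gmax for g in grads]     # weights in [0, 1], comparable between blocks

# two blocks with very different gradient intensity
b1 = np.linspace(0.0, 1.0, 11) ** 2          # mild variation
b2 = np.tanh(np.linspace(-3.0, 3.0, 11))     # steep front
w1, w2 = adaptation_weights([b1, b2])
```

Normalizing by the global rather than per-block maximum keeps the mild-gradient block from being over-refined relative to the block containing the strong feature.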

  7. Optimal adaptive two-stage designs for early phase II clinical trials.

    PubMed

    Shan, Guogen; Wilding, Gregory E; Hutson, Alan D; Gerstenberger, Shawn

    2016-04-15

    Simon's optimal two-stage design has been widely used in early-phase clinical trials in oncology and AIDS studies with binary endpoints. With this approach, the second-stage sample size is fixed when the trial passes the first stage with sufficient activity. Adaptive designs, such as those due to Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response from the first stage, and these designs often reduce the expected sample size under the null hypothesis as compared with Simon's approach. An unappealing trait of the existing designs is that the second-stage sample size is not a non-increasing function of the first-stage response rate. In this paper, an efficient search procedure, the branch-and-bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity characteristic is respected. The proposed optimal design is observed to have smaller expected sample sizes than Simon's optimal design, and its maximum total sample size is very close to that of Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early-phase therapeutic development.
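
For context, the operating characteristics that such searches must evaluate for any candidate two-stage design follow directly from binomial tails. The sketch below is not the paper's branch-and-bound search; it only evaluates one classical design (Simon's published optimal design for p0 = 0.1 vs p1 = 0.3 with alpha = beta = 0.10: r1/n1 = 1/12, r/n = 5/35).

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_stage(r1, n1, r, n, p):
    """Operating characteristics of a two-stage design: stop after stage 1
    if responses <= r1; reject H0 at the end if total responses > r.
    Returns (P(early stop), E[N], P(reject H0)) at response rate p."""
    pet = sum(binom_pmf(k, n1, p) for k in range(r1 + 1))
    en = n1 * pet + n * (1 - pet)
    reject = 0.0
    for k1 in range(r1 + 1, n1 + 1):                  # trials that continue
        tail = sum(binom_pmf(k2, n - n1, p)
                   for k2 in range(max(0, r + 1 - k1), n - n1 + 1))
        reject += binom_pmf(k1, n1, p) * tail
    return pet, en, reject

pet0, en0, alpha = two_stage(1, 12, 5, 35, 0.10)   # null: p = 0.10
_, _, power = two_stage(1, 12, 5, 35, 0.30)        # alternative: p = 0.30
```

The computed values match Simon's tabulated PET ≈ 0.65 and E[N | H0] ≈ 19.8, with type I error below 0.10 and power above 0.90; an optimizer such as branch-and-bound repeats this evaluation over the design space.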

  8. Adaptive Clinical Trial Designs for Simultaneous Testing of Matched Diagnostics and Therapeutics

    PubMed Central

    Scher, Howard I.; Nasso, Shelley Fuld; Rubin, Eric H.; Simon, Richard

    2013-01-01

    A critical challenge in the development of new molecularly targeted anticancer drugs is the identification of predictive biomarkers and the concurrent development of diagnostics for these biomarkers. Developing matched diagnostics and therapeutics will require new clinical trial designs and methods of data analysis. The use of adaptive design in phase III trials may offer new opportunities for matched diagnosis and treatment because the size of the trial can allow for subpopulation analysis. We present an adaptive phase III trial design that can identify a suitable target population during the early course of the trial, enabling the efficacy of an experimental therapeutic to be evaluated within the target population as a later part of the same trial. The use of such an adaptive approach to clinical trial design has the potential to greatly improve the field of oncology and facilitate the development of personalized medicine. PMID:22046024

  9. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high-fidelity geometry-based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line, and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., which facilitated rapid finite element analysis, sizing study, and weight optimization. The high-quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications to the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high-fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  10. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel...augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The...performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed

  11. Adaptive fuzzy switched control design for uncertain nonholonomic systems with input nonsmooth constraint

    NASA Astrophysics Data System (ADS)

    Li, Yongming; Tong, Shaocheng

    2016-10-01

    In this paper, a fuzzy adaptive switched control approach is proposed for a class of uncertain nonholonomic chained systems with an input nonsmooth constraint. In the control design, an auxiliary dynamic system is designed to address the input nonsmooth constraint, and an adaptive switched control strategy is constructed to overcome the uncontrollability problem associated with x0(t0) = 0. By using fuzzy logic systems to approximate the unknown nonlinear functions, a fuzzy adaptive control approach is developed based on the adaptive backstepping technique. By constructing the combination approximation technique and using Young's inequality scaling technique, the number of online learning parameters is reduced to n and the 'explosion of complexity' problem is avoided. It is proved that the proposed method guarantees that all variables of the closed-loop system converge to a small neighbourhood of zero. Two simulation examples are provided to illustrate the effectiveness of the proposed control approach.

  12. Design of Adaptive Policy Pathways under Deep Uncertainties

    NASA Astrophysics Data System (ADS)

    Babovic, Vladan

    2013-04-01

    The design of large-scale engineering and infrastructural systems today is growing in complexity. Designers need to consider sociotechnical uncertainties, intricacies, and processes in the long-term strategic deployment and operations of these systems. In this context, water and spatial management is increasingly challenged not only by climate-associated changes such as sea level rise and increased spatio-temporal variability of precipitation, but also by pressures due to population growth and particularly the accelerating rate of urbanisation. Furthermore, the high investment costs and long-term nature of water-related infrastructure projects require a long-term planning perspective, sometimes extending over many decades. Adaptation to such changes is not only determined by what is known or anticipated at present, but also by what will be experienced and learned as the future unfolds, as well as by policy responses to social and water events. As a result, a pathway emerges. Instead of responding to 'surprises' and making decisions on an ad hoc basis, exploring adaptation pathways into the future provides indispensable support in water management decision-making. In this contribution, a structured approach for designing a dynamic adaptive policy based on the concepts of adaptive policy making and adaptation pathways is introduced. Such an approach provides flexibility which allows change over time in response to how the future unfolds, what is learned about the system, and changes in societal preferences. The introduced flexibility provides means for dealing with the complexities of adaptation under deep uncertainties. It enables engineering systems to change in the face of uncertainty to reduce impacts from downside scenarios while capitalizing on upside opportunities. This contribution presents a comprehensive framework for the development and deployment of adaptive policy pathways, and demonstrates its performance under deep uncertainties on a case study related to urban

  13. The design of service-adaptive engine for robot middleware

    NASA Astrophysics Data System (ADS)

    Baek, BumHyeon; Choi, YongSoon; Park, Hong Seong

    2007-12-01

    In this paper, we propose a design of a Service-Adaptive Engine for robot middleware. This middleware, called KOMoR (Korea Object-oriented Middleware of Robot), is composed of three layers: a Service Layer, a Network Adaptation Layer, and a Network Interface Layer. In particular, the Service-Adaptive Engine in the Service Layer is responsible for communication between distributed applications and provides a set of features that supports the development of realistic distributed applications for a robot. It also avoids unnecessary complexity, making the middleware easy to learn and to use. For writing applications, both client and server consist of a mixture of application code, library code, and code generated from IDL definitions called MIDL (Module Interface Definition Language). The Service-Adaptive Engine in the Service Layer contains the client- and server-side run-time support for remote communication. The generic part of the Service-Adaptive Engine (that is, the part that is independent of the specific types defined in MIDL) is accessed through the Service Layer API. The proxy code is generated from MIDL definitions and is therefore specific to the types of objects and data defined in MIDL.

  14. Adaptive Designs for Randomized Trials in Public Health

    PubMed Central

    Brown, C. Hendricks; Have, Thomas R. Ten; Jo, Booil; Dagne, Getachew; Wyman, Peter A.; Muthén, Bengt; Gibbons, Robert D.

    2009-01-01

    In this article, we present a discussion of two general ways in which the traditional randomized trial can be modified or adapted in response to the data being collected. We use the term adaptive design to refer to a trial in which characteristics of the study itself, such as the proportion assigned to active intervention versus control, change during the trial in response to data being collected. The term adaptive sequence of trials refers to a decision-making process that fundamentally informs the conceptualization and conduct of each new trial with the results of previous trials. Our discussion below investigates the utility of these two types of adaptations for public health evaluations. Examples are provided to illustrate how adaptation can be used in practice. From these case studies, we discuss whether such evaluations can or should be analyzed as if they were formal randomized trials, and we discuss practical as well as ethical issues arising in the conduct of these new-generation trials. PMID:19296774

  15. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values can differ for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  16. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement for accurate prediction of damage levels and failure time.

  17. A System Approach to Adaptive Multi-Modal Sensor Designs

    DTIC Science & Technology

    2010-02-01

    AFOSR grant FA9550-08-1-0199; program managers Douglas Cochran and Kitt C. Reinhardt, AFOSR. City College of New York, Department of Computer Science, School of Engineering, New York, NY 10031. Approved for public release.

  18. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing.

  20. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
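
The D-optimality criterion mentioned above can be illustrated for 2PL item calibration: assign an incoming examinee the field-test item whose parameter-information determinant grows the most. This is a hypothetical sketch, not the paper's MCMC design: point estimates replace the posterior distributions, and all names and numbers are assumptions.

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information for 2PL item parameters (a, b) from one examinee."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    pq = p * (1 - p)
    d = theta - b
    return pq * np.array([[d * d, -a * d],
                          [-a * d, a * a]])

def pick_item_D(theta, infos, items):
    """D-optimality: give the examinee the field-test item whose accumulated
    information-matrix determinant increases the most."""
    gains = [np.linalg.det(infos[j] + item_info(theta, *items[j]))
             - np.linalg.det(infos[j]) for j in range(len(items))]
    return int(np.argmax(gains))

# two field-test items with current (a, b) point estimates
items = [(1.0, -1.5), (1.0, 1.5)]
infos = [0.1 * np.eye(2) for _ in items]     # weak starting information
j = pick_item_D(1.4, infos, items)
```

In an operational scheme, `infos[j]` would accumulate over examinees (and, per the paper, be replaced by posterior quantities updated by MCMC), so item assignments shift as each item's parameters become well determined.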

  1. Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai

    Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…

  2. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and is fundamentally limited by the turnaround times

  3. A Model for Designing Adaptive Laboratory Evolution Experiments.

    PubMed

    LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M

    2017-04-15

    The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim, to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^(-6.9) to 10^(-8.4) mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required and will lower the barriers to entry for this experimental technique. IMPORTANCE: ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete.
With the availability of automation and computer simulations, we can now perform these experiments in an optimized
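
The passage-size trade-off studied with ALEsim can be caricatured with a toy serial-passage simulator (this is not ALEsim): deterministic within-batch growth, Poisson mutation supply at a rate near the derived ~10^-7 per division, and binomial bottleneck sampling. All parameter values and function names are assumptions.

```python
import numpy as np

def serial_passage(n0=1e5, growth=100.0, mu=1e-7, s=0.1,
                   passages=400, seed=0):
    """Toy serial-passage ALE: beneficial mutants (advantage s) arise at
    rate mu per wild-type division, out-grow the wild type within a batch,
    and are randomly sampled at each bottleneck back to n0 cells.
    Returns the mutant-frequency trajectory at each bottleneck."""
    rng = np.random.default_rng(seed)
    f = 0.0
    traj = []
    for _ in range(passages):
        # deterministic within-batch growth; mutants gain a factor growth**s
        wt = (1 - f) * growth
        mt = f * growth * growth ** s
        # stochastic mutation supply from wild-type divisions
        new = rng.poisson(mu * (1 - f) * n0 * (growth - 1))
        f = (mt * n0 + new) / (wt * n0 + mt * n0 + new)
        # bottleneck: binomial sampling of n0 cells introduces drift,
        # which is how small passage sizes lose nascent beneficial lineages
        f = rng.binomial(int(n0), min(f, 1.0)) / n0
        traj.append(f)
    return traj

traj = serial_passage()
```

Shrinking `n0` strengthens the bottleneck drift term, reproducing qualitatively the paper's finding that typical passage sizes fix beneficial mutations inefficiently.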

  4. Launch vehicle payload adapter design with vibration isolation features

    NASA Astrophysics Data System (ADS)

    Thomas, Gareth R.; Fadick, Cynthia M.; Fram, Bryan J.

    2005-05-01

    Payloads, such as satellites or spacecraft, which are mounted on launch vehicles, are subject to severe vibrations during flight. These vibrations are induced by multiple sources that occur between liftoff and the instant of final separation from the launch vehicle. A direct result of the severe vibrations is that fatigue damage and failure can be incurred by sensitive payload components. For this reason a payload adapter has been designed with special emphasis on its vibration isolation characteristics. The design consists of an annular plate that has top and bottom face sheets separated by radial ribs and close-out rings. These components are manufactured from graphite epoxy composites to ensure a high stiffness-to-weight ratio. The design is tuned to keep the frequency of the payload's axial vibration mode on the flexible adapter low. This is the main strategy adopted for isolating the payload from damaging vibrations in the intermediate to higher frequency range (45 Hz to 200 Hz). A design challenge for this type of adapter is to keep the pitch frequency of the payload above a critical value in order to avoid dynamic interactions with the launch vehicle control system. This high-frequency requirement conflicts with the low axial-mode frequency requirement, and this problem is overcome by innovative tuning of the directional stiffnesses of the composite parts. A second design strategy that is utilized to achieve good isolation characteristics is the use of constrained layer damping. This feature is particularly effective at keeping the responses to a minimum for one of the most important dynamic loading mechanisms. This mechanism consists of the almost-tonal vibratory load associated with the resonant burn condition present in any stage powered by a solid rocket motor. The frequency of such a load typically falls in the 45-75 Hz range and this phenomenon drives the low-frequency design of the adapter. Detailed finite element analysis is

  5. The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method

    NASA Technical Reports Server (NTRS)

    Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.

    1975-01-01

    The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
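
The 'identification' step of MMAC can be sketched as recursive Bayesian re-weighting of a bank of models by the Gaussian likelihood of each filter's innovation. The scalar, static example below is an illustrative assumption, not the F-8C design, where each model is a full Kalman filter for one flight condition.

```python
import numpy as np

def mmac_step(probs, residuals, variances):
    """One MMAC hypothesis-testing update: re-weight each model by the
    Gaussian likelihood of its filter's innovation (residual)."""
    r = np.asarray(residuals, dtype=float)
    v = np.asarray(variances, dtype=float)
    like = np.exp(-0.5 * r * r / v) / np.sqrt(2 * np.pi * v)
    post = probs * like
    return post / post.sum()

# three candidate flight-condition models; model 1's filter matches the data,
# the mismatched models see residuals biased by +/- 2
rng = np.random.default_rng(42)
probs = np.full(3, 1 / 3)
for _ in range(200):
    z = rng.normal(0.0, 1.0)
    residuals = [z + 2.0, z, z - 2.0]
    probs = mmac_step(probs, residuals, [1.0, 1.0, 1.0])
final = int(np.argmax(probs))
```

In the full MMAC scheme, the applied control is the probability-weighted blend of the per-model controls, so as `probs` concentrates on the matched model the controller effectively switches to it.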

  6. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    ERIC Educational Resources Information Center

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  7. Adaptive Optics System Design and Operation at Lick Observatory

    NASA Astrophysics Data System (ADS)

    Olivier, S. S.; Max, C. E.; Avicola, K.; Bissinger, H. D.; Brase, J. M.; Friedman, H. W.; Gavel, D. T.; Salmon, J. T.; Waltjen, K. E.

    1993-12-01

    An adaptive optics system developed for the 40 inch Nickel and 120 inch Shane telescopes at Lick Observatory is described. The adaptive optics system design is based on a 69-actuator continuous-surface deformable mirror and a Hartmann wavefront sensor equipped with a commercial intensified CCD fast-framing camera. The system has been tested at the Cassegrain focus of the 40 inch Nickel telescope where the subaperture diameter is 12 cm. The subaperture slope and mirror control calculations are performed on a four-processor single-board computer controlled by a Unix workstation. This configuration is capable of frame rates of up to 1 kHz. The optical configuration of the system and its interface to the telescope is described. Details of the control system design, operation, and user interface are given. Initial test results emphasizing control system operations of this adaptive optics system using natural reference stars on the 40 inch Nickel telescope are presented. The initial test results are compared to predictions from analyses and simulations. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  8. The VIADUC project: innovation in climate adaptation through service design

    NASA Astrophysics Data System (ADS)

    Corre, L.; Dandin, P.; L'Hôte, D.; Besson, F.

    2015-07-01

    Under the French National Adaptation to Climate Change Plan, the "Drias, les futurs du climat" service has been developed to provide easy access to French regional climate projections, a major step in the implementation of French climate services. The usefulness of this service for the end-users and decision makers involved in adaptation planning at a local scale is investigated. To that end, the VIADUC project aims to evaluate and enhance Drias and to envisage future developments in support of adaptation. Climate scientists work together with end-users and a service designer, whose role is to propose an innovative approach based on the interaction between scientists and citizens. The chosen end-users are three Natural Regional Parks located in the South West of France. These parks are administrative entities that group municipalities sharing a common natural and cultural heritage. They are also rural areas in which specific economic activities take place, and are therefore concerned with and involved in both protecting their environment and setting up sustainable economic development. The first year of the project was dedicated to investigation, including interviews with relevant representatives. Three key local economic sectors were selected: forestry, pastoral farming and building activities. Working groups were composed of technicians, administrative and maintenance staff, policy makers and climate researchers, and the sectors' needs for climate information were assessed. The lessons learned led to actions which are presented hereinafter.

  9. Anti-windup adaptive PID control design for a class of uncertain chaotic systems with input saturation.

    PubMed

    Tahoun, A H

    2017-01-01

    In this paper, the stabilization problem of actuator saturation in uncertain chaotic systems is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms that appear due to the use of PID control and saturation. The proposed control technique uses both the state-feedback and the output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples with MATLAB simulations are studied. The simulation results show the efficiency of the proposed adaptive PID controllers.
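
    The anti-windup idea named above can be illustrated with a minimal sketch: a generic conditional-integration PID driving an invented first-order plant. This is not the paper's adaptive auto-tuning law or augmented-error design; all gains, limits, and the plant model are arbitrary choices for the example.

```python
def saturate(u, u_min, u_max):
    return max(u_min, min(u_max, u))

class AntiWindupPID:
    """Discrete PID with conditional integration: the integral state is
    frozen whenever integrating would push the output deeper into
    saturation (a common anti-windup scheme)."""

    def __init__(self, kp, ki, kd, u_min, u_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max, self.dt = u_min, u_max, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        derivative = (error - self.prev_error) / self.dt
        u_raw = (self.kp * error + self.ki * self.integral
                 + self.kd * derivative)
        u = saturate(u_raw, self.u_min, self.u_max)
        # Integrate only when unsaturated, or when the error would
        # drive the output back toward the admissible range.
        if u == u_raw or (u_raw > self.u_max and error < 0) \
                or (u_raw < self.u_min and error > 0):
            self.integral += error * self.dt
        self.prev_error = error
        return u

# Drive an invented first-order plant x' = -x + u to a setpoint of 1.0.
pid = AntiWindupPID(kp=2.0, ki=1.0, kd=0.1, u_min=-1.5, u_max=1.5, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0 - x)
    x += (-x + u) * 0.01

print(round(x, 2))
```

    Despite the actuator limits clipping the early control effort, the frozen integrator prevents windup and the plant settles at the setpoint.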

  10. A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.

    PubMed

    Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L

    2012-10-01

    A new method to obtain benzalkonium chloride (BAC) adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of inoculum size and BAC concentration on the adaptation (measured in terms of the lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained showing a positive effect of inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved, although a slight negative effect of BAC itself was also significant. The proposed method improves on the classical method based on successive stationary-phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, applying the new procedure increased BAC adaptation 3.69-fold in only 33 h, whereas the classical procedure reached a 2.61-fold increase after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, surprisingly reaching almost the same BAC concentration (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.

  11. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  12. Optimal design of an unsupervised adaptive classifier with unknown priors

    NASA Technical Reports Server (NTRS)

    Kazakos, D.

    1974-01-01

    An adaptive detection scheme for M hypotheses was analyzed. It was assumed that the probability density function under each hypothesis was known, and that the prior probabilities of the M hypotheses were unknown and sequentially estimated. Each observation vector was classified using the current estimate of the prior probabilities. Using a set of nonlinear transformations, and applying stochastic approximation theory, an optimally converging adaptive detection and estimation scheme was designed. The optimality of the scheme lies in the fact that convergence to the true prior probabilities is ensured, and that the asymptotic error variance is minimum, for the class of nonlinear transformations considered. An expression for the asymptotic mean square error variance of the scheme was also obtained.
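
    A minimal sketch of the sequential estimation idea, assuming M = 2 known Gaussian class-conditional densities: the unknown prior is refined by a Robbins-Monro (stochastic approximation) step on each observation's posterior, and each observation would be classified under the current prior estimate. This generic recursion stands in for the paper's specific nonlinear transformations, which are not reproduced; all numbers are invented for illustration.

```python
import math
import random

random.seed(0)

def gauss_pdf(x, mu):
    # Unit-variance Gaussian density.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

true_prior = 0.3        # P(hypothesis 0); unknown to the estimator
mus = (-2.0, 2.0)       # known class-conditional means
p_hat = 0.5             # initial prior estimate

for n in range(1, 5001):
    h = 0 if random.random() < true_prior else 1
    x = random.gauss(mus[h], 1.0)
    # Posterior of hypothesis 0 under the current prior estimate;
    # classification would threshold this value.
    num = p_hat * gauss_pdf(x, mus[0])
    post0 = num / (num + (1.0 - p_hat) * gauss_pdf(x, mus[1]))
    # Robbins-Monro update with gain 1/n, ensuring convergence.
    p_hat += (post0 - p_hat) / n

print(round(p_hat, 2))
```

    With the 1/n gain the estimate is essentially a running average of posteriors, which converges to the true prior when the densities are correctly specified.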

  13. Adaptive designs undertaken in clinical research: a review of registered clinical trials.

    PubMed

    Hatfield, Isabella; Allison, Annabel; Flight, Laura; Julious, Steven A; Dimairo, Munyaradzi

    2016-03-19

    Adaptive designs have the potential to improve efficiency in the evaluation of new medical treatments in comparison to traditional fixed sample size designs. However, they are still not widely used in practice in clinical research. Little research has been conducted to investigate what adaptive designs are being undertaken. This review highlights the current state of registered adaptive designs and their characteristics. The review looked at phase II, II/III and III trials registered on ClinicalTrials.gov from 29 February 2000 to 1 June 2014, supplemented with trials from the National Institute for Health Research register and known adaptive trials. A range of adaptive design search terms were applied to the trials extracted from each database. Characteristics of the adaptive designs were then recorded including funder, therapeutic area and type of adaptation. The results in the paper suggest that the use of adaptive designs has increased. They seem to be most often used in phase II trials and in oncology. In phase III trials, the most popular form of adaptation is the group sequential design. The review failed to capture all trials with adaptive designs, which suggests that the reporting of adaptive designs, such as in clinical trials registers, needs much improving. We recommend that clinical trial registers should contain sections dedicated to the type and scope of the adaptation and that the term 'adaptive design' should be included in the trial title or at least in the brief summary or design sections.

  14. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.

  15. Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging

    NASA Astrophysics Data System (ADS)

    Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-04-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
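
    For readers unfamiliar with the base iteration, a classical cyclic Kaczmarz sweep on a small consistent system is sketched below; the paper's adaptive optimal-current-pattern generation and subset scheme are not included, and the toy system is invented for illustration.

```python
def kaczmarz(A, b, sweeps=50):
    # Cyclic Kaczmarz: project the iterate onto each row's hyperplane
    # in turn, avoiding any explicit A^T A (Jacobian-type) product.
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(a * xi for a, xi in zip(row, x))
            norm2 = sum(a * a for a in row)
            coef = (bi - dot) / norm2
            x = [xi + coef * a for xi, a in zip(x, row)]
    return x

# Small consistent system with solution (1, 3).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)
print([round(v, 3) for v in x])  # → [1.0, 3.0]
```

    Each projection touches one row at a time, which is why row-action methods like this stay memory-efficient on large problems.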

  16. Design of Sequentially Randomized Trials for Testing Adaptive Treatment Strategies

    PubMed Central

    Ogbagaber, Semhar B.; Karp, Jordan; Wahed, Abdus S.

    2016-01-01

    An adaptive treatment strategy (ATS) is an outcome-guided algorithm that allows personalized treatment of complex diseases based on patients’ disease status and treatment history. Conditions such as AIDS, depression, and cancer usually require several stages of treatment due to the chronic, multifactorial nature of illness progression and management. Sequential multiple assignment randomized (SMAR) designs permit simultaneous inference about multiple ATSs, where patients are sequentially randomized to treatments at different stages depending upon response status. The purpose of the article is to develop a sample size formula to ensure adequate power for comparing two or more ATSs. Based on a Wald-type statistic for comparing multiple ATSs with a continuous endpoint, we develop a sample size formula and test it through simulation studies. We show via simulation that the proposed sample size formula maintains the nominal power. The proposed sample size formula is not applicable to designs with time-to-event endpoints but the formula will be useful for practitioners while designing SMAR trials to compare adaptive treatment strategies. PMID:26412033

  17. Optimal PID Controller Design Using Adaptive VURPSO Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities of the proposed algorithm. This helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the other optimization algorithms considered in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, and in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning.
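
    As a hedged illustration of the momentum-factor idea, here is a bare-bones particle swarm with a linearly decaying inertia weight minimizing a stand-in quadratic cost. The actual AVURPSO velocity-relaxation rules and the closed-loop PID tuning objective are not reproduced; all coefficients and the objective are invented.

```python
import random

random.seed(1)

def objective(p):
    # Stand-in cost with known optimum at (3, -1); a real PID tuning
    # run would evaluate a closed-loop performance index instead.
    return (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2

n_particles, n_iters, dim, v_max = 20, 150, 2, 4.0
pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)[:]

for it in range(n_iters):
    w = 0.9 - 0.5 * it / n_iters          # adaptive inertia: 0.9 -> 0.4
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            v = (w * vel[i][d]
                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-v_max, min(v_max, v))  # velocity clamp
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print([round(v, 2) for v in gbest])
```

    The high early inertia favors global exploration; the decayed late inertia favors local refinement, which is the trade-off the abstract describes.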

  18. Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk

    2016-09-01

    This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM, provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time-dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution-adapted grid in 2D. The accuracy and computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, and that this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has linear computational complexity O(N).

  19. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  20. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

    An improved adaptive intensity-hue-saturation (IHS) method is proposed for image fusion, building on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method can improve spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.

  1. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  2. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  3. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.

  4. Temporally adaptive sampling: a case study in rare species survey design with marbled salamanders (Ambystoma opacum).

    PubMed

    Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds.
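
    A toy simulation of the temporally adaptive idea described above: concentrating surveys in years a regional index flags as favorable raises per-survey success compared with surveying every year. All probabilities below are invented for illustration and are not the paper's salamander estimates.

```python
import random

random.seed(42)

def success_rate(adaptive, n_years=2000, ponds_per_year=5):
    # Fraction of surveys that detect the species.
    detections = surveys = 0
    for _ in range(n_years):
        good_year = random.random() < 0.5      # regionally correlated year quality
        p_detect = 0.6 if good_year else 0.2   # per-survey detection probability
        if adaptive and not good_year:
            continue                            # defer effort to a favorable year
        for _ in range(ponds_per_year):
            surveys += 1
            if random.random() < p_detect:
                detections += 1
    return detections / surveys

even_rate = success_rate(adaptive=False)
adapt_rate = success_rate(adaptive=True)
print(adapt_rate > even_rate)
```

    The adaptive strategy spends the same per-survey cost but only in high-detectability years, so its success rate approaches the good-year probability rather than the average.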

  5. Temporally Adaptive Sampling: A Case Study in Rare Species Survey Design with Marbled Salamanders (Ambystoma opacum)

    PubMed Central

    Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.

    2015-01-01

    Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224

  6. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  7. Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.

    PubMed

    Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin

    2013-03-01

    Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, group-wise methods have their advantages. However, two factors may affect their power. One is that some of the causal variants may be protective: when both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion is believed to be neutral, and when that is the case, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aiming to test rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and that both have clear advantages, from power to computational efficiency, over existing group-wise methods and existing data-driven methods that allow neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.

  8. Adapting Cognitive Walkthrough to Support Game Based Learning Design

    ERIC Educational Resources Information Center

    Farrell, David; Moffat, David C.

    2014-01-01

    For any given Game Based Learning (GBL) project to be successful, the player must learn something. Designers may base their work on pedagogical research, but actual game design is still largely driven by intuition. People are famously poor at unsupported methodical thinking and relying so much on instinct is an obvious weak point in GBL design…

  9. Optical design of the adaptive optics laser guide star system

    SciTech Connect

    Bissinger, H.

    1994-11-15

    The design of an adaptive optics package for the 3 meter Lick telescope is presented. This instrument package includes a 69 actuator deformable mirror and a Hartmann-type wavefront sensor operating in the visible wavelength, a quadrant detector for the tip-tilt sensor, and a tip-tilt mirror to stabilize first-order atmospheric tip-tilt errors. A high speed computer drives the deformable mirror to achieve near diffraction limited imagery. The different optical components and their individual design constraints are described. Motorized stages and diagnostic tools are used to operate and maintain alignment throughout observation time from a remote control room. The expected performance is summarized and actual results on astronomical sources are presented.

  10. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, two methods that can adaptively change the size of the local window according to local information are proposed and their characteristics are analyzed: the information on the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution, is demonstrated. Experiments validate that the proposed method keeps more details and achieves a much more satisfactory area overlap measure than the other conventional methods.
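
    The global Otsu core that the windowed variant builds on can be sketched in a few lines: pick the threshold maximizing between-class variance of the histogram. The range-constrained and adaptive-window extensions are not shown, and the histogram values are invented for illustration.

```python
def otsu_threshold(hist):
    # Exhaustively choose t maximizing between-class variance;
    # pixels in bins 0..t form class 0, the rest class 1.
    total = float(sum(hist))
    grand_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (grand_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal 8-bin histogram: dark peak at bin 1, bright peak at bin 6.
hist = [5, 40, 10, 1, 1, 10, 40, 5]
print(otsu_threshold(hist))  # → 3
```

    The adaptive windowed variant applies this same criterion inside local windows whose sizes are chosen from edge information, rather than once globally.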

  11. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  12. New developments in adaptive methods for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Oden, J. T.; Bass, Jon M.

    1990-01-01

    New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

  13. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
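
    The underlying (non-adaptive) scalar Kalman filter for a random-walk state observed in noise can be sketched as follows; the report's recursive-likelihood adaptation of the filter is not reproduced, and the noise variances are invented for illustration.

```python
import random

random.seed(7)

q, r = 0.01, 1.0        # process and measurement noise variances (invented)
x_true, x_hat, p = 0.0, 0.0, 1.0
sq_errs = []
for _ in range(500):
    x_true += random.gauss(0.0, q ** 0.5)     # random-walk state evolves
    z = x_true + random.gauss(0.0, r ** 0.5)  # noisy measurement
    p += q                                    # predict: variance grows
    k = p / (p + r)                           # Kalman gain
    x_hat += k * (z - x_hat)                  # update estimate
    p *= 1 - k                                # posterior variance shrinks
    sq_errs.append((x_hat - x_true) ** 2)

mse = sum(sq_errs) / len(sq_errs)
print(mse < r)  # filtering beats using raw measurements alone
```

    Adaptive variants, like the likelihood methods named above, estimate q and r from the data instead of fixing them in advance.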

  14. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  15. Implementation of time-efficient adaptive sampling function design for improved undersampled MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeok; Kim, Hyeonjin

    2016-12-01

    To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
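The design loop described above (adaptive PDF from an energy map combined with a modelled PDF, then max-energy selection among candidate masks) can be sketched in one dimension. This is a schematic reading of the abstract with a hypothetical modelled variable-density PDF, not the authors' implementation:

```python
import random

def design_sampling_mask(e_map, ratio, n_candidates=10, seed=0):
    """1-D sketch: adaptive PDF = energy map x modelled PDF, scaled to the
    undersampling ratio; keep the candidate mask capturing the most energy."""
    n = len(e_map)
    center = (n - 1) / 2.0
    modeled = [1.0 / (1.0 + abs(i - center)) for i in range(n)]  # favour k-space centre
    pdf = [e * m for e, m in zip(e_map, modeled)]                # adaptive PDF
    scale = ratio * n / sum(pdf)
    pdf = [min(1.0, p * scale) for p in pdf]                     # clip probabilities
    rng = random.Random(seed)
    best_mask, best_energy = None, -1.0
    for _ in range(n_candidates):
        mask = [1 if rng.random() < p else 0 for p in pdf]       # candidate function
        energy = sum(e for e, m in zip(e_map, mask) if m)
        if energy > best_energy:                                 # max-energy selection
            best_mask, best_energy = mask, energy
    return best_mask
```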

  16. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
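Of the three phases, the remap is the easiest to make concrete. A 1-D conservative overlap remap (illustrative only; ReALE works on polygonal meshes with Voronoi rezoning) transfers cell averages by exact interval intersection:

```python
def remap(old_edges, old_avg, new_edges):
    """Remapping-phase sketch: conservatively transfer cell averages from the
    Lagrangian grid to the rezoned grid via interval-overlap integrals."""
    new_avg = []
    for a, b in zip(new_edges[:-1], new_edges[1:]):
        total = 0.0
        for (c, d), u in zip(zip(old_edges[:-1], old_edges[1:]), old_avg):
            overlap = max(0.0, min(b, d) - max(a, c))  # shared length of the two cells
            total += u * overlap
        new_avg.append(total / (b - a))
    return new_avg
```

The defining property is conservation: the integral of the field is unchanged by the transfer.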

  18. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing competition among car sales companies in Indonesia means that every company needs sound planning in order to win against its competitors. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. One method that can produce accurate forecasts is Adaptive Spline Threshold Autoregression (ASTAR). This study therefore focuses on the use of the ASTAR method to forecast the volume of car sales at PT Srikandi Diamond Motors from time series data. In the experiments reported here, forecasting with the ASTAR method produces approximately correct results.

  19. Design of the Dual Conjugate Adaptive Optics Test-bed

    NASA Astrophysics Data System (ADS)

    Sharf, Inna; Bell, K.; Crampton, D.; Fitzsimmons, J.; Herriot, Glen; Jolissaint, Laurent; Lee, B.; Richardson, H.; van der Kamp, D.; Veran, Jean-Pierre

    In this paper, we describe the Multi-Conjugate Adaptive Optics laboratory test-bed presently under construction at the University of Victoria, Canada. The test-bench will be used to support research in the performance of multi-conjugate adaptive optics, turbulence simulators, laser guide stars and miniaturizing adaptive optics. The main components of the test-bed include two micro-machined deformable mirrors, a tip-tilt mirror, four wavefront sensors, a source simulator, a dual-layer turbulence simulator, as well as computational and control hardware. The paper will describe in detail the opto-mechanical design of the adaptive optics module, the design of the hot-air turbulence generator and the configuration chosen for the source simulator. Below, we present a summary of these aspects of the bench. The optical and mechanical design of the test-bed has been largely driven by the particular choice of the deformable mirrors. These are continuous micro-machined mirrors manufactured by Boston Micromachines Corporation. They have a clear aperture of 3.3 mm and are deformed with 140 actuators arranged in a square grid. Although the mirrors have an open-loop bandwidth of 6.6 kHz, their shape can be updated at a sampling rate of 100 Hz. In our optical design, the mirrors are conjugated at 0 km and 10 km in the atmosphere. A planar optical layout was achieved by using four off-axis paraboloids and several folding mirrors. These optics will be mounted on two solid blocks which can be aligned with respect to each other. The wavefront path design accommodates 3 monochromatic guide stars that can be placed at either 90 km or at infinity. The design relies on the natural separation of the beam into 3 parts because of differences in locations of the guide stars in the field of view. In total, four wavefront sensors will be procured from Adaptive Optics Associates (AOA) or built in-house: three for the guide stars and the fourth to collect data from the science source output in

  20. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
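The claimed decision logic can be caricatured in a few lines. The names, units, and threshold rule below are invented for illustration and are not from the patent:

```python
def reconfigure(env_reading, sensitivity, threshold=1.0):
    """Sketch: decide whether to raise the fault-tolerance level based on a
    measured environmental condition, weighted by the on-board processor's
    measured sensitivity to that condition. Threshold rule is hypothetical."""
    stress = env_reading * sensitivity
    return "TMR" if stress > threshold else "simplex"
```

A benign environment keeps the cheaper simplex configuration; a harsh one switches to triple modular redundancy.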

  1. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden, et al. The term `hp` refers to spatial refinement of the mesh (h) in conjunction with the order of the polynomials used in the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  2. Free energy calculations: an efficient adaptive biasing potential method.

    PubMed

    Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul

    2010-05-06

    We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
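The key idea that a single observation of the reaction coordinate updates the bias in a whole neighbourhood can be sketched with a Gaussian mollifier. This schematic omits the ABP-specific zero-of-energy choice and the deconvolution step described above:

```python
import math

def deposit(bias, grid, x_obs, sigma=0.1, height=0.05):
    """Add a mollified (Gaussian) contribution centred at the observed
    reaction-coordinate value; every grid point in a neighbourhood of the
    observation is updated at once. sigma sets the mollification strength."""
    return [b + height * math.exp(-0.5 * ((g - x_obs) / sigma) ** 2)
            for b, g in zip(bias, grid)]

grid = [i * 0.01 for i in range(101)]      # reaction coordinate discretised on [0, 1]
bias = [0.0] * len(grid)
for x in [0.30, 0.31, 0.29, 0.30]:         # repeated visits to one free-energy basin
    bias = deposit(bias, grid, x)          # bias accumulates where sampling dwells
```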

  3. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  4. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configuration using the Navier-Stokes equation. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  5. Design of signal-adapted multidimensional lifting scheme for lossy coding.

    PubMed

    Gouze, Annabelle; Antonini, Marc; Barlaud, Michel; Macq, Benoît

    2004-12-01

    This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case, and is illustrated for the 2-D separable and for the quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics to enhance the compression efficiency in a more general case. It is based on a two-step lifting scheme and joins the lifting theory with Wiener's optimization. The prediction step is designed in order to minimize the variance of the signal, and the update step is designed in order to minimize a reconstruction error. Application for lossy compression shows the performances of the method.
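The two-step predict/update mechanics can be illustrated in 1-D with fixed LeGall 5/3-style coefficients. The paper's filters are signal-adapted (Wiener-optimized) and multidimensional, so this sketch only shows the lifting structure, not the adapted filters:

```python
def lifting_forward(x):
    """One level of a two-step lifting scheme (fixed 5/3-style coefficients).
    Predict: estimate odd samples from neighbouring even samples, so the
    detail signal has small variance; Update: adjust even samples from the
    details to keep the coarse approximation well-behaved."""
    even, odd = x[0::2], x[1::2]
    m = len(odd)
    detail = [odd[i] - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
              for i in range(m)]
    approx = [even[i] + 0.25 * (detail[max(i - 1, 0)] + detail[min(i, m - 1)])
              for i in range(len(even))]
    return approx, detail
```

On a locally linear signal the predictor is exact, so the interior detail coefficients vanish, which is exactly what makes the transform compress well.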

  6. Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography

    PubMed Central

    Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-01-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of computation cost and memory in large scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
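The underlying Kaczmarz iteration, in its classic cyclic form, projects the current iterate onto one row's hyperplane at a time; the paper's adaptive optimal-current-pattern generation is not reproduced in this sketch:

```python
def kaczmarz(A, b, iters=100):
    """Cyclic Kaczmarz: for each row a_i, project x onto {y : a_i . y = b_i}."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            nrm2 = sum(r * r for r in row)
            c = (bi - dot) / nrm2          # signed distance to the hyperplane
            x = [xi + c * r for xi, r in zip(x, row)]
    return x
```

For orthogonal rows the sweep converges immediately; ill-conditioned rows are what motivate block and adaptive variants.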

  7. Aircraft conceptual design - an adaptable parametric sizing methodology

    NASA Astrophysics Data System (ADS)

    Coleman, Gary John, Jr.

    Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental), or new technologies (composite, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: "How does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges?" The answer is rapid and flexible conceptual design Parametric Sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research undertaking contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composite, natural laminar flow, thrust vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to

  8. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
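Local refinement driven by an error indicator can be sketched in one dimension: split only the cells whose indicator exceeds a tolerance, leaving well-resolved regions alone. This is a deliberately minimal illustration of the idea, not any particular method from the symposium:

```python
def refine(cells, errors, tol):
    """h-refinement sketch: bisect every cell whose local error indicator
    exceeds the tolerance; keep the rest of the mesh unchanged."""
    out = []
    for (a, b), e in zip(cells, errors):
        if e > tol:
            mid = 0.5 * (a + b)
            out += [(a, mid), (mid, b)]    # concentrate resolution where needed
        else:
            out.append((a, b))
    return out
```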

  9. Neural Network Control-Based Adaptive Learning Design for Nonlinear Systems With Full-State Constraints.

    PubMed

    Liu, Yan-Jun; Li, Jing; Tong, Shaocheng; Chen, C L Philip

    2016-07-01

    In order to stabilize a class of uncertain nonlinear strict-feedback systems with full-state constraints, an adaptive neural network control method is investigated in this paper. State constraints frequently arise in real-life plants, and avoiding their violation is an important task. By introducing a barrier Lyapunov function (BLF) at every step of a backstepping procedure, a novel adaptive backstepping design is developed to ensure that the full-state constraints are not violated. At the same time, one remarkable feature is that minimal learning parameters are employed in the BLF backstepping design. By means of Lyapunov analysis, we prove that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded and the output is driven to follow the desired output. Finally, a simulation is given to verify the effectiveness of the method.

  10. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
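The rank-truncated product statistic at the heart of ARTP is easy to sketch. Note that the full procedure described above converts each candidate statistic to a p-value by permutation and then assesses the minimum with a further permutation layer; both layers are omitted here:

```python
import math

def artp_statistic(p_values, truncation_points=(1, 2, 5)):
    """Rank-truncated product sketch: for each candidate truncation point K,
    combine the K smallest p-values (sum of logs = log of their product);
    adaptively keep the most significant (smallest) combined statistic."""
    ranked = sorted(p_values)
    stats = [sum(math.log(p) for p in ranked[:min(k, len(ranked))])
             for k in truncation_points]
    return min(stats)
```

The adaptivity is in the minimum over truncation points: the set test does not need to know in advance how many SNPs in the set are truly associated.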

  11. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
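The simplest of the listed designs, the full factorial, enumerates one run for every combination of factor levels; a minimal sketch with hypothetical bioprocess factors:

```python
from itertools import product

def full_factorial(factors):
    """Full factorial design: one run per combination of factor levels."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Hypothetical two-factor example: 2 temperature levels x 3 pH levels = 6 runs
runs = full_factorial({"temperature": [30, 37], "pH": [6.5, 7.0, 7.5]})
```

Fractional factorial, Plackett-Burman, and the response-surface designs (Box-Behnken, central composite) all reduce or augment this full grid in structured ways.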

  12. Design of high temperature adaptability cassegrain collimation system

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian

    2014-12-01

    A collimation system is an indispensable part of photoelectric detection equipment. To meet the demands of on-line field testing of photoelectric systems, such a system must satisfy strict requirements on volume, mass, and robustness against all sorts of complex weather conditions. To solve this problem, this paper presents the design of a reflective Cassegrain collimation system with high-temperature adaptability. First, the technical specifications of the system were derived from the requirements of the practical application; the initial structural parameters were then calculated by Gaussian optics and optimized in Zemax. The simulation results showed that the MTF of the system is close to the diffraction limit, indicating good image quality. The system tube is made of hard steel, and the primary and secondary mirrors use microcrystalline glass with a low expansion coefficient, which effectively reduces deformation due to temperature differences while the mass and volume change little. Experiments in high- and low-temperature environments also showed that the collimation system keeps the beam divergence angle within 30", demonstrating good temperature adaptability, so that it can be used in the field under complex, harsh conditions.

  13. Mixed Methods in Intervention Research: Theory to Adaptation

    ERIC Educational Resources Information Center

    Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka

    2007-01-01

    The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…

  14. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology, and it improves the mesh quality significantly. The MBA method is also used to adapt the mesh to a problem solution, minimizing the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  15. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
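The "intermediate mortar" trick amounts to applying the lower-dimensional projection matrices one direction at a time instead of ever forming their tensor (Kronecker) product. A small matrix sketch, with hypothetical projection operators P and Q:

```python
def two_step_project(P, Q, U):
    """Apply projection P along the first index of the nodal array U, then Q
    along the second; the result equals the tensor-product operator applied
    to U, but the large combined matrix is never assembled."""
    rows, cols = len(U), len(U[0])
    # step 1: project onto the intermediate mortar along the first direction
    inter = [[sum(P[i][k] * U[k][j] for k in range(rows)) for j in range(cols)]
             for i in range(len(P))]
    # step 2: project along the second direction
    return [[sum(Q[j][k] * inter[i][k] for k in range(cols)) for j in range(len(Q))]
            for i in range(len(inter))]
```

With identity projections the data passes through unchanged, which is a quick sanity check on the index bookkeeping.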

  16. Adaptive antenna arrays for satellite communications: Design and testing

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.; Swarner, W. G.; Walton, E. K.

    1985-01-01

    When two separate antennas are used with each feedback loop to decorrelate noise, the antennas should be located such that the phase of the interfering signal in the two antennas is the same while the noise in them is uncorrelated. Thus, the antenna patterns and spatial distribution of the auxiliary antennas are quite important and should be carefully selected. The selection and spatial distribution of auxiliary elements is discussed when the main antenna is a center fed reflector antenna. It is shown that offset feeds of the reflector antenna can be used as auxiliary elements of an adaptive array to suppress weak interfering signals. An experimental system is designed to verify the theoretical analysis. The details of the experimental systems are presented.

  17. Wireless thermal sensor network with adaptive low power design.

    PubMed

    Lee, Ho-Yin; Chen, Shih-Lun; Chen, Chiung-An; Huang, Hong-Yi; Luo, Ching-Hsing

    2007-01-01

    There is an increasing need to develop flexible, reconfigurable, and intelligent low power wireless sensor network (WSN) systems for healthcare applications. Technical advancements in micro-sensors, MEMS devices, low power electronics, and radio frequency circuits have enabled the design and development of such highly integrated systems. In this paper, we present our proposed wireless thermal sensor network system, which is separated into control and data paths. Both of these paths have their own transmission frequencies. The control path sends power and function commands from the computer to each sensor element over 2.4 GHz RF circuits, and the data path transmits measured data at 2.4 GHz in the sensor layer and 60 GHz in higher layers. This hierarchical architecture makes reconfigurable mapping and pipeline applications on the WSN possible, and the average power consumption can be efficiently reduced by about 60% by using the adaptive technique.

  18. An Approach to Automated Fusion System Design and Adaptation

    PubMed Central

    Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker

    2017-01-01

Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation, owing to the demand for mass customisation and the increasing complexity of industrial systems. The conversion to modular systems raises challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, relying on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. The article encompasses fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum. PMID:28300762

  19. An adaptive-control switching buck regulator - Implementation, analysis, and design

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Yu, Y.

    1980-01-01

Describing-function techniques and averaging methods have been employed to characterize a multiloop switching buck regulator by three functional blocks: power stage, analog signal processor, and pulse modulator. The model is employed to explore possible forms of pole-zero cancellation and the adaptive nature of the control with respect to filter parameter changes. Analysis-based design guidelines are provided, including a suggested additional RC-compensation loop to optimize regulator performance in areas such as stability, audiosusceptibility, output impedance, and load transient response.

  20. Impacting patient outcomes through design: acuity adaptable care/universal room design.

    PubMed

    Brown, Katherine Kay; Gallant, Dennis

    2006-01-01

To succeed in today's challenging healthcare environment, hospitals must examine their impact on customers--patients and families--as well as staff and physicians. By using competitive facility design and incorporating evidence-based concepts such as the acuity adaptable care delivery model and the universal room, a hospital can improve patient satisfaction to enhance market share, physician satisfaction to foster loyalty, and staff satisfaction to decrease turnover. At the same time, clinical outcomes such as reduced mortality and complications, and efficiencies such as shorter lengths of stay and lower hospital costs through the elimination of transfers, can be gained. The results achieved depend on the principles used in designing the patient room, which should focus on maximizing patient safety and promoting healing. This article reviews key design elements that support the success of an acuity adaptable unit, including a private room with zones dedicated to patients, families, and staff; a healing environment; technology; and decentralized nursing stations. Outcomes from institutions currently using the acuity adaptable concept are also reviewed.

  1. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. 

  2. Analysis and design of an adaptive lightweight satellite mirror

    NASA Astrophysics Data System (ADS)

    Duerr, Johannes K.; Honke, Robert; Alberti, Mathias V.; Sippel, Rudolf

    2002-07-01

Future scientific space missions based on interferometric optical and infrared astronomical instruments are currently under development in the United States as well as in Europe. These instruments require optical path length accuracy on the order of a few nanometers across structural dimensions of several meters, which puts extreme demands on static and dynamic structural stability. It is expected that actively controlled, adaptive structures will increasingly be used in these satellite applications to overcome the limits of passive structural accuracy. Building on the evaluation of different piezo-active concepts presented two years ago, this paper describes the analysis and design of an adaptive lightweight satellite mirror, made primarily of carbon-fiber-reinforced plastic with embedded piezoceramic actuators for shape control. Simulation of global mirror performance takes different wavefront sensors and controls into account for several loading cases. In addition, extensive finite-element optimization of various structural details has been performed: local material properties of sub-assemblies and geometry effects at the edges of the structure are investigated with respect to their impact on mirror performance. One important result of the analysis was the layout of actuator arrays consisting of specifically designed, custom-made piezoceramic actuators. Prototype manufacturing and testing of active sub-components is described in detail; the results obtained served as a basis for a final update of the finite-element models. The paper concludes with an outline of the manufacturing, testing, and space qualification of the prototype demonstrator of an actively controllable lightweight satellite mirror currently under way. The research work presented in this paper is part of the German industrial research project 'ADAPTRONIK'.

  3. Design, realization and structural testing of a compliant adaptable wing

    NASA Astrophysics Data System (ADS)

    Molinari, G.; Quack, M.; Arrieta, A. F.; Morari, M.; Ermanni, P.

    2015-10-01

This paper presents the design, optimization, realization and testing of a novel wing morphing concept, based on distributed compliance structures, and actuated by piezoelectric elements. The adaptive wing features ribs with a selectively compliant inner structure, numerically optimized to achieve aerodynamically efficient shape changes while simultaneously withstanding aeroelastic loads. The static and dynamic aeroelastic behavior of the wing, and the effect of activating the actuators, is assessed by means of coupled 3D aerodynamic and structural simulations. To demonstrate the capabilities of the proposed morphing concept and optimization procedure, the wings of a model airplane are designed and manufactured according to the presented approach. The goal is to replace conventional ailerons and thus achieve roll control purely by morphing. The mechanical properties of the manufactured components are characterized experimentally, and used to create a refined and correlated finite element model. The overall stiffness, strength, and actuation capabilities are experimentally tested and successfully compared with the numerical prediction. To counteract the nonlinear hysteretic behavior of the piezoelectric actuators, a closed-loop controller is implemented, and its capability of accurately achieving the desired shape adaptation is evaluated experimentally. Using the correlated finite element model, the aeroelastic behavior of the manufactured wing is simulated, showing that the morphing concept can provide sufficient roll authority for controlled flight. The additional degrees of freedom offered by morphing can also be used to vary the plane's lift coefficient, similarly to conventional flaps. The efficiency improvements offered by this technique are evaluated numerically and compared to the performance of a rigid wing.

  4. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  5. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
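
    The grid-movement idea (adaptation by redistributing points rather than adding them) can be sketched with a classic one-dimensional equidistribution step, in which nodes are moved so that each interval carries equal "mass" of a monitor function. This is not the ADV or SSGRID algorithm itself; the monitor function and node counts below are illustrative assumptions:

```python
import numpy as np

def equidistribute(x, w, n_new):
    """Move n_new mesh nodes so each interval carries equal monitor mass.
    w is a monitor function sampled at nodes x (e.g. a solution-gradient
    estimate); this mimics r-refinement (grid movement), not enrichment."""
    cell_w = 0.5 * (w[1:] + w[:-1]) * np.diff(x)   # trapezoidal mass per interval
    W = np.concatenate([[0.0], np.cumsum(cell_w)])
    targets = np.linspace(0.0, W[-1], n_new)
    return np.interp(targets, W, x)                # invert the cumulative monitor

# Demo: cluster nodes around a sharp pressure feature at x = 0.6.
x = np.linspace(0.0, 1.0, 401)
w = 1.0 + 50.0 * np.exp(-((x - 0.6) / 0.02) ** 2)  # illustrative monitor
xa = equidistribute(x, w, 41)
spacing = np.diff(xa)
print(spacing.min(), spacing.max())                # much finer near the feature
```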

  6. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  7. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive fashion, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed, and a new mesh partition approach is proposed and tested to further reduce computational cost.
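
    The locally adaptive time-step selection can be sketched as follows. Grouping elements into power-of-two step levels is one common local-time-stepping variant (not necessarily the paper's exact criterion), and the mesh widths and wave speeds are made up for illustration:

```python
import numpy as np

# Per-element mesh width h and wave speed c (illustrative values).
h = np.array([0.5, 0.1, 0.02, 0.3, 0.05])
c = np.array([1.0, 1.5, 2.0, 1.0, 1.2])
cfl = 0.9

dt_local = cfl * h / c                  # local stability limit per element
dt_min = dt_local.min()
# Group elements into power-of-two time-step levels, as in tree-based local
# time stepping: an element at level L advances with step dt_min * 2**L.
levels = np.floor(np.log2(dt_local / dt_min)).astype(int)
dt_used = dt_min * 2.0 ** levels
assert np.all(dt_used <= dt_local)      # every element still satisfies its CFL limit
print(levels, dt_used)
```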

  8. A simplified self-adaptive grid method, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, C.; Venkatapathy, E.

    1989-01-01

    The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.

  9. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background via median filtering or the method of bilateral spatial contrast.
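
    The Capon (MVDR) spatial spectrum mentioned above can be sketched for a uniform linear array with half-wavelength spacing; the array size, source angle, and noise level are illustrative assumptions, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(1)
m, snapshots = 8, 500               # 8-element half-wavelength linear array
true_angle = 20.0                   # degrees

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

# Simulate snapshots: one plane wave plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = np.outer(steering(true_angle), s) + noise
R = X @ X.conj().T / snapshots      # sample covariance matrix
Ri = np.linalg.inv(R + 1e-6 * np.eye(m))   # small diagonal load for stability

# Capon spectrum: P(theta) = 1 / (a^H R^-1 a), peaking at the source bearing.
angles = np.arange(-90, 91)
p = np.array([1.0 / np.real(steering(a).conj() @ Ri @ steering(a)) for a in angles])
print(angles[p.argmax()])
```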

  10. Design and Inference for the Intent to Treat Principle using Adaptive Treatment Strategies and Sequential Randomization

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2015-01-01

    Nonadherence to assigned treatment jeopardizes the power and interpretability of intent-to-treat comparisons from clinical trial data and continues to be an issue for effectiveness studies, despite their pragmatic emphasis. We posit that new approaches to design need to complement developments in methods for causal inference to address nonadherence, in both experimental and practice settings. This paper considers the conventional study design for psychiatric research and other medical contexts, in which subjects are randomized to treatments that are fixed throughout the trial and presents an alternative that converts the fixed treatments into an adaptive intervention that reflects best practice. The key element is the introduction of an adaptive decision point midway into the study to address a patient's reluctance to remain on treatment before completing a full-length trial of medication. The clinical uncertainty about the appropriate adaptation prompts a second randomization at the new decision point to evaluate relevant options. Additionally, the standard ‘all-or-none’ principal stratification (PS) framework is applied to the first stage of the design to address treatment discontinuation that occurs too early for a mid-trial adaptation. Drawing upon the adaptive intervention features, we develop assumptions to identify the PS causal estimand and introduce restrictions on outcome distributions to simplify Expectation-Maximization calculations. We evaluate the performance of the PS setup, with particular attention to the role played by a binary covariate. The results emphasize the importance of collecting covariate data for use in design and analysis. We consider the generality of our approach beyond the setting of psychiatric research. PMID:25581413

  11. Design of sewage treatment system by applying fuzzy adaptive PID controller

    NASA Astrophysics Data System (ADS)

    Jin, Liang-Ping; Li, Hong-Chan

    2013-03-01

In a sewage treatment system, control of the dissolved oxygen concentration is nonlinear, time-varying, subject to large time delays and uncertainty, and therefore difficult to capture in an exact mathematical model. A conventional PID controller works well only in the near-linear region close to its operating point, and struggles to control the system when it moves far from that point. To solve these problems, this paper proposes a method that combines fuzzy control with PID control and designs a fuzzy adaptive PID controller based on an S7-300 PLC. It employs fuzzy inference to tune the PID parameters online. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
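
    The online-tuning idea can be sketched with a minimal fuzzy-scheduled PID loop. This is not the authors' S7-300 PLC implementation: the membership functions, rule outputs, and the first-order stand-in plant are all illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function on breakpoints (a, b, c)."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_gain(e):
    """Fuzzy inference on |error|: scale Kp up when the error is large,
    down when small. Breakpoints and rule outputs are illustrative."""
    mfs = [tri(abs(e), -0.1, 0.0, 0.5),   # small
           tri(abs(e), 0.0, 0.5, 1.0),    # medium
           tri(abs(e), 0.5, 1.0, 10.0)]   # large
    outs = [0.8, 1.0, 1.5]                # Kp multiplier per rule
    w = sum(mfs)
    return sum(m * o for m, o in zip(mfs, outs)) / (w + 1e-12)

# PID with fuzzy-scheduled proportional gain on a first-order plant
# x' = (-x + u) / tau (a crude stand-in for the oxygen dynamics).
kp0, ki, kd, tau, dt = 2.0, 1.0, 0.1, 1.0, 0.01
x, integ, e_prev, setpoint = 0.0, 0.0, 0.0, 1.0
for _ in range(2000):
    e = setpoint - x
    de = (e - e_prev) / dt                # finite-difference derivative
    kp = kp0 * fuzzy_gain(e)              # online fuzzy tuning of Kp
    u = kp * e + ki * integ + kd * de
    integ += e * dt
    x += (-x + u) / tau * dt
    e_prev = e
print(x)                                  # settles near the setpoint
```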

  12. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction and an adaptive total variation (TV) method. Firstly, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments are performed to verify the effectiveness of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) of our algorithm are increased by 36.92% and 27.44% respectively, the computation time is decreased by 7.2% and 3.4% respectively, and the estimation accuracy is significantly improved.
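
    The abstract benchmarks against RL-based deconvolution; a minimal Richardson-Lucy sketch (not the authors' wavefront/TV algorithm) shows the multiplicative update at its core. The Gaussian PSF, image size, and iteration count are illustrative assumptions:

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def convolve2d(img, k):
    """'Same'-size convolution via zero padding (the kernel is symmetric,
    so correlation and convolution coincide here)."""
    ks = k.shape[0] // 2
    pad = np.pad(img, ks)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + 2 * ks + 1, j:j + 2 * ks + 1] * k)
    return out

def richardson_lucy(blurred, psf, iters=30):
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_m = psf[::-1, ::-1]                      # mirrored PSF
    for _ in range(iters):
        ratio = blurred / (convolve2d(est, psf) + 1e-12)
        est *= convolve2d(ratio, psf_m)          # multiplicative RL update
    return est

# Demo: blur a point source and partially recover it.
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
psf = gaussian_psf()
blurred = convolve2d(truth, psf)
restored = richardson_lucy(blurred, psf)
print(blurred.max(), restored.max())             # the restored peak is much sharper
```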

  13. Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods

    SciTech Connect

    Lapenta, G. M.

    2002-01-01

Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments are presented. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap information between two grids is presented, based on the MPDATA approach. The ALE method solves hyperbolic equations by splitting the operators into two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and discretized; the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to it. The advantage of this second step is the possibility of avoiding the mesh distortion and tangling typical of pure Lagrangian methods. The Eulerian phase, the primary topic of the present communication, comprises two tasks: creating a new grid (rezoning) and transferring the information from the grid available at the end of the Lagrangian phase to the new grid (remapping). New techniques are presented for both tasks.
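
    The remapping task can be illustrated with a first-order conservative overlap remap in one dimension, a much simpler scheme than the MPDATA-based remap described, but one that shows the conservation requirement any remap must satisfy. The grids and values below are illustrative:

```python
import numpy as np

def remap_1d(old_edges, old_vals, new_edges):
    """First-order conservative remap of cell averages between 1-D grids:
    each new cell receives the overlap-weighted mass of the old cells."""
    new_vals = np.zeros(len(new_edges) - 1)
    for i in range(len(new_vals)):
        a, b = new_edges[i], new_edges[i + 1]
        mass = 0.0
        for j in range(len(old_vals)):
            lo = max(a, old_edges[j]); hi = min(b, old_edges[j + 1])
            if hi > lo:
                mass += old_vals[j] * (hi - lo)    # overlap contribution
        new_vals[i] = mass / (b - a)
    return new_vals

# A distorted Lagrangian-phase grid remapped onto a uniform Eulerian grid.
old_edges = np.array([0.0, 0.15, 0.45, 0.5, 0.8, 1.0])
old_vals = np.array([1.0, 2.0, 5.0, 3.0, 0.5])
new_edges = np.linspace(0.0, 1.0, 11)
new_vals = remap_1d(old_edges, old_vals, new_edges)

# Total mass is conserved exactly (up to rounding).
old_mass = np.sum(old_vals * np.diff(old_edges))
new_mass = np.sum(new_vals * np.diff(new_edges))
print(old_mass, new_mass)
```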

  14. Method study on fuzzy-PID adaptive control of electric-hydraulic hitch system

    NASA Astrophysics Data System (ADS)

    Li, Mingsheng; Wang, Liubu; Liu, Jian; Ye, Jin

    2017-03-01

In this paper, the fuzzy-PID adaptive control method is applied to the control of a tractor electric-hydraulic hitch system. According to the characteristics of the system, a fuzzy-PID adaptive controller is designed and the electric-hydraulic hitch system model is established. Traction control and position control performance are simulated and compared with the common PID control method, and a field test rig was set up to test the electric-hydraulic hitch system. The test results showed that with fuzzy-PID adaptive control, when the tillage depth steps from 0.1 m to 0.3 m, the system transition time is 4 s without overshoot, and when the tractive force steps from 3000 N to 7000 N, the transition time is 5 s with an overshoot of 25%.

  15. The Applied Behavior Analysis Research Paradigm and Single-Subject Designs in Adapted Physical Activity Research.

    PubMed

    Haegele, Justin A; Hodge, Samuel Russell

    2015-10-01

    There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.

  16. Design of acoustic metamaterials using the covariance matrix adaptation evolutionary strategy

    NASA Astrophysics Data System (ADS)

    Huang, Bei; Cheng, Qiang; Song, Gang Yong; Cui, Tie Jun

    2017-03-01

    Acoustic metamaterials can manipulate sound waves in surprising ways, including the focusing, cloaking, and extraordinary transmitting of sound waves. With the increasing requirements for acoustic metamaterials with extreme parameters, we propose the design of acoustic meta-atoms with a large refraction index using the covariance matrix adaptation evolutionary optimization strategy. To validate the procedure, we propose an optimized metamaterial to construct an acoustic deflection lens. The full-wave simulation results are consistent with the theoretical predictions, showing the efficacy and accuracy of the proposed method, and indicating that the optimization algorithm is a powerful tool for designing meta-atoms with excellent applications.
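
    The optimization loop can be sketched with a stripped-down covariance-matrix-adaptation evolution strategy: weighted recombination and a rank-mu covariance update only, with a crude step-size decay standing in for the full CMA-ES evolution-path and step-size machinery. The sphere objective is an illustrative stand-in for the meta-atom cost function; all learning rates are illustrative assumptions:

```python
import numpy as np

def es_minimize(f, x0, sigma=0.5, lam=20, iters=120, seed=0):
    """Simplified covariance-matrix-adaptation evolution strategy:
    rank-mu covariance update, no evolution paths (a sketch, not full CMA-ES)."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                # recombination weights
    c_mu = 0.3                                  # illustrative learning rate
    m, C = np.array(x0, float), np.eye(n)
    for _ in range(iters):
        Y = rng.multivariate_normal(np.zeros(n), C, size=lam)
        X = m + sigma * Y
        order = np.argsort([f(x) for x in X])[:mu]
        Ysel = Y[order]
        m = m + sigma * (w @ Ysel)              # move the mean toward the best samples
        C = (1 - c_mu) * C + c_mu * sum(wi * np.outer(y, y)
                                        for wi, y in zip(w, Ysel))
        sigma *= 0.97                           # crude step-size decay (simplification)
    return m

sphere = lambda x: float(np.sum(x ** 2))        # stand-in objective
best = es_minimize(sphere, [3.0, -2.0])
print(best, sphere(best))
```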

  17. Adaptive sliding mode control design for a class of uncertain singularly perturbed nonlinear systems

    NASA Astrophysics Data System (ADS)

    Lin, Kuo-Jung

    2014-02-01

This paper addresses adaptive sliding mode control (ASMC) of uncertain singularly perturbed nonlinear (USPN) systems with guaranteed H∞ control performance. First, we use a Takagi-Sugeno (T-S) fuzzy model to construct the USPN systems. Then, the sliding surface is determined via a linear matrix inequality (LMI) design procedure. Second, we propose a neural network (NN)-based ASMC design to stabilise the USPN systems. The proposed methods are based on the Lyapunov stability theorem, and the adaptive law reduces the effect of uncertainty. The proposed NN-based ASMC stabilises the USPN systems for all ɛ ∈ (0, ɛ*]. Simulation results reveal that the proposed NN-based ASMC scheme has a better convergence time than the fuzzy control scheme (Li, T.-H.S., & Lin, K.J. (2004). Stabilization of singularly perturbed fuzzy systems, IEEE Transactions on Fuzzy Systems, 12, 579-595.).
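
    The adaptive-gain idea behind ASMC can be sketched on a scalar uncertain plant, far simpler than the T-S fuzzy/LMI design above: the switching gain is adapted online until it dominates the uncertainty. All plant and controller parameters below are illustrative assumptions:

```python
import numpy as np

# Scalar uncertain plant x' = a*x + d(t) + u with unknown a and a bounded
# matched disturbance d. Adaptive SMC: u = -c*x - k_hat*sign(s), with the
# switching gain k_hat adapted by k_hat' = gamma*|s| on the surface s = x.
a_true, c, gamma, dt = 1.5, 2.0, 5.0, 1e-3
x, k_hat = 2.0, 0.0
for i in range(20000):                    # 20 s of simulated time
    t = i * dt
    d = 0.8 * np.sin(3 * t)               # matched disturbance
    s = x                                 # sliding surface
    u = -c * x - k_hat * np.sign(s)
    k_hat += gamma * abs(s) * dt          # adaptation law
    x += (a_true * x + d + u) * dt        # explicit Euler integration
print(x, k_hat)                           # x chatters near zero; k_hat exceeds |d|
```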

  18. Adaptive Current Control Method for Hybrid Active Power Filter

    NASA Astrophysics Data System (ADS)

    Chau, Minh Thuyen

    2016-09-01

This paper proposes an adaptive current control method for the Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to a cost-function minimum criterion, so the proposed control method can adapt online to variations in the load harmonic currents. Compared to a single fuzzy logic control method, the proposed method offers better dynamic response, smaller steady-state compensation error, better online control capability, and more effective harmonic cancellation. Simulation and experimental results demonstrate the effectiveness of the proposed control method.

  19. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
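
    A minimal one-dimensional instance of the discretization described (piecewise Legendre P0/P1 modes per element, upwind flux, two-stage Runge-Kutta) can be written for linear advection; the resolution, CFL number, and initial condition are illustrative choices:

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on [0, 1], periodic; DG with per-cell
# Legendre modes P0 (mean) and P1 (slope), upwind flux for a > 0, and a
# two-stage Runge-Kutta (Heun) time integrator.
a, N = 1.0, 50
h = 1.0 / N
xc = (np.arange(N) + 0.5) * h                    # cell centers

def rhs(c0, c1):
    """Semi-discrete DG right-hand side for the modal coefficients."""
    flux = a * (c0 + c1)                         # upwind trace at each right face
    flux_l = np.roll(flux, 1)                    # flux at the left face (periodic)
    d0 = -(flux - flux_l) / h
    d1 = 3.0 / h * (2.0 * a * c0 - flux - flux_l)
    return d0, d1

# Project sin(2 pi x): exact cell averages, small-cell slope approximation.
c0 = np.sin(2 * np.pi * xc) * np.sinc(h)
c1 = np.pi * h * np.cos(2 * np.pi * xc)

dt = 0.2 * h / a                                 # within the P1 DG stability limit
steps = int(np.ceil(1.0 / dt))
dt = 1.0 / steps                                 # land exactly at t = 1 (one period)
for _ in range(steps):
    d0, d1 = rhs(c0, c1)
    p0, p1 = c0 + dt * d0, c1 + dt * d1          # predictor
    e0, e1 = rhs(p0, p1)
    c0 = c0 + 0.5 * dt * (d0 + e0)               # Heun corrector
    c1 = c1 + 0.5 * dt * (d1 + e1)

err = np.max(np.abs(c0 - np.sin(2 * np.pi * xc) * np.sinc(h)))
print(err)                                       # the wave returns almost unchanged
```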

  20. Spacesuit Radiation Shield Design Methods

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.

    2006-01-01

    Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.

  1. A novel adaptive noise filtering method for SAR images

    NASA Astrophysics Data System (ADS)

    Li, Weibin; He, Mingyi

    2009-08-01

In most applications, signals and images are corrupted by additive noise, and consequently there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique to differentiate matured and un-matured pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are applied to smooth noise while preserving subtle information, according to local statistical features. Simulation results show that the performance is better than that of existing filters, and several image experiments demonstrate the theoretical performance.
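
    The local-statistics adaptivity can be illustrated with a Lee-style speckle filter, a simpler fixed-window relative of the adaptive-window MAP filter described; the window size, number of looks, and test image are illustrative assumptions:

```python
import numpy as np

def lee_filter(img, win=3, enl=4.0):
    """Lee-style local-statistics filter for multiplicative (speckle) noise.
    enl = equivalent number of looks; the gain term smooths flat regions
    heavily while preserving edges and point features."""
    k = win // 2
    pad = np.pad(img, k, mode='edge')
    out = np.empty_like(img, dtype=float)
    cu2 = 1.0 / enl                        # squared speckle coefficient of variation
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = pad[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            ci2 = v / (m * m + 1e-12)      # squared local coefficient of variation
            gain = max(0.0, 1.0 - cu2 / (ci2 + 1e-12))
            out[i, j] = m + gain * (img[i, j] - m)
    return out

rng = np.random.default_rng(2)
clean = np.ones((40, 40)); clean[:, 20:] = 4.0   # two flat regions with an edge
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # unit-mean, 4 looks
noisy = clean * speckle
smoothed = lee_filter(noisy)
print(noisy[:, :18].var(), smoothed[:, :18].var())  # variance drops in flat areas
```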

  2. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of that solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  3. On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H∞ Control.

    PubMed

    Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen

    2017-02-01

    In this paper, based on the adaptive critic learning technique, the H∞ control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear H∞ control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear H∞ control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue of integrating data-based control and event-triggering mechanism into establishing advanced adaptive critic systems.

  4. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  5. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy-industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. Tough operating conditions involving heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include the selection of sensitive measurement locations, the investigation of vibration transmission paths and weak feature extraction; one of these is how to effectively discover the weak characteristics of faulty components from noisy signals. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms to adaptively realize the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and the weak characteristics highlighted, so that faults can be diagnosed accurately. A planetary gearbox test rig is established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted; vibration signals are collected under loaded conditions at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
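
    As a rough illustration of the idea (not the authors' formulation), the sketch below passes a noisy signal through a bistable stochastic-resonance system and tunes the system parameters to maximize the output SNR at a target frequency; a brute-force grid search stands in for the ant colony optimizer, and all parameter ranges are assumptions:

```python
import numpy as np

def sr_output(sig, a, b, dt):
    """Pass a signal through the bistable stochastic-resonance system
    dx/dt = a*x - b*x**3 + s(t), integrated with forward Euler."""
    x = np.zeros(len(sig))
    for i in range(1, len(sig)):
        x[i] = x[i - 1] + dt * (a * x[i - 1] - b * x[i - 1] ** 3 + sig[i - 1])
    return x

def snr_at(x, f0, fs):
    """Power at the target frequency over the mean background power."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))
    background = (spec.sum() - spec[k]) / (len(spec) - 1)
    return spec[k] / (background + 1e-12)

def adaptive_sr(sig, f0, fs, grid=np.linspace(0.1, 2.0, 8)):
    """Brute-force parameter search standing in for the ant colony
    optimizer: pick (a, b) that maximize the output SNR at f0."""
    best = None
    for a in grid:
        for b in grid:
            x = sr_output(sig, a, b, 1.0 / fs)
            s = snr_at(x, f0, fs)
            if best is None or s > best[0]:
                best = (s, a, b)
    return best  # (snr, a, b)
```

    In a real diagnosis setting the target frequency f0 would be a known fault characteristic frequency of the gearbox.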

  6. Adaptation of fast marching methods to intracellular signaling

    NASA Astrophysics Data System (ADS)

    Chikando, Aristide C.; Kinser, Jason M.

    2006-02-01

    Imaging of signaling phenomena within the intracellular domain is a well studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling, and adapts fast marching methods to tracking the propagation of signaling calcium waves is presented. The developed model is used to reproduce simulation results obtained with reaction diffusion based model. Results obtained with our model agree with both the results obtained with reaction diffusion based models, and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
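
    A minimal fast marching solver conveys the front-tracking idea described above: arrival times spread monotonically from a source cell via a Dijkstra-like sweep with an upwind eikonal update (a generic textbook sketch, not the calcium-wave model of this paper):

```python
import heapq
import numpy as np

def fast_marching(speed, src):
    """First-order fast marching method: computes arrival times T of a
    monotonically advancing front, solving |grad T| = 1/speed on a grid."""
    H, W = speed.shape
    T = np.full((H, W), np.inf)
    T[src] = 0.0
    done = np.zeros((H, W), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < H and 0 <= nj < W) or done[ni, nj]:
                continue
            # smallest neighbour value along each axis (upwind values)
            tx = min(T[ni - 1, nj] if ni > 0 else np.inf,
                     T[ni + 1, nj] if ni < H - 1 else np.inf)
            ty = min(T[ni, nj - 1] if nj > 0 else np.inf,
                     T[ni, nj + 1] if nj < W - 1 else np.inf)
            f = 1.0 / speed[ni, nj]
            a, b = sorted((tx, ty))
            if b - a < f:   # two-sided quadratic eikonal update
                new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (b - a) ** 2))
            else:           # one-sided (1-D) update
                new = a + f
            if new < T[ni, nj]:
                T[ni, nj] = new
                heapq.heappush(heap, (new, (ni, nj)))
    return T
```

    The efficiency gain over reaction-diffusion time stepping comes from visiting each grid cell a constant number of times, in arrival-time order, instead of updating every cell at every small time step.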

  7. Current Practice in Designing Training for Complex Skills: Implications for Design and Evaluation of ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Schuver-van Blanken, Marian J.; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training-Interactive Tools) is a European project coordinated by the Dutch National Aerospace Laboratory. The aim of ADAPT[IT] is to create and validate an effective training design methodology, based on cognitive science and leading to the integration of advanced technologies, so that the…

  8. A fuzzy model based adaptive PID controller design for nonlinear and uncertain processes.

    PubMed

    Savran, Aydogan; Kahraman, Gokalp

    2014-03-01

    We develop a novel adaptive tuning method for the classical proportional-integral-derivative (PID) controller that adjusts the PID gains to control nonlinear processes, a problem which is very difficult to overcome with classical PID controllers. By incorporating classical PID control, which is well known in industry, into the control of nonlinear processes, we introduce a method which can readily be used by industry. In this method, controller design does not require a first-principles model of the process, which is usually very difficult to obtain; instead, it depends on a fuzzy process model constructed from measured input-output data of the process. A soft limiter is used to impose industrial limits on the control input. The performance of the system is successfully tested on a bioreactor, a highly nonlinear process involving instabilities. Several tests showed the method's success in tracking, robustness to noise, and adaptation properties. We also compared our system's performance to that of a plant with altered parameters under measurement noise, and obtained less ringing and better tracking. To conclude, we present a novel adaptive control method, built upon the well-known PID architecture, that successfully controls highly nonlinear industrial processes, even under conditions such as strong parameter variations, noise, and instabilities.
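
    The fuzzy model itself is not reproduced in the record; the toy controller below only illustrates the general shape of such a scheme, with a crude error-magnitude rule standing in for the fuzzy inference and a tanh soft limiter on the control input (all gains and rules are illustrative assumptions):

```python
import math

class AdaptivePID:
    """Toy gain-scheduled PID: a coarse fuzzy-style rule scales the base
    gains with the error magnitude, and a soft limiter (tanh) keeps the
    control input within actuator bounds."""
    def __init__(self, kp, ki, kd, u_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_max, self.dt = u_max, dt
        self.i = 0.0
        self.prev_e = 0.0

    def _scale(self, e):
        # fuzzy-membership stand-in: larger errors -> more aggressive P/D,
        # smaller errors -> relatively more integral action
        m = min(abs(e), 1.0)            # membership of "error is large"
        return 1.0 + m, 1.0 - 0.5 * m   # (P/D scale, I scale)

    def step(self, setpoint, y):
        e = setpoint - y
        s_pd, s_i = self._scale(e)
        self.i += self.ki * s_i * e * self.dt
        d = (e - self.prev_e) / self.dt
        self.prev_e = e
        u = s_pd * (self.kp * e + self.kd * d) + self.i
        return self.u_max * math.tanh(u / self.u_max)  # soft limiter
```

    In the paper the scaling comes from a fuzzy process model identified from input-output data; here a fixed rule on |e| merely demonstrates the adaptive-gain mechanism and the soft actuator limit.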

  9. Accelerated search for materials with targeted properties by adaptive design

    PubMed Central

    Xue, Dezhen; Balachandran, Prasanna V.; Hogden, John; Theiler, James; Xue, Deqing; Lookman, Turab

    2016-01-01

    Finding new materials with targeted properties has traditionally been guided by intuition, and trial and error. With increasing chemical complexity, the combinatorial possibilities are too large for an Edisonian approach to be practical. Here we show how an adaptive design strategy, tightly coupled with experiments, can accelerate the discovery process by sequentially identifying the next experiments or calculations, to effectively navigate the complex search space. Our strategy uses inference and global optimization to balance the trade-off between exploitation and exploration of the search space. We demonstrate this by finding very low thermal hysteresis (ΔT) NiTi-based shape memory alloys, with Ti50.0Ni46.7Cu0.8Fe2.3Pd0.2 possessing the smallest ΔT (1.84 K). We synthesize and characterize 36 predicted compositions (9 feedback loops) from a potential space of ∼800,000 compositions. Of these, 14 had smaller ΔT than any of the 22 in the original data set. PMID:27079901
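
    The inference-plus-global-optimization loop can be sketched as a tiny surrogate-based search; the kernel, its lengthscale, and the upper-confidence-bound acquisition below are generic stand-ins for the authors' actual inference and optimization machinery:

```python
import numpy as np

def rbf(a, b, ls=0.25):
    """Squared-exponential kernel on scalar inputs."""
    d = np.subtract.outer(a, b)
    return np.exp(-0.5 * (d / ls) ** 2)

def adaptive_search(f, candidates, n_iter=8, kappa=1.5):
    """Sequential adaptive design: fit a tiny Gaussian-process surrogate
    to the points evaluated so far, then pick the candidate maximizing an
    upper confidence bound mu + kappa*sd, trading off exploitation of
    good regions (mu) against exploration of uncertain ones (sd)."""
    X = [candidates[0], candidates[len(candidates) // 2], candidates[-1]]
    y = [f(x) for x in X]                              # initial "experiments"
    cand = np.asarray(candidates)
    for _ in range(n_iter):
        Xa, ya = np.array(X), np.array(y)
        K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))       # jitter for stability
        Ks = rbf(cand, Xa)
        mu = Ks @ np.linalg.solve(K, ya)               # posterior mean
        var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
        sd = np.sqrt(np.maximum(var, 0.0))             # posterior std dev
        nxt = cand[int(np.argmax(mu + kappa * sd))]    # next experiment
        X.append(nxt)
        y.append(f(nxt))
    i = int(np.argmax(y))
    return X[i], y[i]
```

    In the paper each "evaluation" is a synthesized and characterized alloy composition, so the loop's value lies in needing only tens of experiments (36 here) to navigate a space of roughly 800,000 candidates.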

  10. Designing for Change: Interoperability in a scaling and adapting environment

    NASA Astrophysics Data System (ADS)

    Yarmey, L.

    2015-12-01

    The Earth Science cyberinfrastructure landscape is constantly changing. Technologies advance and technical implementations are refined or replaced. Data types, volumes, packaging, and use cases evolve. Scientific requirements emerge and mature. Standards shift while systems scale and adapt. In this complex and dynamic environment, interoperability remains a critical component of successful cyberinfrastructure. Through the resource- and priority-driven iterations on systems, interfaces, and content, questions fundamental to stable and useful Earth Science cyberinfrastructure arise. For instance, how are sociotechnical changes planned, tracked, and communicated? How should operational stability balance against 'new and shiny'? How can ongoing maintenance and mitigation of technical debt be managed in an often short-term resource environment? The Arctic Data Explorer is a metadata brokering application developed to enable discovery of international, interdisciplinary Arctic data across distributed repositories. Completely dependent on interoperable third party systems, the Arctic Data Explorer publicly launched in 2013 with an original 3000+ data records from four Arctic repositories. Since then the search has scaled to 25,000+ data records from thirteen repositories at the time of writing. In the final months of original project funding, priorities shift to lean operations with a strategic eye on the future. Here we present lessons learned from four years of Arctic Data Explorer design, development, communication, and maintenance work along with remaining questions and potential directions.

  11. Optical Design and Optimization of Translational Reflective Adaptive Optics Ophthalmoscopes

    NASA Astrophysics Data System (ADS)

    Sulai, Yusufu N. B.

    The retina serves as the primary detector for the biological camera that is the eye. It is composed of numerous classes of neurons and support cells that work together to capture and process an image formed by the eye's optics, which is then transmitted to the brain. Loss of sight due to retinal or neuro-ophthalmic disease can prove devastating to one's quality of life, and the ability to examine the retina in vivo is invaluable in the early detection and monitoring of such diseases. Adaptive optics (AO) ophthalmoscopy is a promising diagnostic tool in early stages of development, still facing significant challenges before it can become a clinical tool. The work in this thesis is a collection of projects with the overarching goal of broadening the scope and applicability of this technology. We begin by providing an optical design approach for AO ophthalmoscopes that reduces the aberrations that degrade the performance of the AO correction. Next, we demonstrate how to further improve image resolution through the use of amplitude pupil apodization and non-common path aberration correction. This is followed by the development of a viewfinder which provides a larger field of view for retinal navigation. Finally, we conclude with the development of an innovative non-confocal light detection scheme which improves the non-invasive visualization of retinal vasculature and reveals the cone photoreceptor inner segments in healthy and diseased eyes.

  12. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for the contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. Changing the crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental results show that the mass of each optimized spring is reduced by 16.2%, while the reliability increases from 94.5% to 99.9%. The experimental results verify the correctness and feasibility of this reliability optimization design method.
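
    The AGA's key ingredient, time-varying crossover and mutation rates, can be sketched on a generic benchmark objective; the actual spring reliability model and rate schedules are not given in the record, so everything below is illustrative:

```python
import numpy as np

def adaptive_ga(obj, bounds, pop_size=40, gens=60, seed=0):
    """Minimal adaptive GA for minimization: the crossover rate decays
    and the mutation rate rises as generations pass, searching broadly
    early and perturbing locally late to help avoid premature
    convergence. Elitism preserves the best half each generation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for g in range(gens):
        pc = 0.9 - 0.4 * g / gens        # adaptive crossover rate
        pm = 0.05 + 0.15 * g / gens      # adaptive mutation rate
        fit = np.array([obj(x) for x in pop])
        pop = pop[np.argsort(fit)]       # rank: best first
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = elite[rng.integers(len(elite), size=2)]
            if rng.random() < pc:        # blend crossover
                w = rng.random(dim)
                child = w * p1 + (1 - w) * p2
            else:
                child = p1.copy()
            mask = rng.random(dim) < pm  # per-gene Gaussian mutation
            child[mask] += rng.normal(0.0, 0.02 * (hi - lo)[mask])
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elite, children])
    fit = np.array([obj(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()
```

    In the paper the objective encodes spring reliability under nonlinear constraints; a sphere benchmark stands in for it here.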

  13. Algebraic Methods to Design Signals

    DTIC Science & Technology

    2015-08-27

    group theory are employed to investigate the theory of their construction methods leading to new families of these arrays and some generalizations...sequences and arrays with desirable correlation properties. The methods used are very algebraic and number theoretic. Many new families of sequences...context of optical quantum computing, we prove that infinite families of anticirculant block weighing matrices can be obtained from generic weighing

  14. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  15. Designing Adaptive Instructional Environments: Insights from Empirical Evidence

    DTIC Science & Technology

    2011-10-01

    Similarly, Shute and Zapata-Rivera (2008) define adaptivity as the capability of a system to alter its behavior according to learner needs and...60(2), 265-306. Landsberg, C. R., Van Buskirk, W. L., Astwood Jr., R. A., Mercado, A. D., & Aakre, A. J. (2010). Adaptive training considerations...22, 77-92. Shute, V. J., & Zapata-Rivera, D. (2008). Adaptive technologies. In J. M. Spector, M. D. Merril, J. J. G. van Merriënboer, & M. Driscoll

  16. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  17. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy L; Carmena, Jose M

    2016-04-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA-a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. 
Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter

  18. An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2008-01-01

    An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.

  19. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, with complexity ranging from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification: an adaptive search algorithm for data-point clusters that uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust its search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure using near-field searches, support vector machines and temporal splitting is applied to disassemble subsequent ruptures that may have been grouped into an individual cluster. The steering parameters of the search behaviour are linked to local earthquake properties such as magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand.
As a result of the cluster identification process, each event in
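
    For contrast with the adaptive approach, one of the "rather simple statistical window methods" mentioned above can be sketched directly: events falling inside the magnitude-dependent space-time window of a larger earlier shock are flagged as aftershocks. The window-size formulas here are illustrative placeholders, not the calibrated Gardner-Knopoff tables:

```python
import numpy as np

def decluster(times, lats, lons, mags):
    """Magnitude-dependent window declustering (Gardner-Knopoff style).

    times in days, positions in degrees, mags as magnitudes. Returns a
    boolean mask that is True for events kept as mainshocks."""
    n = len(times)
    is_mainshock = np.ones(n, dtype=bool)
    for i in np.argsort(mags)[::-1]:          # treat largest events first
        if not is_mainshock[i]:
            continue
        r_km = 10 ** (0.12 * mags[i] + 0.98)    # illustrative distance window
        t_days = 10 ** (0.03 * mags[i] + 1.2)   # illustrative time window
        dkm = 111.0 * np.hypot(lats - lats[i], lons - lons[i])
        dt = times - times[i]
        inside = (dkm < r_km) & (dt > 0) & (dt < t_days) & (mags < mags[i])
        is_mainshock &= ~inside                 # flag aftershocks
    return is_mainshock
```

    The SCM described above replaces these fixed windows with search properties adapted to the local earthquake density and rupture-plane anisotropy.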

  20. A decentralized adaptive robust method for chaos control.

    PubMed

    Kobravi, Hamid-Reza; Erfanian, Abbas

    2009-09-01

    This paper presents a control strategy, which is based on sliding mode control, adaptive control, and fuzzy logic system for controlling the chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against the external noise disturbance and system parameter variations and can be used to convert the chaotic orbits not only to the desired periodic ones but also to any desired chaotic motions. Simulation results of controlling some typical higher order chaotic systems demonstrate the effectiveness of the proposed control method.
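
    A minimal sliding-mode controller for a chaotic system conveys the robustness idea: the switching term dominates the unknown bounded dynamics, so the model equations never enter the control law. The Duffing oscillator, gains, and boundary-layer width below are assumptions for illustration, not the authors' fuzzy-adaptive design:

```python
import numpy as np

def duffing_smc(track=True, T=40.0, dt=1e-3, lam=5.0, K=20.0, phi=0.05):
    """Sliding-mode tracking control of a chaotic Duffing oscillator
    x'' = x - x**3 - 0.3*x' + 0.5*cos(1.2*t) + u, treated as unknown
    bounded dynamics. The sliding surface s = e' + lam*e drives the
    state onto x_d(t) = sin(t); tanh(s/phi) is a chattering-free switch.
    Returns the worst tracking error over the final 5 seconds."""
    n = int(T / dt)
    x, v = 1.0, 0.0
    errs = []
    for i in range(n):
        t = i * dt
        xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)
        e, de = x - xd, v - vd
        s = de + lam * e
        # control uses only the reference and the sliding variable,
        # never the plant model itself (K bounds the unknown dynamics)
        u = (ad - lam * de - K * np.tanh(s / phi)) if track else 0.0
        a = x - x**3 - 0.3 * v + 0.5 * np.cos(1.2 * t) + u   # "unknown" plant
        x, v = x + dt * v, v + dt * a
        if t > T - 5.0:
            errs.append(abs(e))
    return max(errs)
```

    With the switch active, s' = f - K*tanh(s/phi), so any |f| < K forces the state into the thin boundary layer around s = 0, where the error decays at rate lam.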

  1. An Adaptive Damping Network Designed for Strapdown Fiber Optic Gyrocompass System for Ships

    PubMed Central

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-01

    The strapdown fiber optic gyrocompass (strapdown FOGC) system for ships primarily works in external horizontal damping and undamping statuses. When sea conditions change substantially, the system switches frequently between the external horizontal damping status and the undamping status, which means the system is always in an adjustment status, degrading its dynamic accuracy. Aiming at the limitations of the conventional damping method, a new design idea is proposed in which an adaptive control method is used to design the horizontal damping network of the strapdown FOGC system. According to the magnitude of acceleration, the parameters of the damping network are changed to minimize the system error caused by the ship's maneuvering. Furthermore, the jump in the damping coefficient is transformed into a gradual change to make the system status switch smoothly. The adaptive damping network was applied to the strapdown FOGC under static and dynamic conditions, and its performance was compared with conventional damping and undamping methods. Experimental results showed that the adaptive damping network was effective in improving the dynamic performance of the strapdown FOGC. PMID:28257100
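
    The acceleration-scheduled damping idea, including the gradual rather than jumping coefficient change, can be sketched as follows; the thresholds, damping range, and smoothing time constant are all illustrative assumptions, not the paper's network parameters:

```python
import math

class AdaptiveDamping:
    """Acceleration-scheduled damping coefficient with a smoothed
    (gradual) transition instead of a hard switch: strong damping when
    the ship is quiet, weak damping while it manoeuvres."""
    def __init__(self, xi_max=0.8, xi_min=0.05, a_ref=0.5, tau=5.0, dt=0.01):
        self.xi = xi_max                 # start in the damped status
        self.xi_max, self.xi_min = xi_max, xi_min
        self.a_ref, self.tau, self.dt = a_ref, tau, dt

    def update(self, accel):
        # target damping decays with the measured acceleration magnitude
        w = math.exp(-abs(accel) / self.a_ref)
        target = self.xi_min + (self.xi_max - self.xi_min) * w
        # first-order lag turns the coefficient jump into a gradual change
        self.xi += (target - self.xi) * self.dt / self.tau
        return self.xi
```

    The first-order lag (time constant tau) is what replaces the frequent hard switching between damping and undamping statuses with a smooth adjustment.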

  2. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods for the design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and that formal methods will play a major role in the development of future high-reliability digital systems.

  3. Case Studies in Computer Adaptive Test Design through Simulation.

    ERIC Educational Resources Information Center

    Eignor, Daniel R.; And Others

    The extensive computer simulation work done in developing the computer adaptive versions of the Graduate Record Examinations (GRE) Board General Test and the College Board Admissions Testing Program (ATP) Scholastic Aptitude Test (SAT) is described in this report. Both the GRE General and SAT computer adaptive tests (CATs), which are fixed length…

  4. Turbulence profiling methods applied to ESO's adaptive optics facility

    NASA Astrophysics Data System (ADS)

    Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.

    2014-07-01

    Two algorithms were recently studied for C2n profiling from wide-field Adaptive Optics (AO) measurements on GeMS (the Gemini Multi-Conjugate AO system). Both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from the various wavefront sensors. The first algorithm estimates the C2n profile by applying the truncated least-squares inverse of a matrix modeling the response of the slope covariances to turbulent layers at various heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of the ESO Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration, using measurements simulated by the AO cluster of ESO. The impact of measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The strong influence of the outer scale on the results led to the development of a new outer-scale fitting step included in each algorithm, which increases the reliability and robustness of the turbulence strength and profile estimations.
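
    The first algorithm's truncated least-squares inverse can be sketched directly: keep only the leading singular values of the response matrix so that noise is not amplified through the ill-conditioned directions. The response matrix below is a smooth synthetic stand-in for the real slope-covariance model:

```python
import numpy as np

def truncated_lsq(A, b, k):
    """Truncated least-squares inverse: invert only the k largest
    singular values of the response matrix A, discarding the small ones
    that would amplify noise in an ill-posed inversion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # keep the k leading modes only
    return Vt.T @ (s_inv * (U.T @ b))
```

    With broad, overlapping response kernels the full pseudo-inverse blows up the measurement noise, while the truncated inverse trades a small bias for a dramatically smaller variance.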

  5. An adaptive stepsize method for the chemical Langevin equation.

    PubMed

    Ilie, Silvana; Teslya, Alexandra

    2012-05-14

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
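
    A minimal adaptive-stepsize Milstein sketch conveys the idea: estimate the local error by comparing one full step against two half steps driven by the same Brownian increments, then shrink or grow h accordingly. The error controller here is a simple step-doubling rule, not the authors' estimator:

```python
import numpy as np

def milstein_adaptive(f, g, gp, x0, t_end, h0=0.01, tol=1e-3, seed=0):
    """Adaptive-stepsize Milstein for a scalar SDE dX = f(X)dt + g(X)dW,
    with gp = g'. A full step is compared against two half steps using
    the same Brownian increments; the step is halved on rejection and
    grown when the estimated local error is comfortably below tol."""
    rng = np.random.default_rng(seed)

    def step(x, h, dw):
        # Milstein update: Euler-Maruyama plus the 0.5*g*g'*(dW^2 - h) term
        return x + f(x) * h + g(x) * dw + 0.5 * g(x) * gp(x) * (dw * dw - h)

    t, x, h = 0.0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)
        dw1 = rng.normal(0.0, np.sqrt(h / 2))
        dw2 = rng.normal(0.0, np.sqrt(h / 2))
        x_big = step(x, h, dw1 + dw2)
        x_small = step(step(x, h / 2, dw1), h / 2, dw2)
        err = abs(x_big - x_small)       # local error estimate
        if err <= tol or h < 1e-8:
            t, x = t + h, x_small        # accept the finer estimate
            if err < tol / 4:
                h *= 1.5                 # comfortably accurate: grow h
        else:
            h *= 0.5                     # reject and retry with smaller h
            # (fresh increments are drawn on retry; a strictly correct
            # path would refine dW via a Brownian bridge, omitted here)
    return x
```

    For stiff chemical Langevin models the payoff is that h shrinks only through the fast transients and grows again afterwards, instead of being fixed at the worst-case size.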

  6. Design Methods for Clinical Systems

    PubMed Central

    Blum, B.I.

    1986-01-01

    This paper presents a brief introduction to the techniques, methods and tools used to implement clinical systems. It begins with a taxonomy of software systems, describes the classic approach to development, provides some guidelines for the planning and management of software projects, and finishes with a guide to further reading. The conclusions are that there is no single right way to develop software, that most decisions are based upon judgment built from experience, and that there are tools that can automate some of the better understood tasks.

  7. Adaptive design clinical trials and trial logistics models in CNS drug development.

    PubMed

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2011-02-01

    In central nervous system therapeutic areas, there are general concerns with establishing efficacy, thought to be a source of the high attrition rate in drug development. For instance, efficacy endpoints are often subjective and highly variable, there is a lack of robust or operational biomarkers to substitute for soft endpoints, and animal models are generally poor, unreliable or unpredictive. To increase the probability of success in a central nervous system drug development program, adaptive design has been considered as an alternative that adds flexibility to conventional fixed designs and is viewed as having the potential to improve the efficiency of drug development. In addition, successful implementation of an adaptive design trial relies on establishing a trustworthy logistics model that ensures the integrity of trial conduct. In accordance with the spirit of the recently released U.S. Food and Drug Administration adaptive design draft guidance, this paper enlists the critical considerations, both methodological and regulatory, in reviewing an adaptive design proposal, and discusses two general types of adaptation: sample size planning and re-estimation, and two-stage adaptive design. Literature examples of adaptive designs in the central nervous system area are used to highlight the principles laid out in the U.S. FDA draft guidance. Four logistics models seen in regulatory adaptive design applications are introduced. In general, complex adaptive designs require simulation studies to assess the design performance. For an adequate and well-controlled clinical trial, if a Learn-and-Confirm adaptive selection approach is considered, the study-wise type I error rate should be adhered to; however, it is controversial to use the simulated type I error rate to demonstrate strong control of the study-wise type I error rate.

  8. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.

  9. Methods for combinatorial and parallel library design.

    PubMed

    Schnur, Dora M; Beno, Brett R; Tebben, Andrew J; Cavallaro, Cullen

    2011-01-01

    Diversity has historically played a critical role in the design of combinatorial libraries, screening sets and corporate collections for lead discovery. Large library design dominated the field in the 1990s, with methods ranging from purely arbitrary through property-based reagent selection to product-based approaches. In recent years, however, there has been a downward trend in library size, due to increased information about desirable targets gleaned from the genomics revolution and to the ever-growing availability of target protein structures from crystallography and homology modeling. Creation of libraries directed toward families of receptors such as GPCRs, kinases, nuclear hormone receptors, proteases, etc., replaced the generation of libraries based primarily on diversity, while single-target focused library design has remained an important objective. Concurrently, computing grids and CPU clusters have facilitated the development of structure-based tools that screen hundreds of thousands of molecules. Smaller, "smarter" combinatorial and focused parallel libraries replaced those early unfocused large libraries in the twenty-first-century drug design paradigm. While diversity still plays a role in lead discovery, the focus of current library design methods has shifted to receptor-based methods, scaffold hopping/bioisostere searching, and a much-needed emphasis on synthetic feasibility. Methods such as privileged-substructure-based design and pharmacophore-based design remain important for parallel and small combinatorial library design. This chapter discusses some of the possible design methods and presents examples where they are available.

  10. Adaptive Kalman filtering methods for tracking GPS signals in high noise/high dynamic environments

    NASA Astrophysics Data System (ADS)

    Zuo, Qiyao; Yuan, Hong; Lin, Baojun

    2007-11-01

    GPS C/A signal tracking algorithms were developed based on adaptive Kalman filtering theory. An adaptive Kalman filter is used in place of the standard tracking loop filters, with the goal of improving estimation accuracy and tracking stability in high-noise, high-dynamic environments. The linear dynamics model and the measurement model are designed to estimate code phase, carrier phase, Doppler shift, and the rate of change of Doppler shift. Two adaptive algorithms are applied to improve the robustness and adaptability of the tracking: the Sage adaptive filtering approach and the strong tracking method. Both the new algorithms and a conventional tracking loop were tested on simulated data. In the simulation experiment, the highest jerk of the receiver is set to 10G m/s³, with the lowest C/N0 at 30 dB-Hz. The results indicate that the Kalman filtering algorithms are more robust than the standard tracking loop, and that tracking performance with these algorithms is satisfactory even in such extremely adverse circumstances.
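
    The Sage-style adaptation can be illustrated with a toy filter in which the measurement-noise covariance is re-estimated online from the innovation sequence. This is a generic Sage-Husa-type sketch, not the GPS tracking-loop implementation; the fading factor b and the clamping of R are illustrative assumptions.

```python
import numpy as np

def sage_adaptive_kf(zs, F, H, Q, R0, x0, P0, b=0.98):
    """Kalman filter with Sage-Husa-style online estimation of the
    measurement-noise covariance R from the innovation sequence."""
    x, P, R = x0.astype(float).copy(), P0.copy(), R0.copy()
    n = len(x)
    estimates = []
    for k, z in enumerate(zs):
        d = (1 - b) / (1 - b ** (k + 1))    # fading-memory weight
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        v = z - H @ x                       # innovation
        # Adapt R from the innovation statistics.  The elementwise
        # clamp keeps R positive; it is adequate for the scalar
        # measurement used here, not a general PD projection.
        R = (1 - d) * R + d * (np.outer(v, v) - H @ P @ H.T)
        R = np.maximum(R, 1e-9 * np.eye(R.shape[0]))
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v                       # update
        P = (np.eye(n) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

    On a constant-velocity tracking problem with unknown measurement noise, the filter converges to the true trajectory without being told the noise variance in advance.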

  11. Research on PGNAA adaptive analysis method with BP neural network

    NASA Astrophysics Data System (ADS)

    Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue

    2016-11-01

    A new approach to the problem of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP (back-propagation) neural network to PGNAA energy spectrum analysis, based on Monte Carlo (MC) simulation. The main tasks accomplished are as follows: (1) completing the MC simulation of the PGNAA spectrum library, in which the mass fractions of the elements Si, Ca and Fe are each set from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, in which the peak areas of eight characteristic gamma rays, corresponding to eight elements, are calculated for each of 1000 samples and for the standard sample; (3) verifying the viability of the adaptive quantitative analysis algorithm, for which 68 samples were used successively. Results show that the precision when using the neural network to calculate the content of each element is significantly higher than that of the MCLLS method.
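
    The BP quantification step can be illustrated on synthetic stand-in data. The mixing matrix, network size and training settings below are illustrative assumptions, not the paper's MCNP-simulated library: a one-hidden-layer back-propagation network is trained to map eight characteristic peak areas to three element mass fractions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulated PGNAA library: 8 characteristic
# peak areas per sample, mixed linearly from 3 element mass fractions
# (hypothetical mixing matrix) plus a little noise.
mix = rng.uniform(0.5, 2.0, size=(3, 8))
fracs = rng.uniform(0.0, 0.45, size=(1000, 3))
peaks = fracs @ mix + 0.01 * rng.normal(size=(1000, 8))

# One-hidden-layer BP network trained with full-batch gradient descent.
W1 = rng.normal(0.0, 0.1, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(2000):
    h, out = forward(peaks)
    err = out - fracs                       # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(peaks); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # back-propagate through tanh
    gW1 = peaks.T @ dh / len(peaks); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(peaks)
rmse = float(np.sqrt(((pred - fracs) ** 2).mean()))
```

    After training, the network's root-mean-square error on the mass fractions is far below the spread of the targets, i.e., it has learned the peak-area-to-fraction mapping.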

  12. An h-adaptive finite element method for turbulent heat transfer

    SciTech Connect

    Carrington, David B

    2009-01-01

    A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low-Mach-number flow and heat transfer. Such flows arise in many engineering and environmental applications: of particular interest in engineering are combustion, solidification, and heat exchanger design, while indoor air quality and atmospheric pollution transport are typical environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptation) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional-step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.

  13. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    PubMed

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.

  14. Improved method for transonic airfoil design-by-optimization

    NASA Technical Reports Server (NTRS)

    Kennelly, R. A., Jr.

    1983-01-01

    An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.

  15. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform nearest-neighbor classification on a number of synthetic aquifers whenever the available hard data are few and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings, and it is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
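
    A stripped-down version of locally adaptive kernel classification can be sketched as follows. This is a Nadaraya-Watson-style classifier in which each data point adapts only its bandwidth (via the distance to its k-th nearest neighbour), not the full steering orientation of the paper's kernels; k and the isotropic Gaussian form are illustrative assumptions.

```python
import numpy as np

def adaptive_kernel_classify(X_train, y_train, X_query, k=5):
    """Kernel facies classification with a locally adaptive Gaussian
    bandwidth: each training point's bandwidth is its distance to its
    k-th nearest neighbour, so kernels widen where data are sparse."""
    # pairwise squared distances between training points
    d2 = ((X_train[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    h = np.sqrt(np.sort(d2, axis=1)[:, k])   # k-th NN distance per point
    labels = np.unique(y_train)
    preds = []
    for q in X_query:
        w = np.exp(-((X_train - q) ** 2).sum(1) / (2.0 * h ** 2))
        scores = [w[y_train == c].sum() for c in labels]
        preds.append(labels[int(np.argmax(scores))])
    return np.array(preds)
```

    On two well-separated synthetic facies clusters, query points near each cluster centre are assigned the corresponding facies label.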

  16. Issues and Challenges in the Design of Culturally Adapted Evidence-Based Interventions

    PubMed Central

    Castro, Felipe González; Barrera, Manuel; Holleran Steiker, Lori K.

    2014-01-01

    This article examines issues and challenges in the design of cultural adaptations that are developed from an original evidence-based intervention (EBI). Recently emerging multistep frameworks or stage models are examined, as these can systematically guide the development of culturally adapted EBIs. Critical issues are also presented regarding whether and how such adaptations may be conducted, and empirical evidence is presented regarding the effectiveness of such cultural adaptations. Recent evidence suggests that these cultural adaptations are effective when applied with certain subcultural groups, although they are less effective when applied with other subcultural groups. Generally, current evidence regarding the effectiveness of cultural adaptations is promising but mixed. Further research is needed to obtain more definitive conclusions regarding the efficacy and effectiveness of culturally adapted EBIs. Directions for future research and recommendations are presented to guide the development of a new generation of culturally adapted EBIs. PMID:20192800

  17. First-order design of a reflective viewfinder for adaptive optics ophthalmoscopy.

    PubMed

    Dubra, Alfredo; Sulai, Yusufu N

    2012-11-19

    Adaptive optics (AO) ophthalmoscopes with small fields of view have limited clinical utility. We propose to address this problem in reflective instruments by incorporating a viewfinder pupil relay designed by considering pupil and image centering and conjugation. Diverting light from an existing pupil optical relay to the viewfinder relay allows switching field of view size. Design methods that meet all four centering and conjugation conditions using either a single concave mirror or with two concave mirrors forming an off-axis afocal telescope are presented. Two different methods for calculating the focal length and orientation of the concave mirrors in the afocal viewfinder relay are introduced. Finally, a 2.2 × viewfinder mode is demonstrated in an AO scanning light ophthalmoscope.

  18. Bayesian optimal response-adaptive design for binary responses using stopping rule.

    PubMed

    Komaki, Fumiyasu; Biswas, Atanu

    2016-05-02

    Response-adaptive designs are used in phase III clinical trials to allocate a larger number of patients to the better treatment arm. Optimal designs have been explored in recent years in the context of response-adaptive designs, but from the frequentist viewpoint only. In the present paper, we propose response-adaptive designs for two treatments based on Bayesian prediction for phase III clinical trials. Some properties are studied and numerically compared with existing competitors. A real data set is used to illustrate the applicability of the proposed methodology; we redesign the experiment using parameters derived from the data set.
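
    The flavor of Bayesian response-adaptive allocation can be conveyed with a Beta-Bernoulli sketch. This is generic Thompson-style sampling for two arms, not the specific Bayesian-prediction rule or stopping rule proposed in the paper: each patient is assigned to the arm whose sampled posterior success probability is larger, so allocation drifts toward the better arm as responses accrue.

```python
import numpy as np

def thompson_allocation(n_patients, p_true, seed=1):
    """Beta-Bernoulli response-adaptive allocation for two arms."""
    rng = np.random.default_rng(seed)
    succ = np.ones(2)                   # Beta(1, 1) priors on each arm
    fail = np.ones(2)
    counts = np.zeros(2, dtype=int)
    for _ in range(n_patients):
        draws = rng.beta(succ, fail)    # one posterior sample per arm
        arm = int(np.argmax(draws))     # allocate to the sampled winner
        counts[arm] += 1
        if rng.random() < p_true[arm]:  # observe the binary response
            succ[arm] += 1
        else:
            fail[arm] += 1
    return counts
```

    With true success probabilities of 0.3 and 0.7, the better arm receives the majority of the allocations.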

  19. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information about subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on the imaging conditions. In fact, diffractors occupy only a small fraction of an imaging model and possess discontinuous characteristics; mathematically, such phenomena can be described by sparse optimization theory. We therefore propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requires a least-squares fit between modeled and observed diffractions and the L1 term imposes sparsity on the solution. To solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solution by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing of diffractions and reduces migration artifacts.
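
    The reweighted L2-L1 model can be illustrated with a simplified solver: iteratively reweighted soft-thresholding on a random compressed-sensing problem, standing in for the paper's adaptive reweighting homotopy algorithm. The operator here is a random matrix, not a seismic migration operator, and all parameter values are illustrative.

```python
import numpy as np

def reweighted_ista(A, b, lam=0.05, n_outer=5, n_inner=100, eps=1e-3):
    """Sparse recovery minimizing ||Ax - b||^2 + lam * sum_i w_i |x_i|:
    the outer loop updates the l1 weights w_i = 1/(|x_i| + eps); the
    inner ISTA loop solves the current weighted problem."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = x - A.T @ (A @ x - b) / L            # gradient step, l2 term
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(x) + eps)      # reweight: sharpen sparsity
    return x
```

    On a noiseless 4-sparse test signal observed through 40 random measurements, the reweighting both finds the support and removes most of the l1 shrinkage bias.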

  20. An adaptive Cartesian grid generation method for Dirty geometry

    NASA Astrophysics Data System (ADS)

    Wang, Z. J.; Srinivasan, Kumar

    2002-07-01

    Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start; these methods are therefore called boundary-to-interior (B2I) approaches. Although they have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming when non-water-tight geometries are given: significant user time can be spent repairing or cleaning a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach, namely the interior-to-boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain; this grid is then intelligently connected to the boundary, and the boundary grid results from this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.

  1. Applications of a transonic wing design method

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1989-01-01

    A method for designing wings and airfoils at transonic speeds using a predictor/corrector approach was developed. The procedure iterates between an aerodynamic code, which predicts the flow about a given geometry, and the design module, which compares the calculated and target pressure distributions and modifies the geometry using an algorithm that relates differences in pressure to a change in surface curvature. The modular nature of the design method makes it relatively simple to couple it to any analysis method. The iterative approach allows the design process and aerodynamic analysis to converge in parallel, significantly reducing the time required to reach a final design. Viscous and static aeroelastic effects can also be accounted for during the design or as a post-design correction. Results from several pilot design codes indicated that the method accurately reproduced pressure distributions as well as the coordinates of a given airfoil or wing by modifying an initial contour. The codes were applied to supercritical as well as conventional airfoils, forward- and aft-swept transport wings, and moderate-to-highly swept fighter wings. The design method was found to be robust and efficient, even for cases having fairly strong shocks.

  2. Adapting Dam and Reservoir Design and Operations to Climate Change

    NASA Astrophysics Data System (ADS)

    Roy, René; Braun, Marco; Chaumont, Diane

    2013-04-01

    In order to identify the potential initiatives that the owners and operators of dams, reservoirs and water resources systems may undertake to cope with climate change, it is essential to determine the current state of knowledge of its impacts on hydrological variables at regional and local scales. Future climate scenarios derived from climate model simulations can be combined with operational hydrological modeling tools and historical observations to evaluate realistic pathways of future hydrological conditions for specific drainage basins. In the case of hydropower production, those changes in hydrological conditions may have significant economic impacts. For over a decade, the state-owned hydropower producer Hydro Québec has been exploring the physical impacts on its watersheds by relying on climate services, in collaboration with Ouranos, a consortium on regional climatology and adaptation to climate change. Previous climate change impact analyses have drawn on different sources of climate simulation data, explored different post-processing approaches and used hydrological impact models. At a new stage of this collaboration, the operational management of Hydro Québec aspired to carry out a cost-benefit analysis of considering climate change in the refactoring of hydropower installations. In the course of the project, not only did a set of scenarios of future runoff regimes have to be defined to support the long-term planning decisions of a dam and reservoir operator, but the significance of the uncertainties also needed to be communicated and understood. We provide insight into a case study that took some unexpected turns and leaps by bringing together climate scientists, hydrologists and hydropower operation managers. The study includes the selection of appropriate climate scenarios, the correction of biases, the application of hydrological models and the assessment of uncertainties. However, it turned out that communicating the science properly and

  3. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the camera parameters correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images: generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, and corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach that sets the gray-difference threshold adaptively, which makes it possible to pick out the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
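
    A minimal SUSAN-style corner response with an adaptively chosen gray-difference threshold can be sketched as follows. The adaptive rule used here, t proportional to the global intensity standard deviation, is an illustrative assumption, not the paper's specific retrieval method.

```python
import numpy as np

def susan_corners(img, radius=3, frac=0.5):
    """Minimal SUSAN-style corner response.  Instead of a fixed
    gray-difference threshold t, each image gets t = frac * global
    intensity std, so the threshold adapts to the image contrast."""
    t = frac * img.std()                     # adaptive brightness threshold
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys ** 2 + xs ** 2 <= radius ** 2  # circular USAN window
    offs = np.argwhere(mask) - radius        # (dy, dx) offsets in the window
    g = 0.75 * mask.sum()                    # geometric corner threshold
    H, W = img.shape
    resp = np.zeros((H, W))
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            nucleus = img[y, x]
            usan = sum(abs(img[y + dy, x + dx] - nucleus) <= t
                       for dy, dx in offs)   # similar-brightness area
            resp[y, x] = max(g - usan, 0.0)  # small USAN => corner
    return resp
```

    On a synthetic image with a single bright quadrant, the response peaks exactly at the quadrant's inner corner, mimicking the detection of a chessboard inner corner.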

  4. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  5. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  6. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error, and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. Such schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  7. Bayesian adaptive design: improving the effectiveness of monitoring of the Great Barrier Reef.

    PubMed

    Kang, Su Yun; McGree, James M; Drovandi, Christopher C; Caley, M Julian; Mengersen, Kerrie L

    2016-12-01

    Monitoring programs are essential for understanding patterns, trends, and threats in ecological and environmental systems. However, such programs are costly in terms of dollars, human resources, and technology, and complex in terms of balancing short- and long-term requirements. In this work, we develop new statistical methods for implementing cost-effective adaptive sampling and monitoring schemes for coral reefs that better utilize existing information and resources and can incorporate available prior information. Our research was motivated by the development of efficient monitoring practices for Australia's Great Barrier Reef. We develop and implement two types of adaptive sampling schemes, static and sequential, and show that they can be more informative and cost-effective than an existing (nonadaptive) monitoring program. Our methods are developed in a Bayesian framework with a range of utility functions relevant to environmental monitoring. Our results demonstrate the considerable potential for adaptive design to support improved management outcomes compared to set-and-forget styles of surveillance monitoring.

  8. Impeller blade design method for centrifugal compressors

    NASA Technical Reports Server (NTRS)

    Jansen, W.; Kirschner, A. M.

    1974-01-01

    The design of a centrifugal impeller with blades that are aerodynamically efficient, easy to manufacture, and mechanically sound is discussed. The blade design method described here satisfies the first two criteria and with a judicious choice of certain variables will also satisfy stress considerations. The blade shape is generated by specifying surface velocity distributions and consists of straight-line elements that connect points at hub and shroud. The method may be used to design radially elemented and backward-swept blades. The background, a brief account of the theory, and a sample design are described.

  9. Developing a Bayesian adaptive design for a phase I clinical trial: a case study for a novel HIV treatment.

    PubMed

    Mason, Alexina J; Gonzalez-Maffe, Juan; Quinn, Killian; Doyle, Nicki; Legg, Ken; Norsworthy, Peter; Trevelion, Roy; Winston, Alan; Ashby, Deborah

    2017-02-28

    The design of phase I studies is often challenging, because of limited evidence to inform study protocols. Adaptive designs are now well established in cancer but much less so in other clinical areas. A phase I study to assess the safety, pharmacokinetic profile and antiretroviral efficacy of C34-PEG4 -Chol, a novel peptide fusion inhibitor for the treatment of HIV infection, has been set up with Medical Research Council funding. During the study workup, Bayesian adaptive designs based on the continual reassessment method were compared with a more standard rule-based design, with the aim of choosing a design that would maximise the scientific information gained from the study. The process of specifying and evaluating the design options was time consuming and required the active involvement of all members of the trial's protocol development team. However, the effort was worthwhile as the originally proposed rule-based design has been replaced by a more efficient Bayesian adaptive design. While the outcome to be modelled, design details and evaluation criteria are trial specific, the principles behind their selection are general. This case study illustrates the steps required to establish a design in a novel context. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of coded apertures must account for saturation. Saturation errors in compressive measurements are unbounded, whereas compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show that the proposed method improves image reconstruction by up to 10 dB compared with non-adaptive uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  11. Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) for stability, condition the governing equations before applying the numerical scheme whenever possible; (2) for consistency, prefer compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum; (3) to minimize numerical dissipation contamination, use efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy; and (4) for practical considerations, the numerical approach should be efficient and applicable to general geometries, with efficient and reliable dynamic grid adaptation used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The approach also seeks to minimize the use of very fine grids to suppress spurious numerical solutions and/or instability caused by under-resolved grids. The incremental studies illustrating the performance of the approach are summarized. Extensive testing and full implementation of the approach are forthcoming. The results shown so far are very encouraging.

  12. Frequency-based design of Adaptive Optics systems

    NASA Astrophysics Data System (ADS)

    Agapito, Guido; Battistelli, Giorgio; Mari, Daniele; Selvi, Daniela; Tesi, Alberto; Tesi, Pietro

    2013-12-01

    The problem of reducing the effects of wavefront distortion and structural vibrations in ground-based telescopes is addressed within a modal-control framework. The proposed approach aims at optimizing the parameters of a given modal stabilizing controller with respect to a performance criterion which reflects the residual phase variance and is defined on a sampled frequency domain. This framework makes it possible to account for turbulence and vibration profiles of arbitrary complexity (even empirical power spectral densities from data), while the controller order can be kept at a desired value. Moreover, it is possible to take into account additional requirements, such as robustness in the presence of disturbances whose intensity and frequency profile vary with time. The proposed design procedure results in solving a min-max problem and can be converted into a linear programming problem with quadratic constraints, for which there exist several standard optimization techniques. The optimization starts from a given stabilizing controller, which can be either a non-model-based controller (in this case no identification effort is required) or a model-based controller synthesized by means of turbulence and vibration models of limited complexity. In this sense the approach can be viewed not only as an alternative, but also as cooperative with other control design approaches. The results obtained by means of an End-to-End simulator are shown to emphasize the power of the proposed method.

  13. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that the nonzero parameters are estimated with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  14. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore a method to smooth selected areas of curvature is implemented. One popular method is the adaptive subdivision method. Experiments are performed using two data sets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time of generating the new mesh also increases significantly. Since there is a difference in screen size between the devices, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.

  15. Design and Flight Tests of an Adaptive Control System Employing Normal-Acceleration Command

    NASA Technical Reports Server (NTRS)

    McNeill, Walter E.; McLean, John D.; Hegarty, Daniel M.; Heinle, Donovan R.

    1961-01-01

    An adaptive control system employing normal-acceleration command has been designed with the aid of an analog computer and has been flight tested. The design of the system was based on the concept of using a mathematical model in combination with a high gain and a limiter. The study was undertaken to investigate the application of a system of this type to the task of maintaining nearly constant dynamic longitudinal response of a piloted airplane over the flight envelope without relying on air data measurements for gain adjustment. The range of flight conditions investigated was between Mach numbers of 0.36 and 1.15 and altitudes of 10,000 and 40,000 feet. The final adaptive system configuration was derived from analog computer tests, in which the physical airplane control system and much of the control circuitry were included in the loop. The method employed to generate the feedback signals resulted in a model whose characteristics varied somewhat with changes in flight condition. Flight results showed that the system limited the variation in longitudinal natural frequency of the adaptive airplane to about half that of the basic airplane and that, for the subsonic cases, the damping ratio was maintained between 0.56 and 0.69. The system also automatically compensated for the transonic trim change. Objectionable features of the system were an exaggerated sensitivity of pitch attitude to gust disturbances, abnormally large pitch attitude response for a given pilot input at low speeds, and an initial delay in normal-acceleration response to pilot control at all flight conditions. The adaptive system chatter of +/-0.05 to +/-0.10 of elevon at about 9 cycles per second (resulting in a maximum airplane normal-acceleration response of from +/-0.025 g to +/- 0.035 g) was considered by the pilots to be mildly objectionable but tolerable.

  16. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.

  17. Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.

    PubMed

    Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi

    2011-04-01

    Thyristor-controlled reactor with fixed capacitor (TCR/FC) compensators have the capability of compensating reactive power and improving power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed in order to eliminate delay and speed up the response of the TCR compensator. The algorithm designed for the adaptive filters is based on the least mean square (LMS) algorithm. In this design, band-pass LC filters are used instead of fixed capacitors. To evaluate the filter, a TCR/FC compensator was used for the nonlinear and time-varying loads of electric arc furnaces (EAFs). These loads cause power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics, and unbalance in voltage and current. The design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminated the delay in the response and improved the performance of the compensator in the power system.
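The paper's filter design is not reproduced in the abstract, but the LMS update it relies on is standard. A minimal sketch follows, assuming a plain FIR structure; the tap count and step size are illustrative, not the paper's values.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.05):
    """Least-mean-squares adaptive FIR filter.

    x: input signal, d: desired signal. Weights follow the steepest-
    descent update on the instantaneous squared error:
        w <- w + mu * e[n] * x_vec
    Returns (output, error, final weights).
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        y[n] = w @ x_vec                        # filter output
        e[n] = d[n] - y[n]                      # a-priori error
        w += mu * e[n] * x_vec                  # LMS weight update
    return y, e, w
```

As a sanity check, driving the filter with white noise and a desired signal produced by a known FIR system makes the weights converge to that system's impulse response.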

  18. An Automatic Online Calibration Design in Adaptive Testing

    ERIC Educational Resources Information Center

    Makransky, Guido; Glas, Cees A. W.

    2010-01-01

    An accurately calibrated item bank is essential for a valid computerized adaptive test. However, in some settings, such as occupational testing, there is limited access to test takers for calibration. As a result of the limited access to possible test takers, collecting data to accurately calibrate an item bank in an occupational setting is…

  19. Evolving RBF neural networks for adaptive soft-sensor design.

    PubMed

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
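As a rough illustration of the second adaptation level described above, here is a sketch of recursive least squares with exponential forgetting applied to a fixed set of Gaussian RBF features. The structure-adaptation level (adding or deleting centers via the adaptive fuzzy means algorithm) is omitted, and all constants are illustrative.

```python
import numpy as np

def rbf_features(u, centers, width):
    """Gaussian RBF hidden-layer outputs for a scalar input."""
    return np.exp(-(u - centers) ** 2 / (2 * width ** 2))

class RLSForgetting:
    """Recursive least squares with exponential forgetting.

    lam < 1 discounts old samples so the weights can track a
    slowly drifting process, as needed for an adaptive soft-sensor.
    """
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)          # large P = weak initial confidence
        self.lam = lam

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        self.w += k * (d - x @ self.w)      # correct by the a-priori error
        self.P = (self.P - np.outer(k, Px)) / self.lam
```

Streaming input-output samples of a nonlinear map through `update` drives the output weights toward a least-squares fit over the effective forgetting window.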

  20. Mixed Methods Research Designs in Counseling Psychology

    ERIC Educational Resources Information Center

    Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.

    2005-01-01

    With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…

  1. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  2. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

    A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivities, as well as for eigenvalue and eigenvector sensitivities, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
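The scheme can be illustrated with a toy sketch: reuse the nominal factorization (here, an explicit inverse standing in for it) to reanalyze the perturbed system iteratively, then form the forward difference. The two-spring model, iteration count, and step size are illustrative, not taken from the paper.

```python
import numpy as np

def reanalyze(K0_inv, K_new, f, u0, iters=50):
    """Iterative reanalysis: reuse the nominal operator K0^-1 as a
    preconditioner to solve K_new u = f, starting from the nominal
    solution u0. Converges when the perturbation is small."""
    u = u0.copy()
    for _ in range(iters):
        u = u + K0_inv @ (f - K_new @ u)   # stationary iteration
    return u

def fd_sensitivity(K_of_b, f, b, h=1e-4):
    """Forward-difference design sensitivity du/db, using iterative
    reanalysis for the perturbed system so K(b) is decomposed once."""
    K0 = K_of_b(b)
    K0_inv = np.linalg.inv(K0)             # stands in for a one-time factorisation
    u0 = K0_inv @ f
    u_pert = reanalyze(K0_inv, K_of_b(b + h), f, u0)
    return (u_pert - u0) / h
```

For a two-spring chain with stiffness b, K(b) = [[2b, -b], [-b, b]], the displacement under f = [1, 0] is u = [1, 1]/b, so the analytic sensitivity at b = 1 is [-1, -1], which the sketch reproduces.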

  3. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

    A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^-1 X^T) G (I - A).

  4. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

    A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^-1 X^T) G (I - A). 3 figs.

  5. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    NASA Astrophysics Data System (ADS)

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

    The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, now implemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
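The seed-finding and distance machinery the abstract names can be sketched schematically: `numpy.histogramdd` locates a dense region to seed a cluster, and `scipy.spatial.distance.mahalanobis` measures membership. The toy data, bin count, and membership threshold below are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Toy sample: two well-separated "photometric" clusters in 2-D
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0.0, 0.0], 0.1, (100, 2)),
                  rng.normal([1.0, 1.0], 0.1, (100, 2))])

# Initial seed: center of the densest bin of a multidimensional histogram
hist, edges = np.histogramdd(data, bins=8)
idx = np.unravel_index(np.argmax(hist), hist.shape)
seed_center = np.array([(e[i] + e[i + 1]) / 2 for e, i in zip(edges, idx)])

# Mahalanobis distance of every point to the seed, using the sample
# inverse covariance as the metric
VI = np.linalg.inv(np.cov(data.T))
dists = np.array([mahalanobis(p, seed_center, VI) for p in data])
members = data[dists < 1.0]     # points assigned to the evolving cluster
```

In the full algorithm, the membership set would be used to re-estimate the cluster's distribution and the threshold would be driven by a statistical test rather than a fixed cutoff.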

  6. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.

  7. A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems

    SciTech Connect

    Liao Lizhi Wang Shengli

    2003-10-15

    In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.

  8. Context-Adaptive Learning Designs by Using Semantic Web Services

    ERIC Educational Resources Information Center

    Dietze, Stefan; Gugliotta, Alessio; Domingue, John

    2007-01-01

    IMS Learning Design (IMS-LD) is a promising technology aimed at supporting learning processes. IMS-LD packages contain the learning process metadata as well as the learning resources. However, the allocation of resources--whether data or services--within the learning design is done manually at design-time on the basis of the subjective appraisals…

  9. Principles underlying the design of "The Number Race", an adaptive computer game for remediation of dyscalculia

    PubMed Central

    Wilson, Anna J; Dehaene, Stanislas; Pinel, Philippe; Revkin, Susannah K; Cohen, Laurent; Cohen, David

    2006-01-01

    Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a "core deficit" in number sense or in the link between number sense and symbolic number representations. Methods "The Number Race" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional "learning space" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5–8, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains. PMID:16734905
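The paper's algorithm tracks a multidimensional learning space with an internal model of the child's knowledge; as a hedged illustration of the adaptation principle only, a one-dimensional asymmetric staircase shows how presenting problems adapted to performance drives the success rate toward a fixed level.

```python
def update_difficulty(level, correct, step_up=0.1, step_down=0.3):
    """Schematic one-dimensional staircase (not the published algorithm):
    harder after a success, easier (by a larger step) after a failure.
    The asymmetric steps drive long-run performance toward a success
    rate of step_down / (step_up + step_down), here 75%."""
    if correct:
        return min(1.0, level + step_up)
    return max(0.0, level - step_down)
```

Simulating a learner who succeeds whenever the difficulty is below a fixed threshold makes the level oscillate in a narrow band around that threshold.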

  10. Design of a Mobile Agent-Based Adaptive Communication Middleware for Federations of Critical Infrastructure Simulations

    NASA Astrophysics Data System (ADS)

    Görbil, Gökçe; Gelenbe, Erol

    The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.

  11. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate alternative methods for high performance IO on petascale machines. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, such as that provided by MPI-IO or POSIX IO.
Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  12. Decentralized adaptive control of robot manipulators with robust stabilization design

    NASA Technical Reports Server (NTRS)

    Yuan, Bau-San; Book, Wayne J.

    1988-01-01

    Due to geometric nonlinearities and complex dynamics, a decentralized technique for the adaptive control of multilink robot arms is attractive. Lyapunov function theory for stability analysis provides an approach to robust stabilization. Each joint of the arm is treated as a component subsystem. The adaptive controller is made locally stable with servo signals including proportional and integral gains, which results in a bound on the dynamical interactions with other subsystems. A nonlinear controller which stabilizes the system with uniform boundedness is used to improve the robustness properties of the overall system. As a result, the robot tracks the reference trajectories with convergence. This strategy keeps computation simple and therefore facilitates real-time implementation.

  13. Application of optical diffraction method in designing phase plates

    NASA Astrophysics Data System (ADS)

    Lei, Ze-Min; Sun, Xiao-Yan; Lv, Feng-Nian; Zhang, Zhen; Lu, Xing-Qiang

    2016-11-01

    The continuous phase plate (CPP), which performs beam shaping in laser systems, is an important kind of diffractive optic. Based on the Fourier-transform Gerchberg-Saxton (G-S) algorithm for designing CPPs, we propose an optical diffraction method that accounts for real system conditions. A thin lens performs the Fourier transform of the input signal, and the inverse propagation of light can be implemented in a program. Using both of these functions, the iteration between the near-field and far-field light distributions can be computed repeatedly, similar to the G-S algorithm. The results show that the optical diffraction method can design a CPP for a complicated laser system and give the CPP the abilities of beam shaping and phase compensation for the phase aberration of the system. The method can improve the suitability of the phase plate for systems with phase aberrations.
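As context for the abstract, the core Fourier-transform G-S loop can be sketched as follows; a single FFT stands in for the thin-lens Fourier transform, and the grid size, target pattern, and iteration count are illustrative.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200, seed=0):
    """Basic G-S iteration: find a phase at the input plane such that
    the Fourier-plane intensity approximates |target_amp|^2.

    Each pass imposes the target amplitude in the far field and the
    source amplitude in the near field, keeping only the phase.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)          # keep phase; source amplitude re-imposed above
    return phase
```

For a uniform source and a flat-top target window, the returned phase concentrates most of the far-field energy inside the window, which is the beam-shaping behavior a CPP is designed for.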

  14. A Web-Based Adaptive Tutor to Teach PCR Primer Design

    ERIC Educational Resources Information Center

    van Seters, Janneke R.; Wellink, Joan; Tramper, Johannes; Goedhart, Martin J.; Ossevoort, Miriam A.

    2012-01-01

    When students have varying prior knowledge, personalized instruction is desirable. One way to personalize instruction is by using adaptive e-learning to offer training of varying complexity. In this study, we developed a web-based adaptive tutor to teach PCR primer design: the PCR Tutor. We used part of the Taxonomy of Educational Objectives (the…

  15. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  16. ADAPT: A knowledge-based synthesis tool for digital signal processing system design

    SciTech Connect

    Cooley, E.S.

    1988-01-01

    A computer aided synthesis tool for expansion, compression, and filtration of digital images is described. ADAPT, the Autonomous Digital Array Programming Tool, uses an extensive design knowledge base to synthesize a digital signal processing (DSP) system. Input to ADAPT can be either a behavioral description in English, or a block level specification via Petri Nets. The output from ADAPT comprises code to implement the DSP system on an array of processors. ADAPT is constructed using C, Prolog, and X Windows on a SUN 3/280 workstation. ADAPT knowledge encompasses DSP component information and the design algorithms and heuristics of a competent DSP designer. The knowledge is used to form queries for design capture, to generate design constraints from the user's responses, and to examine the design constraints. These constraints direct the search for possible DSP components and target architectures. Constraints are also used for partitioning the target systems into less complex subsystems. The subsystems correspond to architectural building blocks of the DSP design. These subsystems inherit design constraints and DSP characteristics from their parent blocks. Thus, a DSP subsystem or parent block, as designed by ADAPT, must meet the user's design constraints. Design solutions are sought by searching the Components section of the design knowledge base. Component behavior which matches or is similar to that required by the DSP subsystems is sought. Each match, which corresponds to a design alternative, is evaluated in terms of its behavior. When a design is sufficiently close to the behavior required by the user, detailed mathematical simulations may be performed to accurately determine exact behavior.

  17. Preliminary aerothermodynamic design method for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Harloff, G. J.; Petrie, S. L.

    1987-01-01

    Preliminary design methods are presented for vehicle aerothermodynamics. Predictions are made for Shuttle orbiter, a Mach 6 transport vehicle and a high-speed missile configuration. Rapid and accurate methods are discussed for obtaining aerodynamic coefficients and heat transfer rates for laminar and turbulent flows for vehicles at high angles of attack and hypersonic Mach numbers.

  18. Response-Adaptive Decision-Theoretic Trial Design: Operating Characteristics and Ethics

    PubMed Central

    Lipsky, Ari M.; Lewis, Roger J.

    2013-01-01

    Adaptive randomization is used in clinical trials to increase statistical efficiency. In addition, some clinicians and researchers believe that using adaptive randomization leads necessarily to more ethical treatment of subjects in a trial. We develop Bayesian, decision-theoretic, clinical trial designs with response-adaptive randomization and a primary goal of estimating treatment effect, and then contrast these designs with designs that also include in their loss function a cost for poor subject outcome. When the loss function did not incorporate a cost for poor subject outcome, the gains in efficiency from response-adaptive randomization were accompanied by ethically concerning subject allocations. Conversely, including a cost for poor subject outcome demonstrated a more acceptable balance between the competing needs in the trial. A subsequent, parallel set of trials designed to control explicitly type I and II error rates showed that much of the improvement achieved through modification of the loss function was essentially negated. Therefore, gains in efficiency from the use of a decision-theoretic, response-adaptive design using adaptive randomization may only be assumed to apply to those goals which are explicitly included in the loss function. Trial goals, including ethical ones, which do not appear in the loss function are ignored and may even be compromised; it is thus inappropriate to assume that all adaptive trials are necessarily more ethical. Controlling type I and II error rates largely negates the benefit of including competing needs in favor of the goal of parameter estimation. PMID:23558674
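
    The allocation mechanics behind response-adaptive randomization can be illustrated with a small simulation. The sketch below is not the decision-theoretic design studied in the paper; it is a generic Thompson-sampling-style allocation with Beta posteriors, and the two arms, success probabilities, and trial size are invented for illustration:

```python
import random

def adaptive_trial(p_a, p_b, n_subjects, seed=0):
    """Illustrative response-adaptive randomization: each subject is
    allocated by sampling success probabilities from Beta posteriors,
    so allocation drifts toward the better-performing arm."""
    rng = random.Random(seed)
    s = {"A": 0, "B": 0}                # observed successes per arm
    f = {"A": 0, "B": 0}                # observed failures per arm
    p = {"A": p_a, "B": p_b}            # true (unknown) success rates
    alloc = {"A": 0, "B": 0}
    for _ in range(n_subjects):
        draw = {arm: rng.betavariate(s[arm] + 1, f[arm] + 1) for arm in ("A", "B")}
        arm = max(draw, key=draw.get)   # allocate to the higher draw
        alloc[arm] += 1
        if rng.random() < p[arm]:
            s[arm] += 1
        else:
            f[arm] += 1
    return alloc, s, f
```

    Because allocation drifts toward the better arm, precision for estimating the inferior arm's effect degrades, which is the tension between estimation efficiency and subject outcome that the paper formalizes through its loss function.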

  19. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
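
    The 'well-balanced' requirement can be made concrete with a one-dimensional sketch: a first-order finite volume step using hydrostatic reconstruction and a Rusanov flux, which preserves the ocean-at-rest steady state exactly. This is a minimal stand-in for the 2D, adaptively refined, Riemann-solver-based schemes in GeoClaw; the function names and the first-order discretization are our own illustration, not the paper's code:

```python
import numpy as np

g = 9.81

def swe_step(h, q, b, dx, dt):
    """One first-order well-balanced step for the 1D shallow water
    equations over bathymetry b (hydrostatic reconstruction + Rusanov
    flux). Sketch only: boundaries, drying fronts, and AMR are omitted."""
    eps = 1e-12
    u = np.where(h > eps, q / np.maximum(h, eps), 0.0)
    eta = h + b                                   # free-surface elevation
    bI = np.maximum(b[:-1], b[1:])                # interface bathymetry
    hL = np.maximum(eta[:-1] - bI, 0.0)           # reconstructed depths
    hR = np.maximum(eta[1:] - bI, 0.0)
    qL, qR = hL * u[:-1], hR * u[1:]

    def phys(hh, qq):                             # physical flux (hu, hu^2 + g h^2/2)
        mom = np.where(hh > eps, qq**2 / np.maximum(hh, eps), 0.0) + 0.5 * g * hh**2
        return np.stack([qq, mom])

    c = np.maximum(np.abs(u[:-1]) + np.sqrt(g * hL),
                   np.abs(u[1:]) + np.sqrt(g * hR))
    F = 0.5 * (phys(hL, qL) + phys(hR, qR)) - 0.5 * c * np.stack([hR - hL, qR - qL])

    # pressure corrections that restore exact balance at rest
    pL = 0.5 * g * (h[:-1]**2 - hL**2)            # left cell of each interface
    pR = 0.5 * g * (h[1:]**2 - hR**2)             # right cell of each interface
    hn, qn = h.copy(), q.copy()
    hn[1:-1] -= dt / dx * (F[0, 1:] - F[0, :-1])
    qn[1:-1] -= dt / dx * ((F[1, 1:] + pL[1:]) - (F[1, :-1] + pR[:-1]))
    return hn, qn
```

    Starting from a flat surface over variable bathymetry with zero velocity, repeated calls leave the state unchanged to machine precision, which is exactly the property needed so that a centimetre-scale tsunami perturbation is not swamped by discretization error.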

  20. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. Its main parameter, alpha, is the power applied to the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using different indicators, such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter and is rarely optimal. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance compared to existing approaches.
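
    The basic operation that alpha controls can be sketched as follows: the interferogram spectrum is weighted by its own smoothed magnitude raised to the power alpha. This is a minimal single-patch version (practical implementations tile the interferogram into overlapping patches); the 3x3 spectral smoother and the max-normalization are illustrative choices:

```python
import numpy as np

def goldstein_filter(ifg, alpha=0.5):
    """Goldstein-style frequency-domain filtering of a complex
    interferogram patch: weight the spectrum by its smoothed
    magnitude raised to the power alpha."""
    Z = np.fft.fft2(ifg)
    S = np.abs(Z)
    # smooth the magnitude spectrum with a 3x3 boxcar (circular
    # convolution via FFT keeps the code short)
    kernel = np.ones((3, 3)) / 9.0
    S_smooth = np.real(np.fft.ifft2(np.fft.fft2(S) * np.fft.fft2(kernel, S.shape)))
    H = (np.abs(S_smooth) / np.abs(S_smooth).max()) ** alpha
    return np.fft.ifft2(Z * H)
```

    With alpha = 0 the data pass through unchanged; values near 1 filter strongly, which is why choosing alpha adaptively per area matters.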

  1. Design of fuzzy system by NNs and realization of adaptability

    NASA Technical Reports Server (NTRS)

    Takagi, Hideyuki

    1993-01-01

    The issue of designing and tuning fuzzy membership functions by neural networks (NN's) was started by NN-driven Fuzzy Reasoning in 1988. NN-driven fuzzy reasoning involves a NN embedded in the fuzzy system which generates membership values. In conventional fuzzy system design, the membership functions are hand-crafted by trial and error for each input variable. In contrast, NN-driven fuzzy reasoning considers several variables simultaneously and can design a multidimensional, nonlinear membership function for the entire subspace.
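
    The idea of a network generating multidimensional membership values can be shown minimally. The linear-plus-softmax layer below is an illustrative stand-in for the embedded NN in NN-driven fuzzy reasoning: it maps each multi-variable input point to membership grades over several rules at once, rather than hand-crafting one membership function per input variable:

```python
import numpy as np

def nn_memberships(x, W, b):
    """Map input points to membership grades over the fuzzy rules via a
    linear layer followed by softmax: memberships are multidimensional,
    nonlinear in the inputs after training, and sum to one per point."""
    z = x @ W + b
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```

    In a full system the weights would be trained from data, and the resulting membership surface partitions the entire input subspace jointly instead of one axis at a time.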

  2. Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map

    SciTech Connect

    Frankie Li, Shiu Fai

    2014-06-01

    IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator of diffraction images given an input microstructure.

  3. Single and Multiresponse Adaptive Design of Experiments with Application to Design Optimization of Novel Heat Exchangers

    DTIC Science & Technology

    2009-01-01

    ...parts to this research thrust. First is a new multi-level performance evaluation method for air-cooled heat exchangers in which conventional 3D Computational Fluid Dynamics (CFD) simulation is replaced with a 2D... performance of a novel air-cooled heat exchanger such as tube-fin or microchannels. The novel aspect generally refers to a new fin design or a tube...

  4. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
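
    The approximately-linear-dependence (ALD) sparsification step mentioned above can be sketched concisely: a sample joins the dictionary only if its feature-space image cannot be approximated by the current dictionary within a tolerance. The RBF kernel, jitter term, and threshold value are illustrative choices, not the paper's settings:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b)**2) / (2 * sigma**2))

def ald_dictionary(samples, nu=0.1, sigma=1.0):
    """Build a sparse dictionary with the ALD test: for each candidate x,
    the squared feature-space residual of the best approximation by the
    current dictionary is delta = k(x,x) - k^T K^{-1} k; x is kept only
    if delta exceeds the tolerance nu."""
    dic = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b, sigma) for b in dic] for a in dic])
        k = np.array([rbf(a, x, sigma) for a in dic])
        c = np.linalg.solve(K + 1e-10 * np.eye(len(dic)), k)  # jitter for stability
        delta = rbf(x, x, sigma) - k @ c
        if delta > nu:
            dic.append(x)
    return np.array(dic)
```

    Keeping the dictionary small in this way is what bounds the cost of each kernel critic update while retaining the representation power the abstract credits for KHDP/KDHP's performance.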

  5. Load-Adapted Design of Generative Manufactured Lattice Structures

    NASA Astrophysics Data System (ADS)

    Reinhart, Gunther; Teufelhart, Stefan

    Additive layer manufacturing offers many opportunities for the production of lightweight components because of the high geometrical freedom that can be realized in comparison to conventional manufacturing processes. This potential is demonstrated using the example of a bending beam, for which a topology optimization is performed as well as the use of periodically arranged lattice structures. The latter show the constraint that shear forces in the struts reduce the stiffness of the lattice. To avoid this, the structure has to be adapted to the flux of force. This claim is supported by studies on a torque-loaded shaft.

  6. Adapting the Mathematical Task Framework to Design Online Didactic Objects

    ERIC Educational Resources Information Center

    Bowers, Janet; Bezuk, Nadine; Aguilar, Karen

    2011-01-01

    Designing didactic objects involves imagining how students can conceive of specific mathematical topics and then imagining what types of classroom discussions could support these mental constructions. This study investigated whether it was possible to design Java applets that might serve as didactic objects to support online learning where…

  7. A high-throughput multiplex method adapted for GMO detection.

    PubMed

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  8. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  9. Using engineering control principles to inform the design of adaptive interventions: a conceptual introduction.

    PubMed

    Rivera, Daniel E; Pew, Michael D; Collins, Linda M

    2007-05-01

    The goal of this paper is to describe the role that control engineering principles can play in developing and improving the efficacy of adaptive, time-varying interventions. It is demonstrated that adaptive interventions constitute a form of feedback control system in the context of behavioral health. Consequently, drawing from ideas in control engineering has the potential to significantly inform the analysis, design, and implementation of adaptive interventions, leading to improved adherence, better management of limited resources, a reduction of negative effects, and overall more effective interventions. This article illustrates how to express an adaptive intervention in control engineering terms, and how to use this framework in a computer simulation to investigate the anticipated impact of intervention design choices on efficacy. The potential benefits of operationalizing decision rules based on control engineering principles are particularly significant for adaptive interventions that involve multiple components or address co-morbidities, situations that pose significant challenges to conventional clinical practice.
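
    The feedback-control view of an adaptive intervention can be made concrete with a toy simulation. The decision rule, outcome dynamics, and coefficients below are invented for illustration and are not taken from the article; they show only the structure: measure the outcome, compare to a target, and adjust intervention intensity by a rule:

```python
def simulate_intervention(target, k_p, n_weeks=30):
    """Adaptive intervention as a feedback loop (illustrative):
    each week the intervention 'dose' is adjusted in proportion to
    the gap between the target outcome and the measured outcome."""
    outcome, dose, history = 0.0, 0.0, []
    for _ in range(n_weeks):
        error = target - outcome
        dose = max(0.0, dose + k_p * error)   # decision rule (non-negative dose)
        outcome = 0.8 * outcome + 0.5 * dose  # assumed participant response dynamics
        history.append(outcome)
    return history
```

    Simulations of this kind let the designer see, before fielding the intervention, how the choice of adjustment rule trades off speed of response against overshoot and resource use.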

  10. Design of smart composite platforms for adaptive thrust vector control and adaptive laser telescope for satellite applications

    NASA Astrophysics Data System (ADS)

    Ghasemi-Nejhad, Mehrdad N.

    2013-04-01

    This paper presents the design of smart composite platforms for adaptive thrust vector control (TVC) and an adaptive laser telescope for satellite applications. To eliminate disturbances, the proposed adaptive TVC and telescope systems will be mounted on two analogous smart composite platforms with simultaneous precision positioning (pointing) and vibration suppression (stabilizing), SPPVS, with micro-radian pointing resolution, and then mounted on a satellite in two different locations. The adaptive TVC system provides SPPVS with large tip-tilt to potentially eliminate the gimbals systems. The smart composite telescope will be mounted on a smart composite platform with SPPVS and then mounted on a satellite. The laser communication is intended for the geosynchronous orbit. The high degree of directionality increases the security of the laser communication signal (as opposed to a diffused RF signal), but also requires sophisticated subsystems for transmission and acquisition. The shorter wavelength of the optical spectrum increases the data transmission rates, but laser systems require large amounts of power, which increases the mass and complexity of the supporting systems. In addition, laser communication in geosynchronous orbit requires an accurate platform with SPPVS capabilities. Therefore, this work also addresses the design of an active composite platform to be used to simultaneously point and stabilize an intersatellite laser communication telescope with micro-radian pointing resolution. The telescope is a Cassegrain receiver that employs two mirrors, one concave (primary) and the other convex (secondary). The distance, as well as the horizontal and axial alignment of the mirrors, must be precisely maintained or else the optical properties of the system will be severely degraded. The alignment will also have to be maintained during thruster firings, which will require vibration suppression capabilities of the system as well. The innovative platform has been

  11. Designing Forest Adaptation Experiments through Manager-Scientist Partnerships

    NASA Astrophysics Data System (ADS)

    Nagel, L. M.; Swanston, C.; Janowiak, M.

    2014-12-01

    Three common forest adaptation options discussed in the context of an uncertain future climate are: creating resistance, promoting resilience, and enabling forests to respond to change. Though there is consensus on the broad management goals addressed by each of these options, translating these concepts into management plans specific for individual forest types that vary in structure, composition, and function remains a challenge. We will describe a decision-making framework that we employed within a manager-scientist partnership to develop a suite of adaptation treatments for two contrasting forest types as part of a long-term forest management experiment. The first, in northern Minnesota, is a red pine-dominated forest with components of white pine, aspen, paper birch, and northern red oak, with a hazel understory. The second, in southwest Colorado, is a warm-dry mixed conifer forest dominated by ponderosa pine, white fir, and Douglas-fir, with scattered aspen and an understory of Gambel oak. The current conditions at both sites are characterized by overstocking with moderate-to-high fuel loading, vulnerability to numerous forest health threats, and are generally uncharacteristic of historic structure and composition. The desired future condition articulated by managers for each site included elements of historic structure and natural range of variability, but were greatly tempered by known vulnerabilities and projected changes to climate and disturbance patterns. The resultant range of treatments we developed are distinct for each forest type, and address a wide range of management objectives.

  12. Adaptive optoelectronic camouflage systems with designs inspired by cephalopod skins.

    PubMed

    Yu, Cunjiang; Li, Yuhang; Zhang, Xun; Huang, Xian; Malyarchuk, Viktor; Wang, Shuodao; Shi, Yan; Gao, Li; Su, Yewang; Zhang, Yihui; Xu, Hangxun; Hanlon, Roger T; Huang, Yonggang; Rogers, John A

    2014-09-09

    Octopus, squid, cuttlefish, and other cephalopods exhibit exceptional capabilities for visually adapting to or differentiating from the coloration and texture of their surroundings, for the purpose of concealment, communication, predation, and reproduction. Long-standing interest in and emerging understanding of the underlying ultrastructure, physiological control, and photonic interactions has recently led to efforts in the construction of artificial systems that have key attributes found in the skins of these organisms. Despite several promising options in active materials for mimicking biological color tuning, existing routes to integrated systems do not include critical capabilities in distributed sensing and actuation. Research described here represents progress in this direction, demonstrated through the construction, experimental study, and computational modeling of materials, device elements, and integration schemes for cephalopod-inspired flexible sheets that can autonomously sense and adapt to the coloration of their surroundings. These systems combine high-performance, multiplexed arrays of actuators and photodetectors in laminated, multilayer configurations on flexible substrates, with overlaid arrangements of pixelated, color-changing elements. The concepts provide realistic routes to thin sheets that can be conformally wrapped onto solid objects to modulate their visual appearance, with potential relevance to consumer, industrial, and military applications.

  13. Adaptive optoelectronic camouflage systems with designs inspired by cephalopod skins

    PubMed Central

    Yu, Cunjiang; Li, Yuhang; Zhang, Xun; Huang, Xian; Malyarchuk, Viktor; Wang, Shuodao; Shi, Yan; Gao, Li; Su, Yewang; Zhang, Yihui; Xu, Hangxun; Hanlon, Roger T.; Huang, Yonggang; Rogers, John A.

    2014-01-01

    Octopus, squid, cuttlefish, and other cephalopods exhibit exceptional capabilities for visually adapting to or differentiating from the coloration and texture of their surroundings, for the purpose of concealment, communication, predation, and reproduction. Long-standing interest in and emerging understanding of the underlying ultrastructure, physiological control, and photonic interactions has recently led to efforts in the construction of artificial systems that have key attributes found in the skins of these organisms. Despite several promising options in active materials for mimicking biological color tuning, existing routes to integrated systems do not include critical capabilities in distributed sensing and actuation. Research described here represents progress in this direction, demonstrated through the construction, experimental study, and computational modeling of materials, device elements, and integration schemes for cephalopod-inspired flexible sheets that can autonomously sense and adapt to the coloration of their surroundings. These systems combine high-performance, multiplexed arrays of actuators and photodetectors in laminated, multilayer configurations on flexible substrates, with overlaid arrangements of pixelated, color-changing elements. The concepts provide realistic routes to thin sheets that can be conformally wrapped onto solid objects to modulate their visual appearance, with potential relevance to consumer, industrial, and military applications. PMID:25136094

  14. Multidisciplinary Optimization Methods for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Weston, R. P.; Zang, T. A.

    1997-01-01

    An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.

  15. Method for designing and controlling compliant gripper

    NASA Astrophysics Data System (ADS)

    Spanu, A. R.; Besnea, D.; Avram, M.; Ciobanu, R.

    2016-08-01

    Compliant grippers are useful for high-accuracy grasping of small objects with adaptive control of the contact points along the active surfaces of the fingers. Spatial trajectories of the elements have become essential due to the development of MEMS. The paper presents the solution for the compliant gripper designed by the authors; both planar and spatial movements are discussed. At the beginning of the process, the gripper can work as a passive one until the moment it reaches the object surface. The forces provided by the elements must avoid damaging the object. As part of the system, a camera takes a picture of the object in order to facilitate the positioning of the system. Once contact is established, the mechanism acts as an active gripper, driven by an electrical stepper motor with controlled movement.

  16. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass
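
    The "design subcontrollers independently, then assemble" idea can be sketched in a few lines. The discrete-time Riccati value iteration and block-diagonal assembly below are a simplified stand-in for the paper's continuous-time LQ design and substructural controller synthesis; in particular, interface coupling between substructures is ignored here, whereas the paper's synthesis step assembles subcontrollers the same way the structure matrices are assembled:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQ gain by fixed-point iteration of the Riccati
    equation (a stand-in for the LQ design applied to each reduced
    substructure model)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def decentralized_gains(subsystems, Q, R):
    """Design one LQ gain per substructure independently, then place the
    gains on the block diagonal of a global gain matrix."""
    gains = [dlqr(A, B, Q, R) for A, B in subsystems]
    n = sum(K.shape[1] for K in gains)
    m = sum(K.shape[0] for K in gains)
    Kg = np.zeros((m, n))
    r = c = 0
    for K in gains:
        Kg[r:r + K.shape[0], c:c + K.shape[1]] = K
        r += K.shape[0]
        c += K.shape[1]
    return Kg
```

    Because each Riccati problem is only substructure-sized, the computational burden grows with the largest substructure rather than with the full model, which is the dimensionality relief the abstract describes.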

  17. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    PubMed

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS value drop at all spatial frequencies (similar to the NPS change caused by a dose increase), the conventional method cannot evaluate the noise property correctly because it does not account for the volume-data nature of CT images. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT (Acquilion One, Toshiba). From the results of the study, the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
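
    The underlying 2D NPS estimate that such methods build on can be sketched as follows. The ensemble-of-ROIs approach, mean detrending, and normalization convention are standard practice, but the function itself is our illustration, not the authors' code; the ROIs would be taken from thick MPR images of the uniform water phantom:

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Estimate the 2D noise power spectrum from an ensemble of square
    noise ROIs: NPS(u,v) = (dx*dy / (Nx*Ny)) * <|DFT(ROI - mean)|^2>."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove per-ROI mean
    ps = np.abs(np.fft.fft2(rois))**2                    # per-ROI power spectra
    return ps.mean(axis=0) * pixel_mm**2 / (nx * ny)
```

    A useful sanity check of the normalization is Parseval's relation: integrating the NPS over spatial frequency should return the pixel variance.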

  18. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls.

    PubMed

    Bauer, Peter; Bretz, Frank; Dragalin, Vladimir; König, Franz; Wassmer, Gernot

    2016-02-10

    'Multistage testing with adaptive designs' was the title of an article by Peter Bauer that appeared in 1989 in the German journal Biometrie und Informatik in Medizin und Biologie. The journal no longer exists, but the methodology found widespread interest in the scientific community over the past 25 years. The use of such multistage adaptive designs raised many controversial discussions from the beginning, especially after the 1994 publication by Bauer and Köhne in Biometrics: broad enthusiasm about potential applications of such designs faced critical positions regarding their statistical efficiency. Despite, or possibly because of, this controversy, the methodology and its areas of application grew steadily over the years, with significant contributions from statisticians working in academia, industry and agencies around the world. In the meantime, this type of adaptive design has become the subject of two major regulatory guidance documents in the US and Europe and the field is still evolving. Developments are particularly noteworthy in the most important applications of adaptive designs, including sample size reassessment, treatment selection procedures, and population enrichment designs. In this article, we summarize the developments over the past 25 years from different perspectives. We provide a historical overview of the early days, review the key methodological concepts and summarize regulatory and industry perspectives on such designs. Then, we illustrate the application of adaptive designs with three case studies, including unblinded sample size reassessment, adaptive treatment selection, and adaptive endpoint selection. We also discuss the availability of software for evaluating and performing such designs. We conclude with a critical review of how expectations from the beginning were fulfilled, and - if not - discuss potential reasons why this did not happen.

  19. Analysis Method for Quantifying Vehicle Design Goals

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design by dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. This approach also weighs operational approaches based on their effect on upstream design variables so that it is possible to readily, yet defensively, establish linkages between operations and these upstream variables. To avoid the large range of problems that have defeated previous methods of dealing with the complex problems of transportation design, and to cut down the inefficient use of resources, the method described in the document identifies those areas that are of sufficient promise and that provide a higher grade of analysis for those issues, as well as the linkages at issue between operations and other factors. Ultimately, the system is designed to save resources and time, and allows for the evolution of operable space transportation system technology, and design and conceptual system approach targets.

  20. Axisymmetric inlet minimum weight design method

    NASA Technical Reports Server (NTRS)

    Nadell, Shari-Beth

    1995-01-01

    An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.

  1. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs that were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the overall conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
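
    As a toy version of this kind of study, the sketch below searches a coarse grid over the three design parameters (battery weight, heat engine rating, power split) for the design minimizing a smooth life-cycle-cost surrogate, subject to a peak-power feasibility constraint. Every function, coefficient, and unit here is invented for illustration; the paper's actual cost models and vehicle simulation are far richer:

    ```python
    import itertools

    def life_cycle_cost(battery_kg, engine_kw, split):
        """Hypothetical smooth cost surrogate: a bowl around a nominal
        design (250 kg battery, 40 kW engine, 0.6 power split)."""
        return ((battery_kg - 250.0) ** 2 / 100.0
                + (engine_kw - 40.0) ** 2
                + 50.0 * (split - 0.6) ** 2)

    def feasible(battery_kg, engine_kw, peak_demand_kw=90.0):
        """Constraint: the two on-board sources together must meet a peak
        power demand (battery power assumed proportional to its weight)."""
        return 0.2 * battery_kg + engine_kw >= peak_demand_kw

    def grid_search():
        best, best_cost = None, float("inf")
        for b, e, s in itertools.product(range(200, 301, 10),
                                         range(30, 61, 5),
                                         [0.4, 0.5, 0.6, 0.7]):
            if not feasible(b, e):
                continue  # reject designs that cannot meet power demand
            c = life_cycle_cost(b, e, s)
            if c < best_cost:
                best, best_cost = (b, e, s), c
        return best, best_cost
    ```

    Keeping the cost and constraint expressions smooth, as the abstract's fourth conclusion recommends, is what lets gradient-based optimizers replace this brute-force grid on realistic problems.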

  2. Studying the neural bases of prism adaptation using fMRI: A technical and design challenge.

    PubMed

    Bultitude, Janet H; Farnè, Alessandro; Salemme, Romeo; Ibarrola, Danielle; Urquizar, Christian; O'Shea, Jacinta; Luauté, Jacques

    2016-12-30

    Prism adaptation induces rapid recalibration of visuomotor coordination. The neural mechanisms of prism adaptation have come under scrutiny since the observations that the technique can alleviate hemispatial neglect following stroke, and can alter spatial cognition in healthy controls. Relative to non-imaging behavioral studies, fMRI investigations of prism adaptation face several challenges arising from the confined physical environment of the scanner and the supine position of the participants. Any researcher who wishes to administer prism adaptation in an fMRI environment must adjust their procedures enough to enable the experiment to be performed, but not so much that the behavioral task departs too much from true prism adaptation. Furthermore, the specific temporal dynamics of behavioral components of prism adaptation present additional challenges for measuring their neural correlates. We developed a system for measuring the key features of prism adaptation behavior within an fMRI environment. To validate our configuration, we present behavioral (pointing) and head movement data from 11 right-hemisphere lesioned patients and 17 older controls who underwent sham and real prism adaptation in an MRI scanner. Most participants could adapt to prismatic displacement with minimal head movements, and the procedure was well tolerated. We propose recommendations for fMRI studies of prism adaptation based on the design-specific constraints and our results.

  3. Optical design and active optics methods in astronomy

    NASA Astrophysics Data System (ADS)

    Lemaitre, Gerard R.

    2013-03-01

    Optical designs for astronomy involve the implementation of active optics and adaptive optics from the X-ray to the infrared. Developments and results of active optics methods for telescopes, spectrographs and coronagraphic planet finders are presented. The high accuracy and remarkable smoothness of surfaces generated by active optics methods also allow the elaboration of new optical design types with highly aspheric and/or non-axisymmetric surfaces. Depending on the goal and the performance requested for a deformable optical surface, analytical investigations are carried out with one of the various facets of elasticity theory: small-deformation thin-plate theory, large-deformation thin-plate theory, shallow spherical shell theory, or weakly conical shell theory. The resulting thickness distribution and associated bending force boundaries can be refined further with finite element analysis.

  4. Water Infrastructure Adaptation in New Urban Design: Possibilities and Constraints

    EPA Science Inventory

    Natural constraints, including climate change and dynamic socioeconomic development, can significantly impact the way we plan, design, and operate water infrastructure, and thus its ability to deliver reliable, quality water supplies and comply with environmental regulations. ...

  5. A modified method for COD determination of solid waste, using a commercial COD kit and an adapted disposable weighing support.

    PubMed

    André, L; Pauss, A; Ribeiro, T

    2017-03-01

    The chemical oxygen demand (COD) is an essential parameter in waste management, particularly when monitoring wet anaerobic digestion processes. An adapted method to determine COD was developed for solid waste (total solids >15%). This method used commercial COD tubes and did not require sample dilution. A homemade plastic weighing support was used to transfer the solid sample into the COD tubes. Potassium hydrogen phthalate and glucose, used as standards, showed excellent repeatability. A small underestimation of the theoretical COD value (standard values around 5% lower than theoretical values) was also observed, mainly due to the intrinsic COD of the weighing support and to measurement uncertainties. The adapted COD method was tested in the range of 1-8 mg COD on various solid wastes, determining the COD of dried and ground cellulose, cattle manure, straw and a mixed-substrate sample. This new adapted method could be used to monitor and design dry anaerobic digestion processes.

  6. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data input, manipulation, and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver that automatically and adaptively controls wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly and in real time via available networks and network services.

  7. Adaptive neural control design for nonlinear distributed parameter systems with persistent bounded disturbances.

    PubMed

    Wu, Huai-Ning; Li, Han-Xiong

    2009-10-01

    In this paper, an adaptive neural network (NN) control with a guaranteed L(infinity)-gain performance is proposed for a class of parabolic partial differential equation (PDE) systems with unknown nonlinearities and persistent bounded disturbances. Initially, the Galerkin method is applied to the PDE system to derive a low-order ordinary differential equation (ODE) system that accurately describes the dynamics of the dominant (slow) modes of the PDE system. Subsequently, based on the low-order slow model and the Lyapunov technique, an adaptive modal feedback controller is developed such that the closed-loop slow system is semiglobally input-to-state practically stable (ISpS) with an L(infinity)-gain performance. In the proposed control scheme, a radial basis function (RBF) NN is employed to approximate the unknown term in the derivative of the Lyapunov function due to the unknown system nonlinearities. The outcome of the adaptive L(infinity)-gain control problem is formulated as a linear matrix inequality (LMI) problem. Moreover, by using the existing LMI optimization technique, a suboptimal controller is obtained in the sense of minimizing an upper bound of the L(infinity)-gain, while control constraints are respected. Furthermore, it is shown that the proposed controller can ensure the semiglobal input-to-state practical stability and L(infinity)-gain performance of the closed-loop PDE system. Finally, by applying the developed design method to the temperature profile control of a catalytic rod, simulation results show the effectiveness of the proposed controller.
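
    The Galerkin reduction step described above can be shown in miniature. The sketch below projects an initial profile onto the sine modes of the 1-D heat equation u_t = u_xx on (0, π) with zero boundary conditions, whose modes decouple into the slow ODEs da_k/dt = -k²·a_k. This is a standard textbook reduction used here only to illustrate the PDE-to-ODE step, not the paper's catalytic-rod model:

    ```python
    import math

    def galerkin_coeffs(u0, n_modes, n_grid=200):
        """Project an initial profile u0(x) onto the first sine modes of the
        1-D heat equation on (0, pi): a_k = (2/pi) * integral u0(x) sin(kx) dx,
        approximated with the midpoint rule on n_grid points."""
        dx = math.pi / n_grid
        xs = [(i + 0.5) * dx for i in range(n_grid)]
        return [(2.0 / math.pi) * dx * sum(u0(x) * math.sin(k * x) for x in xs)
                for k in range(1, n_modes + 1)]

    def evolve_modes(a0, t):
        """Exact solution of the reduced slow ODE system da_k/dt = -k^2 * a_k."""
        return [a * math.exp(-((k + 1) ** 2) * t) for k, a in enumerate(a0)]
    ```

    Because the higher modes decay like exp(-k²t), truncating to a few dominant modes gives the accurate low-order model on which the controller design can then proceed.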

  8. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and an important element named the pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexity associated with characterizing pseudoknotted RNA structures, most existing RNA sequence design algorithms ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences designed with Enzymer against the results obtained from the state-of-the-art tools MODENA and antaRNA. Our results show that our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
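
    The "ensemble defect" objective used above has a simple closed form once equilibrium base-pair probabilities are available: it is the expected number of nucleotides whose pairing state differs from the target structure. The sketch below computes it from a hypothetical, precomputed dictionary of pair probabilities; in practice a thermodynamic engine such as NUPACK supplies these probabilities, and pseudoknots require its extended algorithms:

    ```python
    def ensemble_defect(n, target_pairs, pair_probs):
        """Expected number of incorrectly paired/unpaired nucleotides.

        n            -- sequence length
        target_pairs -- set of (i, j) base pairs (i < j) in the target structure
        pair_probs   -- dict mapping (i, j) to its equilibrium probability
        """
        paired = {i for ij in target_pairs for i in ij}
        defect = 0.0
        for i, j in target_pairs:
            # each missing target pair contributes two misplaced nucleotides
            defect += 2.0 * (1.0 - pair_probs.get((i, j), 0.0))
        for i in range(n):
            if i not in paired:
                # probability that an intended-unpaired base is actually paired
                p_paired = sum(p for (a, b), p in pair_probs.items() if i in (a, b))
                defect += p_paired
        return defect
    ```

    A designer like the one described above samples mutations weighted toward the positions contributing most to this defect, which is what "defect weighted sampling" refers to.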

  9. Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.

    PubMed

    Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2013-01-01

    A new adaptive L₁/₂ shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. The adaptive L₁/₂ shooting algorithm can be easily implemented as the optimization of a reweighted iterative series of L₁ penalties with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
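
    To make the "shooting" (coordinate-descent) idea concrete, the sketch below applies it to a plain least-squares Lasso and then approximates the L₁/₂ penalty by iteratively reweighting the per-coordinate L₁ penalties, as the abstract describes. This is a simplified Gaussian stand-in for the paper's Cox partial-likelihood setting, with made-up data; the reweighting rule lam_j = lam / (|beta_j|^(1/2) + eps) is one standard choice, assumed here:

    ```python
    def soft_threshold(z, g):
        """Soft-thresholding: the closed-form minimizer of each coordinate."""
        if z > g:
            return z - g
        if z < -g:
            return z + g
        return 0.0

    def shooting(X, y, lams, n_iter=100):
        """Coordinate-descent ('shooting') Lasso with per-coordinate
        penalties lams[j].  X is a list of rows, y a list of responses."""
        n, p = len(X), len(X[0])
        beta = [0.0] * p
        for _ in range(n_iter):
            for j in range(p):
                # correlation of column j with the partial residual
                rho = sum(X[i][j] * (y[i]
                          - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                          for i in range(n))
                zjj = sum(X[i][j] ** 2 for i in range(n))
                beta[j] = soft_threshold(rho, lams[j]) / zjj
        return beta

    def adaptive_l12(X, y, lam, rounds=3, eps=1e-6):
        """Reweighted-L1 approximation of the L1/2 penalty."""
        p = len(X[0])
        beta = shooting(X, y, [lam] * p)
        for _ in range(rounds):
            lams = [lam / (abs(b) ** 0.5 + eps) for b in beta]
            beta = shooting(X, y, lams)
        return beta
    ```

    The reweighting shrinks large coefficients less and zeroed coefficients more aggressively than plain Lasso, which is the mechanism behind the sharper variable selection reported in the abstract.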

  10. Intelligent Adaptive Interface: A Design Tool for Enhancing Human-Machine System Performances

    DTIC Science & Technology

    2009-10-01

    ...technical systems, there is still a lack of well-established design guidelines for these human-machine systems, especially for advanced operator... Additionally, a lack of integration between the Human Factors (HF) and Human Computer Interaction (HCI) domains has increased the tendency for terminology...

  11. Regulatory perspectives on multiplicity in adaptive design clinical trials throughout a drug development program.

    PubMed

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2011-07-01

    A clinical research program for drug development often consists of a sequence of clinical trials that may begin with uncontrolled and nonrandomized trials, followed by randomized trials or randomized controlled trials. Adaptive designs are not infrequently proposed for use. In the regulatory setting, the success of a drug development program can be defined to be that the experimental treatment at a specific dose level including regimen and frequency is approved based on replicated evidence from at least two confirmatory trials. In the early stage of clinical research, multiplicity issues are very broad. What is the maximum tolerable dose in an adaptive dose escalation trial? What should the dose range be to consider in an adaptive dose-ranging trial? What is the minimum effective dose in an adaptive dose-response study given the tolerability and the toxicity observable in short term or premarketing trials? Is establishing the dose-response relationship important or the ability to select a superior treatment with high probability more important? In the later stage of clinical research, multiplicity problems can be formulated with better focus, depending on whether the study is for exploration to estimate or select design elements or for labeling consideration. What is the study objective for an early-phase versus a later phase adaptive clinical trial? How many doses are to be studied in the early exploratory adaptive trial versus in the confirmatory adaptive trial? Is the intended patient population well defined or is the applicable patient population yet to be adaptively selected in the trial due to the potential patient and/or disease heterogeneity? Is the primary efficacy endpoint well defined or still under discussion providing room for adaptation? What are the potential treatment indications that may adaptively lead to an intended-to-treat patient population and the primary efficacy endpoint? In this work we stipulate the multiplicity issues with adaptive

  12. Adaptive urn designs for estimating several percentiles of a dose-response curve.

    PubMed

    Mugno, Raymond; Zhu, Wei; Rosenberger, William F

    2004-07-15

    Dose-response experiments are crucial in biomedical studies. There are usually multiple objectives in such experiments, and among the goals is the estimation of several percentiles of the dose-response curve. Here we present the first non-parametric adaptive design approach to estimate several percentiles simultaneously via generalized Pólya urns. Theoretical properties of these designs are investigated, and their performance is gauged against the locally compound optimal designs. As an example, we re-investigated a psychophysical experiment where one of the goals was to estimate the three quartiles. We show that these multiple-objective adaptive designs are more efficient than the original single-objective adaptive design targeting the median only. We also show that urn designs which target the optimal designs are slightly more efficient than those which target the desired percentiles directly. Guidelines are given as to when to use which type of design. Overall we are pleased with the efficiency results and hope the compound adaptive designs proposed in this work, or their variants, may prove to be a viable non-parametric alternative in multiple-objective dose-response studies.
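
    As a toy illustration of the urn idea, the simulation below uses a simple generalized-urn allocation on a discrete dose grid: each observed response adds a ball one dose down, each non-response one dose up, so sampling mass accumulates near the median of an assumed logistic dose-response curve. This is an invented miniature in the spirit of the paper, not its actual multi-percentile design; the logistic curve, dose grid, and update rule are all illustrative assumptions:

    ```python
    import math
    import random

    def response_prob(dose, ed50=5.0, slope=1.0):
        """Assumed logistic dose-response curve (illustrative only)."""
        return 1.0 / (1.0 + math.exp(-slope * (dose - ed50)))

    def urn_median_estimate(n_subjects=3000, doses=range(1, 10), seed=0):
        rng = random.Random(seed)
        doses = list(doses)
        urn = {d: 1.0 for d in doses}      # start with one ball per dose
        allocations = []
        for _ in range(n_subjects):
            # draw a dose with probability proportional to its ball count
            total = sum(urn.values())
            r, cum = rng.random() * total, 0.0
            for d in doses:
                cum += urn[d]
                if r <= cum:
                    dose = d
                    break
            allocations.append(dose)
            # adaptive update: responses push allocation down, non-responses up
            if rng.random() < response_prob(dose):
                urn[max(dose - 1, doses[0])] += 1.0
            else:
                urn[min(dose + 1, doses[-1])] += 1.0
        tail = allocations[n_subjects // 2:]   # discard burn-in half
        return sum(tail) / len(tail)           # crude median-dose estimate
    ```

    Targeting several percentiles at once, as the paper does, amounts to running coupled urn updates whose stationary allocations sit at each desired quantile rather than only at the median.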

  13. Computer-Aided Drug Design Methods.

    PubMed

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments and so expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for the targets' biological functions. Such information can then be utilized to design antibiotic drugs that compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands of a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR); this information can be used for optimization of known drugs or to guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD are presented, with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discovery.

  14. A modified varying-stage adaptive phase II/III clinical trial design.

    PubMed

    Dong, Gaohong; Vandemeulebroecke, Marc

    2016-07-01

    Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made; the number of further investigational stages is thus determined from the data accumulated at the interim analysis. The original design considers two plausible study endpoints, with one of them initially designated as the primary endpoint; based on interim results, the other endpoint can be switched in as the primary endpoint. However, in many therapeutic areas the primary study endpoint is well established. We therefore modify the design to consider only one study endpoint, so that it may be more readily applicable in real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio between the two-stage and three-stage settings have a great impact on the design characteristics. However, the modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.

  15. MAST Propellant and Delivery System Design Methods

    NASA Technical Reports Server (NTRS)

    Nadeem, Uzair; Mc Cleskey, Carey M.

    2015-01-01

    A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is undergoing investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths for a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.

  16. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code, which will be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.

  17. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  18. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
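
    The adaptive DPCM scheme named above can be sketched in a few lines: a previous-sample predictor, a uniform quantizer on the prediction residual, and a Jayant-style step-size adaptation that the decoder replicates from the code stream alone. The multiplier values and step bounds below are illustrative assumptions, and a real 2-D scanner codec would predict from neighboring pixels rather than only the previous sample:

    ```python
    def jayant_step(step, q, lo=1.0, hi=16.0):
        """Adapt the quantizer step from the last code: expand after large
        residuals, contract after small ones, clamped to [lo, hi]."""
        m = 1.5 if abs(q) > 1 else 0.8
        return min(hi, max(lo, step * m))

    def adpcm_encode(samples, step0=2.0):
        pred, step, codes = 0.0, step0, []
        for s in samples:
            q = round((s - pred) / step)   # quantized prediction residual
            codes.append(q)
            pred += q * step               # decoder-trackable reconstruction
            step = jayant_step(step, q)
        return codes

    def adpcm_decode(codes, step0=2.0):
        pred, step, out = 0.0, step0, []
        for q in codes:
            pred += q * step
            out.append(pred)
            step = jayant_step(step, q)
        return out
    ```

    Because the encoder's predictor is driven by the reconstructed (quantized) signal rather than the original, encoder and decoder stay in lockstep, and the per-sample error is bounded by half the current quantizer step.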

  19. Statistical Methods in Algorithm Design and Analysis.

    ERIC Educational Resources Information Center

    Weide, Bruce W.

    The use of statistical methods in the design and analysis of discrete algorithms is explored. The introductory chapter contains a literature survey and background material on probability theory. In Chapter 2, probabilistic approximation algorithms are discussed with the goal of exposing and correcting some oversights in previous work. Chapter 3…

  20. Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems

    DTIC Science & Technology

    2009-03-01


  1. Context-Aware Design for Process Flexibility and Adaptation

    ERIC Educational Resources Information Center

    Yao, Wen

    2012-01-01

    Today's organizations face continuous and unprecedented changes in their business environment. Traditional process design tools tend to be inflexible and can only support rigidly defined processes (e.g., order processing in the supply chain). This considerably restricts their real-world application value, especially in the dynamic and…

  2. Adapting Wood Technology to Teach Design and Engineering

    ERIC Educational Resources Information Center

    Rummel, Robert A.

    2012-01-01

    Technology education has changed dramatically over the last few years. The transition of industrial arts to technology education and more recently the pursuit of design and engineering has resulted in technology education teachers often needing to change their curriculum and course activities to meet the demands of a rapidly changing profession.…

  3. A Testlet Assembly Design for Adaptive Multistage Tests

    ERIC Educational Resources Information Center

    Luecht, Richard; Brumfield, Terry; Breithaupt, Krista

    2006-01-01

    This article describes multistage tests and some practical test development considerations related to the design and implementation of a multistage test, using the Uniform CPA (certified public accountant) Examination as a case study. The article further discusses the use of automated test assembly procedures in an operational context to produce…

  4. MURI: Adaptive Waveform Design for Full Spectral Dominance

    DTIC Science & Technology

    2011-03-11

    Glaser, Steffen J.; Luy, Burkhard. "Exploring the limits of broadband excitation and inversion: II. Rf-power optimized pulses," Journal of Magnetic Resonance... Luy, Burkhard; Glaser, Steffen J. "Linear phase slope in pulse design: application to coherence transfer," Journal of Magnetic Resonance 2008; 192(2):235.

  5. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.

    PubMed

    Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M

    2013-07-01

    Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms.
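
    The convergence measures named above are easy to state concretely. The sketch below tracks mean-squared error while a scalar decoder weight is pulled toward its optimal value by a smooth, exponentially weighted update (in the spirit of the smooth decoder updates the article highlights), together with a closed-form Gaussian KL divergence that could compare estimated and optimal decoder-output distributions. The update rule, rate, and values are illustrative, not the authors' algorithm:

    ```python
    import math

    def kl_gaussian(mu1, var1, mu2, var2):
        """KL divergence KL(N(mu1, var1) || N(mu2, var2)) in nats."""
        return 0.5 * (math.log(var2 / var1)
                      + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

    def smooth_updates(w0, w_star, rho=0.5, n_steps=10):
        """Exponentially smoothed decoder updates w <- (1-rho)*w + rho*w_hat,
        here with a noiseless batch estimate w_hat = w_star for clarity."""
        w, history = w0, []
        for _ in range(n_steps):
            w = (1.0 - rho) * w + rho * w_star
            history.append(w)
        return history

    def mse(w, w_star):
        return (w - w_star) ** 2
    ```

    With noiseless batch estimates the error contracts geometrically, w_t - w* = (1 - rho)^t (w_0 - w*), so the MSE trace decreases monotonically; checking such traces analytically before animal experiments is exactly the kind of pre-testing convergence analysis the article advocates.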

  6. Technical Data Exchange Software Tools Adapted to Distributed Microsatellite Design

    NASA Astrophysics Data System (ADS)

    Pache, Charly

    2002-01-01

    One critical issue in the distributed design of satellites is the collaborative work it requires. In particular, the exchange of data between the groups responsible for each subsystem can be complex and very time-consuming. The goal of this paper is to present a collaborative design tool, the SSETI Design Model (SDM), specifically developed to enable distributed satellite design. SDM is actually used in the ongoing Student Space Exploration & Technology Initiative (SSETI, www.sseti.net). SSETI is led by the European Space Agency (ESA) outreach office (http://www.estec.esa.nl/outreach) and involves student groups from all over Europe in the design, construction and launch of a microsatellite. The first part of this paper presents the current version of the SDM tool, a collection of linked Microsoft Excel worksheets, one for each subsystem. An overview of the project framework/structure is given, explaining the different actors, the flows between them, as well as the different types of data and the links - formulas - between data sets. Unified Modeling Language (UML) diagrams give an overview of the different parts. Then the SDM's functionalities, developed as VBA (Visual Basic for Applications) scripts, are introduced, as well as the interactive features, user interfaces and administration tools. The second part discusses the capabilities and limitations of SDM's current version. Taking these capabilities and limitations into account, the third part outlines the next version of SDM, a web-oriented, database-driven evolution of the current version. This new approach will enable real-time data exchange and processing between the different actors of the mission. Comprehensive UML diagrams will guide the audience through the entire modeling process of such a system. Trade-off simulation capabilities, security, reliability, and hardware and software issues will also be thoroughly discussed.

  7. Building Adaptive Capacity with the Delphi Method and Mediated Modeling for Water Quality and Climate Change Adaptation in Lake Champlain Basin

    NASA Astrophysics Data System (ADS)

    Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.

    2014-12-01

    Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors

  8. Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2004-01-01

    This paper describes ongoing testing of an adaptive control method to suppress high-frequency thermo-acoustic instabilities like those found in the lean-burning, low-emission combustors being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well-pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher-amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower frequency test rig. In addition, owing to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.

  9. Design of a shape adaptive airfoil actuated by a Shape Memory Alloy strip for airplane tail

    NASA Astrophysics Data System (ADS)

    Shirzadeh, R.; Raissi Charmacani, K.; Tabesh, M.

    2011-04-01

    Of the factors that mainly affect the efficiency of the wing in a specific flow regime, the shape of its airfoil cross section is the most significant. Airfoils are generally designed for a specific flight condition and, therefore, are not fully optimized in all flight conditions. It is very desirable to have an airfoil with the ability to change its shape based on the current regime. Shape memory alloy (SMA) actuators activate in response to changes in temperature and can recover their original configuration after being deformed. This study presents the development of a method to control the shape of an airfoil using SMA actuators. To predict the thermomechanical behavior of an SMA thin strip, a 3D incremental formulation of the SMA constitutive model is implemented in the FEA software package ABAQUS. The interactions between the airfoil structure and the SMA thin strip actuator are investigated. Also, the aerodynamic performance of a standard airfoil with a plain flap is compared with that of an adaptive airfoil.

  10. A Phase I/II adaptive design for heterogeneous groups with application to a stereotactic body radiation therapy trial.

    PubMed

    Wages, Nolan A; Read, Paul W; Petroni, Gina R

    2015-01-01

    Dose-finding studies that aim to evaluate the safety of single agents are becoming less common, and advances in clinical research have complicated the paradigm of dose finding in oncology. A class of more complex problems, such as targeted agents, combination therapies and stratification of patients by clinical or genetic characteristics, has created the need to adapt early-phase trial design to the specific type of drug being investigated and the corresponding endpoints. In this article, we describe the implementation of an adaptive design based on a continual reassessment method for heterogeneous groups, modified to coincide with the objectives of a Phase I/II trial of stereotactic body radiation therapy in patients with painful osseous metastatic disease. Operating characteristics of the Institutional Review Board approved design are demonstrated under various possible true scenarios via simulation studies.
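
The dose-selection logic of a continual reassessment method (CRM) can be illustrated with a minimal one-parameter sketch. The skeleton values, target rate, and parameter grid below are hypothetical, and the heterogeneous-groups modification used in the actual trial is not modeled:

```python
# One-parameter power-model CRM: p_i(a) = skeleton_i ** a, discrete prior on a.
skeleton = [0.05, 0.12, 0.25, 0.40]     # hypothetical prior toxicity guesses per dose
target = 0.25                            # target toxicity rate
grid = [0.2 * k for k in range(1, 41)]   # grid of model-parameter values a > 0
prior = [1.0 / len(grid)] * len(grid)    # uniform prior over the grid

def posterior(doses, tox):
    """Posterior weights on the grid, given dose indices and 0/1 toxicity outcomes."""
    post = []
    for a, pr in zip(grid, prior):
        lik = 1.0
        for d, y in zip(doses, tox):
            p = skeleton[d] ** a
            lik *= p if y else (1.0 - p)
        post.append(pr * lik)
    s = sum(post)
    return [w / s for w in post]

def recommend(doses, tox):
    """Dose whose posterior-mean toxicity is closest to the target."""
    post = posterior(doses, tox)
    est = [sum(w * (s ** a) for w, a in zip(post, grid)) for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

With three toxicities observed at the lowest dose the rule stays at dose 0; with ten clean outcomes at the highest dose it recommends dose 3.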

  11. Group-Work in the Design of Complex Adaptive Learning Strategies

    ERIC Educational Resources Information Center

    Mavroudi, Anna; Hadzilacos, Thanasis

    2013-01-01

    This paper presents a case study where twelve graduate students undertook the demanding role of the adaptive e-course developer and worked collaboratively on an authentic and complex design task in the context of open and distance tertiary education. The students had to work in groups in order to conceptualise and design a learning scenario for…

  12. A Framework for Adaptive Learning Design in a Web-Conferencing Environment

    ERIC Educational Resources Information Center

    Bower, Matt

    2016-01-01

    Many recent technologies provide the ability to dynamically adjust the interface depending on the emerging cognitive and collaborative needs of the learning episode. This means that educators can adaptively re-design the learning environment during the lesson, rather than purely relying on preemptive learning design thinking. Based on a…

  13. An adaptive Simon Two-Stage Design for Phase 2 studies of targeted therapies.

    PubMed

    Jones, Cheryl L; Holmgren, Eric

    2007-09-01

    The field of specialized medicine and clinical development programs for targeted cancer therapies are rapidly expanding. The proposed Phase 2 design allows for preliminary determination of efficacy that may be restricted to a particular sub-population defined by biomarker status (presence/absence). The design is an adaptation of the Simon Two-Stage Design. We provide examples where the adaptation can result in substantial savings in sample size and thus potentially lead to greater efficiency in decision making during the drug development process.
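
The operating characteristics of a Simon two-stage design (probability of early termination, expected sample size, rejection probability) follow from binomial tail sums. A sketch with illustrative design parameters, not values taken from the paper:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def pet(r1, n1, p):
    # probability of early termination: <= r1 responses in the first n1 patients
    return sum(binom_pmf(x, n1, p) for x in range(r1 + 1))

def reject_prob(r1, n1, r, n, p):
    # probability of declaring efficacy: > r1 responses at stage 1 and > r overall
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        tail = sum(binom_pmf(x2, n2, p) for x2 in range(max(0, r - x1 + 1), n2 + 1))
        total += binom_pmf(x1, n1, p) * tail
    return total

def expected_n(r1, n1, n, p):
    return n1 + (1 - pet(r1, n1, p)) * (n - n1)
```

The biomarker-based adaptation described in the abstract changes which sub-population these quantities are evaluated over; the binomial machinery itself is unchanged.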

  14. A new approach for designing self-organizing systems and application to adaptive control

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Zhang, Shi; Lin, Yueqing; Huang, Song

    1993-01-01

    There is tremendous interest in the design of intelligent machines capable of autonomous learning and skillful performance under complex environments. A major task in designing such systems is to make the system plastic and adaptive when presented with new and useful information and stable in response to irrelevant events. A great body of knowledge, based on neuro-physiological concepts, has evolved as a possible solution to this problem. Adaptive resonance theory (ART) is a classical example under this category. The system dynamics of an ART network is described by a set of differential equations with nonlinear functions. An approach for designing self-organizing networks characterized by nonlinear differential equations is proposed.
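
The ART dynamics mentioned above are stated as nonlinear differential equations; in the fast-learning limit they reduce to a match-and-commit loop. A minimal ART1 sketch for binary inputs (choice parameter beta, vigilance rho; an illustration of classical ART1, not the authors' proposed network):

```python
def art1(patterns, rho=0.7, beta=0.5):
    """Cluster binary patterns; returns category prototypes and assignments.
    Patterns are assumed nonzero (the vigilance test divides by |I|)."""
    cats, assign = [], []
    for I in patterns:
        norm_I = sum(I)
        # rank existing categories by the choice function T_j = |I & w_j| / (beta + |w_j|)
        scored = sorted(
            range(len(cats)),
            key=lambda j: -sum(a & b for a, b in zip(I, cats[j])) / (beta + sum(cats[j])),
        )
        for j in scored:
            match = sum(a & b for a, b in zip(I, cats[j]))
            if match / norm_I >= rho:               # vigilance test passes: resonance
                cats[j] = [a & b for a, b in zip(I, cats[j])]   # fast learning
                assign.append(j)
                break
        else:                                       # every category reset: commit a new one
            cats.append(list(I))
            assign.append(len(cats) - 1)
    return cats, assign
```

The vigilance test is what makes the network stable yet plastic: a poor match creates a new category instead of eroding an existing prototype.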

  15. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

    The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost-savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
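
The two-microphone method referenced above recovers the reflection factor from the transfer function H12 between two wall-mounted microphones. A sketch of the textbook no-flow, normal-incidence inversion (the geometry and frequency values in the test are illustrative, not this study's rig):

```python
import cmath

def reflection_from_transfer(H12, k, x1, x2):
    """Invert H12 = p(x2)/p(x1) for the field p(x) = exp(jkx) + r*exp(-jkx),
    where x is each microphone's distance from the sample face and k is the
    acoustic wavenumber (no grazing flow)."""
    num = cmath.exp(1j * k * x2) - H12 * cmath.exp(1j * k * x1)
    den = H12 * cmath.exp(-1j * k * x1) - cmath.exp(-1j * k * x2)
    return num / den

def normalized_impedance(r):
    """Normal-incidence surface impedance, normalized by rho*c."""
    return (1 + r) / (1 - r)
```

A quick consistency check is a round trip: synthesize H12 from a known reflection factor and verify that the inversion returns it.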

  16. On Adaptive Extended Compatibility Changing Type of Product Design Strategy

    NASA Astrophysics Data System (ADS)

    Wenwen, Jiang; Zhibin, Xie

    This article examines companies' product design and development strategies using studies of enterprise localization and of companies' development histories. It shows that, at different stages of development, different kinds of enterprises adopt different modes of product design and development, and that there is a close causal relationship between a company's development course and its core technology and products. The results indicate that enterprises leading in market position, technology, and brand adopt a pioneering strategy of product research and development; enterprises that rely on providing complete supporting services to large-scale leading firms adopt a passively duplicating strategy; enterprises with partial advantages in technology, market, management, or brand adopt a follow-up strategy; and enterprises in a relatively advantaged position adopt a technology-application strategy centered on optimizing services in the areas of brand culture and market service.

  17. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.

  18. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.
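
For a linear-in-parameters model, the recursive prediction error method mentioned above reduces to recursive least squares. A dependency-free sketch that identifies AR-model coefficients (the oscillator stability model in the paper is richer; the model order and values here are illustrative):

```python
def rls_identify(y, order, lam=1.0, delta=100.0):
    """Recursive least squares for y[n] = theta . [y[n-1], ..., y[n-order]]."""
    theta = [0.0] * order
    P = [[delta if i == j else 0.0 for j in range(order)] for i in range(order)]
    for n in range(order, len(y)):
        phi = [y[n - 1 - i] for i in range(order)]
        # gain k = P phi / (lam + phi' P phi)
        Pphi = [sum(P[i][j] * phi[j] for j in range(order)) for i in range(order)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(order))
        k = [v / denom for v in Pphi]
        err = y[n] - sum(t * p for t, p in zip(theta, phi))   # prediction error
        theta = [t + ki * err for t, ki in zip(theta, k)]
        # covariance update: P = (P - k phi' P) / lam
        phiP = [sum(phi[i] * P[i][j] for i in range(order)) for j in range(order)]
        P = [[(P[i][j] - k[i] * phiP[j]) / lam for j in range(order)] for i in range(order)]
    return theta
```

Driving a known AR(2) system with small noise and feeding the output to `rls_identify` recovers the true coefficients.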

  19. Theory and Design of Adaptive Automation in Aviation Systems

    DTIC Science & Technology

    1992-07-17

    and KOALAS programs at the Naval Air Development Center. The latter is an intelligent, man-in-the-loop, architecture that is presently being... This approach has been applied as a model in the Knowledgeable Operator Analysis-Linked System (KOALAS) for decision aiding and has in turn been... Barrett, C.L. (1988) The Knowledgeable Operator Analysis-Linked Advisory System (KOALAS) Approach to Decision Support System Design, Analysis and Synthesis

  20. Flight control design using a blend of modern nonlinear adaptive and robust techniques

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolong

    In this dissertation, the modern control techniques of feedback linearization, mu synthesis, and neural network based adaptation are used to design novel control laws for two specific applications: F/A-18 flight control and reusable launch vehicle (an X-33 derivative) entry guidance. For both applications, the performance of the controllers is assessed. As a part of a NASA Dryden program to develop and flight test experimental controllers for an F/A-18 aircraft, a novel method of combining mu synthesis and feedback linearization is developed to design longitudinal and lateral-directional controllers. First of all, the open-loop and closed-loop dynamics of F/A-18 are investigated. The production F/A-18 controller as well as the control distribution mechanism are studied. The open-loop and closed-loop handling qualities of the F/A-18 are evaluated using low order transfer functions. Based on this information, a blend of robust mu synthesis and feedback linearization is used to design controllers for a low dynamic pressure envelope of flight conditions. For both the longitudinal and the lateral-directional axes, a robust linear controller is designed for a trim point in the center of the envelope. Then by including terms to cancel kinematic nonlinearities and variations in the aerodynamic forces and moments over the flight envelope, a complete nonlinear controller is developed. In addition, to compensate for the model uncertainty, linearization error and variations between operating points, neural network based adaptation is added to the designed longitudinal controller. The nonlinear simulations, robustness and handling qualities analysis indicate that the performance is similar to or better than that for the production F/A-18 controllers. When the dynamic pressure is very low, the performance of both the experimental and the production flight controllers is degraded, but Level I handling qualities are still achieved. A new generation of Reusable Launch Vehicles

  1. Inner string cementing adapter and method of use

    SciTech Connect

    Helms, L.C.

    1991-08-20

    This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.

  2. An adaptive precision gradient method for optimal control.

    NASA Technical Reports Server (NTRS)

    Klessig, R.; Polak, E.

    1973-01-01

    This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed precision counterpart.
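
Klessig and Polak adaptively control the precision of the numerical integration inside the gradient computation. As a loose, self-contained analogy (an illustration of the idea, not the paper's algorithm), here is a derivative estimate whose finite-difference precision is refined until a convergence test passes:

```python
def adaptive_derivative(f, x, h0=0.1, tol=1e-6, max_halvings=30):
    """Central-difference derivative; the step h is halved until successive
    estimates agree to within tol, i.e. precision is raised only as needed."""
    h = h0
    prev = (f(x + h) - f(x - h)) / (2 * h)
    for _ in range(max_halvings):
        h /= 2
        cur = (f(x + h) - f(x - h)) / (2 * h)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    return prev
```

The appeal is the same as in the paper: cheap low-precision evaluations early, expensive high-precision ones only when the convergence test demands them.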

  3. Adaptation of an ethnographic method for investigation of the task domain in diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Ramey, Judith A.; Rowberg, Alan H.; Robinson, Carol

    1992-07-01

    A number of user-centered methods for designing radiology workstations have been described by researchers at Carleton University (Ottawa), Georgetown University, George Washington University, and University of Arizona, among others. The approach described here differs in that it enriches standard human-factors practices with methods adapted from ethnography to study users (in this case, diagnostic radiologists) as members of a distinct culture. The overall approach combines several methods; the core method, based on ethnographic ''stream of behavior chronicles'' and their analysis, has four phases: (1) first, we gather the stream of behavior by videotaping a radiologist as he or she works; (2) we view the tape ourselves and formulate questions and hypothesis about the work; and then (3) in a second videotaped session, we show the radiologist the original tape and ask for a running commentary on the work, into which, at the appropriate points, we interject our questions for clarification. We then (4) categorize/index the behavior on the ''raw data'' tapes for various kinds of follow-on analysis. We describe and illustrate this method in detail, describe how we analyze the ''raw data'' videotapes and the commentary tapes, and explain how the method can be integrated into an overall user-centered design process based on standard human-factors techniques.

  4. Reliability Methods for Shield Design Process

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.

    2002-01-01

    Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phase of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of two most commonly identified uncertainties in radiation shield design, the shielding properties of materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.

  5. A novel method to design flexible URAs

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2007-05-01

    Aperture patterns play a vital role in coded aperture imaging (CAI) applications. In recent years, many approaches were presented to design optimum or near-optimum aperture patterns. Uniformly redundant arrays (URAs) are, undoubtedly, the most successful for the constant sidelobe of their periodic autocorrelation function. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and fixed autocorrelation sidelobe-to-peak ratios. In this paper, we present a novel method to design more flexible URAs. Our approach is based on a searching program driven by DIRECT, a global optimization algorithm. We transform the design question into a mathematical model, based on the DIRECT algorithm, which is advantageous for computer implementation. By changing determinative conditions, we obtain two types of URAs, including the filled URAs which can be constructed by existing methods and the sparse URAs which have never been mentioned by other authors as far as we know. Finally, we carry out an experiment to demonstrate the imaging performance of the sparse URAs.
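
The defining property driving the search above is a flat off-peak periodic autocorrelation. For comparison, the classical quadratic-residue construction (one of the existing methods, not the authors' DIRECT-based search) yields such arrays directly for primes p with p % 4 == 3:

```python
def legendre_ura(p):
    """Binary aperture: opening at position i iff i is a quadratic residue mod p."""
    qr = {(i * i) % p for i in range(1, p)}
    return [1 if i in qr else 0 for i in range(p)]

def periodic_autocorrelation(a):
    n = len(a)
    return [sum(a[i] * a[(i + t) % n] for i in range(n)) for t in range(n)]
```

For p = 11 the peak is (p-1)/2 = 5 and every off-peak lag equals (p-3)/4 = 2, i.e. the sidelobe is exactly constant.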

  6. The Study and Design of Adaptive Learning System Based on Fuzzy Set Theory

    NASA Astrophysics Data System (ADS)

    Jia, Bing; Zhong, Shaochun; Zheng, Tianyang; Liu, Zhiyong

    Adaptive learning is an effective way to improve learning outcomes; that is, the selection and presentation of learning content should be adapted to each learner's learning context, level and ability. An Adaptive Learning System (ALS) can provide effective support for adaptive learning. This paper proposes a new ALS based on fuzzy set theory. It can effectively estimate the learner's knowledge level by testing according to the learner's target, and then takes the learner's cognitive ability and preferences into consideration to organize and deliver knowledge adaptively. This paper focuses on the design and implementation of the domain model and user model in the ALS. Experiments confirmed that the system, by providing adaptive content, can effectively help learners memorize the content and improve their comprehension.
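
Estimating a knowledge level from a test score with fuzzy sets can be sketched with triangular membership functions. The level names and score ranges below are hypothetical, not the paper's domain model:

```python
def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c makes a shoulder."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if a == b else (x - a) / (b - a)
    return 1.0 if b == c else (c - x) / (c - b)

LEVELS = {  # hypothetical fuzzy sets over a 0-100 test score
    "novice": (0, 0, 50),
    "intermediate": (30, 55, 80),
    "advanced": (60, 100, 100),
}

def knowledge_level(score):
    """Level with the highest membership grade for the given score."""
    grades = {name: tri(score, *abc) for name, abc in LEVELS.items()}
    return max(grades, key=grades.get)
```

Because the sets overlap, a score near a boundary has nonzero membership in two levels, which is exactly what lets the system hedge its content selection rather than make a hard cut.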

  7. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

    An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
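
The two-receiver arrangement described above is the classic least-mean-squares (LMS) canceller: the reference channel is adaptively filtered and subtracted from the main beam, and the filter weights are driven to minimize output power. A minimal sketch (tap count and step size are illustrative; the prototype's actual algorithm may differ):

```python
def lms_canceller(primary, reference, n_taps=8, mu=0.01):
    """primary: main-beam samples (signal + RFI); reference: RFI-only samples.
    Returns the system output, i.e. the error signal with RFI removed."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                          # reference tap-delay line
        y = sum(wi * bi for wi, bi in zip(w, buf))    # filtered reference
        e = d - y                                     # output = primary minus RFI estimate
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]   # LMS weight update
        out.append(e)
    return out
```

Feeding it a sinusoidal "RFI" present in both channels (phase-shifted in the reference) shows the lock-on behavior: after convergence the output power is a small fraction of the input power.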

  8. Design and Preliminary Testing of the International Docking Adapter's Peripheral Docking Target

    NASA Technical Reports Server (NTRS)

    Foster, Christopher W.; Blaschak, Johnathan; Eldridge, Erin A.; Brazzel, Jack P.; Spehar, Peter T.

    2015-01-01

    The International Docking Adapter's Peripheral Docking Target (PDT) was designed to allow a docking spacecraft to judge its alignment relative to the docking system. The PDT was designed to be compatible with relative sensors using visible cameras, thermal imagers, or Light Detection and Ranging (LIDAR) technologies. The conceptual design team tested prototype designs and materials to determine the contrast requirements for the features. This paper will discuss the design of the PDT, the methodology and results of the tests, and the conclusions pertaining to PDT design that were drawn from testing.

  9. Optimization methods for alternative energy system design

    NASA Astrophysics Data System (ADS)

    Reinhardt, Michael Henry

    An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the designs of alternative energy systems are generally discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, that reduce the overall UA of a 22 ft. electric shuttle bus from 488 to 202 (Btu/hr-F), can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints such as being environmentally friendly in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m2 deep-bed dryer of 0.4 m depth, a UTC array consisting of five 1.1 m2 panels and a photovoltaic array consisting of one 0.25 m2 panel produces the most dry coffee per dollar invested in the system. In general this study
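
The role of integer (rather than continuous) decision variables can be illustrated with a brute-force search over discrete conservation measures. The UA endpoints (488 to 202 Btu/hr-F) come from the abstract, but the measures, unit costs, and per-unit UA reductions below are invented for illustration:

```python
from itertools import product

# Hypothetical data: each measure has a unit cost and a per-unit UA reduction (Btu/hr-F).
measures = [
    {"name": "wall insulation layer", "max": 4, "cost": 300.0, "dUA": 60.0},
    {"name": "window glazing layer",  "max": 3, "cost": 150.0, "dUA": 25.0},
    {"name": "door seal kit",         "max": 2, "cost": 50.0,  "dUA": 10.0},
]

def optimize(ua_base, ua_target):
    """Exhaustively enumerate integer counts of each measure; return the
    cheapest combination that brings UA down to the target."""
    best = None
    for counts in product(*(range(m["max"] + 1) for m in measures)):
        ua = ua_base - sum(c * m["dUA"] for c, m in zip(counts, measures))
        if ua > ua_target:
            continue  # infeasible: not enough UA reduction
        cost = sum(c * m["cost"] for c, m in zip(counts, measures))
        if best is None or cost < best[0]:
            best = (cost, counts)
    return best
```

The optimum lands on a corner combination (four insulation layers plus two glazing layers under these numbers) that rounding a continuous solution could easily miss, which is the point the abstract makes about discrete design variables.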

  10. Experimental Investigation on Adaptive Robust Controller Designs Applied to Constrained Manipulators

    PubMed Central

    Nogueira, Samuel L.; Pazelli, Tatiana F. P. A. T.; Siqueira, Adriano A. G.; Terra, Marco H.

    2013-01-01

    In this paper, two interlaced studies are presented. The first is directed to the design and construction of a dynamic 3D force/moment sensor. The device is applied to provide a feedback signal of forces and moments exerted by the robotic end-effector. This development has become an alternative solution to the existing multi-axis load cell based on static force and moment sensors. The second one shows an experimental investigation on the performance of four different adaptive nonlinear ℋ∞ control methods applied to a constrained manipulator subject to uncertainties in the model and external disturbances. Coordinated position and force control is evaluated. Adaptive procedures are based on neural networks and fuzzy systems applied in two different modeling strategies. The first modeling strategy requires a well-known nominal model for the robot, so that the intelligent systems are applied only to estimate the effects of uncertainties, unmodeled dynamics and external disturbances. The second strategy considers that the robot model is completely unknown and, therefore, intelligent systems are used to estimate these dynamics. A comparative study is conducted based on experimental implementations performed with an actual planar manipulator and with the dynamic force sensor developed for this purpose. PMID:23598503

  11. Experimental investigation on adaptive robust controller designs applied to constrained manipulators.

    PubMed

    Nogueira, Samuel L; Pazelli, Tatiana F P A T; Siqueira, Adriano A G; Terra, Marco H

    2013-04-18

    In this paper, two interlaced studies are presented. The first is directed to the design and construction of a dynamic 3D force/moment sensor. The device is applied to provide a feedback signal of forces and moments exerted by the robotic end-effector. This development has become an alternative solution to the existing multi-axis load cell based on static force and moment sensors. The second one shows an experimental investigation on the performance of four different adaptive nonlinear H∞ control methods applied to a constrained manipulator subject to uncertainties in the model and external disturbances. Coordinated position and force control is evaluated. Adaptive procedures are based on neural networks and fuzzy systems applied in two different modeling strategies. The first modeling strategy requires a well-known nominal model for the robot, so that the intelligent systems are applied only to estimate the effects of uncertainties, unmodeled dynamics and external disturbances. The second strategy considers that the robot model is completely unknown and, therefore, intelligent systems are used to estimate these dynamics. A comparative study is conducted based on experimental implementations performed with an actual planar manipulator and with the dynamic force sensor developed for this purpose.

  12. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid like techniques to find fast solution. Recently, the continuous FAC (CFAC) method has been developed which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, the spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  13. Adaptive critic autopilot design of bank-to-turn missiles using fuzzy basis function networks.

    PubMed

    Lin, Chuan-Kai

    2005-04-01

    A new adaptive critic autopilot design for bank-to-turn missiles is presented. In this paper, the architecture of the adaptive critic learning scheme contains a fuzzy-basis-function-network based associative search element (ASE), which is employed to approximate nonlinear and complex functions of bank-to-turn missiles, and an adaptive critic element (ACE) generating the reinforcement signal to tune the associative search element. In the design of the adaptive critic autopilot, the control law receives signals from a fixed gain controller, an ASE and an adaptive robust element, which can eliminate approximation errors and disturbances. Traditional adaptive critic reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment; however, the proposed tuning algorithm can significantly shorten the learning time by online tuning all parameters of fuzzy basis functions and weights of ASE and ACE. Moreover, the weight updating law derived from the Lyapunov stability theory is capable of guaranteeing both tracking performance and stability. Computer simulation results confirm the effectiveness of the proposed adaptive critic autopilot.

  14. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    PubMed Central

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
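
    A minimal simulation of the core idea, under assumed numbers (200 participants, 50 decision points, a constant proximal effect of 0.3; none of these come from the paper): every participant is re-randomized at every decision point, and the proximal effect is estimated across all person-decision-points.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_points = 200, 50        # participants x decision points (assumed)
true_effect = 0.3                   # assumed proximal effect of the prompt

# Micro-randomization: each participant is independently randomized
# (here with probability 0.5) at every decision point.
A = rng.binomial(1, 0.5, size=(n_people, n_points))
Y = 1.0 + true_effect * A + rng.normal(0.0, 1.0, size=(n_people, n_points))

# Unadjusted proximal-effect estimate: the mean outcome difference over
# all person-decision-points (a crude stand-in for the weighted-and-centered
# least-squares analyses the paper describes).
est = float(Y[A == 1].mean() - Y[A == 0].mean())
```

    With 10,000 randomized decision points the estimate recovers the assumed proximal effect closely, which is what makes the design efficient for component-level questions.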

  15. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  16. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers consists in having a dynamical adaptive control gain that establishes a sliding mode right at the beginning of the process, and the gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897
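
    A stripped-down sketch of the adaptive-gain idea alone (a scalar plant with sliding variable s = x; the tensor product model transformation and the integral sliding surface of the paper are omitted, and all constants are illustrative): the gain grows while the state is off the sliding surface, so no disturbance bound needs to be known in advance.

```python
import numpy as np

dt, T = 1e-3, 5.0
gamma = 50.0                  # adaptation rate (illustrative value)
x, k = 1.0, 0.0               # state and adaptive control gain
for i in range(int(T / dt)):
    d = 0.8 * np.sin(2.0 * i * dt)   # bounded disturbance, bound unknown to the law
    s = x                            # sliding variable (regulate x to 0)
    u = -k * np.sign(s)              # sliding mode control action
    k += gamma * abs(s) * dt         # gain adaptation: grows while |s| > 0
    x += (u + d) * dt                # plant: x' = u + d
```

    The gain climbs past the (unknown) disturbance bound of 0.8, after which the state is held chattering near the sliding surface.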

  17. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers consists in having a dynamical adaptive control gain that establishes a sliding mode right at the beginning of the process, and the gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  18. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
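
    A toy version of threshold learning (illustrative only: the paper uses generalised linear modelling with Bayesian prediction, whereas this sketch simply grid-searches the split that maximises the response-rate gap; all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_thr = 2000, 0.6            # assumed sample size and true threshold
biomarker = rng.uniform(0.0, 1.0, n)
# Step-function response model: 10% response below threshold, 60% above.
response = rng.binomial(1, np.where(biomarker >= true_thr, 0.6, 0.1))

# Candidate thresholds: keep the one with the largest gap between the
# response rates in the biomarker-'positive' and 'negative' subsets.
grid = np.linspace(0.1, 0.9, 81)
gaps = np.array([response[biomarker >= c].mean()
                 - response[biomarker < c].mean() for c in grid])
est_thr = float(grid[np.argmax(gaps)])
```

    With a sample of this size the estimated threshold lands close to the true change point at 0.6.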

  19. Waterflooding injectate design systems and methods

    DOEpatents

    Brady, Patrick V.; Krumhansl, James L.

    2014-08-19

    A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.

  20. Low Level Waste Conceptual Design Adaption to Poor Geological Conditions

    SciTech Connect

    Bell, J.; Drimmer, D.; Giovannini, A.; Manfroy, P.; Maquet, F.; Schittekat, J.; Van Cotthem, A.; Van Echelpoel, E.

    2002-02-26

    Since the early eighties, several studies have been carried out in Belgium with respect to a repository for the final disposal of low-level radioactive waste (LLW). In 1998, the Belgian Government decided to restrict future investigations to the four existing nuclear sites in Belgium or sites that might show interest. So far, only two existing nuclear sites have been thoroughly investigated from a geological and hydrogeological point of view. These sites are located in the north-east (Mol-Dessel) and in the middle part (Fleurus-Farciennes) of the country. Both sites have the disadvantage of presenting poor geological and hydrogeological conditions, which are rather unfavorable for accommodating a surface disposal facility for LLW. The underground of the Mol-Dessel site consists of Neogene sand layers about 180 m thick, which cover a clay layer about 100 m thick. These Neogene sands contain, at 20 m depth, a thin clayey layer. The groundwater level is quite close to the surface (0-2 m), and the topography is almost totally flat. The upper layer of the Fleurus-Farciennes site consists of 10 m of silt with poor geomechanical characteristics, overlying sands (only a few meters thick) and Westphalian shales between 15 and 20 m depth. The Westphalian shales are tectonized and strongly weathered. In the past, coal seams were mined out; this activity locally induced significant surface subsidence. For both nuclear sites that were investigated, a conceptual design was made that could allow any unfavorable geological or hydrogeological conditions of the site to be overcome. In Fleurus-Farciennes, for instance, the proposed conceptual design of the repository is quite original: it is composed of a shallow, buried concrete cylinder, surrounded by an accessible concrete ring, which allows permanent inspection and control during the whole lifetime of the repository. Stability and drainage systems should be independent of potential differential settlements and subsidence.

  1. A Requirements-Driven Optimization Method for Acoustic Treatment Design

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2016-01-01

    Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.

  2. A new adaptive time step method for unsteady flow simulations in a human lung.

    PubMed

    Fernández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D

    2017-04-07

    The innovation presented is a method for adaptive time-stepping that clusters time steps in the portions of the cycle in which the flow variables are changing rapidly, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable, time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time-stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
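
    The idea of a uniform step in a dependent variable can be sketched in a few lines (a generic illustration, not the authors' user-defined function; the sinusoidal inlet waveform is an assumption): take equal increments of the cumulative variation of the inlet velocity and map them back to time, so that steps cluster where the velocity changes fastest.

```python
import numpy as np

T = 1.0                                    # breathing-cycle period (assumed)
fine_t = np.linspace(0.0, T, 100001)
v = np.sin(2.0 * np.pi * fine_t / T)       # inlet-velocity waveform (assumed)

# Cumulative variation of v: a "uniform step" in v means equal increments
# of this quantity rather than equal increments of t.
arc = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(v)))])
n_steps = 40
targets = np.linspace(0.0, arc[-1], n_steps + 1)
t_adapt = np.interp(targets, arc, fine_t)  # invert arc(t) numerically
dt_adapt = np.diff(t_adapt)
```

    The resulting steps are small near the steep zero crossings of v and large near its extrema, and the velocity change per step never exceeds the chosen increment.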

  3. The adapted augmented Lagrangian method: a new method for the resolution of the mechanical frictional contact problem

    NASA Astrophysics Data System (ADS)

    Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe

    2012-02-01

    The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multiple bodies in three-dimensional space. This method, called the adapted augmented Lagrangian method (AALM), can be used in a multi-physical context (such as thermo-electro-mechanical field problems). This paper presents the new method and its advantages over classical methods such as the penalty method (PM), the adapted penalty method (APM), and the augmented Lagrangian method (ALM). In addition, the efficiency and reliability of the AALM are demonstrated on some academic problems and an industrial thermo-electro-mechanical problem.
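
    For readers unfamiliar with the baseline ALM that the AALM builds on, here is the generic augmented Lagrangian iteration on a toy equality-constrained problem (minimise x1^2 + x2^2 subject to x1 + x2 = 1; nothing here is specific to contact mechanics):

```python
import numpy as np

def f_grad(x):                  # gradient of f(x) = x1^2 + x2^2
    return 2.0 * x

def g(x):                       # equality constraint g(x) = x1 + x2 - 1 = 0
    return x.sum() - 1.0

rho, lam = 10.0, 0.0            # penalty parameter and Lagrange multiplier
x = np.zeros(2)
for _ in range(20):             # outer loop: multiplier updates
    for _ in range(500):        # inner loop: minimize L = f + lam*g + (rho/2)*g^2
        grad = f_grad(x) + (lam + rho * g(x)) * np.ones(2)
        x -= 0.02 * grad
    lam += rho * g(x)           # first-order multiplier update
```

    The iterates converge to x = (0.5, 0.5) with multiplier lam = -1, satisfying the constraint exactly in the limit without driving rho to infinity as a pure penalty method would.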

  4. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features, such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. We then use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
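
    The EA machinery discussed here can be illustrated with a minimal genetic algorithm on a standard test objective (a generic sketch, unrelated to GATool's actual operators or settings): tournament selection, blend crossover, Gaussian mutation, and elitism.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                  # classic test objective, minimum 0 at the origin
    return np.sum(x * x, axis=-1)

pop = rng.uniform(-5.0, 5.0, size=(40, 3))       # initial population
for _ in range(200):
    fit = sphere(pop)
    # Tournament selection: the better of two random individuals is a parent.
    i, j = rng.integers(0, 40, size=(2, 40))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # Blend crossover between consecutive parents, then Gaussian mutation.
    alpha = rng.uniform(0.0, 1.0, size=(40, 1))
    children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0.0, 0.1, size=children.shape)
    children[0] = pop[np.argmin(fit)]            # elitism: keep the best so far
    pop = children
best = float(sphere(pop).min())
```

    Elitism makes the best objective value non-increasing across generations, one of the "modest requirements on the objective function" properties mentioned above: no gradients are ever evaluated.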

  5. First-order design of off-axis reflective ophthalmic adaptive optics systems using afocal telescopes.

    PubMed

    Gómez-Vieyra, Armando; Dubra, Alfredo; Malacara-Hernández, Daniel; Williams, David R

    2009-10-12

    Expressions for minimal astigmatism in image and pupil planes in off-axis afocal reflective telescopes formed by pairs of spherical mirrors are presented. These formulae which are derived from the marginal ray fan equation can be used for designing laser cavities, spectrographs and adaptive optics retinal imaging systems. The use, range and validity of these formulae are limited by spherical aberration and coma for small and large angles respectively. This is discussed using examples from adaptive optics retinal imaging systems. The performance of the resulting optical designs are evaluated and compared against the configurations with minimal wavefront RMS, using the defocus-corrected wavefront RMS as a metric.

  6. A Bayesian adaptive blinded sample size adjustment method for risk differences.

    PubMed

    Hartley, Andrew Montgomery

    2015-01-01

    Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.
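
    For contrast with the proposed Bayesian approach, the classical blinded re-estimation idea can be written down directly: the interim data reveal only the pooled response rate, an assumed risk difference splits it into two arm-level proportions, and the standard normal-approximation sample size is recomputed. (The interim counts and design parameters below are invented for illustration; the paper's mixture-likelihood method is more sophisticated.)

```python
import math

# Blinded interim look: only the pooled response count is available.
n_interim, responses = 200, 64
p_pool = responses / n_interim          # pooled proportion, arms mixed

delta = 0.10                            # assumed (design) risk difference
p1 = p_pool + delta / 2.0               # arm-level proportions consistent
p2 = p_pool - delta / 2.0               # with the blinded pooled rate

# Two-arm sample size for a risk difference, normal approximation,
# two-sided alpha = 0.05 and power = 0.80.
z_a, z_b = 1.959964, 0.841621
var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
n_per_arm = math.ceil((z_a + z_b) ** 2 * var / delta ** 2)
```

    Because only the pooled rate is used, the blind is never broken; the price is exactly the limitation the abstract notes, namely that the interim data carry little information about the treatment effect itself.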

  7. Quality by design compliant analytical method validation.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-01-03

    The concept of quality by design (QbD) has recently been adopted for the development of pharmaceutical processes to ensure a predefined product quality. Focus on applying the QbD concept to analytical methods has increased as it is fully integrated within pharmaceutical processes and especially in the process control strategy. In addition, there is the need to switch from the traditional checklist implementation of method validation requirements to a method validation approach that should provide a high level of assurance of method reliability in order to adequately measure the critical quality attributes (CQAs) of the drug product. The intended purpose of analytical methods is directly related to the final decision that will be made with the results generated by these methods under study. The final aim for quantitative impurity assays is to correctly declare a substance or a product as compliant with respect to the corresponding product specifications. For content assays, the aim is similar: making the correct decision about product compliance with respect to their specification limits. It is for these reasons that the fitness of these methods should be defined, as they are key elements of the analytical target profile (ATP). Therefore, validation criteria, corresponding acceptance limits, and method validation decision approaches should be settled in accordance with the final use of these analytical procedures. This work proposes a general methodology to achieve this in order to align method validation within the QbD framework and philosophy. β-Expectation tolerance intervals are implemented to decide about the validity of analytical methods. The proposed methodology is also applied to the validation of analytical procedures dedicated to the quantification of impurities or active product ingredients (API) in drug substances or drug products, and its applicability is illustrated with two case studies.

  8. Development and implementation of a coupled computational muscle force optimization bone shape adaptation modeling method.

    PubMed

    Florio, C S

    2015-04-01

    Improved methods to analyze and compare the muscle-based influences that drive bone strength adaptation can aid in the understanding of the wide array of experimental observations about the effectiveness of various mechanical countermeasures to losses in bone strength that result from age, disuse, and reduced gravity environments. The coupling of gradient-based and gradientless numerical optimization routines with finite element methods in this work results in a modeling technique that determines the individual magnitudes of the muscle forces acting in a multisegment musculoskeletal system and predicts the improvement in the stress state uniformity and, therefore, strength, of a targeted bone through simulated local cortical material accretion and resorption. With a performance-based stopping criteria, no experimentally based or system-based parameters, and designed to include the direct and indirect effects of muscles attached to the targeted bone as well as to its neighbors, shape and strength alterations resulting from a wide range of boundary conditions can be consistently quantified. As demonstrated in a representative parametric study, the developed technique effectively provides a clearer foundation for the study of the relationships between muscle forces and the induced changes in bone strength. Its use can lead to the better control of such adaptive phenomena.
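
    The stress-uniformity principle driving the model can be miniaturised to a one-line update rule (a deliberately crude stand-in for the paper's coupled finite element/optimization method): regions carrying stress above a reference accrete material, regions below it resorb, until the stress state is uniform.

```python
# Each "element" of a bar carries load F over its own cross-section;
# material is added where the stress F/a exceeds the reference s_ref and
# removed where it falls below, mimicking accretion/resorption.
F, s_ref = 100.0, 50.0                 # load and target stress (arbitrary units)
areas = [1.0, 2.0, 4.0, 0.5]           # initial cross-sections (arbitrary)
for _ in range(200):
    areas = [a + 0.002 * (F / a - s_ref) for a in areas]
stresses = [F / a for a in areas]
```

    All elements converge to the area F/s_ref = 2, at which point the stress field is uniform at the reference value and adaptation stops.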

  9. Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components

    NASA Astrophysics Data System (ADS)

    Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.

    2015-03-01

    Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that allow flaws to be imaged under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes. In this method, both the surface and the back wall of a complex structure are estimated before imaging flaws.

  10. Spatial-light-modulator-based adaptive optical system for the use of multiple phase retrieval methods.

    PubMed

    Lingel, Christian; Haist, Tobias; Osten, Wolfgang

    2016-12-20

    We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable for performing different phase retrieval methods with varying optical features and without mechanical movement. With this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport of intensity equation (TIE) method and a new iterative adaptive phase retrieval method, in which the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white-light interferometric measurements.
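
    The TIE inversion step can be sketched numerically (a minimal uniform-intensity Fourier solver on a periodic grid; the axial intensity derivative is generated analytically here rather than measured, and nothing below reflects the authors' SLM setup): for uniform intensity I0 the TIE reduces to a Poisson equation, lap(phi) = -(k/I0) dI/dz, solvable by FFT.

```python
import numpy as np

n = 64
coords = 2.0 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(coords, coords, indexing="ij")
phi = np.sin(X) * np.cos(Y)              # "true" phase (zero mean)

k_wave, I0 = 2.0 * np.pi / 0.5, 1.0      # wavenumber and intensity (arbitrary units)
lap_phi = -2.0 * phi                     # analytic Laplacian of sin(X)cos(Y)
dIdz = -(I0 / k_wave) * lap_phi          # forward TIE for uniform intensity

# Inversion: solve lap(phi) = -(k/I0) dI/dz with a spectral Poisson solver.
kx = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)   # integer wavenumbers
k2 = kx[:, None] ** 2 + kx[None, :] ** 2
rhs_hat = np.fft.fft2(-(k_wave / I0) * dIdz)
phi_hat = np.zeros_like(rhs_hat)
mask = k2 > 0
phi_hat[mask] = rhs_hat[mask] / (-k2[mask])               # zero mode stays 0
phi_rec = np.fft.ifft2(phi_hat).real
err = float(np.max(np.abs(phi_rec - phi)))
```

    With noise-free synthetic data the recovered phase matches the true one to roundoff; in practice the quality of dI/dz estimation dominates the error.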

  11. Design of an LVDS to USB3.0 adapter and application

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaohan; Wang, Yu; Zhao, Xin; Chang, Zhen; Zhang, Quan; Tian, Yuze; Zhang, Yunyi; Lin, Fang; Liu, Wenqing

    2016-10-01

    The USB 3.0 specification was published in 2008, and with the development of technology, USB 3.0 is becoming popular. An LVDS (Low Voltage Differential Signaling) to USB 3.0 adapter connects the communication port of a spectrometer device to the USB 3.0 port of a computer and converts the LVDS output data of the spectrometer device to USB. In order to adapt to changing and developing technology, the LVDS to USB 3.0 adapter was designed and developed on the basis of an LVDS to USB 2.0 adapter. The CYUSB3014, a new generation of USB bus interface chip produced by Cypress and conforming to the USB 3.0 communication protocol, utilizes GPIF-II (GPIF: general programmable interface) to connect the FPGA and increases the effective communication speed to 2 Gbps. Therefore, the adapter, based on USB 3.0 technology, is able to connect more spectrometers to a single computer and provides a technical basis for the development of higher-speed industrial cameras. This article describes the design and development process of the LVDS to USB 3.0 adapter.

  12. Laser pulse design using optimal control theory-based adaptive simulated annealing technique: vibrational transitions and photo-dissociation

    NASA Astrophysics Data System (ADS)

    Nath, Bikram; Mondal, Chandan Kumar

    2014-08-01

    We have designed and optimised a combined laser pulse using an optimal control theory-based adaptive simulated annealing technique for selective vibrational excitations and photo-dissociation. Since the proper choice of pulses for specific excitation and dissociation phenomena is very difficult, we have designed a linearly combined pulse for such processes and optimised the different parameters involved so that an efficient combined pulse is obtained. The technique frees us from choosing any particular type of pulse arbitrarily and provides grounds to check their suitability. We have also emphasised how the performance of the simulated annealing technique can be improved by introducing an adaptive step length for the different variables during the optimisation process. We have also pointed out how the initial temperature for the optimisation process can be chosen by introducing a heating/cooling step to reduce the number of annealing steps, making the method cost-effective.
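
    The two ingredients highlighted above, an adaptive step length and a controlled temperature schedule, fit into any textbook simulated annealing loop. The sketch below (a toy one-dimensional objective with invented schedule constants, not the pulse-parameter optimisation of the paper) adapts the proposal step every 100 iterations toward a target acceptance rate.

```python
import math
import random

random.seed(7)

def cost(x):                          # toy multimodal objective
    return x * x + 3.0 * math.sin(5.0 * x)

x, step, temp = 4.0, 1.0, 5.0         # start point, proposal step, temperature
best_x, best_c = x, cost(x)
accepted = 0
for it in range(4000):
    cand = x + random.uniform(-step, step)
    d = cost(cand) - cost(x)
    if d < 0 or random.random() < math.exp(-d / temp):
        x, accepted = cand, accepted + 1
    if (it + 1) % 100 == 0:
        # Adaptive step length: widen when acceptance is high, narrow when low.
        step *= 1.2 if accepted / 100.0 > 0.4 else 0.8
        accepted = 0
    temp *= 0.999                     # geometric cooling schedule
    if cost(x) < best_c:
        best_x, best_c = x, cost(x)
```

    The adaptation keeps the walk exploring broadly while the temperature is high and shrinks the step as the system cools, so the search settles into one of the deep wells near the origin.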

  13. Design of multi-view stereoscopic HD video transmission system based on MPEG-21 digital item adaptation

    NASA Astrophysics Data System (ADS)

    Lee, Seokhee; Lee, Kiyoung; Kim, Man Bae; Kim, JongWon

    2005-11-01

    In this paper, we propose a design of a multi-view stereoscopic HD video transmission system based on MPEG-21 Digital Item Adaptation (DIA). It focuses on the compatibility and scalability needed to meet various user preferences and terminal capabilities. There exists a large variety of multi-view 3D HD video types, according to the methods used for acquisition, display, and processing. By following the MPEG-21 DIA framework, the multi-view stereoscopic HD video is adapted according to user feedback, so a user can be served multi-view stereoscopic video that corresponds to his or her preferences and terminal capabilities. In our preliminary prototype, we verify that the proposed design can support two different types of display device (stereoscopic and auto-stereoscopic) and switching between two available viewpoints.

  14. Adaptation of the quality by design concept in early pharmaceutical development of an intranasal nanosized formulation.

    PubMed

    Pallagi, Edina; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó

    2015-08-01

    Regulatory-science-based pharmaceutical development and product manufacturing are highly recommended by the authorities nowadays. The aim of this study was to adapt regulatory science even to early nano-pharmaceutical development. The authors applied the quality by design (QbD) concept in the early development phase of nano-systems, with meloxicam as the illustration material. The meloxicam nanoparticles, produced by a co-grinding method for nasal administration, were studied according to the QbD policy, and a QbD-based risk assessment (RA) was performed. The steps were implemented according to the relevant regulatory guidelines (determination of the quality target product profile (QTPP), selection of critical quality attributes (CQAs) and critical process parameters (CPPs)), and special software (Lean QbD Software(®)) was used for the RA, which represents a novelty in this field. The RA was able to predict and identify theoretically the factors (e.g. sample composition, production method parameters, etc.) that have the highest impact on the desired meloxicam product quality, and the results of the practical research justified the theoretical prediction. This method can improve pharmaceutical nano-development by achieving shorter development times, lower costs, and savings in human resources, and by enabling more effective target orientation, as it makes it possible to focus resources on the selected parameters and areas during practical product development.
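
    The ranking step of such a risk assessment can be emulated in a few lines (the factor names and scores below are invented for illustration; they are not from the study or from the Lean QbD Software):

```python
# Hypothetical QbD-style risk ranking: each factor gets an impact score
# (effect on the CQAs) and an occurrence score; their product ranks the
# factors so resources can be focused on the highest-risk ones.
factors = {
    "co-grinding time":  (9, 7),
    "excipient ratio":   (8, 6),
    "ambient humidity":  (4, 8),
    "milling speed":     (6, 5),
    "batch size":        (3, 2),
}
scores = {name: imp * occ for name, (imp, occ) in factors.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

    Only the factors at the top of the ranking would then be carried into the practical design-of-experiments work.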

  15. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  16. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  17. Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.

    2010-01-01

    This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model (GTM) developed by the NASA Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure, while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need to cancel the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high-fidelity simulations of a flight-validated controller and of its adaptive augmentation.

  18. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
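For readers unfamiliar with the Randomesque method named above, a minimal sketch (hypothetical item pool, 2PL item response model) is:

```python
import numpy as np

def randomesque_select(theta, a, b, administered, k=5, rng=None):
    """Randomesque exposure control: instead of always giving the single
    most informative item, draw at random from the k most informative
    unadministered items (2PL Fisher information at ability theta)."""
    rng = rng or np.random.default_rng()
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probability
    info = a ** 2 * p * (1 - p)                  # Fisher information
    info[list(administered)] = -np.inf           # exclude used items
    top_k = np.argsort(info)[-k:]                # k most informative items
    return int(rng.choice(top_k))

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, 100)   # item discriminations (invented)
b = rng.normal(0.0, 1.0, 100)    # item difficulties (invented)
item = randomesque_select(theta=0.0, a=a, b=b, administered={3, 7}, k=10, rng=rng)
```

Larger k lowers the exposure rate of the best items at some cost in measurement precision, which is the trade-off the study examines.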

  19. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    2016-08-24

    In this paper, an adaptive controller is developed for a class of multi-input and multi-output nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in L[0,∞]. In our control design, the NN modeling error and the gains of the external disturbance are characterized by unknown upper bounds, which is more rational for establishing stability in adaptive NN control. Filter-based modification terms are used in the update laws of the unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm demonstrates the validity of the proposed controller.

  20. 75 FR 8968 - Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ..., statistical, and regulatory aspects of a wide range of adaptive design clinical studies that can be proposed... design clinical trials (i.e., clinical, statistical, regulatory) call for special consideration, when to interact with FDA while planning and conducting adaptive design studies, what information to include in...

  1. Conceptual design for a user-friendly adaptive optics system at Lick Observatory

    SciTech Connect

    Bissinger, H.D.; Olivier, S.; Max, C.

    1996-03-08

    In this paper, we present a conceptual design for a general-purpose adaptive optics system, usable with all Cassegrain facility instruments on the 3 meter Shane telescope at the University of California's Lick Observatory located on Mt. Hamilton near San Jose, California. The overall design goal for this system is to take the sodium-layer laser guide star adaptive optics technology out of the demonstration stage and to build a user-friendly astronomical tool. The emphasis will be on ease of calibration, improved stability and operational simplicity in order to allow the system to be run routinely by observatory staff. A prototype adaptive optics system and a 20 watt sodium-layer laser guide star system have already been built at Lawrence Livermore National Laboratory for use at Lick Observatory. The design presented in this paper is for a next-generation adaptive optics system that extends the capabilities of the prototype system into the visible with more degrees of freedom. When coupled with a laser guide star system that is upgraded to a power matching the new adaptive optics system, the combined system will produce diffraction-limited images for near-IR cameras. Atmospheric correction at wavelengths of 0.6-1 um will significantly increase the throughput of the most heavily used facility instrument at Lick, the Kast Spectrograph, and will allow it to operate with smaller slit widths and deeper limiting magnitudes. 8 refs., 2 figs.

  2. Methods for structural design at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.

    1973-01-01

    A procedure which can be used to design elevated-temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is a description, user's manual, and listing for the creep analysis program. The program predicts the time to reach a given creep level or creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
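The kind of creep-life prediction described can be illustrated by the standard time-fraction (Robinson) damage rule; the numbers below are invented for illustration and are not from Appendix B:

```python
# Time-fraction (Robinson) creep damage rule: sum t_i / t_rupture over the
# segments of a stress-temperature-time spectrum; rupture is predicted when
# the accumulated damage reaches 1.
spectrum = [
    # (hours at condition, rupture life in hours at that stress/temperature)
    (100.0, 10000.0),
    (50.0, 2000.0),
    (10.0, 400.0),
]

damage = sum(t / t_rupture for t, t_rupture in spectrum)
print(round(damage, 3))  # 0.06: six percent of the creep life consumed
```

Each segment of the load spectrum consumes a fraction of life equal to its duration divided by the rupture life at that condition.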

  3. Adaptive Signal Recovery on Graphs via Harmonic Analysis for Experimental Design in Neuroimaging

    PubMed Central

    Kim, Won Hwa; Hwang, Seong Jae; Adluru, Nagesh; Johnson, Sterling C.; Singh, Vikas

    2016-01-01

    Consider an experimental design of a neuroimaging study, where we need to obtain p measurements for each participant in a setting where p′ (< p) are cheaper and easier to acquire while the remaining (p – p′) are expensive. For example, the p′ measurements may include demographics, cognitive scores or routinely offered imaging scans while the (p – p′) measurements may correspond to more expensive types of brain image scans with a higher participant burden. In this scenario, it seems reasonable to seek an “adaptive” design for data acquisition so as to minimize the cost of the study without compromising statistical power. We show how this problem can be solved via harmonic analysis of a band-limited graph whose vertices correspond to participants, where our goal is to fully recover a multivariate signal on the nodes given the full set of cheaper features and a partial set of more expensive measurements. This is accomplished using an adaptive query strategy derived from probing the properties of the graph in the frequency space. To demonstrate the benefits that this framework can provide, we present experimental evaluations on two independent neuroimaging studies and show that our proposed method can reliably recover the true signal with only partial observations, directly yielding substantial financial savings. PMID:27807594
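A minimal sketch of the underlying idea (not the authors' adaptive query strategy) shows how a band-limited graph signal can be recovered from samples at only a few vertices; the graph, bandwidth k, and sample size are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 4
A = (rng.uniform(size=(n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # random undirected graph
L = np.diag(A.sum(axis=1)) - A                   # combinatorial Laplacian
_, U = np.linalg.eigh(L)
B = U[:, :k]                                     # "low-frequency" basis

x = B @ rng.standard_normal(k)                   # true band-limited signal
sampled = rng.choice(n, size=12, replace=False)  # vertices we can afford to measure
coef, *_ = np.linalg.lstsq(B[sampled], x[sampled], rcond=None)
x_hat = B @ coef                                 # recovered on every vertex

print(np.allclose(x_hat, x, atol=1e-6))          # True when the sampled rows span the basis
```

The adaptive part of the paper's method lies in choosing which vertices to sample so that the least-squares system stays well conditioned.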

  4. An examination of an adapter method for measuring the vibration transmitted to the human arms.

    PubMed

    Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W

    2015-09-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system.

  5. An examination of an adapter method for measuring the vibration transmitted to the human arms

    PubMed Central

    Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.

    2016-01-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309

  6. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25-29, 1993, in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  7. Block designs in method transfer experiments.

    PubMed

    Altan, Stan; Shoung, Jyh-Ming

    2008-01-01

    Method transfer is a part of the pharmaceutical development process in which an analytical (chemical) procedure developed in one laboratory (typically the research laboratory) is about to be adopted by one or more recipient laboratories (production or commercial operations). The objective is to show that the recipient laboratory is capable of performing the procedure in an acceptable manner. In the course of carrying out a method transfer, other questions may arise related to fixed or random factors of interest, such as analyst, apparatus, batch, supplier of analytical reagents, and so forth. Estimates of reproducibility and repeatability may also be of interest. This article focuses on the application of various block designs that have been found useful in the comprehensive study of method transfer beyond the laboratory effect alone. An equivalence approach to the comparison of laboratories can still be carried out on either the least squares means or subject-specific means of the laboratories to justify a method transfer or to compare analytical methods.
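The equivalence approach mentioned in the closing sentence is commonly implemented as two one-sided tests (TOST); the sketch below uses invented data and an invented margin, not values from the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
origin_lab = rng.normal(100.0, 2.0, 24)   # originating-laboratory results
recip_lab = rng.normal(100.0, 2.0, 24)    # recipient-laboratory results
delta = 3.0                               # pre-specified equivalence margin

# Rejecting both one-sided nulls places the mean difference within (-delta, +delta).
p_lower = stats.ttest_ind(recip_lab + delta, origin_lab, alternative='greater').pvalue
p_upper = stats.ttest_ind(recip_lab - delta, origin_lab, alternative='less').pvalue
equivalent = bool(max(p_lower, p_upper) < 0.05)
print(equivalent)
```

The margin delta must be justified scientifically before the transfer study, not chosen after seeing the data.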

  8. Design of a FAT-Based Adaptive Visual Servoing for Robots with Time-Varying Uncertainties

    NASA Astrophysics Data System (ADS)

    Chien, Ming-Chih; Huang, An-Chyau

    2010-05-01

    Most present adaptive control strategies for visual servoing of robots assume that the unknown camera parameters, kinematics, and dynamics of the visual servoing system can be linearly parameterized in the regressor matrix form. This is due to a limitation of the traditional adaptive design, in which the uncertainties must be time-invariant, so that all time-varying terms in the visual servoing system are collected inside the regressor matrix. However, derivation of the regressor matrix is tedious. In this article, a FAT (function approximation technique)-based adaptive controller is designed for visual servo robots without the need for the regressor matrix. A Lyapunov-like analysis is used to justify the closed-loop stability and boundedness of internal signals. Moreover, the upper bounds of the tracking errors in the transient state are also derived. Computer simulation results are presented to demonstrate the usefulness of the proposed scheme.

  9. Robust adaptive self-structuring fuzzy control design for nonaffine, nonlinear systems

    NASA Astrophysics Data System (ADS)

    Chen, Pin-Cheng; Wang, Chi-Hsu; Lee, Tsu-Tian

    2011-01-01

    In this article, a robust adaptive self-structuring fuzzy control (RASFC) scheme for uncertain or ill-defined nonlinear, nonaffine systems is proposed. The RASFC scheme is composed of a robust adaptive controller and a self-structuring fuzzy controller. In the self-structuring fuzzy controller design, a novel self-structuring fuzzy system (SFS) is used to approximate the unknown plant nonlinearity, and the SFS can automatically grow and prune fuzzy rules to realise a compact fuzzy rule base. The robust adaptive controller is designed to achieve an L2 tracking performance to stabilise the closed-loop system. This L2 tracking performance can provide a clear expression of tracking error in terms of the sum of lumped uncertainty and external disturbance, which has not been shown in previous works. Finally, five examples are presented to show that the proposed RASFC scheme can achieve favourable tracking performance while relieving the heavy computational burden.

  10. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG

    PubMed Central

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.

    2016-01-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605

  11. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG.

    PubMed

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M

    2016-09-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.

  12. Impedance adaptation methods of the piezoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing mechanical impedance factors such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, which shows how much mechanical energy is transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an equivalent electrical system, using the analogy between the two systems, in order to simplify the total mechanical impedance. Secondly, the transduction rate of mechanical energy to electrical energy was improved by using a PZT material which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer which has a high transduction rate was designed and fabricated. The high-g material (g33 = 40 × 10⁻³ Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an effective strain coefficient almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain ac load, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling
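In the simplest single-degree-of-freedom case, the transmissibility discussed above reduces to a closed-form textbook expression (a simplification, not the dissertation's two-body model):

```python
import math

def transmissibility(r, zeta):
    """Base-excitation transmissibility: depends only on the frequency
    ratio r = w / w_n and the damping ratio zeta, which is why damping
    and stiffness control how much vibration energy reaches the transducer."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

print(round(transmissibility(0.0, 0.1), 3))  # 1.0 at zero frequency
print(round(transmissibility(1.0, 0.1), 3))  # 5.099 at resonance
```

Near resonance the transfer is largest and is limited only by damping, which motivates tuning the harvester's natural frequency to the ambient vibration.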

  13. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  14. Development of an Assistance Environment for Tutors Based on a Co-Adaptive Design Approach

    ERIC Educational Resources Information Center

    Lavoue, Elise; George, Sebastien; Prevot, Patrick

    2012-01-01

    In this article, we present a co-adaptive design approach named TE-Cap (Tutoring Experience Capitalisation) that we applied for the development of an assistance environment for tutors. Since tasks assigned to tutors in educational contexts are not well defined, we are developing an environment which responds to needs which are not precisely…

  15. Can Approaches to Research in Art and Design Be Beneficially Adapted for Research into Higher Education?

    ERIC Educational Resources Information Center

    Trowler, Paul

    2013-01-01

    This paper examines the research practices in Art and Design that are distinctively different from those common in research into higher education outside those fields. It considers whether and what benefit could be derived from their adaptation by the latter. The paper also examines the factors that are conducive and obstructive to adaptive…

  16. Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

    ERIC Educational Resources Information Center

    Burston, Jack; Neophytou, Maro

    2014-01-01

    This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2…

  17. Comparing oncology clinical programs by use of innovative designs and expected net present value optimization: Which adaptive approach leads to the best result?

    PubMed

    Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen

    2017-01-01

    Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of the drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs, and a standard two-parallel-arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and the eNPV to compare the simulated programs. Among the development strategies considered, the eNPV showed robust improvement for each successive strategy, with the highest being for a three-arm response-adaptive randomization design in Phase 2 and a group sequential design with five analyses in Phase 3.
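At its core, the eNPV criterion used to rank the programs is a probability-weighted cash-flow calculation; a toy sketch with invented numbers (not values from the simulations):

```python
# Phase 3 is only run (and paid for) if Phase 2 succeeds, so the eNPV
# folds the phase success probabilities into the conditional cash flows.
p_ph2, p_ph3 = 0.45, 0.60            # hypothetical success probabilities
cost_ph2, cost_ph3 = 30.0, 150.0     # phase costs in $M
npv_if_approved = 1200.0             # payoff in $M on approval

enpv = -cost_ph2 + p_ph2 * (-cost_ph3 + p_ph3 * npv_if_approved)
print(round(enpv, 1))  # 226.5
```

A design change that raises the Phase 2 success probability or sharpens dose selection shifts these terms, which is how the simulated programs end up ranked by eNPV.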

  18. Which uncertainty? Using expert elicitation and expected value of information to design an adaptive program

    USGS Publications Warehouse

    Runge, Michael C.; Converse, Sarah J.; Lyons, James E.

    2011-01-01

    Natural resource management is plagued with uncertainty of many kinds, but not all uncertainties are equally important to resolve. The promise of adaptive management is that learning in the short-term will improve management in the long-term; that promise is best kept if the focus of learning is on those uncertainties that most impede achievement of management objectives. In this context, an existing tool of decision analysis, the expected value of perfect information (EVPI), is particularly valuable in identifying the most important uncertainties. Expert elicitation can be used to develop preliminary predictions of management response under a series of hypotheses, as well as prior weights for those hypotheses, and the EVPI can be used to determine how much management could improve if uncertainty was resolved. These methods were applied to management of whooping cranes (Grus americana), an endangered migratory bird that is being reintroduced in several places in North America. The Eastern Migratory Population of whooping cranes had exhibited almost no successful reproduction through 2009. Several dozen hypotheses can be advanced to explain this failure, and many of them lead to very different management responses. An expert panel articulated the hypotheses, provided prior weights for them, developed potential management strategies, and made predictions about the response of the population to each strategy under each hypothesis. Multi-criteria decision analysis identified a preferred strategy in the face of uncertainty, and analysis of the expected value of information identified how informative each strategy could be. These results provide the foundation for design of an adaptive management program.
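The EVPI calculation at the heart of this design can be sketched in a few lines; the outcome table and prior weights below are invented, not the whooping crane panel's elicited values:

```python
import numpy as np

# Rows = management strategies, columns = hypotheses; entries are predicted
# outcomes (e.g., expected reproductive success) under each hypothesis.
outcomes = np.array([
    [0.2, 0.8, 0.1],   # strategy A under hypotheses H1..H3
    [0.5, 0.4, 0.5],   # strategy B
    [0.6, 0.1, 0.3],   # strategy C
])
prior = np.array([0.5, 0.3, 0.2])  # expert prior weights on the hypotheses

ev_best_fixed = (outcomes @ prior).max()       # best single strategy under uncertainty
ev_with_info = outcomes.max(axis=0) @ prior    # choose after learning the truth
evpi = ev_with_info - ev_best_fixed
print(round(evpi, 3))  # 0.17
```

A large EVPI flags uncertainty worth resolving through adaptive management; a small one says to simply act on the current best strategy.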

  19. Design of adaptive reconfigurable control systems using extended-Kalman-filter-based system identification and eigenstructure assignments

    NASA Astrophysics Data System (ADS)

    Wang, Xudong; Syrmos, Vassilis L.

    2004-07-01

    In this paper, an adaptive reconfigurable control system based on an extended Kalman filter approach and eigenstructure assignments is proposed. System identification is carried out using an extended Kalman filter (EKF) approach. An eigenstructure assignment (EA) technique is applied for reconfigurable feedback control law design to recover the system's dynamic performance. The reconfigurable feedforward controllers are designed to achieve steady-state tracking using an input weighting approach. The proposed scheme can identify not only actuator and sensor variations, but also changes in the system structure, using the extended Kalman filtering method. The overall design is robust with respect to uncertainties in the state-space matrices of the reconfigured system. To illustrate the effectiveness of the proposed reconfigurable control system design technique, an aircraft longitudinal vertical takeoff and landing (VTOL) control system is used to demonstrate the reconfiguration procedure.
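The identification step can be illustrated with a minimal Kalman-filter parameter estimator (a scalar stand-in for the paper's EKF, with invented noise levels):

```python
import numpy as np

# Treat the unknown plant gain `a` as a (nearly constant) state and
# update its estimate from noisy measurements y = a*u + v.
true_a = 1.7
a_hat, P = 0.0, 10.0        # initial estimate and its covariance
q, r = 1e-6, 0.04           # process / measurement noise variances

rng = np.random.default_rng(5)
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)              # known input; H = u plays the Jacobian role
    y = true_a * u + rng.normal(0.0, 0.2)   # noisy measurement
    P = P + q                               # time update
    K = P * u / (u * P * u + r)             # Kalman gain
    a_hat = a_hat + K * (y - a_hat * u)     # measurement update
    P = (1.0 - K * u) * P

print(a_hat)  # close to the true gain 1.7
```

In the full scheme, a drift in the identified parameters signals an actuator or sensor change and triggers redesign of the feedback law.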

  20. Jacobi-like method for a control algorithm in adaptive-optics imaging

    NASA Astrophysics Data System (ADS)

    Pitsianis, Nikos P.; Ellerbroek, Brent L.; Van Loan, Charles; Plemmons, Robert J.

    1998-10-01

    A study is made of a non-smooth optimization problem arising in adaptive optics, which involves the real-time control of a deformable mirror designed to compensate for atmospheric turbulence and other dynamic image degradation factors. One formulation of this problem yields a functional f(U) = Σ_{i=1}^{n} max_j [(U^T M_j U)_{ii}] to be maximized over orthogonal matrices U for a fixed collection of n × n symmetric matrices M_j. We consider first the situation which can arise in practical applications where the matrices M_j are nearly pairwise commutative. Besides giving useful bounds, results for this case lead to a simple corollary providing a theoretical closed-form solution for globally maximizing f if the M_j are simultaneously diagonalizable. However, even here conventional optimization methods for maximizing f are not practical in a real-time environment. The general optimization problem is quite difficult and is approached using a heuristic Jacobi-like algorithm. Numerical tests indicate that the algorithm provides an effective means to optimize performance for some important adaptive-optics systems.
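The objective can be evaluated directly, and the simultaneously diagonalizable case checked numerically; a sketch with random commuting matrices (not an adaptive-optics dataset):

```python
import numpy as np

# Evaluate f(U) = sum_i max_j (U^T M_j U)_{ii} and check that, when the
# M_j commute (here by construction), their common eigenvector basis Q
# attains at least the value of any other orthogonal U.
rng = np.random.default_rng(6)
n = 5
D1 = np.diag(rng.uniform(0, 1, n))
D2 = np.diag(rng.uniform(0, 1, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = [Q @ D1 @ Q.T, Q @ D2 @ Q.T]            # commuting symmetric matrices

def f(U):
    return sum(max((U.T @ Mj @ U)[i, i] for Mj in M) for i in range(n))

U_rand, _ = np.linalg.qr(rng.standard_normal((n, n)))
print(f(Q) >= f(U_rand) - 1e-9)             # True: the eigenbasis is optimal
```

In the eigenbasis, f reduces to Σ_k max_j λ_{jk}, the closed-form maximum the corollary refers to.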

  1. Design of artificial genetic regulatory networks with multiple delayed adaptive responses*

    NASA Astrophysics Data System (ADS)

    Kaluza, Pablo; Inoue, Masayo

    2016-06-01

    Genetic regulatory networks with adaptive responses are widely studied in biology. Usually, models consisting of only a few nodes have been considered. They present one input receptor for activation and one output node where the adaptive response is computed. In this work, we design genetic regulatory networks with many receptors and many output nodes able to produce delayed adaptive responses. This design is performed by using an evolutionary algorithm of mutations and selections that minimizes an error function defined by the adaptive response in signal shapes. We present several examples of network constructions with a predefined required set of adaptive delayed responses. We show that an output node can have different kinds of responses as a function of the activated receptor. Additionally, complex network structures are presented, since processing nodes can be involved in several input-output pathways. Supplementary material in the form of one nets file is available from the Journal web page at http://dx.doi.org/10.1140/epjb/e2016-70172-9

  2. Analysis and design of a high power laser adaptive phased array transmitter

    NASA Technical Reports Server (NTRS)

    Mevers, G. E.; Soohoo, J. F.; Winocur, J.; Massie, N. A.; Southwell, W. H.; Brandewie, R. A.; Hayes, C. L.

    1977-01-01

    The feasibility of delivering substantial quantities of optical power to a satellite in low earth orbit from a ground based high energy laser (HEL) coupled to an adaptive antenna was investigated. Diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and adaptive compensation for thermal blooming were examined. To evaluate possible HEL sources, atmospheric investigations were performed for the CO2, (C-12)(O-18)2 isotope, CO and DF wavelengths using output antenna locations at both sea level and mountain top. Results indicate that excellent atmospheric and adaptation efficiency can both be obtained for mountain-top operation with a (C-12)(O-18)2 isotope laser operating at 9.1 um, or a CO laser operating on a single line (P10) at about 5.0 um, which was a close second in the evaluation. Four adaptive power transmitter system concepts were generated and evaluated, based on overall system efficiency, reliability, size and weight, advanced technology requirements and potential cost. A multiple source phased array was selected for detailed conceptual design. The system uses a unique adaptation technique of phase locking independent laser oscillators, which allows it to be both relatively inexpensive and highly reliable, with a predicted overall power transfer efficiency of 53%.

  3. Neural network-based adaptive controller design of robotic manipulators with an observer.

    PubMed

    Sun, F; Sun, Z; Woo, P Y

    2001-01-01

    A neural network (NN)-based adaptive controller with an observer is proposed for the trajectory tracking of robotic manipulators with unknown dynamics nonlinearities. It is assumed that the robotic manipulator has only joint angle position measurements. A linear observer is used to estimate the robot joint angle velocity, while NNs are employed to further improve the control performance of the controlled system by approximating the modified robot dynamics function. The adaptive controller for robots with an observer can guarantee the uniform ultimate bounds of the tracking errors and the observer errors as well as the bounds of the NN weights. For performance comparisons, a conventional adaptive algorithm with an observer, which uses linearity in the parameters of the robot dynamics, is also developed in the same control framework as the NN approach for online approximation of the unknown nonlinearities of the robot dynamics. The main theoretical results for designing such an observer-based adaptive controller, both with the NN approach using multilayer NNs with sigmoidal activation functions and with the conventional adaptive approach using linearity in the parameters, are given. Performance comparisons between the NN approach and the conventional adaptation approach with an observer are carried out to show the advantages of the proposed control approaches through simulation studies.

  4. E-ELT M4 adaptive unit final design and construction: a progress report

    NASA Astrophysics Data System (ADS)

    Biasi, Roberto; Manetti, Mauro; Andrighettoni, Mario; Angerer, Gerald; Pescoller, Dietrich; Patauner, Christian; Gallieni, Daniele; Tintori, Matteo; Mantegazza, Marco; Fumi, Pierluigi; Lazzarini, Paolo; Briguglio, Runa; Xompero, Marco; Pariani, Giorgio; Riccardi, Armando; Vernet, Elise; Pettazzi, Lorenzo; Lilley, Paul; Cayrel, Marc

    2016-07-01

    The E-ELT M4 adaptive unit is a fundamental part of the E-ELT: it provides the facility-level adaptive optics correction that compensates the wavefront distortion induced by atmospheric turbulence and partially corrects the structural deformations caused by wind. The unit is based on the contactless, voice-coil technology already successfully deployed on several large adaptive mirrors, like the LBT, Magellan and VLT adaptive secondary mirrors. It features a 2.4m diameter flat mirror, controlled by 5316 actuators and divided into six segments. The reference structure is monolithic and the cophasing between the segments is guaranteed by the contactless embedded metrology. The mirror correction commands are usually transferred as modal amplitudes, which are checked by the M4 controller through a smart real-time algorithm capable of handling saturation effects. A large hexapod provides the fine positioning of the unit, while a rotational mechanism allows switching between the two Nasmyth foci. The unit entered the final design and construction phase in July 2015, after an advanced preliminary design. The final design review is planned for fall 2017; thereafter, the unit will enter the construction and test phase. Acceptance in Europe after full optical calibration is planned for 2022, while the delivery to Cerro Armazones will occur in 2023. Even though the fundamental concept has remained unchanged with respect to the other contactless large deformable mirrors, the specific requirements of the E-ELT unit posed new design challenges that required dedicated solutions. Therefore, a significant part of the design phase has been focused on the validation of the new aspects, based on analysis, numerical simulations and experimental tests. Several experimental tests have been executed on the Demonstration Prototype, the 222-actuator prototype developed in the frame of the advanced preliminary design. We present the main project phases and the current design…

  5. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
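    A minimal sketch of the modified SMI weight computation, assuming a simulated uniform linear array and a known per-element noise power; the fraction F and all scenario parameters are illustrative, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 200                                # array elements, snapshots

def steering(theta):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

s_d = steering(0.0)                          # desired-signal direction (broadside)
noise_amp = 0.05
x = (0.1 * rng.standard_normal((K, 1)) * steering(0.4)          # weak interferer
     + noise_amp * (rng.standard_normal((K, M))
                    + 1j * rng.standard_normal((K, M))))        # receiver noise

R_hat = x.conj().T @ x / K                   # sample covariance matrix (SMI)
sigma2 = 2 * noise_amp ** 2                  # noise power per element (assumed known)
F = 0.5                                      # fraction of noise power to subtract
R_mod = R_hat - F * sigma2 * np.eye(M)       # modified SMI: shrink the diagonal

w = np.linalg.solve(R_mod, s_d)              # weights proportional to R_mod^{-1} s_d
w = w / (w.conj() @ s_d)                     # unit gain toward the desired signal
```

    Increasing F deepens the interference null at the cost of larger weight variance, which mirrors the trade-off between suppression level and the number of snapshots discussed in the abstract.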

  6. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one to one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though the grids constructed may have no regular global structure, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  7. Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow

    DTIC Science & Technology

    2013-04-02

    In the proposed adaptive discontinuous evolution Galerkin method for dry atmospheric convection, the flux integration within the discontinuous Galerkin method is no longer realized by a standard one-dimensional approximate Riemann solver. Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the method.

  8. Providing Adaptation and Guidance for Design Learning by Problem Solving: The Design Planning Approach in DomoSim-TPC Environment

    ERIC Educational Resources Information Center

    Redondo, Miguel A.; Bravo, Crescencio; Ortega, Manuel; Verdejo, M. Felisa

    2007-01-01

    Experimental learning environments based on simulation usually require monitoring and adaptation to the actions the users carry out. Some systems provide this functionality, but they do so in a way which is static or cannot be applied to problem solving tasks. In response to this problem, we propose a method based on the use of intermediate…

  9. Adaptive Imaging Methods using a Rotating Modulation Collimator

    DTIC Science & Technology

    2011-03-01

    The report shows an example of two mask designs with similar pitch, where the pitch is measured from the left edge of one slit to the left edge of the next slit. A block diagram traces the RMC data acquisition process, which includes the pulse processing. An image-comparison technique developed by Zhou Wang is used to measure the relative performance of reconstructed images.

  10. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
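    The core of such an adaptive total variation restoration can be illustrated in one dimension; the noise-variance-based choice of the regularization weight and all parameter values below are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
clean = np.where(np.arange(n) < n // 2, 1.0, 3.0)   # piecewise-constant intensity profile
noisy = clean + 0.3 * rng.standard_normal(n)

# Adaptive weight: estimate the noise variance on a homogeneous region and
# scale the regularization with it (illustrative heuristic).
sigma2 = np.var(noisy[:40])
lam = 5.0 * sigma2

def tv_denoise(f, lam, iters=500, tau=0.05, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((Du)_i^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)      # derivative of the smoothed |Du|
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))   # negative TV gradient
        u -= tau * ((u - f) - lam * div)
    return u

restored = tv_denoise(noisy, lam)
```

    On this toy step signal the restored profile has lower mean-square error than the noisy input; in the paper the weight would be tied to the measured speckle statistics of the OCT image rather than this simple background-variance heuristic.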

  11. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.

  12. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
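    The predictor-corrector structure with the tangent-orthogonality condition can be sketched on a toy curve. A direct solver is used for the augmented system, the case the abstract notes works well; the constrained-Krylov variant it proposes is not shown here:

```python
import numpy as np

def F(p):                        # solution curve: the unit circle F(x, y) = 0
    return np.array([p[0] ** 2 + p[1] ** 2 - 1.0])

def J(p):                        # 1x2 Jacobian of F
    return np.array([[2.0 * p[0], 2.0 * p[1]]])

def tangent(p):                  # unit vector spanning the null space of J(p)
    j = J(p)[0]
    t = np.array([-j[1], j[0]])
    return t / np.linalg.norm(t)

p = np.array([1.0, 0.0])
points = [p.copy()]
h = 0.1
for _ in range(20):
    t = tangent(p)
    q = p + h * t                            # predictor: Euler step along the tangent
    for _ in range(10):                      # corrector: Newton on the augmented system
        A = np.vstack([J(q), t])             # augment with the orthogonality condition
        r = np.concatenate([F(q), [0.0]])    # second row forces steps orthogonal to t
        q = q - np.linalg.solve(A, r)
        if np.linalg.norm(F(q)) < 1e-12:
            break
    p = q
    points.append(p.copy())
```

    Each corrector step stays in the hyperplane orthogonal to the predictor tangent, so the augmented 2x2 system is square and well conditioned away from turning points.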

  13. Design and progress toward a multi-conjugate adaptive optics system for distributed aberration correction

    SciTech Connect

    Baker, K; Olivier, S; Tucker, J; Silva, D; Gavel, D; Lim, R; Gratrix, E

    2004-08-17

    This article investigates the use of a multi-conjugate adaptive optics system to improve the corrected field-of-view. The emphasis of this research is on developing techniques to improve the performance of optical systems, with applications to horizontal imaging. The design and wave-optics simulations of the proposed system are given. Preliminary results from the multi-conjugate adaptive optics system are also presented. The experimental system utilizes a liquid-crystal spatial light modulator and an interferometric wave-front sensor for correction and sensing of the phase aberrations, respectively.

  14. DI2ADEM: an adaptive hypermedia designed to improve access to relevant medical information.

    PubMed

    Pagesy, R; Soula, G; Fieschi, M

    2000-01-01

    The World Wide Web (web) provides the same type of information to widely different users, who must then find the information suitable for their needs within the package offered. The authors present the DI2ADEM project, designed to take the user into account and to provide this user with appropriate medical information. To do so, DI2ADEM offers an adaptive hypermedia based on the management of meta-knowledge about the user and knowledge about the information to be delivered. An adaptive hypermedia prototype devoted to paediatric oncology was implemented on the intranet network of a university hospital.

  15. Design of adaptive filter amplifier in UV communication based on DSP

    NASA Astrophysics Data System (ADS)

    Lv, Zhaoshun; Wu, Hanping; Li, Junyu

    2016-10-01

    To address the problem of weak signals at the receiving end in UV communication, we design a high-gain, continuously adjustable adaptive filter amplifier. After proposing the overall technical indicators and analyzing the working principle of the signal amplifier, we use one LMH6629MF chip and two AD797BN chips to realize a three-stage cascade amplification, and a TMS320VC5509A DSP to implement the digital filtering. Designed and verified with Multisim, Protel 99SE and CCS, the results show that the amplifier can realize continuously adjustable amplification from 1000 to 10000 times without distortion, with a gain error of <=4% over the 1000-10000 range and an equivalent input noise voltage of <=6 nV/√Hz over 30-45 kHz, while realizing the adaptive filtering function. The design provides a theoretical reference and technical support for UV weak-signal processing.

  16. MONITORING METHODS ADAPTABLE TO VAPOR INTRUSION MONITORING - USEPA COMPENDIUM METHODS TO-15, TO-15 SUPPLEMENT (DRAFT), AND TO-17

    EPA Science Inventory

    USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...

  17. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  18. TU-C-17A-07: FusionARC Treatment with Adaptive Beam Selection Method

    SciTech Connect

    Kim, H; Li, R; Xing, L; Lee, R

    2014-06-15

    Purpose: Recently, a new treatment scheme, FusionARC, has been introduced to compensate for the pitfalls of single-arc VMAT planning. It allows static-field treatment at selected locations, while the remainder is treated by a single rotational arc delivery. The important issue is how to choose the directions for static-field treatment. This study presents an adaptive beam selection method to formulate the FusionARC treatment scheme. Methods: The optimal plan for single-rotational-arc treatment is obtained from a two-step approach based on reweighted total-variation (TV) minimization. To choose the directions for static-field treatment with extra segments, the value of our proposed cost function at each field is computed on a new fluence-map that adds an extra segment to the designated field location only. The cost function is defined as a summation of the equivalent uniform dose (EUD) of all structures under that fluence-map, with the assumption that a lower cost function value implies an enhancement of plan quality. Finally, the extra segments for static-field treatment are added to the selected directions with low cost function values. A prostate patient dataset was used for evaluation with three different plans: conventional VMAT, FusionARC, and static IMRT. Results: The 7 field locations corresponding to the lowest cost function values were chosen for insertion of extra segments for step-and-shoot dose delivery. Our proposed FusionARC plan with the selected angles improves the dose sparing of the critical organs relative to the static IMRT and conventional VMAT plans. The dose conformity to the target is significantly enhanced at a small expense of treatment time compared with the VMAT plan. Its estimated treatment time, however, is still much faster than IMRT. Conclusion: FusionARC treatment with the adaptive beam selection method can improve plan quality with an insignificant penalty in treatment time relative to conventional VMAT.
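    The EUD-based beam scoring can be sketched as follows; the dose values, the single organ at risk, and the EUD exponent are hypothetical stand-ins for a real plan's dose calculation:

```python
import numpy as np

def eud(dose, a):
    """Generalized equivalent uniform dose of one structure."""
    return np.mean(dose ** a) ** (1.0 / a)

rng = np.random.default_rng(3)
n_fields, n_vox = 12, 100
base = rng.uniform(0.0, 1.0, n_vox)            # dose from the single-arc plan (toy values)
# Hypothetical per-field dose contribution when an extra segment is added there
extra = rng.uniform(0.0, 2.0, (n_fields, n_vox))

# Cost of adding an extra segment at each field: sum of structure EUDs
# (one structure here for brevity; a real plan sums over all structures)
costs = np.array([eud(base + extra[k], a=8) for k in range(n_fields)])

selected = np.argsort(costs)[:7]               # pick the 7 lowest-cost directions
```

    The abstract's selection of the 7 lowest-cost field locations corresponds to the final `argsort` step; the actual cost would come from the optimized fluence maps rather than random doses.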

  19. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
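    The idea of giving different step sizes to different members of a system can be sketched with forward Euler; the automatic error estimation and step control the abstract studies are omitted, and the test system is invented for illustration:

```python
import numpy as np

# Coupled system: y_fast' = -50*(y_fast - y_slow), y_slow' = -y_slow.
# The fast component is integrated with a step 10x smaller than the slow one.
H, m = 0.01, 10                 # slow step, fast substeps per slow step
h = H / m
yf, ys = 0.0, 1.0
for _ in range(200):            # advance to t = 2
    ys_new = ys + H * (-ys)     # forward Euler for the slow equation, step H
    for j in range(m):          # fast variable takes m substeps of size h
        # linearly interpolate the slow variable inside the slow step
        ys_j = ys + (j / m) * (ys_new - ys)
        yf = yf + h * (-50.0 * (yf - ys_j))
    ys = ys_new
```

    The slow variable is evaluated only 200 times instead of 2000, which is where the savings come from when its derivative is expensive; controlling the interpolation error is the hard part the abstract refers to.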

  20. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
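    In 1D FEM, the flavor of a ZZ-type estimator is easy to show: the elementwise gradients of a piecewise-linear function are smoothed by nodal averaging, and the element-wise distance between raw and recovered gradients drives refinement. This is a sketch of the FEM setting the abstract starts from, not of the BEM estimators it develops:

```python
import numpy as np

# Nodal values of a piecewise-linear interpolant of u(x) = x^2 on a nonuniform mesh
nodes = np.array([0.0, 0.2, 0.5, 0.6, 1.0])
u = nodes ** 2

grad = np.diff(u) / np.diff(nodes)              # constant gradient on each element

# ZZ recovery: averaged (smoothed) gradient at each node
g_rec = np.zeros_like(nodes)
g_rec[0], g_rec[-1] = grad[0], grad[-1]
g_rec[1:-1] = 0.5 * (grad[:-1] + grad[1:])

# Element indicator: L2 distance between the recovered and raw gradient,
# using the exact integral of a squared linear function on each element
h = np.diff(nodes)
d1 = g_rec[:-1] - grad
d2 = g_rec[1:] - grad
eta = np.sqrt(h / 3.0 * (d1 ** 2 + d1 * d2 + d2 ** 2))

refine = np.argmax(eta)                          # mark the element with largest indicator
```

    On this mesh the largest element carries the largest indicator, so it would be refined first; the adaptive loop then re-solves and re-estimates.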

  1. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.

  2. Method for designing gas tag compositions

    DOEpatents

    Gross, K.C.

    1995-04-11

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.
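    The patent's lattice-on-spheres construction is specific, but the underlying goal of maximizing mutual distinguishability of tag compositions can be illustrated with a greedy farthest-point selection over hypothetical compositions in isotope-ratio space; this substitutes a simple greedy heuristic for the patented lattice method:

```python
import numpy as np

rng = np.random.default_rng(4)
# Candidate tag compositions as points in a 3-D isotope-ratio space (illustrative)
candidates = rng.uniform(0.0, 1.0, (500, 3))

# Node 1 is fixed by the measured composition of the first canister;
# each subsequent node greedily maximizes its distance to all chosen nodes.
nodes = [candidates[0]]
for _ in range(19):                       # build ~20 nodes, as in the abstract
    chosen = np.array(nodes)
    d = np.min(np.linalg.norm(candidates[:, None, :] - chosen[None, :, :], axis=2),
               axis=1)                    # distance of each candidate to nearest node
    nodes.append(candidates[np.argmax(d)])
nodes = np.array(nodes)
```

    Fixing the first node from a measurement and placing the rest for maximum separation mirrors the patent's alternative of fixing nodes 1 and 2 and computing nodes 3-N for optimum composition.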

  3. Geometric methods for optimal sensor design.

    PubMed

    Belabbas, M-A

    2016-01-01

    The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.

  4. Research and Design of Rootkit Detection Method

    NASA Astrophysics Data System (ADS)

    Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang

    Rootkits are one of the most important security issues in network communication systems, affecting the security and privacy of Internet users. By exploiting back doors in the operating system, an attacker can use a rootkit to invade other people's computers and easily capture passwords and message traffic to and from those machines. As rootkit technology develops, its applications become more and more extensive and it becomes increasingly difficult to detect. In addition, for various reasons such as trade secrecy and development difficulty, rootkit detection information and effective tools remain relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software based on the proposed structure is much more efficient than other rootkit detection software.

  5. Geometric methods for optimal sensor design

    PubMed Central

    Belabbas, M.-A.

    2016-01-01

    The Kalman–Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design. PMID:26997885

  6. Neural method of spatiotemporal filter design

    NASA Astrophysics Data System (ADS)

    Szostakowski, Jaroslaw

    1997-10-01

    There are many applications in medical imaging, computer vision, and communications where video processing is critical. Although many techniques have been successfully developed for the filtering of still images, significantly fewer techniques have been proposed for the filtering of noisy image sequences. In this paper a novel approach to spatio-temporal filter design is proposed. Multilayer perceptrons and functional-link nets are used for the 3D filtering. The spatio-temporal patterns are created from real motion video images, and the neural networks learn these patterns. Perceptrons with different numbers of layers and neurons in each layer are tested, and different input functions in the functional-link net are explored. Practical examples of the filtering are shown and compared with traditional (non-neural) spatio-temporal methods. The results are very promising, and neural spatio-temporal filters appear to be an efficient tool for video noise reduction.

  7. Quantitative Evaluation of Tissue Surface Adaption of CAD-Designed and 3D Printed Wax Pattern of Maxillary Complete Denture

    PubMed Central

    Chen, Hu; Wang, Han; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objective. To quantitatively evaluate the tissue surface adaption of a maxillary complete denture wax pattern produced by CAD and 3DP. Methods. A standard edentulous maxilla plaster cast model was used, for which a wax pattern of complete denture was designed using CAD software developed in our previous study and printed using a 3D wax printer, while another wax pattern was manufactured by the traditional manual method. The cast model and the two wax patterns were scanned in the 3D scanner as “DataModel,” “DataWaxRP,” and “DataWaxManual.” After setting each wax pattern on the plaster cast, the whole model was scanned for registration. After registration, the deviations of tissue surface between “DataModel” and “DataWaxRP” and between “DataModel” and “DataWaxManual” were measured. The data was analyzed by paired t-test. Results. For both wax patterns produced by the CAD&RP method and the manual method, scanning data of tissue surface and cast surface showed a good fit in the majority. No statistically significant (P > 0.05) difference was observed between the CAD&RP method and the manual method. Conclusions. Wax pattern of maxillary complete denture produced by the CAD&3DP method is comparable with traditional manual method in the adaption to the edentulous cast model. PMID:26583108

  8. Design Framework for an Adaptive MOOC Enhanced by Blended Learning: Supplementary Training and Personalized Learning for Teacher Professional Development

    ERIC Educational Resources Information Center

    Gynther, Karsten

    2016-01-01

    The research project has developed a design framework for an adaptive MOOC that complements the MOOC format with blended learning. The design framework consists of a design model and a series of learning design principles which can be used to design in-service courses for teacher professional development. The framework has been evaluated by…

  9. Educating Instructional Designers: Different Methods for Different Outcomes.

    ERIC Educational Resources Information Center

    Rowland, Gordon; And Others

    1994-01-01

    Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…

  10. Design Process of Flight Vehicle Structures for a Common Bulkhead and an MPCV Spacecraft Adapter

    NASA Technical Reports Server (NTRS)

    Aggarwal, Pravin; Hull, Patrick V.

    2015-01-01

    Designing and manufacturing space flight vehicle structures is a skillset that has grown considerably at NASA during the last several years. Beginning with the Ares program and followed by the Space Launch System (SLS), in-house designs were produced for both the Upper Stage and the SLS Multipurpose Crew Vehicle (MPCV) spacecraft adapter. Specifically, critical design review (CDR) level analysis and flight production drawings were produced for the above mentioned hardware. In particular, the experience of this in-house design work led to increased manufacturing infrastructure for both Marshall Space Flight Center (MSFC) and the Michoud Assembly Facility (MAF), improved skillsets in both analysis and design, and hands-on experience in building and testing full-scale (MSA) hardware. The hardware design and development processes, from initiation to CDR and finally flight, resulted in many challenges and experiences that produced valuable lessons. This paper builds on these experiences of NASA in recent years in designing and fabricating flight hardware and examines the design/development processes used, as well as the challenges and lessons learned, from the initial design, loads estimation and mass constraints to structural optimization/affordability to release of production drawings to hardware manufacturing. While there are many documented design processes that a design engineer can follow, these unique experiences can offer insight into designing hardware in current program environments and present solutions to many of the challenges experienced by the engineering team.

  11. An implementable digital adaptive flight controller designed using stabilized single stage algorithms

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.

    1975-01-01

    Simple mechanical linkages have not solved the many control problems associated with high-performance aircraft maneuvering throughout a wide flight envelope. One procedure for retaining uniform handling qualities over such an envelope is to implement a digital adaptive controller. Toward such an implementation, an explicit adaptive controller that makes direct use of on-line parameter identification has been developed and applied to both linearized and nonlinear equations of motion for a typical fighter aircraft. This controller is composed of an on-line weighted least squares parameter identifier, a Kalman state filter, and a model-following control law designed using single-stage performance indices. Simulation experiments with realistic measurement noise indicate that the proposed adaptive system has the potential for on-board implementation.
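    The first component of the controller described above, the on-line weighted least squares parameter identifier, can be illustrated with a minimal recursive least squares sketch. The discrete-time model, forgetting factor, and numerical values below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def rls_identify(x, u, lam=0.98):
        """Recursive (exponentially weighted) least-squares estimate of
        (a, b) in x[k+1] = a*x[k] + b*u[k], with forgetting factor lam."""
        theta = np.zeros(2)            # parameter estimate [a, b]
        P = np.eye(2) * 1e3            # estimate covariance (large = uncertain)
        for k in range(len(x) - 1):
            phi = np.array([x[k], u[k]])          # regressor vector
            y = x[k + 1]                          # observed next state
            K = P @ phi / (lam + phi @ P @ phi)   # identifier gain
            theta = theta + K * (y - phi @ theta) # correct by prediction error
            P = (P - np.outer(K, phi @ P)) / lam  # covariance update
        return theta

    # Simulate a system with true parameters a = 0.9, b = 0.5.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(200)
    x = np.zeros(201)
    for k in range(200):
        x[k + 1] = 0.9 * x[k] + 0.5 * u[k]

    a_hat, b_hat = rls_identify(x, u)
    print(round(a_hat, 3), round(b_hat, 3))  # close to 0.9 and 0.5
    ```

    In an adaptive controller of the kind described, estimates like these would be passed to the control-law design at each step; here the data are noiseless, so the identifier converges essentially exactly.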

  12. Adjoint methods for aerodynamic wing design

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1993-01-01

    A model inverse design problem is used to investigate the effect of flow discontinuities on the optimization process. The optimization involves finding the cross-sectional area distribution of a duct that produces velocities closely matching a target velocity distribution. Quasi-one-dimensional flow theory is used, and the target is chosen to have a shock wave in its distribution. The objective function, which quantifies the difference between the target and calculated velocity distributions, may become non-smooth due to the interaction between the shock and the discretization of the flowfield. This paper offers two techniques to resolve the resulting problems for the optimization algorithms. The first, shock fitting, involves careful integration of the objective function through the shock wave. The second, coordinate straining with a shock penalty, uses a coordinate transformation to align the calculated shock with the target and then adds a penalty proportional to the square of the distance between the shocks. The techniques are tested using several popular sensitivity and optimization methods, including finite differences and direct and adjoint discrete sensitivity methods. Two optimization strategies, Gauss-Newton and sequential quadratic programming (SQP), are used to drive the objective function to a minimum.
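    The Gauss-Newton strategy mentioned above can be sketched in miniature. The block below drives a sum-of-squares velocity-mismatch objective to a minimum, with a simple smooth algebraic model standing in for the quasi-one-dimensional flow solver; the model, target parameters, and finite-difference sensitivities are illustrative assumptions, and none of the shock-handling techniques of the paper appear here:

    ```python
    import numpy as np

    # Toy stand-in for the flow solver: "velocity" at stations s for
    # design parameters p (replaces the quasi-1-D solver; smooth, no shock).
    s = np.linspace(0.0, 1.0, 20)
    def velocity(p):
        return p[0] * np.exp(-p[1] * s)

    p_target = np.array([2.0, 3.0])
    v_target = velocity(p_target)           # target velocity distribution

    def residual(p):
        return velocity(p) - v_target       # objective = 0.5 * ||residual||^2

    def jacobian(p, eps=1e-6):
        # Finite-difference sensitivities, one column per design variable
        J = np.empty((len(s), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - residual(p - dp)) / (2 * eps)
        return J

    p = np.array([1.0, 1.0])                # initial design guess
    for _ in range(20):                     # Gauss-Newton iterations
        r, J = residual(p), jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)

    print(np.round(p, 4))  # recovers the target parameters [2. 3.]
    ```

    The paper's point is that this kind of iteration assumes a smooth objective; the shock-fitting and coordinate-straining techniques exist precisely to restore that smoothness.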

  13. Game Methodology for Design Methods and Tools Selection

    ERIC Educational Resources Information Center

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  14. Design and analysis of an adaptive lens that mimics the performance of the crystalline lens in the human eye

    NASA Astrophysics Data System (ADS)

    Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Iturbide-Jiménez, F.; Martínez-López, M.; Ramírez-Como, M.; Armengol-Cruz, V.; Vásquez-Báez, I.

    2014-09-01

    Tunable lenses are optical systems that have attracted much attention due to their potential applications in areas such as ophthalmology, machine vision, microscopy and laser processing. In recent years we have been working on the analysis and performance of a liquid-filled variable-focal-length lens, that is, a lens that can modify its focal length by changing the amount of water within it. We now extend our study to a particular adaptive lens known as the solid elastic lens (SEL), which is formed by an elastic main body made of polydimethylsiloxane (PDMS Sylgard 184). In this work, we present the design, simulation and analysis of an adaptive solid elastic lens that in principle imitates the accommodation process of the crystalline lens in the human eye. For this work, we have adopted the parameters of the schematic eye model developed in 1985 by Navarro et al.; this model represents the anatomy of the eye as closely as possible, predicting an acceptable and accurate amount of spherical and chromatic aberration without any shape fitting. An opto-mechanical analysis of the accommodation process of the adaptive lens is presented, simulating a certain amount of radial force applied onto the SEL using the finite element method in the commercial software SolidWorks®. We also present ray-trace diagrams of the simulated compression process of the adaptive lens obtained with the commercial software OSLO®.

  15. An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Malano, H. M.; Davidson, B.; George, B.

    2014-12-01

    Climate change projections are characterised by large uncertainties, with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems, typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection and strategic uncertainty due to institutional changes that inevitably occur over time. Planning characterised by uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making, that is, an ability to take corrective measures in the event that the scenarios and responses initially envisaged evolve into different forms at some future stage. We present an integrated, multidisciplinary and comprehensive framework designed to interface and inform science and decision making in the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combinations of climate scenarios and adaptation responses includes surface water and groundwater assessment models (SWAT and MODFLOW) and a water allocation model (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation, together with four agreed responses: changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction and improving irrigation efficiency. Water security in this context is represented by the combination of the level of water availability and its associated security of supply for three economic activities (agriculture

  16. Layer-based buffer aware rate adaptation design for SHVC video streaming

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers and layer buffer fullness. Scalable HEVC (SHVC) video coding is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first is streaming HD SHVC video over a wireless network with varying available bandwidth; a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second is streaming 4K/UHD SHVC video over a hybrid access network consisting of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach utilizes the bandwidth more efficiently. As a result, a more consistent viewing experience with higher-quality video content and minimal video quality fluctuations can be presented to the user.
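    The scheduling idea described above can be sketched as a layer-selection rule that respects the coding dependency (an enhancement layer is only usable together with all layers below it) and adjusts the bandwidth budget by buffer fullness. The function name, bitrates, and buffer thresholds below are hypothetical, not taken from the proposed design:

    ```python
    def select_layers(layer_bitrates, est_bandwidth, buffer_level,
                      low_buffer=5.0, high_buffer=15.0):
        """Pick the highest layer whose cumulative bitrate fits the budget.

        layer_bitrates: bitrates of the BL followed by each EL (bits/s);
        an EL can only be decoded together with all layers below it.
        buffer_level: seconds of buffered video; a fuller buffer permits
        a more aggressive budget, an emptier one a conservative budget.
        """
        if buffer_level < low_buffer:         # re-buffering risk: be safe
            budget = 0.8 * est_bandwidth
        elif buffer_level > high_buffer:      # ample margin: be aggressive
            budget = 1.2 * est_bandwidth
        else:
            budget = est_bandwidth
        top = 0                                # always request the base layer
        cumulative = layer_bitrates[0]
        for i, rate in enumerate(layer_bitrates[1:], start=1):
            cumulative += rate
            if cumulative <= budget:
                top = i                        # this EL (and all below) fit
            else:
                break
        return top                             # index of highest layer to fetch

    rates = [1.0e6, 1.5e6, 1.0e6]   # BL + two enhancement layers (bits/s)
    print(select_layers(rates, est_bandwidth=3.0e6, buffer_level=10.0))  # 1
    print(select_layers(rates, est_bandwidth=3.0e6, buffer_level=20.0))  # 2
    print(select_layers(rates, est_bandwidth=3.0e6, buffer_level=2.0))   # 0
    ```

    Dropping only the top enhancement layer under bandwidth dips, rather than switching to a different simulcast stream, is what yields the smoother quality transitions claimed in the abstract.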

  17. Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Lv, Yu; Ihme, Matthias

    2015-11-01

    A robust and adaptive computational framework will be presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual-based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator are evaluated through direct comparisons with classical indicators. In addition, a dynamic load-balancing procedure is developed to improve computational efficiency. The adaptive framework is tested on a series of turbulent cases, including homogeneous isotropic turbulence, channel flow and flow over a cylinder. The accuracy, performance and scalability are assessed, and the benefit of this adaptive high-order method is discussed. Funding from an NSF CAREER award is gratefully acknowledged.

  18. Adapted RF Pulse Design for SAR Reduction in Parallel Excitation with Experimental Verification at 9.4 Tesla

    PubMed Central

    Wu, Xiaoping; Akgün, Can; Vaughan, J. Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François

    2010-01-01

    Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortions at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate an adapted-rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large-flip-angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system. PMID:20556882

  19. Adaptive Actor-Critic Design-Based Integral Sliding-Mode Control for Partially Unknown Nonlinear Systems With Input Disturbances.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2016-01-01

    This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.
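    The role of the discontinuous switching term in rejecting bounded matched disturbances, the core mechanism sliding-mode control contributes here, can be illustrated with a much simpler textbook controller for a double integrator. This sketch omits the paper's observers and actor-critic machinery entirely; the gains and disturbance are illustrative assumptions:

    ```python
    import math

    # Double integrator  x1' = x2,  x2' = u + d(t),  with |d| <= 0.5.
    # Sliding surface s = c*x1 + x2; control u = -c*x2 - k*sign(s) with
    # k > sup|d| drives s to 0 in finite time, after which x1 decays
    # along the surface as exp(-c*t) regardless of the disturbance.
    c, k, dt = 1.0, 1.0, 1e-3
    x1, x2 = 1.0, 0.0
    for step in range(int(10.0 / dt)):          # 10 s of Euler integration
        t = step * dt
        d = 0.5 * math.sin(2.0 * t)             # bounded matched disturbance
        s = c * x1 + x2
        u = -c * x2 - k * (1.0 if s > 0 else -1.0)  # equivalent + switching term
        x1 += dt * x2
        x2 += dt * (u + d)
    print(abs(x1) < 1e-2, abs(x2) < 1e-1)  # state driven near the origin
    ```

    The chattering of the sign term is the classic drawback of this discontinuous part; the paper's ADP-based design addresses the complementary problem of making the resulting sliding-mode dynamics nearly optimal.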

  20. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
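    The outer loop described above, interleaving an artifact-removal step with gradient updates enforcing k-space data consistency, can be sketched in one dimension. Plain soft-thresholding stands in for the kernel-PCA block transform here, and the signal size, sampling fraction, and threshold are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 128
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

    mask = rng.random(n) < 0.4                # keep ~40% of "k-space"
    y = mask * np.fft.fft(x_true)             # acquired undersampled data

    def soft_threshold(v, t):                 # stand-in for the kernel-PCA step
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n)
    for _ in range(200):
        # Gradient update enforcing consistency with acquired k-space data
        grad = np.fft.ifft(mask * np.fft.fft(x) - y).real
        # Interleaved "undersampling artifact removal" step
        x = soft_threshold(x - grad, 0.02)

    zero_filled = np.fft.ifft(y).real         # naive reconstruction baseline
    err = np.linalg.norm(x - x_true)
    err_zf = np.linalg.norm(zero_filled - x_true)
    print(err < err_zf)   # thresholded recon beats the zero-filled recon
    ```

    The paper replaces the fixed sparsifying transform used here with a data-adaptive nonlinear transform built from matched image blocks, which is what gives it the edge over standard compressed sensing.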

  1. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  2. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
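    The residual tuning idea can be sketched for a scalar filter: the innovation has variance HP⁻Hᵀ + R, so a running estimate of the innovation covariance, maintained in parallel with the filter, yields a correction to a mistuned measurement-noise parameter. The random-walk model and all numbers below are illustrative assumptions, not the WIRE implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    Q_true, R_true = 1e-4, 0.25
    N = 5000
    x = np.cumsum(rng.normal(0, np.sqrt(Q_true), N))   # random-walk state
    y = x + rng.normal(0, np.sqrt(R_true), N)          # noisy measurements

    x_hat, P = 0.0, 1.0
    R_hat = 1e-3            # badly mistuned measurement-noise guess
    C_nu = 0.0              # running innovation covariance estimate
    for k in range(N):
        P = P + Q_true                       # time update (F = 1)
        nu = y[k] - x_hat                    # innovation (H = 1)
        C_nu += (nu * nu - C_nu) / (k + 1)   # running average of nu^2
        R_hat = max(C_nu - P, 1e-6)          # since E[nu^2] = P + R
        K = P / (P + R_hat)                  # gain using the corrected R
        x_hat += K * nu                      # measurement update
        P *= (1 - K)
    print(f"estimated R = {R_hat:.3f}")  # approaches the true value 0.25
    ```

    A correctly tuned filter produces a white innovation sequence with the predicted variance; the correction above simply closes the gap between the observed and predicted innovation statistics, which is the essence of the residual tuning technique.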

  3. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis of its cardiorespiratory effects has been performed. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (from 135.4 to 124.2 beats/min), respiratory exchange ratio (from 1.1 to 0.9) and oxygen equivalent (from 30.7 to 27.6), among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.

  4. Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Huebner, Alan

    2011-01-01

    This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…

  5. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, the SPs that are highly sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  6. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping

    2016-09-01

    An adaptive edge detection and mapping (AEDM) algorithm is presented to address the challenging one-dimensional barcode recognition task in the presence of both image degradation and barcode shape deformation. AEDM is an edge-detection-based method with three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase detects the edge points in a scan line; the edge positions are taken to be the intersection points between a scan line and a corresponding well-designed reference line. The third phase adjusts the preliminary edge positions to more reasonable positions by employing prior information from the coding rules. A universal edge mapping model is thus established to obtain the coding position of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were conducted to evaluate the performance of the AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and combinations of these issues.

  7. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, the SPs that are highly sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  8. Evaluating monitoring methods to guide adaptive management of a threatened amphibian (Litoria aurea)

    PubMed Central

    Bower, Deborah S; Pickett, Evan J; Stockwell, Michelle P; Pollard, Carla J; Garnham, James I; Sanders, Madeleine R; Clulow, John; Mahony, Michael J

    2014-01-01

    Prompt detection of declines in abundance or distribution of populations is critical when managing threatened species that have high population turnover. Population monitoring programs provide the tools necessary to identify and detect decreases in abundance that will threaten the persistence of key populations and should occur in an adaptive management framework which designs monitoring to maximize detection and minimize effort. We monitored a population of Litoria aurea at Sydney Olympic Park over 5 years using mark–recapture, capture encounter, noncapture encounter, auditory, tadpole trapping, and dip-net surveys. The methods differed in the cost, time, and ability to detect changes in the population. Only capture encounter surveys were able to simultaneously detect a decline in the occupancy, relative abundance, and recruitment of frogs during the surveys. The relative abundance of L. aurea during encounter surveys correlated with the population size obtained from mark–recapture surveys, and the methods were therefore useful for detecting a change in the population. Tadpole trapping and auditory surveys did not predict overall abundance and were therefore not useful in detecting declines. Monitoring regimes should determine optimal survey times to identify periods where populations have the highest detectability. Once this has been achieved, capture encounter surveys provide a cost-effective method of effectively monitoring trends in occupancy, changes in relative abundance, and detecting recruitment in populations. PMID:24834332

  9. Opportunities and Challenges for Drug Development: Public-Private Partnerships, Adaptive Designs and Big Data.

    PubMed

    Yildirim, Oktay; Gottwald, Matthias; Schüler, Peter; Michel, Martin C

    2016-01-01

    Drug development faces the double challenge of increasing costs and increasing pressure on pricing. To avoid a situation in which a lack of perceived commercial perspective leaves existing medical needs unmet, pharmaceutical companies and many other stakeholders are discussing ways to improve the efficiency of drug research and development. Based on an international symposium organized by the Medical School of the University of Duisburg-Essen (Germany) and held in January 2016, we discuss the opportunities and challenges of three specific areas, i.e., public-private partnerships, adaptive designs and big data. Public-private partnerships come in many different forms with regard to scope, duration, and type and number of participants. They range from project-specific collaborations to strategic alliances to large multi-party consortia. Each of them offers specific opportunities and faces distinct challenges. Among types of collaboration, investigator-initiated studies are becoming increasingly popular but have legal, ethical, and financial implications. Adaptive trial designs are also increasingly discussed. However, adaptive should not be used as a euphemism for the repurposing of a failed trial; rather, it requires careful planning and specification before a trial starts. Adaptive licensing can be a counterpart of adaptive trial design. The use of Big Data is another opportunity to leverage existing information into knowledge usable for drug discovery and development. Respecting the limitations of informed consent and privacy is a key challenge in the use of Big Data. Speakers and participants at the symposium were convinced that appropriate use of the above new options may indeed help to increase the efficiency of future drug development.

  10. Opportunities and Challenges for Drug Development: Public–Private Partnerships, Adaptive Designs and Big Data

    PubMed Central

    Yildirim, Oktay; Gottwald, Matthias; Schüler, Peter; Michel, Martin C.

    2016-01-01

    Drug development faces the double challenge of increasing costs and increasing pressure on pricing. To avoid a situation in which a lack of perceived commercial perspective leaves existing medical needs unmet, pharmaceutical companies and many other stakeholders are discussing ways to improve the efficiency of drug research and development. Based on an international symposium organized by the Medical School of the University of Duisburg-Essen (Germany) and held in January 2016, we discuss the opportunities and challenges of three specific areas, i.e., public–private partnerships, adaptive designs and big data. Public–private partnerships come in many different forms with regard to scope, duration, and type and number of participants. They range from project-specific collaborations to strategic alliances to large multi-party consortia. Each of them offers specific opportunities and faces distinct challenges. Among types of collaboration, investigator-initiated studies are becoming increasingly popular but have legal, ethical, and financial implications. Adaptive trial designs are also increasingly discussed. However, adaptive should not be used as a euphemism for the repurposing of a failed trial; rather, it requires careful planning and specification before a trial starts. Adaptive licensing can be a counterpart of adaptive trial design. The use of Big Data is another opportunity to leverage existing information into knowledge usable for drug discovery and development. Respecting the limitations of informed consent and privacy is a key challenge in the use of Big Data. Speakers and participants at the symposium were convinced that appropriate use of the above new options may indeed help to increase the efficiency of future drug development. PMID:27999543

  11. Use of physiological constraints to identify quantitative design principles for gene expression in yeast adaptation to heat shock

    PubMed Central

    Vilaprinyo, Ester; Alves, Rui; Sorribas, Albert

    2006-01-01

    Background Understanding the relationship between gene expression changes, enzyme activity shifts, and the corresponding physiological adaptive response of organisms to environmental cues is crucial in explaining how cells cope with stress. For example, adaptation of yeast to heat shock involves a characteristic profile of changes to the expression levels of genes coding for enzymes of the glycolytic pathway and some of its branches. The experimental determination of changes in gene expression profiles provides a descriptive picture of the adaptive response to stress. However, it does not explain why a particular profile is selected for any given response. Results We used mathematical models and analysis of in silico gene expression profiles (GEPs) to understand how changes in gene expression correlate to an efficient response of yeast cells to heat shock. An exhaustive set of GEPs, matched with the corresponding set of enzyme activities, was simulated and analyzed. The effectiveness of each profile in the response to heat shock was evaluated according to relevant physiological and functional criteria. The small subset of GEPs that lead to effective physiological responses after heat shock was identified as the result of the tuning of several evolutionary criteria. The experimentally observed transcriptional changes in response to heat shock belong to this set and can be explained by quantitative design principles at the physiological level that ultimately constrain changes in gene expression. Conclusion Our theoretical approach suggests a method for understanding the combined effect of changes in the expression of multiple genes on the activity of metabolic pathways, and consequently on the adaptation of cellular metabolism to heat shock. This method identifies quantitative design principles that facilitate understanding of the response of the cell to stress. PMID:16584550

  12. Novel method to form adaptive internal impedance profiles in walkers.

    PubMed

    Escudero Morland, Maximilano F; Althoefer, Kaspar; Nanayakkara, Thrishantha

    2015-01-01

    This paper proposes a novel approach to improve walking in prosthetics, orthotics and robotics without closed-loop controllers. The approach requires impedance profiles to be formed in a walker and uses state feedback to update the profiles in real time via a simple policy. The approach is open loop and inherently copes with the challenge of an uncertain environment. In application it could be used either online, for a walker to adjust its impedance profiles in real time to compensate for environmental changes, or offline, to learn suitable profiles for specific environments. So far we have conducted simulations and experiments to investigate the transient and steady-state gaits obtained using two simple update policies to form damping profiles in a passive dynamic walker known as the rimless wheel (RW). The damping profiles are formed in the motor that moves the RW vertically along a rail, analogous to a knee joint, and the two update equations were designed to (a) control the angular velocity profile and (b) minimise peak collision forces. Simulation results show that the velocity update equation works within limits and can cope with varying ground conditions. Experimental results show the average angular velocity reaching the target, as well as the peak-force update equation reducing peak collision forces in real time.

  13. Translating Vision into Design: A Method for Conceptual Design Development

    NASA Technical Reports Server (NTRS)

    Carpenter, Joyce E.

    2003-01-01

    One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.

  14. Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Ming; Chen, Xiao-Fei

    2001-03-01

    Based on the principle of the self-adaptive Simpson integration method, and by incorporating the `fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
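    The core idea, adaptive panel subdivision with a Filon rule that treats the oscillatory factor exactly, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it combines the classical three-point Filon cosine rule with Simpson-style adaptive refinement, and the function names and tolerances are ours.

    ```python
    import math

    def filon_weights(theta):
        # Classical Filon weights; a Taylor series guards the small-theta
        # limit, where the rule reduces to Simpson's (beta->2/3, gamma->4/3).
        if abs(theta) < 1e-3:
            t2 = theta * theta
            return 2 * theta**3 / 45, 2 / 3 + 2 * t2 / 15, 4 / 3 - 2 * t2 / 15
        s, c = math.sin(theta), math.cos(theta)
        t3 = theta**3
        alpha = (theta**2 + theta * s * c - 2 * s * s) / t3
        beta = 2 * (theta * (1 + c * c) - 2 * s * c) / t3
        gamma = 4 * (s - theta * c) / t3
        return alpha, beta, gamma

    def filon_panel(f, a, b, omega):
        # Three-point Filon rule for int_a^b f(x) cos(omega x) dx on one panel.
        h = 0.5 * (b - a)
        m = a + h
        alpha, beta, gamma = filon_weights(omega * h)
        even = 0.5 * (f(a) * math.cos(omega * a) + f(b) * math.cos(omega * b))
        odd = f(m) * math.cos(omega * m)
        ends = f(b) * math.sin(omega * b) - f(a) * math.sin(omega * a)
        return h * (alpha * ends + beta * even + gamma * odd)

    def filon_adaptive(f, a, b, omega, tol=1e-9, depth=30):
        # Simpson-style self-adaptive refinement: accept the two-half estimate
        # when it agrees with the whole-panel estimate, otherwise subdivide.
        whole = filon_panel(f, a, b, omega)
        m = 0.5 * (a + b)
        halves = filon_panel(f, a, m, omega) + filon_panel(f, m, b, omega)
        if depth == 0 or abs(halves - whole) < tol:
            return halves
        return (filon_adaptive(f, a, m, omega, 0.5 * tol, depth - 1) +
                filon_adaptive(f, m, b, omega, 0.5 * tol, depth - 1))
    ```

    For example, `filon_adaptive(math.exp, 0.0, 1.0, 50.0)` agrees with the closed form (e(cos 50 + 50 sin 50) - 1)/2501 to high accuracy, refining panels only where the integrand demands it.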

  15. Using Software Design Methods in CALL

    ERIC Educational Resources Information Center

    Ward, Monica

    2006-01-01

    The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…

  16. Design of an effective energy receiving adapter for microwave wireless power transmission application

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Wang, Shen-Yun; Geyi, Wen

    2016-10-01

    In this paper, we demonstrate the viability of an energy receiving adapter in an 8×8 array form with high power reception efficiency, using the resonator of an artificial electromagnetic absorber as the array element. Unlike conventionally reported rectifying antenna resonators, both the size of the elements and the separations between them are electrically small in our design. The energy collecting process is explained with an equivalent circuit model, and an RF combining network is designed to combine the captured AC power from each element into one main terminal for AC-to-DC conversion. The energy receiving adapter yields a total reception efficiency of 67% (the product of a wave capture efficiency of 86% and an AC-to-DC conversion efficiency of 78%), which is quite promising for microwave wireless power transmission.
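    The quoted total is simply the product of the two stage efficiencies, as a one-line check confirms:

    ```python
    capture = 0.86      # wave capture efficiency (from the abstract)
    rectify = 0.78      # AC-to-DC conversion efficiency
    total = capture * rectify
    print(f"total reception efficiency = {total:.1%}")  # prints 67.1%
    ```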

  17. An Efficient Inverse Aerodynamic Design Method For Subsonic Flows

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II

    2000-01-01

    Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods, however, have demonstrated deficiencies in the leading-edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method that is valid in the leading-edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method are demonstrated using both two-dimensional and three-dimensional design cases.
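    The abstract does not give the CDISC equations, but the general shape of a residual-driven inverse design iteration can be sketched: run the analysis, compare the computed surface pressure with the target, and move each surface point along the surface normal in proportion to the mismatch. The "solver" below is a deliberately toy linear model (cp = -2y), not a flow solver; the names and relaxation factor are ours.

    ```python
    def toy_solver(y):
        # Hypothetical stand-in for the flow analysis: maps surface ordinates
        # to a pressure coefficient with a thin-airfoil-like linear rule.
        return [-2.0 * yi for yi in y]

    def inverse_design(cp_target, relax=0.3, iters=100):
        # Residual-driven iteration: where computed cp sits above the target,
        # raise the surface (which lowers cp in the toy model), and vice versa.
        y = [0.0] * len(cp_target)
        for _ in range(iters):
            cp = toy_solver(y)
            y = [yi + relax * (ci - ti)
                 for yi, ci, ti in zip(y, cp, cp_target)]
        return y

    surface = inverse_design([-0.5, -1.0, -0.3])
    ```

    With the toy model the iteration contracts geometrically (each sweep scales the residual by 1 - 2*relax), recovering the surface that reproduces the target pressures.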

  18. Search Control Algorithm Based on Random Step Size Hill-Climbing Method for Adaptive PMD Compensation

    NASA Astrophysics Data System (ADS)

    Tanizawa, Ken; Hirose, Akira

    Adaptive polarization mode dispersion (PMD) compensation is required as optical communication systems increase in speed and sophistication. Adaptive PMD compensation is achieved by combining a tunable PMD compensator with an adaptive control method. In this paper, we report an effective search control algorithm for feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, unlike the conventional hill-climbing method, the step size changes randomly, following a Gaussian probability density function, to prevent the search from being trapped at a local maximum or on a flat region. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method controls the compensator more effectively than the conventional hill-climbing method.
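    A minimal sketch of the search idea, not the authors' compensator loop: ordinary hill climbing, but with each trial step drawn from a Gaussian, so that occasional large steps can carry the search off a flat region or past a shallow local maximum. The objective here is a stand-in for the measured feedback signal.

    ```python
    import random

    def random_step_hill_climb(objective, x0, sigma=0.5, iters=2000, seed=1):
        # Hill climbing with a Gaussian-distributed random step size: a
        # candidate move is kept only when it improves the objective.
        rng = random.Random(seed)
        x, best = x0, objective(x0)
        for _ in range(iters):
            candidate = x + rng.gauss(0.0, sigma)   # random size and direction
            value = objective(candidate)
            if value > best:
                x, best = candidate, value
        return x, best

    # Stand-in feedback signal with its maximum at x = 3.
    x, best = random_step_hill_climb(lambda x: -(x - 3.0) ** 2, x0=0.0)
    ```

    Because accepted moves are monotone improvements, the search never diverges, while the Gaussian tail supplies the occasional long jump a fixed-step climber lacks.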

  19. A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2008-01-01

    This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
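    The abstract is truncated here, but the key property it names, uniform sampling over the feasible assemblies, has a simple Monte Carlo illustration: draw candidate item subsets uniformly at random and keep only those meeting the assembly constraints; the accepted draws are then uniform over all feasible modules. The pool, module size, and information bounds below are invented for the example.

    ```python
    import random

    def sample_module(pool, size, min_info, max_info, rng):
        # Rejection sampling: uniform draws over all size-item subsets,
        # filtered by the constraint, remain uniform over the feasible set.
        while True:
            module = rng.sample(range(len(pool)), size)
            if min_info <= sum(pool[i] for i in module) <= max_info:
                return sorted(module)

    rng = random.Random(7)
    pool = [rng.uniform(0.2, 1.0) for _ in range(40)]   # item information values
    module = sample_module(pool, size=10, min_info=5.0, max_info=7.0, rng=rng)
    ```

    This uniformity is what distinguishes Monte Carlo assembly from an enumerative heuristic, which explores only the corners of the feasible set that its search order happens to reach.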

  20. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.